




The Java heap

The Java heap, where every Java object is allocated, is the area of memory you're most intimately connected with when writing Java applications. The JVM was designed to insulate us from the host machine's peculiarities, so it's natural to think about the heap when you think about memory. You've no doubt encountered a Java heap OutOfMemoryError caused by an object leak or by not making the heap big enough to store all your data, and have probably learned a few tricks to debug these scenarios. But as your Java applications handle more data and more concurrent load, you may start to experience OutOfMemoryErrors that can't be fixed using your normal bag of tricks: scenarios in which the errors are thrown even though the Java heap isn't full. When this happens, you need to understand what is going on inside your Java Runtime Environment (JRE).

Java applications run in the virtualized environment of the Java runtime, but the runtime itself is a native program written in a language (such as C) that consumes native resources, including native memory. Native memory is the memory available to the runtime process, as distinguished from the Java heap memory that a Java application uses. Every virtualized resource, including the Java heap and Java threads, must be stored in native memory, along with the data used by the virtual machine as it runs. This means that the limitations on native memory imposed by the host machine's hardware and operating system (OS) affect what you can do with your Java application.

This article is one of two covering the same topic on different platforms. In both, you'll learn what native memory is, how the Java runtime uses it, what running out of it looks like, and how to debug a native OutOfMemoryError. This article covers AIX and focuses on the IBM Developer Kit for Java. Many of the restrictions that a native process experiences are imposed by the hardware, not the OS.
Every computer has a processor and some random-access memory (RAM), also known as physical memory. A processor interprets a stream of data as instructions to execute; it has one or more processing units that perform integer and floating-point arithmetic as well as more advanced computations. A processor has a number of registers: very fast memory elements that are used as working storage for the calculations being performed; the register size determines the largest number that a single calculation can use.

The processor is connected to physical memory by the memory bus. The size of the physical address (the address used by the processor to index physical RAM) limits the amount of memory that can be addressed. For example, a 16-bit physical address can address from 0x0000 to 0xFFFF, which gives 2^16 = 65536 unique memory locations. If each address references a byte of storage, a 16-bit physical address would allow a processor to address 64KB of memory.

Processors are described as being a certain number of bits. This normally refers to the size of the registers, although there are exceptions (such as 390 31-bit) where it refers to the physical address size. For desktop and server platforms, this number is 31, 32, or 64; for embedded devices and microprocessors, it can be as low as 4. The physical address size can be the same as the register width but could be larger or smaller. Most 64-bit processors can run 32-bit programs when running a suitable OS.

Operating systems and virtual memory

If you were writing applications to run directly on the processor without an OS, you could use all the memory that the processor can address (assuming enough physical RAM is connected). But to enjoy features such as multitasking and hardware abstraction, nearly everybody uses an OS of some kind to run their programs. In multitasking OSs, including AIX, more than one program uses system resources, including memory. Each program needs to be allocated regions of physical memory to work in.
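The addressing arithmetic above is easy to check mechanically. The following is a minimal sketch (the class and method names are mine, for illustration only) of how address width bounds the amount of memory a processor can index:

```java
// Sketch: how address width bounds addressable memory.
// 2^bits unique addresses, one byte of storage per address.
public class AddressMath {

    // Number of unique byte addresses for a given address width in bits
    // (valid for widths below 64, which covers the examples in the text).
    static long addressableBytes(int bits) {
        return 1L << bits;
    }

    public static void main(String[] args) {
        // A 16-bit physical address: 0x0000..0xFFFF = 65536 locations (64KB).
        System.out.println("16-bit: " + addressableBytes(16) + " bytes");
        // A 32-bit address can reference 4GB, matching the 32-bit process
        // address-space ceiling discussed later in the article.
        long gb = addressableBytes(32) / (1024L * 1024 * 1024);
        System.out.println("32-bit: " + gb + "GB");
    }
}
```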
It's possible to design an OS such that every program works directly with physical memory and is trusted to use only the memory it has been given. Some embedded OSs work like this, but it's not practical in an environment consisting of many programs that are not tested together, because any program could corrupt the memory of other programs or the OS itself.

Virtual memory allows multiple processes to share physical memory without being able to corrupt one another's data. In an OS with virtual memory (such as AIX and many others), each program has its own virtual address space: a logical region of addresses whose size is dictated by the address size on that system (so 31, 32, or 64 bits for desktop and server platforms). Regions in a process's virtual address space can be mapped to physical memory, to a file, or to any other addressable storage. The OS can move data held in physical memory to and from a swap area when it isn't being used, to make the best use of physical memory. When a program tries to access memory using a virtual address, the OS in combination with on-chip hardware maps that virtual address to the physical location. That location could be physical RAM, a file, or the swap partition. If a region of memory has been moved to swap space, it's loaded back into physical memory before being used.

Each instance of a native program runs as a process. On AIX, a process is a collection of information about OS-controlled resources (such as file and socket information), a virtual address space, and at least one thread of execution.

Although a 32-bit address can reference 4GB of data, a program is not given the entire 4GB address space for its own use. As with other OSs, the address space is divided into sections, only some of which are available for a program to use; the OS uses the rest. Compared to Windows and Linux, the AIX memory model is more complicated and can be tuned more precisely. The AIX 32-bit memory model is divided and managed as sixteen 256MB segments.
Figure 2 shows the layout of the default 32-bit AIX memory model. The user program can directly control only 12 out of 16 segments: 3 out of 4GB. The most significant restriction is that the native heap and all thread stacks are held in segment 2.

To accommodate programs with larger data requirements, AIX provides the large memory model. The large memory model allows a programmer or a user to annex some of the shared/mapped segments for use as native heap, either by supplying a linker option when the executable is built or by setting the LDR_CNTRL environment variable before the program is started. To enable the large memory model at run time, set LDR_CNTRL=MAXDATA=0xN0000000 where N is between 1 and 8. Any value outside this range will cause the default memory model to be used. In the large memory model, the native heap starts at segment 3; segment 2 is used only for the primordial (initial) thread stack.

When you use the large memory model, the segment allocation is static; that is, if you request four data segments (for 1GB of native heap) but then allocate only one segment (256MB) of native heap, the other three data segments are unavailable for memory mapping.

If you want a native heap larger than 2GB and you are running AIX 5.1 or later, you can use the AIX very large memory model. The very large memory model, like the large memory model, can be enabled for an executable at compile time with a linker option or at run time using the LDR_CNTRL environment variable. To enable the very large memory model at run time, set LDR_CNTRL=MAXDATA=0xN0000000@DSA where N is between 0 and D if you use AIX 5.2 or greater, or between 1 and A if you are using AIX 5.1.
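The sizing behind MAXDATA is simple segment arithmetic. Here is a small sketch (class and method names are mine) that models the large memory model's capacity for a given N; it only reproduces the calculation from the text and does not change any process settings:

```java
// Sketch: the arithmetic behind LDR_CNTRL=MAXDATA=0xN0000000 in the AIX
// large memory model. Each AIX segment is 256MB; N selects how many
// segments are annexed for the native heap.
public class MaxDataSegments {
    static final long SEGMENT_SIZE = 256L * 1024 * 1024; // 256MB per segment

    // Native heap capacity, in bytes, for a given N in MAXDATA=0xN0000000.
    static long nativeHeapBytes(int n) {
        // Values outside 1..8 fall back to the default memory model.
        if (n < 1 || n > 8) {
            throw new IllegalArgumentException("large memory model expects N in 1..8");
        }
        return n * SEGMENT_SIZE;
    }

    public static void main(String[] args) {
        // N=4 annexes four segments: 1GB of native heap starting at segment 3.
        System.out.println("N=4 -> " + nativeHeapBytes(4) / (1024 * 1024) + "MB");
        // N=8 gives the large-model maximum of 2GB.
        System.out.println("N=8 -> " + nativeHeapBytes(8) / (1024 * 1024) + "MB");
    }
}
```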
The value of N specifies the number of segments that can be used for native heap but, unlike in the large memory model, these segments can be used for mmapping if necessary. The IBM Java runtime uses the very large memory model unless it's overridden with the LDR_CNTRL environment variable.

Setting N between 1 and A will use the segments between 3 and C for native storage, as you would expect. From AIX 5.2, setting N to B or higher changes the memory layout: it no longer uses segments D and F for shared libraries, and allows them to be used for native storage or mmapping. Setting N to D gives the maximum 13 segments (3.25GB) of native heap. Setting N to 0 allows segments 3 through F to be used for mmapping; the native heap is held in segment 2.

A native memory leak or excessive native memory use will cause different problems depending on whether you exhaust the address space or run out of physical memory. Exhausting the address space typically happens only with 32-bit processes, because the maximum 4GB of address space is easy to allocate. A 64-bit process has a user space of hundreds or thousands of gigabytes and is hard to fill up even if you try. If you do exhaust the address space of a Java process, the Java runtime can start to show the odd symptoms I'll describe later in the article. When running on a system with more process address space than physical memory, a memory leak or excessive use of native memory will force the OS to swap out some of the virtual address space. Accessing a memory address that has been swapped is a lot slower than reading a resident (in physical memory) address, because it must be loaded from the hard drive. If you are simultaneously trying to use so much RAM-backed virtual memory that your data cannot be held in physical memory, the system will thrash; that is, it will spend most of its time copying memory back and forth from swap space.
When this happens, the performance of the computer and the individual applications will become so poor that the user can't fail to notice there's a problem. When a JVM's Java heap is swapped out, the garbage collector's performance becomes extremely poor, to the extent that the application may appear to hang. If multiple Java runtimes are in use on a single machine at the same time, the physical memory must be sufficient to fit all of the Java heaps.

How the Java runtime uses native memory

The Java runtime is an OS process that is subject to the hardware and OS constraints I outlined in the preceding section. Runtime environments provide capabilities that are driven by some unknown user code; that makes it impossible to predict which resources the runtime environment will require in every situation. Every action a Java application takes inside the managed Java environment can potentially affect the resource requirements of the runtime that provides that environment. This section describes how and why Java applications consume native memory.

The Java heap and garbage collection

The Java heap is the area of memory where objects are allocated. The IBM Developer Kits for Java Standard Edition have one physical heap, although some specialist Java runtimes, such as IBM WebSphere Real Time, have multiple heaps. The heap can be split up into sections, such as the IBM gencon policy's nursery and tenured areas. Most Java heaps are implemented as contiguous slabs of native memory.

The heap's size is controlled from the Java command line using the -Xmx and -Xms options (mx is the maximum size of the heap, ms is the initial size). Although the logical heap (the area of memory that is actively used) will grow and shrink according to the number of objects on the heap and the amount of time spent in garbage collection (GC), the amount of native memory used remains constant and is dictated by the -Xmx value: the maximum heap size.
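The distinction between the logical heap and the reserved -Xmx maximum can be observed from inside a program using the standard java.lang.Runtime API; this is a minimal sketch, and the exact values reported are implementation-dependent:

```java
// Sketch: observing the logical heap vs. the -Xmx ceiling at run time.
// maxMemory() reports (roughly) the -Xmx maximum that is reserved up
// front; totalMemory() reports the current logical heap; the gap between
// them is address space that is reserved but not yet committed.
public class HeapSizes {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long max   = rt.maxMemory();   // maximum the heap may grow to
        long total = rt.totalMemory(); // current logical heap size
        long free  = rt.freeMemory();  // unallocated space within the logical heap

        System.out.println("max=" + max + " total=" + total + " free=" + free);
        System.out.println("used=" + (total - free));
    }
}
```

Run with different -Xmx/-Xms settings (for example, -Xms64m -Xmx512m) to see the logical heap start small while the maximum stays fixed.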
The memory manager relies on the heap being a contiguous slab of memory, so it's impossible to allocate more native memory when the heap needs to expand; all heap memory must be reserved up front. Reserving native memory is not the same as allocating it. When native memory is reserved, it is not backed with physical memory or other storage. Although reserving chunks of the address space will not exhaust physical resources, it does prevent that memory from being used for other purposes. A leak caused by reserving memory that is never used is just as serious as leaking allocated memory. The IBM garbage collector on AIX minimises the use of physical memory by decommitting (releasing the backing storage for) sections of the heap as the used area of heap shrinks.

For most Java applications, the Java heap is the largest user of process address space, so the Java launcher uses the Java heap size to decide how to configure the address space. Table 2 lists the default memory model configuration for different ranges of heap size. You can override the memory model by setting the LDR_CNTRL environment variable yourself before starting the Java launcher. If you are embedding the Java runtime or writing your own launcher, you will need to configure the memory model yourself, either by specifying the appropriate linker flag or by setting LDR_CNTRL before starting your launcher.

The Just-in-time (JIT) compiler

The JIT compiler compiles Java bytecode to optimised native binary code at run time. This vastly improves the run-time speed of Java runtimes and allows Java applications to run at speeds comparable to native code. Compiling bytecode uses native memory (in the same way that a static compiler such as gcc requires memory to run), but the output from the JIT (the executable code) must also be stored in native memory.
Java applications that contain many JIT-compiled methods use more native memory than smaller applications.

Classes and classloaders

Java applications are composed of classes that define object structure and method logic. They also use classes from the Java runtime class libraries (such as java.lang.String) and may use third-party libraries. These classes need to be stored in memory for as long as they are being used. The IBM implementation from Java 5 onward allocates slabs of native memory for each classloader to store class data in.

The shared-classes technology in Java 5 and above maps an area of shared memory into the address space where read-only (and therefore shareable) class data is stored. This reduces the amount of physical memory required to store class data when multiple JVMs run on the same machine. Shared classes also improve JVM start-up time. The shared-classes system maps a fixed-size area of shared memory into the address space. The shared class cache might not be completely occupied or might contain classes that you are not currently using (that have been loaded by other JVMs), so it's quite likely that using shared classes will occupy more address space (although less physical memory) than running without shared classes. It's important to note that shared classes doesn't prevent classloaders from being unloaded, but it does cause a subset of the class data to remain in the class cache. See Resources for more information about shared classes.

Loading more classes uses more native memory. Each classloader also has a native-memory overhead, so having many classloaders that each load one class uses more native memory than having one classloader that loads many classes. Remember that it's not only your application classes that need to fit in memory; frameworks, application servers, third-party libraries, and Java runtimes contain classes that are loaded on demand and occupy space. The Java runtime can unload classes to reclaim space, but only under strict conditions.
It's impossible to unload a single class; classloaders are unloaded instead, taking all the classes they loaded with them. A classloader can be unloaded only if:

- The Java heap contains no references to the java.lang.ClassLoader object that represents that classloader.
- The Java heap contains no references to any of the java.lang.Class objects that represent classes loaded by that classloader.
- No objects of any class loaded by that classloader are alive (referenced) on the Java heap.

It's worth noting that the three default classloaders that the Java runtime creates for all Java applications (bootstrap, extension, and application) can never meet these criteria; therefore, any system classes (such as java.lang.String) or any application classes loaded through the application classloader can't be released.

Even when a classloader is eligible for collection, the runtime collects classloaders only as part of a GC cycle. The IBM gencon GC policy (enabled with the -Xgcpolicy:gencon command-line argument) unloads classloaders only on major (tenured) collections. If an application running the gencon policy creates and releases many classloaders, you can find that large amounts of native memory are held by collectable classloaders in the period between tenured collections. See Resources to find out more about the different IBM GC policies.

It's also possible for classes to be generated at run time, without you necessarily realising it. Many JEE applications use JavaServer Pages (JSP) technology to produce Web pages. Using JSP generates a class for each .jsp page executed; these classes last the lifetime of the classloader that loaded them, typically the lifetime of the Web application.

Another common way to generate classes is by using Java reflection. When using the java.lang.reflect API, the Java runtime must connect the methods of a reflecting object (such as java.lang.reflect.Field) to the object or class being reflected on.
This accessor can use the Java Native Interface (JNI), which requires very little setup but is slow to run, or it can build a class dynamically at run time for each object type you want to reflect on. The latter method is slower to set up but faster to run, making it ideal for applications that reflect on a particular class often. The Java runtime uses the JNI method the first few times a class is reflected on, but after being used a number of times, the accessor is inflated into a bytecode accessor, which involves building a class and loading it through a new classloader. Doing lots of reflection can cause many accessor classes and classloaders to be created.
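The kind of repeated reflective access that can trigger accessor inflation looks like the sketch below. The class and method names are mine; whether and when the JNI accessor is replaced by a generated bytecode accessor (and its own classloader) is an implementation detail of the JVM, not something this code controls or observes directly:

```java
import java.lang.reflect.Field;

// Sketch: repeated reflective field access of the kind described above.
// Each f.getInt(null) call goes through the runtime's field accessor;
// after enough uses, the runtime may "inflate" it into a generated
// bytecode accessor loaded through a fresh classloader, consuming
// native memory for the new class data.
public class ReflectionInflation {
    public static int counter = 42;

    // Read the field reflectively `repeats` times and sum the results.
    static int sumViaReflection(int repeats) {
        try {
            Field f = ReflectionInflation.class.getField("counter");
            int total = 0;
            for (int i = 0; i < repeats; i++) {
                total += f.getInt(null); // static field, so no receiver needed
            }
            return total;
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // 20 reads of the same field; enough, on some JVMs, to pass the
        // inflation threshold for this accessor.
        System.out.println(sumViaReflection(20)); // prints 840
    }
}
```

A program that reflects on many distinct classes in this way can therefore accumulate many generated accessor classes and classloaders, each with its own native-memory footprint.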