WebLogic Performance Tuning Study Notes

These notes cover performance tuning strategies for WebLogic Server, including in-memory session replication, JVM heap configuration, execute queue management, thread pool settings, and socket connection optimization. Configuring these parameters properly can significantly improve application performance on a WebLogic server.


WebLogic Performance Tuning

In-memory replication is almost 10 times faster than JDBC-based persistence. Whenever possible, you should place fewer, coarse-grained objects into the HTTP session, rather than many fine-grained objects.

 

JVM in WLS

The Java heap represents the memory space for all runtime objects. At any time, the heap consists of live objects, dead objects, and free memory. When an object is no longer referenced by anything, it is considered “garbage” and is ready to be reclaimed as free memory. Garbage collection is the JVM’s way of managing the heap; it refers to the process of reclaiming unused Java objects from the heap. The JVM heap size affects how frequently garbage collection runs and how long each collection takes.

 

The JVM heap is divided into two areas: young and old. The young generation area is further subdivided into Eden and two survivor spaces (of equal size). Eden refers to the area where new objects are allocated. After a pass of the JVM’s garbage collector, all surviving live objects are copied into one of the two survivor spaces. On successive garbage collections, objects are copied between these survivor spaces until they have survived long enough to exceed the tenuring threshold, at which point they are promoted to the old generation area.

 

By setting aside separate memory pools to hold objects of different generations, the garbage collector needs to run in each generation only when it fills up. You can therefore fine-tune the JVM’s garbage collection by relying on the fact that the majority of objects die young, thereby optimizing the garbage collection cycles. However, improperly sized generations can also degrade garbage collection performance.

 

The -XX:NewSize and -XX:MaxNewSize options let you specify the minimum and maximum sizes of the new generation area. As a general rule, you should ensure the size of the new generation area is one-fourth of the maximum heap size. If you have a large number of short-lived objects, you may also consider increasing the size of the new generation area. As mentioned earlier, the new generation area is further subdivided into Eden and two survivor spaces of equal size. Use the -XX:SurvivorRatio=Y option to set the ratio of Eden to each survivor space; 8 is a good starting point. You can then monitor the frequency and duration of garbage collection during the lifetime of the server and adjust this setting accordingly.
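
As a concrete illustration, here is a minimal sketch of a startup command line using the flags described above; the 512 MB heap, the 128 MB new generation (one-fourth of the maximum heap), and the survivor ratio of 8 are assumed example values, not recommendations:

    # fixed 512 MB heap, new generation sized to 1/4 of it, Eden/survivor ratio of 8
    # -verbose:gc prints each collection so you can watch its frequency and duration
    java -Xms512m -Xmx512m \
         -XX:NewSize=128m -XX:MaxNewSize=128m \
         -XX:SurvivorRatio=8 \
         -verbose:gc \
         weblogic.Server

Watching the -verbose:gc output over time is the simplest way to decide whether the new generation size or the survivor ratio needs adjusting.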

 

In order to enable the HotSpot VM, you may need to explicitly pass additional options on the Java command line. On Windows platforms, use the -hotspot or -client option to select the HotSpot VM for client-side applications, and use the -server option to enable the HotSpot VM for server-side applications.
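
For example (a sketch; which of these flags a given JDK accepts depends on its version and platform, the heap sizes are illustrative, and MyClientApp is a placeholder class name):

    # client-side HotSpot VM (older Windows JDKs also accept -hotspot)
    java -client -Xms256m -Xmx256m MyClientApp

    # server-side HotSpot VM, the usual choice for a WebLogic server instance
    java -server -Xms512m -Xmx512m weblogic.Server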

 

In some cases, the generational garbage collector may not give you optimal performance. For example, if you have deployed WebLogic Server on a multi-CPU machine, using JRockit and its parallel garbage collector may yield better results.
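
A sketch of such a launch; -Xgc:parallel is the JRockit-specific flag as I recall it for JVMs of that era, so treat the flag name as an assumption and confirm it against your JRockit release:

    # run WebLogic on JRockit with its parallel garbage collector
    $JROCKIT_HOME/bin/java -Xms512m -Xmx512m -Xgc:parallel weblogic.Server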

 

WebLogic can log low memory conditions automatically. It samples the available free memory a set number of times during a fixed time interval. At the end of each interval, it computes the average free memory for that interval. If the average drops by a user-configured amount after any sample interval, WebLogic logs a warning message indicating the low memory and changes the server’s health status to “warning”. In addition, WebLogic logs a warning message if at any time the average free memory after any interval drops below 5% of the initial free memory.
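
The sampling interval, sample count, and thresholds are exposed as attributes of the server. A config.xml sketch follows; the attribute names are quoted from memory of the 8.1-era ServerMBean and the values are illustrative, so verify both against your release:

    <!-- sampling interval in seconds, samples per interval, and low-memory thresholds in percent -->
    <Server Name="myserver"
            LowMemoryTimeInterval="3600"
            LowMemorySampleSize="10"
            LowMemoryGranularityLevel="5"
            LowMemoryGCThreshold="5"/>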

 

You can use the Administration Console to explicitly request garbage collection on a particular WebLogic instance. On many Unix platforms, the JVM has a choice between two threading models: green threads and native threads. Green threads are Java threads provided by the JVM itself. Native threads are kernel-based threads provided by the OS on which the JVM runs.

 

Green threads are lightweight and have a smaller memory footprint than native threads. A JVM typically runs multiple green threads within a single OS thread. Therefore, green threads enable the JVM to optimize thread-management tasks such as scheduling, switching, and synchronization. Alternatively, a JVM can rely on native threads directly, in which case each JVM thread maps to a single OS thread. Native threads have a larger memory footprint and also incur a higher overhead during creation and context switching. It also means that the number of concurrent JVM threads is limited by the number of processes/threads the kernel supports.

 

Native threads offer better performance on multi-CPU machines because they can benefit from the OS support for thread scheduling and load balancing across multiple CPUs. On a single-CPU machine, green threads probably offer better performance for most applications.

 

Most JVMs for Unix platforms provide a -native or -green option that lets you choose the threading model for your application.
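
For instance (a sketch; these flags exist only on JVMs that still ship both threading models):

    # green threads: many Java threads multiplexed onto a single OS thread
    java -green weblogic.Server

    # native threads: one kernel thread per Java thread, usually better on multi-CPU machines
    java -native weblogic.Server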

 

Execute Queues

In WebLogic, an execute queue represents a named collection of worker threads for a server. Dedicated execute queues can be made available to Servlets, JSPs, EJBs, and RMI objects deployed to the server. Any task that is assigned to a WebLogic instance is first placed into an execute queue and then assigned to an available server thread on a first-come-first-served basis.

 

The default execute queue, weblogic.kernel.default, is preconfigured for each WebLogic instance. If a server is in development mode, this queue defaults to 15 threads. If the server is in production mode, it defaults to 25 threads. Unless you create additional execute queues and assign them to specific applications, all web applications and RMI objects rely on this default execute queue.

 

WebLogic also depends on two queues reserved for administration purposes only. The weblogic.admin.HTTP execute queue belongs to the Administration Server and is dedicated to the Administration Console. The weblogic.admin.RMI execute queue is available on the Administration Server and all Managed Servers. It too is reserved for administration traffic only. Neither of these execute queues can be configured in any way.

 

You can use the Administration Console to configure a new execute queue for a server. Settings include Queue Length, Queue Length Threshold Percent, Thread Count, Threads Minimum, Threads Maximum, and Thread Priority. Once you’ve applied your changes to the settings for the new execute queue, you need to reboot the associated server in order for the changes to take effect.
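
Behind the scenes, these settings end up as an ExecuteQueue element in the server’s config.xml. A hand-written sketch with illustrative values (the server and queue names are made up, and the attribute names should be checked against your WebLogic release):

    <Server Name="myserver" ListenPort="7001">
      <!-- custom queue: 15 worker threads, warn at 90% of a 65536-entry queue,
           grow to at most 20 threads and never shrink below 5 -->
      <ExecuteQueue Name="MyAppQueue"
                    ThreadCount="15"
                    QueueLength="65536"
                    QueueLengthThresholdPercent="90"
                    ThreadsMinimum="5"
                    ThreadsMaximum="20"
                    ThreadPriority="5"/>
    </Server>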

 

By default, all applications deployed to WebLogic (except JMS producers and consumers) use the server’s default execute queue. WebLogic lets you assign an execute queue to servlets, JSPs, EJBs, and RMI objects. In order to associate the execute queue with a servlet (or JSP), you need to specify the wl-dispatch-policy initialization parameter for the servlet or JSP in the web.xml file. In order to assign an execute queue to an RMI object, you must specify the -dispatchPolicy option when using WebLogic’s RMI compiler (rmic). The same applies to EJBs.
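
For example, assuming the custom queue from the previous sketch is named MyAppQueue and the class names are placeholders:

    <!-- web.xml: route MyServlet's requests to the MyAppQueue execute queue -->
    <servlet>
      <servlet-name>MyServlet</servlet-name>
      <servlet-class>com.example.MyServlet</servlet-class>
      <init-param>
        <param-name>wl-dispatch-policy</param-name>
        <param-value>MyAppQueue</param-value>
      </init-param>
    </servlet>

    # RMI object: bind the dispatch policy at compile time
    java weblogic.rmic -dispatchPolicy MyAppQueue com.example.MyRMIImpl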

 

You can monitor the active queues using the Administration Console, which shows the state of each execute queue, including the queue length, the number of idle threads, and the throughput. For instance, if your queue length increases over time while your throughput stays the same, new incoming requests will have to wait longer and longer for the next available thread. Some tuning will undoubtedly be needed in these circumstances.

 

Server Threads

The runtime behavior of an execute queue is influenced by a number of parameters, and optimal server performance depends on the optimal configuration of these parameters:

  • The thread count for an execute queue
  • The percentage of execute threads that will act as socket readers
  • How execute queues react to overflow conditions
  • How often WebLogic detects “stuck” threads in the queue
  • The size of the wait queue, or the number of TCP connections that are accepted before WebLogic refuses additional requests

 

The thread count for an execute queue determines the number of allocated threads that can be used concurrently to serve application tasks that utilize the execute queue. Be careful when allocating more server threads: threads are valuable resources, and they consume memory. Too many execute threads means more memory consumption and more context switching among the available threads.

 

Ideally, a WebLogic instance should rely on the native I/O implementation provided for the server’s OS. If you must rely on WebLogic’s pure-Java socket reader implementation, you can still improve the performance of socket communication by configuring the appropriate number of execute threads that will act as socket readers for each server (and client). Use the ThreadPoolPercentSocketReaders attribute to specify the maximum percentage of execute threads that are used to read incoming requests on a socket. By default, WLS allocates 33% of the server threads to act as socket readers; the optimal value is again application-specific.
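
For example, to let half of a server’s execute threads act as socket readers (an illustrative value), set the attribute on the Server element in config.xml or through the console:

    <!-- 50% of execute threads read incoming socket data (default is 33%) -->
    <Server Name="myserver" ThreadPoolPercentSocketReaders="50"/>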

 

You can even configure the number of socket reader threads for the JVM on which a client runs. Use the -Dweblogic.ThreadPoolSize option to specify the size of the client’s thread pool.
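
A sketch of a client launch with illustrative values; the client class name is a placeholder, and the socket-reader property name mirrors the server-side attribute, so treat it as an assumption worth verifying:

    # 8 execute threads in the client JVM, half of them dedicated to reading sockets
    java -Dweblogic.ThreadPoolSize=8 \
         -Dweblogic.ThreadPoolPercentSocketReaders=50 \
         com.example.MyRMIClient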

 

When the queue length threshold is exceeded, WLS changes its health status to “warning” and optionally allocates additional worker threads to reduce the workload on the execute queue.

 

WebLogic automatically detects when a thread assigned to an execute queue becomes “stuck”. A thread becomes stuck when it cannot complete its current task or accept new tasks. The server logs a message every time it detects a stuck thread. Recall that the Node Manager can be configured to automatically restart a Managed Server whose health status is “critical”. By default, WebLogic marks a thread as stuck if it has been busy for 600 seconds.
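
Both the busy-time limit and how often WebLogic checks for stuck threads are configurable per server. A config.xml sketch; the attribute names are quoted from memory of the ServerMBean, so confirm them against your release:

    <!-- flag a thread as stuck after 600 busy seconds, checking once per minute -->
    <Server Name="myserver"
            StuckThreadMaxTime="600"
            StuckThreadTimerInterval="60"/>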

 

Socket Connections

Always use the native socket reader implementation, because it improves the performance of socket-based communication considerably. The pure-Java socket reader implementation uses threads that must actively poll all open sockets to determine whether they contain data that needs to be read.

 

The pure-Java socket reader implementation can be optimized somewhat by ensuring that there are always enough reader threads available on a WebLogic instance. The number of reader threads necessary for a WebLogic instance is highly contextual. If the server belongs to a combined-tier cluster, each server will potentially open a maximum of two sockets, which it needs in order to replicate session state.

 

If a cluster has a pinned object, you will need an additional socket reader on each member of the cluster, which can be used to reach that pinned object. If you’ve designed a multi-tier application setup, each member of the presentation tier will potentially communicate with each member of the object tier.

 

By default, WebLogic accepts 50 TCP connections before refusing additional connection requests. The maximum value for this setting is OS-dependent. If clients keep getting “connection refused” messages when trying to access the server, it may be because your accept backlog is not large enough.
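
To enlarge the backlog, raise the AcceptBacklog attribute on the server; the value below is only illustrative, and the OS may cap what actually takes effect:

    <!-- accept up to 100 pending TCP connections before refusing new ones (default 50) -->
    <Server Name="myserver" AcceptBacklog="100"/>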

 

 
