jPOS applications have a very low memory footprint, so we are usually fine running a simple
java -server -jar jpos.jar
But for more memory-intensive applications (due to caching), such as jCard, the stop-the-world Full GC becomes a problem, freezing the JVM for five seconds or more.
If your response time is usually good but from time to time you have an inexplicable spike, I suggest you add:
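The exact flags depend on your JVM version; on the classic HotSpot JVMs (Java 7/8) something like this works (on Java 9+ these were replaced by unified `-Xlog:gc*` logging; the log file name is just an example):

```shell
java -server \
     -verbose:gc \
     -XX:+PrintGCDetails \
     -XX:+PrintGCDateStamps \
     -Xloggc:log/gc.log \
     -jar jpos.jar
```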
that will create a nice log file that you can
tail -f to get some insight into the JVM GC whereabouts.
If you see too many Full GCs, it's time to dig deeper. The first thing to check is your
-Xmx. If you set a high
-Xmx (say 1GB) but don't set
-Xms to the same value, the JVM will try to shrink the heap back toward its initial size, even when you are nowhere near the maximum, causing frequent Full GCs. You may want to try:
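For example, pinning both to the same value (1GB here, matching the figure used above):

```shell
java -server -Xms1G -Xmx1G -jar jpos.jar
```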
and check how the GC log looks.
If that solves the problem, leave the JVM default options alone; you're done for now. If it doesn't, I suggest you change your GC implementation. My choice is the Concurrent Mark Sweep (CMS) collector, which you can enable using:
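CMS is enabled with a single flag (note it was deprecated in Java 9 and removed in Java 14, where G1/ZGC take its place):

```shell
java -server -Xms1G -Xmx1G -XX:+UseConcMarkSweepGC -jar jpos.jar
```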
This will probably solve your problem, but while you are at it, I suggest some of these:
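The original list isn't shown here; common CMS companions (my picks, not necessarily the author's exact set) include:

```shell
-XX:+UseParNewGC                        # parallel young-gen collector that pairs with CMS
-XX:+CMSParallelRemarkEnabled           # parallelize the remark phase
-XX:CMSInitiatingOccupancyFraction=70   # start CMS cycles at 70% old-gen occupancy
-XX:+UseCMSInitiatingOccupancyOnly      # use that threshold only, no heuristics
```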
jPOS' Q2 forces a full GC (calling
System.gc()) when re-deploying jars in the
deploy/lib directory. You could be using RMI that also initiates a distributed GC calling
System.gc() too, or an operator could be fascinated by the GC button in his JMX console. To prevent a stop-the-world Full GC in those situations, use:
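With CMS, the usual flag is:

```shell
-XX:+ExplicitGCInvokesConcurrent   # run System.gc() as a concurrent CMS cycle
```

The alternative, -XX:+DisableExplicitGC, ignores System.gc() calls entirely, but that would also defeat Q2's deliberate post-redeploy collection, so the concurrent variant is the gentler choice here.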
If you care about processing transactions fast after a restart, you may set:
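A common knob for this is lowering the JIT compile threshold so hot code paths get compiled sooner after startup (the value below is my illustration, not necessarily the author's):

```shell
-XX:CompileThreshold=100   # compile methods after 100 invocations instead of the default
```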
and to get some of the latest JVM optimizations:
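On the Java 6-8 JVMs this post appears to target, that flag was:

```shell
-XX:+AggressiveOpts   # enable point-release performance optimizations early (removed in Java 11+)
```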
The full list of my jCard JAVA_OPTS p0rn is:
-jar jcardinternal-2.0.0-SNAPSHOT.jar "$@"
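The original option list was lost here; assembled from the flags discussed above, a sketch might look like this (heap sizes and thresholds are illustrative, not the author's actual values):

```shell
java -server \
     -Xms1G -Xmx1G \
     -verbose:gc -XX:+PrintGCDetails -Xloggc:log/gc.log \
     -XX:+UseConcMarkSweepGC \
     -XX:+ExplicitGCInvokesConcurrent \
     -XX:CompileThreshold=100 \
     -XX:+AggressiveOpts \
     -jar jcardinternal-2.0.0-SNAPSHOT.jar "$@"
```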