A quick overview of JVM Architecture

In this article we are going to take a quick look at the JVM architecture. This is very useful if you are a developer or a performance engineer who needs to monitor the JVM and investigate issues inside it. Our main goal is to understand its inner components, workflows, and key monitoring items.

First of all, JVM stands for Java Virtual Machine. It ships with the JDK/JRE. The JVM is the runnable instance that executes Java byte code, while the JRE is the package that contains the JVM, the Java class libraries, and the other necessary runtime items. So, the JVM is a part of the JRE. When an application launches, it gets its own JVM instance to run in.
JVM Components: The JVM has three main components:
1. Class Loader
2. Runtime Data Area (memory)
3. Execution Engine

There is another very necessary component alongside these: the Java Native Interface (JNI), which talks directly to the OS. On Windows, if you open the JRE directory and look in bin, you can see OS-dependent DLLs and native method libraries.

[Note: Because of these (JNI and the native libraries), Java can actually load OS-runnable native code such as C++ or .NET libraries.]
Now, let's look at the responsibilities and classification of each component in detail.
1. Class Loader :  
From the name alone we can guess what it does: it loads classes (byte code) for the execution engine. Before a class can be used, it has to pass some checks, and only then is it loaded into memory and handed over to the execution engine.

A. Load: Loads byte code (class/JAR/WAR/EAR) into memory from local or remote sources. There are three main types of class loaders that do this.
Types of class loaders:

1. Bootstrap CL: Loads Java's internal classes. On Windows, if you open the JRE directory (if you have the JDK, the JRE sits inside it), you can see rt.jar in the lib directory, which contains the core packages.

2. Extension CL: Loads JRE library JARs from \jre\lib\ext.

3. Application CL: Loads byte code (class/JAR/WAR/EAR) following the CLASSPATH or -cp parameters. This is our main application loader. It delegates to the extension CL when loading application-dependent libraries.
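
A minimal sketch to see this hierarchy from code (the class name is illustrative): calling getClassLoader() on one of our own classes returns the application class loader, while a JDK core class reports null because it was loaded by the bootstrap loader.

// ClassLoaderDemo.java - prints which loader loaded what
public class ClassLoaderDemo {
    public static void main(String[] args) {
        // our own class: loaded by the application (system) class loader
        System.out.println("ClassLoaderDemo : " + ClassLoaderDemo.class.getClassLoader());
        // JDK core class: loaded by the bootstrap loader, reported as null
        System.out.println("java.lang.String: " + String.class.getClassLoader());
        // parent of the application loader (extension/platform loader)
        System.out.println("Parent loader   : " + ClassLoaderDemo.class.getClassLoader().getParent());
    }
}
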
B. Link: Links the loaded classes before they are finalized for the execution engine. The responsibilities can be described in three major parts.

i. Verify/Validate: It checks that the byte code loaded by the class loader is valid, i.e. that it conforms to the Java class file (byte code) specification.

ii. Prepare: In this step, memory is allocated for the class itself. This covers class-level data only, i.e. static fields and class metadata, not instances. All allocations are initialized to the default values of their Java data types.

iii. Resolve: In this step, all symbolic references are resolved. That means all members and all outgoing references (member objects) are actually bound in this step.
C. Initialize: In this phase, classes are actually initialized. That means:
-> Static initializers run here, i.e. the blocks declared with static.
-> Static values are set, i.e. the fields declared with static.
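
For example, a tiny sketch of what runs during Initialize (the class and values are illustrative):

// InitDemo.java - the static field assignment and the static block both run in the Initialize phase
public class InitDemo {
    // allocated with the default value 0 during Prepare, set to 42 during Initialize
    static int counter = 42;

    // static initializer block: runs during Initialize, before main starts
    static {
        System.out.println("Static initializer runs, counter = " + counter);
    }

    public static void main(String[] args) {
        System.out.println("main starts");
    }
}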

Note: Each step has its own runtime exceptions. I will cover Java runtime exceptions in a separate blog post.

2. Runtime Data Area (JVM Memory):

The JVM defines six types of memory areas (five basic ones), distinguished by their functions:

A. Method Area / Metaspace: This is the JVM's area for class-level data: class metadata, the class-level constant pool, reflection data, and static variables are stored here. Class loaders use this area to load and initialize classes.
This is the JVM's internal memory used for execution. It is created once per JVM.
Its default maximum size used to be 64 MB, and the tuning parameters were -XX:PermSize and -XX:MaxPermSize.
Before Java 1.8 this area was called PermGen space. PermGen was eliminated and replaced with Metaspace in Java 1.8, so the -XX:PermSize and -XX:MaxPermSize parameters only produce warning messages on 1.8 and later.

Metaspace has no memory limit imposed by the JVM by default, so it can grow into whatever native memory the operating system makes available; it is managed dynamically by HotSpot.
A high-water mark is used to induce a GC: when the committed memory of all metaspaces reaches this level, a GC is triggered.
We can use the -XX:MetaspaceSize=<size> flag to specify the initial high-water level.

Metaspace expands its native memory usage until it hits that level (which starts at MetaspaceSize). When the level is reached, a GC is performed to see if classes can be unloaded. If that frees space, the freed space is reused; if not, more native memory is committed.
After that GC, the JVM decides on the next level at which to trigger a GC.
If MetaspaceSize is set higher, fewer GCs are triggered early on.
[Note: Metaspace sizing can have a performance impact (CPU) due to how it grows and triggers GCs.]
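
As an illustration, Metaspace can also be bounded explicitly when a class-loading-heavy application should not grow native memory without limit. The sizes and myapp.jar below are just placeholders, not recommendations:

java -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=256m -jar myapp.jar

Here -XX:MetaspaceSize sets the initial high-water mark and -XX:MaxMetaspaceSize puts a hard ceiling on Metaspace growth.
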
B. Heap Space: This is the main area where objects live. Any time we create a new object, it is allocated here. The heap is a must to tune. By default the maximum heap is 1/4 of physical memory; for example, with 8 GB of RAM the max heap will be 2 GB. The heap is created once per JVM.

We can tune the heap with the following JVM parameters (an example command line follows the list):
-Xms<Size>, minimum (initial) heap size
-Xmx<Size>, maximum heap size
-XX:MinHeapFreeRatio=<Percentage>, if the percentage of free heap after a GC falls below this value, the heap is expanded
-XX:MaxHeapFreeRatio=<Percentage>, if the percentage of free heap after a GC rises above this value, the heap is shrunk to keep memory usage low when it is not needed
-XX:NewRatio=<Number>, the ratio of the old to the young generation; e.g. 2 means old:young = 2:1
-XX:NewSize=<Size>, young generation minimum (initial) size
-XX:MaxNewSize=<Size>, young generation maximum size

The heap is divided into several areas depending on which GC you select. Typically, a heap is divided into two areas based on the age of the objects they hold:
1. Young generation space: for newly created objects.
2. Old generation space: for objects that were created in the young generation but survived GC events because they were still in use.

C. Java Stack: This is similar to the C stack, a memory space used for method execution. A stack is created whenever a thread is created inside the JVM. The JVM stack stores frames; a frame consists of items such as local variables and the operand stack (it has more parts, Ref: Link). A frame is created per method invocation. I am skipping the details here (they deserve a separate blog post).
The stack works hand in hand with the heap, pushing and popping frames as the program executes.
-Xss<Size> sets the Java thread stack size.
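
A minimal sketch of how the stack fills up (the class name is illustrative): every nested call pushes a new frame, and a deep enough recursion throws StackOverflowError. Running it with a smaller stack, e.g. java -Xss256k StackDemo, makes the overflow happen at a lower depth.

// StackDemo.java - each nested call adds a frame until the thread stack is exhausted
public class StackDemo {
    static int depth = 0;

    static void recurse() {
        depth++;
        recurse();
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            System.out.println("Stack overflowed at depth " + depth);
        }
    }
}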

D. Runtime Constant Pool: This is the runtime representation of a class's constant pool table (Ref: CP). It is created per class/interface and contains the compile-time constants and symbolic references that get resolved at runtime.
Each runtime constant pool is allocated from the JVM's method area (which is why it is often not counted as a separate memory type). It is constructed for a class or interface when that class or interface is created by the JVM (i.e. loaded by the class loaders).
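
You can inspect a class's constant pool with the javap tool that ships with the JDK; for example (MyClass is an illustrative class name):

javap -verbose MyClass

The "Constant pool:" section of the output lists the numbered entries (class references, method references, string literals, and so on) that get resolved at runtime.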

E. PC Register: The Program Counter register keeps the address of the next instruction to execute. Because it holds per-thread instruction data, it is allocated per thread (the logical unit that performs tasks). So if you have more threads in your application, you will have more PC registers. Think of it as a processor register; it works almost the same way.

F. Native Method Stack: The memory used to execute native methods. It is governed by the Java Native Interface. This space is used only for native calls, so it is mostly used by the JRE itself unless our program makes native calls of its own. It is used heavily when we write applications that call native code directly. It can throw the same errors as the Java stack.


3. Execution Engine : 

This is the part of the JVM that actually performs the work. The execution engine has the following parts:

A. Interpreter: It takes byte code instructions one by one, decides what to do, and determines which native operations to invoke. Initially, all byte code is executed by the interpreter.

B. JIT Compiler: The just-in-time compiler. Its main job is to compile byte code to machine code. It also keeps the compiled instructions for further use: if the same byte code block comes up for compilation again, the JIT does not compile it a second time but serves the previously stored compiled code.

C. HotSpot Profiler: The byte code that is running and being compiled is tracked by the HotSpot profiler. It prevents the same block from being compiled twice: it tracks previously compiled machine code and instructs the JIT not to recompile it but to reuse the earlier output. This is one of the keys to JVM performance.
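
If you want to watch the JIT at work, HotSpot can print each method as it gets compiled (myapp.jar is illustrative):

java -XX:+PrintCompilation -jar myapp.jar

Each output line names a method that was just compiled to machine code; methods the profiler finds hot show up here, while methods that stay cold keep running in the interpreter.
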
D. Garbage Collector (GC): Responsible for cleaning up classes and objects that are no longer needed; it is the cleaner for all of this memory. Java GC has different algorithms such as mark and sweep, and different modes (concurrent, parallel, generational). I will provide a separate blog post on Java GC: how it works, how to choose an application-specific mode, and how to tune it.
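
The collector is also selectable with flags. A hedged example (myapp.jar is illustrative) that picks the G1 collector and logs GC events:

java -XX:+UseG1GC -verbose:gc -jar myapp.jar

Other common choices include -XX:+UseParallelGC and, on older JVMs, -XX:+UseConcMarkSweepGC.
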
The execution engine is a very critical part of the JVM. For each application's behavior, we should configure the execution engine components, together with the memory settings, so the JVM runs smoothly for that specific application.
So finally, we get the following picture:
[image: full JVM architecture diagram]

And there are lots of good images from the Internet, too.

Thanks..:)

Do you understand your web application performance?

In this article we are going to look at web application performance and what "performance" actually means. These are very general ideas, especially for managers, decision makers, or anyone who is very new to performance testing.

Do you really know what your application's performance is? I am going to break it down; you can go through each point below for understanding.

Understand your application's user load model or business model: This is very important. Before starting any performance activity, you need to know how users interact with your system, what they expect, and what your company targets. This breaks down your performance expectations. This is the non-technical part. User activity monitoring from browser tools might help you know your users better; nowadays all browsers have built-in tools for user experience measurement.

Understand your application's load: Do you know how much load your application can take? Load of what? User load and data load. By load, I mean knowing the application's highest usage model. To learn this you need simple load testing: measure the maximum number of users and the maximum data load supported. While testing the load, you stop when you start getting errors.

Understand your application's stability: Do you know how stable your application is? What does stable mean? It means your application keeps working properly under sustained conditions. From the earlier load test activity we know the application's average and high load conditions, so the question is how stable the application stays under those average/high conditions. To learn that, you need to run the test for a longer period of time. The period depends on your environment; it can be 1 hour, 2 hours, 8 hours, or even 1 week. The goal is to know the stability of the application under a given load, which is normally referred to as stress testing.

Understand your application's tolerance: Do you know what will happen if your application gets a sudden peak of users or data transactions, or suddenly low network/IO bandwidth? To learn the tolerance of your web application, you need to perform a spike test. A spike test is a sudden growth in user/data usage for a short period of time. It tests how well your system tolerates a sudden peak.

Understand your application's capacity: Understanding maximum capacity along with stability is very important. For this you need to perform load testing and stress testing together, meaning the long-term maximum supported load and the stress scenario at the same time. You will learn the capacity of each bucket (server/IO/network). This is very important in cloud environments, especially for scale-in or scale-out capacity measurement. Capacity is often measured in multiples of X; for example, your application supports 5,000 users but can scale up to 2x (2 * 5000 = 10,000 users).

Understand your application's boundary: This is boundary checking. What will happen if the application is under extreme load and stress combined with error/exception conditions? This test defines your extreme condition and boundary. Usually it is measured with capacity (X) and the expected errors. Mainly it is a kind of precaution test for expected error behavior. This is very critical for the financial domain and for security tests.

Understand your server throughput: Do you know what your server (app/data etc.) power or capacity is? This is measured by throughput. A more powerful server has better throughput. It is measured in hits per second, and often also expressed in transactions per second, where a transaction is a business-specific transaction. More throughput does not automatically mean a faster website, but it does mean more server power; fast response time has many factors, and throughput is definitely one of them. That is why all load generation tools have this measurement, and why SLAs usually specify this term explicitly.

Understand your resource usage: Do you know how many resources you have to run your web application, how much your application uses, and whether that is enough? By resources, I mean:
a. Network bandwidth
b. Disk IO
c. Memory
d. CPU
Do you monitor your servers? If not, how can you be sure you are getting the performance you need?
To monitor the whole system, you need to know the resources of each component in your environment. Most operating systems have their own monitoring tools.
You may look at the Sysinternals Suite if you are monitoring/debugging a Windows environment.
Windows also has perfmon built in, with which you can monitor the whole system.
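
For a quick command-line check on Windows, perfmon's counters can also be sampled with the built-in typeperf tool, for example:

typeperf "\Processor(_Total)\% Processor Time" -si 1 -sc 30

which samples total CPU usage every second, 30 times.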

Besides that, nowadays companies use APM solutions that cover all of your monitoring in a single application. To use an APM, you install an agent on each server and it does the job for you.
[There are some open source APMs; you need to get and configure one for your system.]

Understand your application activity: As you monitor your infrastructure, it becomes essential to monitor the application itself: application server monitoring, DB monitoring, middle layer monitoring, and so on. Each technology (.NET, Java, Ruby, Python, PHP, etc.) has its own performance counters according to its architecture and working mechanism. To pinpoint your bottlenecks and validate improvements, you must know your application activity. Each technology has its own monitoring tools, for example:
Java: Java Mission Control, VisualVM, command-line tools
.NET: PerfView and framework tools.
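
For Java, a few of the JDK's own command-line tools give a quick view without any extra installation (the <pid> is whatever jps reports for your application):

jps                        (lists running JVMs and their process ids)
jstat -gcutil <pid> 1000   (heap and GC utilization, sampled every second)
jcmd <pid> VM.flags        (the JVM flags the process is actually running with)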

And again, APM tools these days can track application activity as well, alongside server monitoring and browser activity monitoring.

These are the primary items. To dig deeper into each one, don't miss the full page dedicated to performance.

Thanks..:) 

Performance Reporting: KPI (Key Performance Indicator)

In this article we are going to look at KPIs for software, especially web applications, and get a general idea of how to represent performance through a KPI.

What is a KPI?
As the name says, KPI stands for key performance indicator: a measurement that shows the application's performance as a single value. It can be represented in graph or tabular format.

Think of a KPI as an abstract value that tells us about performance at a glance. We compare it with some expected value and use that to decide whether the application's performance is acceptable.

In general, if we think of the KPI of a ballpoint pen, the KPI should reflect the performance quality of the pen: how well it writes, how long it lasts, and so on.

So the KPI value should contain the reference values for "how well" and "how long". We measure "how well" and "how long" and then add them together to express the KPI of the pen. In the pen's case you could also add color intensity, which might change over time, to describe it completely.

Note: we can also multiply them; that decision depends on how we want to calculate the KPI. Usually multiplication emphasizes differences more strongly in a graph representation.

Let's take a web application example. In general (unless you have a specific performance target), web application performance quality is measured by:
1. How fast the response time is
2. How quickly it can perform its basic functionality (in the banking domain, for example, business transactions)
3. How few resources it takes (size and memory)

More often, since web applications are multi-tier applications, it makes sense to measure each tier's performance separately. For server performance, the usual measures are:
1. Server throughput (hits/sec)
2. Error rate (downtime %)

So, let's think about a simple company-internal application (not public facing) built on ASP.NET with an Oracle DB serving the data. If we are measuring a KPI, we should come up with values that represent its performance.

For a typical web product, the smaller the page size, the more performant it is considered to be. The same goes for bandwidth: the less it takes, the more performant.

Since we are considering an internal application, we may ignore bandwidth. So the important measurement parameters are:
>Response time
>Size of each page/request (sent + received)
And to capture server power as well, we add:
>Server throughput

If you have a load balancer and a separate layer for data communication (we are assuming we do), and the application gets different types of errors due to database and legacy system dependencies, we should skip the error rate. Including the error % would not be logical when testing the web servers only, so we leave error % out of the key performance indicator. (When we test individual servers, i.e. the web service, DB, and middle tier separately, then it is logical to include it.)

[Note: If it is a customer-facing website, we should not ignore the error rate.]

So we have three items, and the meaning of each is:
a. Response time: more response time, less performant (inversely proportional)
b. Size: more size, less performant (inversely proportional)
c. Throughput: more throughput, more performant (directly proportional)

So here, the KPI for each step/transaction = Response Time x Size x (1/Throughput)

Here you can take the inverse of throughput and then either multiply everything or add everything:
= Response Time x Size x Throughput Inverse
= Response Time + Size + Throughput Inverse
(addition can also be used to compute the value)

[Note: To keep the graph better aligned, we might have to change units, for example milliseconds to minutes/hours, or moving items into the kilo/mega range.]

So, KPI for a step/transaction in that application is
KPI = RT*Size*TrInv

If we have multiple steps in the test case (we usually do) and each has a different priority, then we can include priority in the KPI to keep things more realistic (it helps in release-to-release comparison).
Then the KPI = RT*Size*TrInv*Priority.

So, let's say the test application has 5 transactions to test, and in the current release we have tested and obtained all the KPI values from the test tool. How do we judge whether we are doing well or badly (people publish large benchmarks, which are not always useful)? In that case we should use the KPI delta:

KPI Delta = Expected KPI - KPI from test results

Here the expected KPI is the same KPI calculation applied to the expected results. (The SLA specification should be reflected in this expected KPI.)

For example, take the Login transaction. From the test results we found:
Response time = 5s (expected was 2s)
Size = 1.5 MB (expected was 1 MB)
Throughput = 20 hits per sec (expected was 25)
Priority = 5 (on a 1-10 scale)

So, KPI = (5*1.5/20)*5 = 1.875
Expected KPI = (2*1/25)*5 = 0.4

So, KPI Delta = 0.4 - 1.875 = -1.475

Since the delta is below zero, the application's performance is bad. And it really is, as we failed to achieve all the expected performance marks.
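
As a small sketch, the same Login calculation expressed in code (the class and method names are illustrative):

// KpiDemo.java - the KPI formula and delta used above, with the Login numbers from this article
public class KpiDemo {
    // KPI = (responseTime * size / throughput) * priority
    static double kpi(double responseTimeSec, double sizeMb, double throughputHps, double priority) {
        return (responseTimeSec * sizeMb / throughputHps) * priority;
    }

    public static void main(String[] args) {
        double measured = kpi(5, 1.5, 20, 5);   // 1.875
        double expected = kpi(2, 1.0, 25, 5);   // 0.4
        double delta = expected - measured;     // -1.475, below zero means performance is bad
        System.out.printf("Measured KPI = %.3f, Expected KPI = %.3f, Delta = %.3f%n",
                measured, expected, delta);
    }
}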

So, to measure application performance here, if we use the single performance metric KPI Delta, we can express the performance status in a single graph. This graph can be plotted:
a. Over time: X-axis = time, Y-axis = KPI Delta. This shows performance status as time passes.
b. Over users: X-axis = user increment, Y-axis = KPI Delta. This shows performance status as the user count increases.

Note :

1. If we have difficulties defining the expected KPI, we can take any version of the application as the standard. In that case:
KPI Delta = KPI of previous standard version - KPI from test results

2. While making test scripts, we need to arrange them so that we can measure our KPI directly from the raw results, which makes reporting faster.

3. Regarding response time, test tools can report max, min, average, and 90th percentile response time, and it can be confusing which one to choose. By nature, the 90th percentile response time is closest to real customer experience, so that is my preference. But if your application is very fast and scalable, with little initialization time, you can choose the average as well. You may choose the max when you need to evaluate performance for the worst-case scenario.

4. Scalability is also a part of performance, but it is left out of this measurement because it relates to capacity.

For web services (since they are used so widely on the web), I prefer:
KPI = Response Time * Error Rate * Size of Response * Throughput Inverse

An example:
In this example we can see the performance trends.
A total of 3 tests were performed, and test 1 is considered the standard.
There are 4 requests in total, but look at the "Ajax" request: the performance trend shows it falling below the standard mark, i.e. its performance is degrading. The other requests trend higher, which means their trend is good.


Thanks, please let me know if you have any questions....:)