
2017-01-08

Introduction to JUnitBenchmarks, a performance testing library for your unit tests

------------------

In this article we are going to see how to use JUnitBenchmarks, a library that lets us perform small performance tests on top of existing unit tests.

What is JUnitBenchmarks?
JUnitBenchmarks is a library for running your unit tests with multiple threads. It provides a simple annotation to configure this type of execution and uses a JUnit Rule to drive the multi-threaded runs.

So, the idea is simple: apply the rule and the annotation, and you can easily benchmark a test method.

You can apply these tests on Java 6, 7, and 8. For newer Java versions, the recommended tool is JMH, the OpenJDK micro-benchmarking harness, which the JUnitBenchmarks project itself now points users to.
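
For reference, here is a minimal JMH sketch of the same kind of benchmark (a hedged sketch, assuming the org.openjdk.jmh:jmh-core and jmh-generator-annprocess dependencies are on the classpath; it reuses the Calculator class shown below):

import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;

// Roughly equivalent to @BenchmarkOptions(concurrency = 2): JMH runs
// the annotated method from 2 threads and reports throughput.
@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@State(Scope.Thread)
public class CalculatorJmhBenchmark {
    private Calculator aCalculator;

    @Setup
    public void init() {
        aCalculator = new Calculator();
    }

    @Benchmark
    @Threads(2)
    public double measureSub() {
        // Returning the result prevents dead-code elimination by the JIT.
        return aCalculator.sub(10.0, 15.0);
    }
}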

Example usage:

Let's see an example of how to use it.
I am using a simple Calculator class with the following methods.
public class Calculator {
    public double add(double a, double b) {
        return a + b;
    }
    public double sub(double a, double b) {
        return a - b;
    }
    public double mul(double a, double b) {
        return a * b;
    }
    public double div(double a, double b) {
        return a / b;
    }
    public double mod(double a, double b) {
        return a % b;
    }
}

In the project, I add this Maven dependency for JUnitBenchmarks along with the other dependencies.

<dependency>
    <groupId>com.carrotsearch</groupId>
    <artifactId>junit-benchmarks</artifactId>
    <version>0.7.2</version>
</dependency>

I have written a unit test for the subtract method.

@Test
public void testSub(){
    Assert.assertEquals(-5.0, aCalculator.sub(10.0,15.0), 0.01);
}


Now, how to use it?

Step 1: Add this Rule to your test class.

@Rule
public TestRule benchmarkRun = new BenchmarkRule();

Step 2: Use this annotation on your JUnit test methods.
@BenchmarkOptions(concurrency = 2, warmupRounds = 0, benchmarkRounds = 5)

And that's it. If you run it, your test will execute with 2 concurrent threads.

Now, let's look at what we have done with the benchmark options. There are 3 parameters:

concurrency (a number): how many threads should execute your test concurrently.

warmupRounds (a number): how many times the test runs before the actual measurement starts, to initialize your environment. This is very important from a testing standpoint: JVM-based applications need some initial time to warm up (process start, class loading, JIT compilation), so adding one or two warm-up rounds keeps the timings you observe realistic. These rounds are not counted in the results; after the warm-up rounds, you get timings from an already-running environment.

benchmarkRounds (a number): how many measured iterations you want to run.

If you are a JMeter user, you may have seen similar parameters in thread groups.

Now, since the test is very small and runs very fast, I put a thread delay in the test case so I can watch it running. After adding the delay, the test class becomes:

public class PerfTest {
    protected Calculator aCalculator = null;

    @Before
    public void init() {
        aCalculator = new Calculator();
    }

    @Rule
    public TestRule benchmarkRun = new BenchmarkRule();

    @Test
    @BenchmarkOptions(concurrency = 2, warmupRounds = 0, benchmarkRounds = 5)
    public void testSub() {
        Assert.assertEquals(-5.0, aCalculator.sub(10.0, 15.0), 0.01);

        // optional delay so the run is slow enough to observe
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

And if we run this as a JUnit test from the IDE, we can see the benchmark results in the output window (default console logging).



Alternative ways to use it:

A. You can extend your test class from AbstractBenchmark, like:

public class TestCalculator extends AbstractBenchmark {...} 

This AbstractBenchmark already contains the BenchmarkRule.

B. You can also do it with your own base class containing the test rule and all common test setup. Here is an example of a base class holding the rule:

public abstract class Benchmarking {
    protected Calculator aCalculator = null;

    @Rule
    public TestRule benchmarkRun = new BenchmarkRule();

    @Before
    public void init() {
        aCalculator = new Calculator();
    }
}

And a test class extending it:

public class PerfTest_custom extends Benchmarking {

    @Test
    @BenchmarkOptions(concurrency = 2, warmupRounds = 0, benchmarkRounds = 5)
    public void testSub() {
        Assert.assertEquals(-5.0, aCalculator.sub(10.0, 15.0), 0.01);

        // delay so the test runs slowly enough to watch
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}


Example parameters:

@BenchmarkOptions(warmupRounds = 1, concurrency = 20, benchmarkRounds = 10)

It will run:
1. A single iteration for warm-up,
2. Then 20 threads will run the tests,
3. And the rounds will repeat 10 times.

So, in total it will run 20x10+1 = 201 times.

To check the number of active threads, we can call Thread.activeCount() inside the test method and print it.
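
For instance, here is a hedged variant of the earlier testSub (the method name is illustrative, and the printed count also includes JUnit's own housekeeping threads, so treat the number as indicative):

@Test
@BenchmarkOptions(concurrency = 4, warmupRounds = 0, benchmarkRounds = 8)
public void testSubWithThreadCount() {
    // Indicative only: Thread.activeCount() counts all live threads in the
    // current thread group, not just the benchmark workers.
    System.out.println("Active threads: " + Thread.activeCount());
    Assert.assertEquals(-5.0, aCalculator.sub(10.0, 15.0), 0.01);
}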

We can use this annotation at both the method level and the class level.
A class-level annotation applies to all test methods, while a method-level annotation overrides the class-level one when both are used together.

Class-level example:
 
@BenchmarkOptions(concurrency = 2, warmupRounds = 2, benchmarkRounds = 20)
public class TestCalculator extends AbstractBenchmark {
    private Calculator aCalculator = null;

    @Before
    public void init() {
        aCalculator = new Calculator();
    }

    @Test
    public void testAddition() {
        Assert.assertEquals(25.0, aCalculator.add(10.0, 15.0), 0.01);
    }

    @Test
    public void testSub() {
        Assert.assertEquals(-5.0, aCalculator.sub(10.0, 15.0), 0.01);
    }
}

Method-level example:

public class TestCalculator_MethodLevelExample extends AbstractBenchmark {
    protected Calculator aCalculator = null;

    @Before
    public void init() {
        aCalculator = new Calculator();
    }

    @Test
    @BenchmarkOptions(concurrency = 6, warmupRounds = 1, benchmarkRounds = 50)
    public void testAddition() {
        Assert.assertEquals(25.0, aCalculator.add(10.0, 15.0), 0.01);
    }

    @Test
    @BenchmarkOptions(concurrency = 8, warmupRounds = 0, benchmarkRounds = 100)
    public void testSub() {
        Assert.assertEquals(-5.0, aCalculator.sub(10.0, 15.0), 0.01);
    }
}

Note that this class in my repository has some extra items for reporting and test-delay purposes. We will look at those next.

Bonus features: result storing & reporting:

It is common practice to present performance test results as a report. Besides showing results on the command line, JUnitBenchmarks has built-in capabilities to store your test results in an H2 or MySQL database. I will show examples with the H2 database.

Step 1: I am adding the H2 database dependency to my project pom.xml.

<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>1.4.192</version>
</dependency>

Step 2: Put all property information in jub.properties and load it before the tests run. I am using a @BeforeClass method in my own base test class to load it.

@BeforeClass
public static void loadProperties() throws IOException {
    Properties p = new Properties();
    // try-with-resources so the stream is closed even if loading fails
    try (FileInputStream in = new FileInputStream(new File("src/test/resources/jub.properties"))) {
        p.load(in);
    }
    for (String k : p.stringPropertyNames()) {
        System.setProperty(k, p.getProperty(k));
    }
}

The defined properties are:
jub.ignore.annotations=false 
jub.ignore.callgc=true 
jub.consumers=CONSOLE,H2 
jub.db.file=charts/.benchmarks
jub.charts.dir=charts
jub.customkey=AddBenchMark

[You can find the other properties that can be added in my repository.]
As configured, the charts will be created in a charts folder under the working directory (not the Maven build directory).

Step 3: Add the reporting annotations to the tests.
There are 3 annotations at the test class level:

@AxisRange(min = 0, max =1)
@BenchmarkMethodChart(filePrefix = "benchmark-lists2")
@BenchmarkHistoryChart(labelWith = LabelType.RUN_ID, maxRuns = 20)


a. @AxisRange: it defines the axis range of the generated chart. We can set min & max values.


b. @BenchmarkMethodChart: here we provide the file name prefix of the report that will be generated.

c. @BenchmarkHistoryChart: it controls how many historical results from the database are included in the report. maxRuns sets how many past runs to include, and labelWith chooses the label type for each entry. I have used the run ID as the label; we can also use TIMESTAMP or our own CUSTOM_KEY. The CUSTOM_KEY must be defined in jub.properties.
For example, as you saw in my properties file, the custom key was
jub.customkey=AddBenchMark

So, in the report, I can also see this key.

 

After running several tests, my charts folder looks like this.

 

Here the .jsonp file contains the results as a JSON-type object. You can also parse it and feed your own JSON-based live dashboard (like Grafana).
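
Putting the reporting pieces together, here is a hedged sketch of a complete test class wired for H2 reporting (the class name is illustrative; it assumes the jub.properties file from Step 2 and the Calculator class from earlier):

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

import org.junit.Assert;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;

import com.carrotsearch.junitbenchmarks.AbstractBenchmark;
import com.carrotsearch.junitbenchmarks.BenchmarkOptions;
import com.carrotsearch.junitbenchmarks.annotation.AxisRange;
import com.carrotsearch.junitbenchmarks.annotation.BenchmarkHistoryChart;
import com.carrotsearch.junitbenchmarks.annotation.BenchmarkMethodChart;
import com.carrotsearch.junitbenchmarks.annotation.LabelType;

@AxisRange(min = 0, max = 1)
@BenchmarkMethodChart(filePrefix = "benchmark-lists2")
@BenchmarkHistoryChart(labelWith = LabelType.RUN_ID, maxRuns = 20)
public class ReportedPerfTest extends AbstractBenchmark {

    private Calculator aCalculator;

    // Load jub.properties once so the CONSOLE and H2 consumers are configured.
    @BeforeClass
    public static void loadProperties() throws IOException {
        Properties p = new Properties();
        try (FileInputStream in = new FileInputStream(
                new File("src/test/resources/jub.properties"))) {
            p.load(in);
        }
        for (String k : p.stringPropertyNames()) {
            System.setProperty(k, p.getProperty(k));
        }
    }

    @Before
    public void init() {
        aCalculator = new Calculator();
    }

    @Test
    @BenchmarkOptions(concurrency = 2, warmupRounds = 1, benchmarkRounds = 10)
    public void testAddition() {
        Assert.assertEquals(25.0, aCalculator.add(10.0, 15.0), 0.01);
    }
}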


Where to use this?
1. You have unit tests; use this to learn how they perform under concurrency.
Since this is unit-level performance, it will not prove anything about concurrent user actions, but it can prove concurrent request processing.
So, when you have a strict SLA, use this to validate throughput.
This type of test is not suitable for response-time SLAs, since it does not measure user-perceived time.
And, one important factor is error rate / error tolerance: this type of test can (mostly) expose a server's likelihood of errors under concurrency.

2. You have functional integration tests that validate backend requests, like DB requests or web service calls through the UI layer.
You can use this before the actual performance tests. It will help you learn how the system behaves when the parts are integrated.
Sometimes it is very useful to run alongside manual tests in a QA environment:
a number of parallel requests keep going while a manual tester is checking the application's UI behavior.
  
3. Mission-critical/business-critical data concurrency tests can easily be done with this.
For example, business transaction data concurrency testing for the banking or financial domain: this can validate data integrity when concurrent requests hit the system (synchronization, locks, etc.).
  
4. Simulate thread deadlock scenarios.

5. Simulate OOM scenarios.

6. Test an individual component, tier, or request for throughput and error %.

7. Simulate required resource capacity scenarios.

Note: while running the tests, pay attention to the resources used in the test that will be affected by concurrency. DB connections, file handles, open ports, created sessions, etc. should be handled properly when testing this way.


My GitHub project link: https://github.com/sarkershantonu/Automation-Getting-Started/tree/master/junitbenchmark

Main project link: http://labs.carrotsearch.com/junit-benchmarks-tutorial.html


----- Thanks.. :)

2015-10-19

Performance Analysis: Top-Down and Bottom-Up Approaches

In this article we are going to see basic performance analysis approaches. I will describe the top-down and the bottom-up approach. I will not compare them, as both are used for analysis; I will only try to explain the basic steps and when to choose which type. This article mostly applies to Java & .NET applications, but you may use similar approaches on other platforms too.

Top-Down Analysis:

This is the most popular method. The idea is simple: monitor application performance from the top view. That means client monitoring, then server monitoring at the OS & resource level, then the app server, then the runtime environment.
When we see unusual (or unexpected, or off-target) behavior, we profile or instrument the application. At this point we have found the problems, so we experiment with possible solutions and choose the best one.
Then we tune the system: tune the code with the best solution, tune the environment, and tune the runtime.
Finally, we need to test for the impact. Typically, before starting an analysis, we should have some measurement data (or benchmarks) from performance test results. We retest and compare with the previous results. If the improvement is significant, we are done; if not, we go back to profiling and tuning the application.

Here is a flow chart to show it at a glance: Google Drive link. Open with draw.io.



When do we use it?
-> The application is causing issues and we need to optimize the whole application or a part of it.
-> Optimize the application for given resources (CPU/disk/memory/IO/network).
-> Tune the system & application for best performance.
-> We have access to change the code and need to tune the application for a specific goal (throughput, response time, longer service, etc.).
-> We need root cause analysis for unexpected performance issues (OOM, slowness, crashes in different levels or sub-systems, unwanted application behavior primarily suspected to be performance-related, etc.).

Bottom-Up Analysis:
This is another popular approach, used when we need to tune resources or the platform (or hardware) for a specific application. Let's say you have a Java application, already deployed. Bottom-up analysis will allow you to analyze and find optimization opportunities in the deployed system, hardware, and resources. This is a very common approach for application capacity planning and for benchmarking changed environments (migrations). The key idea is to monitor the application in a specific environment and then tune the environment (software + hardware resources) so that the target application runs at top performance.

Here is a flow chart to show it at a glance: Google Drive link. Open with draw.io.



When do we use it?
-> When you need to increase performance but you can't change the source code.
-> You need to optimize the resources & environment for a specific application (the deployed environment).
-> You need a benchmark, to understand resource usage, and to find possible areas for tuning.
-> You need to optimize the runtime (JVM/CLR) for your application: you can observe resource usage and tune it to your app's needs.
-> When you need capacity planning for hardware that must run the application optimally.
-> When you have already optimized your application at the code level and there is no visible area left to tune, you can use this technique to gain some more.

Please comment if you have any questions.

Thanks.. :)

2015-09-08

Offline Java Memory Analysis: Introduction to tools for heap dump, thread dump, and GC log analysis

In this article we are going to see different tools that can be used for offline Java memory analysis. I will try to include all kinds of offline analysis tools. OS-level process memory monitoring and its tools are out of scope here.

What is offline analysis?
When we do analysis on recorded data rather than live data, it is called offline analysis.

For Java memory analysis, we mainly need three types of JVM information to get to the bottom of an issue:
1. Java heap dump: this represents the JVM heap memory state (a small programmatic capture sketch follows after this list).
2. Java core / thread dump: this shows running thread states and conditions. The core contains more detailed information. For IBM, core and thread dump are the same thing. Different JVMs may also include trace files here.
3. GC logs: these show the garbage collection history.
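
As an aside, on Oracle/HotSpot JVMs a heap dump can even be captured programmatically. Here is a minimal hedged sketch using the HotSpotDiagnosticMXBean (the output file name is illustrative):

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                server, "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // true = dump only live objects (forces a GC first); writes .hprof format.
        bean.dumpHeap("manual-dump.hprof", true);
    }
}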

I will not go into detail on each one; let's just look at the tools.

For all IBM JVMs: you need IBM Support Assistant (ISA). This is a web distribution: you run it with Java, and your PC will host all the tools together. Here is the download link. How to set up IBM ISA? Unzip it and run start_isa.bat.
Then, if you go to http://localhost:10911/isa5 in a browser, you will be redirected to the ISA home page (the link & port are configurable).


You will get 3 types of tools: JNLP web start, Eclipse plugin links, and web based. If you download the JNLP file and open it in Notepad, you can actually adjust the JVM configuration. I used to keep a separate .bat file to run the JNLP with the IBM JVM (rather than the default runtime from the environment).

Make sure your PC doesn't have a global log file set as an environment variable (if you install HP testing products, you will have one); just delete that environment variable. ISA will create its own log path variable, and then you can run it.

For all Oracle JVMs:

1. Java Mission Control (JMC): comes free with the JDK, but you need to install these tools as plugins:
a. JOverflow / Heap Analysis
b. DTrace Recorder
c. Flight Recorder plugins
d. Console plugins


2. VisualVM: comes free with the JDK. Download the useful plugins to get the best out of it.


Architecturally, JMC is based on Eclipse and VisualVM is based on NetBeans, so plugin installation follows the process of those IDEs (network configuration + tool installation).

Java Heap Analysis:

1. Eclipse MAT (including IBM IDDE): supports IBM (PHD) & Oracle JVM heap dumps.

2. VisualVM: reads Oracle hprof-format heap dumps.

3. JOverflow/Dump Analysis as plugins of Java Mission Control: reads Oracle hprof-format heap dumps and provides more detailed analysis than VisualVM.


4. Heap Analyzer (IBM only): IBM PHD-format heap analyzer. Download link.



Thread Analysis:

For IBM JVMs (javacore files and .trc files):

1. JCA (Java Core Analyzer): this is from IBM support. Download it and load text-format javacore files.


2. GCMV (Garbage Collection and Memory Visualizer), standalone or as Eclipse plugins; comes with ISA.

3. Class Loader Analyzer (IBM only): used for class loader analysis from javacore and Snap<>.trc files. It comes with ISA.

4. Trace & Request Analyzer (IBM only): used for reading Snap<>.trc files. Download link.

5. Thread & Monitor Dump Analyzer (TMDA, IBM only).


Note: .trc is a trace file; you need to enable and configure tracing to get proper information.

For Oracle JVMs:

1. .tdump format: VisualVM.

2. hs_err_pid.log format: we need to use the command line tools that come free with the JDK:
a. jstack -> prints stack traces
b. jmap -> heap memory details
Most of the time this means manual reading, since these are text files. We can use filtering; there are also small parsers, mainly shell scripts, for different filters. Here is one example.
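
And here is a small hedged sketch of such a parser in Java (the input file name is illustrative; it counts the java.lang.Thread.State lines that appear in jstack output):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Map;
import java.util.TreeMap;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ThreadDumpStats {
    public static void main(String[] args) throws IOException {
        // jstack prints one "java.lang.Thread.State: <STATE>" line per thread.
        Pattern state = Pattern.compile("java\\.lang\\.Thread\\.State: (\\w+)");
        Map<String, Integer> counts = new TreeMap<>();
        for (String line : Files.readAllLines(Paths.get("threaddump.txt"))) {
            Matcher m = state.matcher(line);
            if (m.find()) {
                counts.merge(m.group(1), 1, Integer::sum);
            }
        }
        // Many BLOCKED threads hint at lock contention; WAITING may be normal.
        counts.forEach((s, c) -> System.out.println(s + ": " + c));
    }
}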

GC Log Analysis:

Since there are different JVMs implemented from OpenJDK, GC logs also have different formats. The most commonly used are the IBM & Oracle formats. Each JVM can produce different types of GC logs, but they mostly follow the general (text) format. Sometimes an old IBM verbose GC log might not be compatible with logs generated by the latest OpenJDK or Oracle JVMs. I will go into detail on how to generate these logs in a separate blog post.

For reading Oracle (default: Solaris JVM format) & IBM verbose GC logs:

1. GCMV (Garbage Collection and Memory Visualizer), standalone or as Eclipse plugins; comes with ISA.


2. PMAT (IBM Pattern Modeling and Analysis Tool): handles multiple GC logs together; comes with ISA.

3. One of the good 3rd-party log viewers: GCViewer. Supported formats:
Sun JDK 1.4/1.5 ( -Xloggc:<file> [-XX:+PrintGCDetails] )
Sun JDK 1.2.2/1.3.1/1.4 (-verbose:gc )
IBM JDK 1.3.1/1.3.0/1.2.2 ( -verbose:gc)
IBM iSeries Classic JVM 1.4.2 (-verbose:gc)
HP-UX JDK 1.2/1.3/1.4.x ( -Xverbosegc)
BEA JRockit 1.4.2/1.5 (-verbose:memory)

You may find some other GC log reader tools at this link.

Note:
1. For both VMs, we can use JProfiler, which is a paid tool. If we use YourKit, note that different versions of YourKit support different JVM versions; see the YourKit archive page for details.
2. For IBM tools, it is better to use an IBM JVM. You can either download the JDK from IBM, or download their development package with Eclipse, where the JDK is included.
3. For IBM JVMs, an OOM will usually create a PHD file, a javacore, & a .trc file. If you are storing GC logs, a verbose GC log will be there too.

Please comment if you have any questions. Thanks.. :)

2015-01-29

Performance Bugs: How to find? How to report?

In this article we are going to learn how to find bugs related to performance, as well as how to report them. This is a very basic checklist, which will differ based on your project. I will try to give examples with an ASP.NET web application.

To learn about basic bugs and reporting, you can see one of my previous posts, which covers everything that goes into a bug report.

Performance Bugs:

It is generally admitted that performance bugs are hard to find; it is very difficult to get the evidence. So, if you are hunting for them, you have to be prepared with the full performance goals and requirements.
You also have to monitor your application as well as its resources (CPU/memory/disk/IO, etc.).
While monitoring resources, watch the pattern of resource usage alongside application behavior.
You need to run a heuristic analysis, either with pen and paper or inside the monitoring tool, to observe behavior changes.

This way you can narrow down the scenario or area for a possible bug. It's more like an investigation, waiting for the incident to happen (like Sherlock Holmes).

Before doing that, you must know the architecture of your
-> application (development and deployment model, frameworks used, platform architecture, security modules, etc.)
-> infrastructure (how it is deployed, the environment, which code/modules execute where & how)

And you should also know why you are testing, in every aspect (goals and requirements).

Example: here is what I considered for an individual request when hunting bugs in my last (web) project.

1. What is the size of the request?

2. How much time does it need to reach the server and come back, both with the HTTP 200 message (the server received the request) and with the full responses?

3. What parameters are we sending in the HTTP body, URL, and headers?

4. Why are we sending each parameter?
  a. Categorizing those parameters:
        i. Which parameters are based on client behavior/steps?
        ii. Which parameters are based on client data?
        iii. Which are based on the environment?
        iv. Which are based on our application implementation?

  b. Finding unnecessary parameters that the browser sends from the client, and how we can eliminate them (optimization):
    i. Eliminating unnecessary parameters (based on business logic)
    ii. Optimizing necessary parameters (like viewstate, callbacks)

  c. How can we make those requests more precise and optimized for communication?

  d. Organizing resources (JS/AJAX/images/CSS/jQuery).

  e. Compressing the requests and responses.

  f. Using long-lived caching (far-future expiry headers) for static resources.

5. Why are we sending the request in this manner? Can we send it another way to keep the application faster (like replacing postback calls with Ajax)?

How to get the evidence?
Difficult bugs come with much harder traceability... :)
So, as these are hard to find, you have to be prepared for evidence. Usually, performance bug evidence can be found in these ways:

a. While debugging the application from client and server for defined test steps. This should be done with tools as well as manually (before generating performance test scripts in a tool).

b. Monitoring resources and the application during performance testing (e.g., IIS/Apache, the hosting machine, the application, and the DB for a web application).

c. Analyzing the reports generated by performance tools after test execution.

d. Single-user execution on the client side while a high/peak load test is going on. You have to have debug tools enabled along with browser monitoring tools (for web products) or proxy-based tools.

So, to gather evidence:
-> I suggest being prepared to take screenshots/video recordings for later analysis.
-> Make some log analysis scripts or parsers for quick log analysis (a small sketch follows below).
-> Be prepared with pen & paper to make quick notes.
-> Train your mind to observe interesting, important items you see that relate to the goal.
-> Try not to interrupt test execution; it is better to let it finish.
-> Try long-duration test executions when you need to test the capacity of your application. I prefer 12/24-hour runs over weekends.
-> Try mixing up load scenarios across executions: more users for less time, more users for a long time, constant user increments, high-low user combinations (like spikes), combining high load with security scanning, etc.
This fully depends on the application you are working with; your case may not match my experience. Try to derive more load scenarios from your application architecture & performance goals.
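
As an example of the log-analysis-script idea from the list above, here is a hedged Java sketch that tallies HTTP status codes from a web access log (the file name and status-field position are assumptions; adjust them to your IIS/Apache log format):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Map;
import java.util.TreeMap;

public class AccessLogStats {
    // Field index of the HTTP status code; 8 matches the Apache combined format.
    private static final int STATUS_FIELD = 8;

    public static void main(String[] args) throws IOException {
        Map<String, Integer> statusCounts = new TreeMap<>();
        for (String line : Files.readAllLines(Paths.get("access.log"))) {
            String[] fields = line.split("\\s+");
            if (fields.length > STATUS_FIELD) {
                statusCounts.merge(fields[STATUS_FIELD], 1, Integer::sum);
            }
        }
        // A growing 5xx count during a load test is a candidate performance bug.
        statusCounts.forEach((status, count) ->
                System.out.println(status + ": " + count));
    }
}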

How to write a performance bug report?
This is interesting. Usually bugs are very specific to the UI or user interactions, but performance testing is more related to testing the application architecture. So, performance bugs should also reference the application execution and explain how the problem happened from an architectural perspective. That means the standard bug reporting procedure + something extra. And this "something" part should be developer-friendly and fully detailed with references (developers will read it to find what you are trying to explain). Let's look at that "something" in more detail with an example.

For example, for a web application over HTTP, when you click a button, a functional bug would refer only to the use case, but a performance bug should also describe in detail which request we send to the server & what we receive back.
You should include references to the requested URL, the header details, and the body details, with a detailed explanation of those areas.

For this example, let's say an HTTP request body has 25 parameters & 21 of them have values. So:
- What are those parameters?
- What are the references for the parameters?
- Why are those values there?

The same justification applies to the request headers. Up to this point, it is only description.

So, what goes in the evidence? It should reflect your requirements. Let's say you need to know the size, time, and capacity for this request, and you found the bug. Then, in the bug report you should provide:

a. The total size of the request sent to the server.

b. The time taken for server processing (HTTP 200 response) of this request.

c. How the time varies under different load scenarios.

d. How many responses are received for this request.

e. The individual and total size of all of those responses.

f. The individual and total time of all of those responses under different load scenarios.

g. The rendering time for the request (the time a single user takes to perform the same task in a browser, minus the server response time measured on the server).

h. For rendering, which response takes how much time in the browser (from a browser tool, Fiddler, or a proxy).

And, from the individual request, you need to break down each parameter in detail and identify the possible causes for the amount of time the request is taking.

For advanced QAs (root cause analysis & suggestions):

You can go even further; for that, you might need access to the code. You have to attach a profiler and see what causes that single request to take more processing time.

For a web application, use a profiler in IIS as well as in the code base to pinpoint each request's impact. In our project, we trace from IIS, confirm with the profiler in IIS, trace the respective events inside the code with code analysis tools, and provide evidence of what causes the extra time.

Some senior QAs can go further still: they can propose possible solutions to overcome the situation.

Example: in our project, we found we were not using viewstate (ASP.NET) properly, which cost a huge amount of time. So we needed to optimize the viewstate. There are several ways to do that, so we suggested different approaches along with the impact of each. This involves retesting, which verifies the bug. This part should consider business requirements as well as security requirements.

So, in summary, a performance bug report should contain the steps to reproduce and the standard items mandatory for a functional bug, as well as screenshots, logs, charts, test results, analysis reports, code snippets, demo projects, tool references, etc.

Thanks...:)