How to perform client side web performance testing in JMeter?

In this article we will see how we can do client-side performance testing using JMeter plugins.

I will be using the JMeter WebDriver plugin. Before starting this topic, please get the basics of client-side performance testing from my previous post. So, let's start with

Installation :

1. Install JMeter and the plugins from the links below, following this post.
->JMeter
->Plugins (you may choose only the WebDriver set, but I prefer all of them)

2. Download the Selenium Server from here (you need Java to run it).

3. Download Firefox 26 from the Mozilla archive. Why 26? Because the JMeter WebDriver plugin supports Firefox 26. Here is the link where you can see the support details.
Note: This can be tricky if you already have an updated Firefox. In that case you can do what I did:
->Disable Firefox update checking.
->Install Firefox 26 in a new folder in a separate directory.

->When you run it for the first time, just cancel the initial update process. As you have already disabled updates in your regular Firefox, make sure the update settings are disabled in this Firefox 26 too.

Note: This part is a little tricky; I have provided a separate post on how to resolve it.
For JMeter remote or local execution, it is better to have only one Firefox (version 26) with auto-update disabled, which minimizes complexity during test execution.

4. Add Firefox 26 and the Selenium Server to your PATH variable. To check, type firefox on the command line and run it. You should see Firefox 26 launch on the desktop.


5. Setting up JMeter: Usually we do not need anything extra for the WebDriver sampler. But, as we need debugging, we may add the following property to the user.properties file:
webdriver.sampleresult_class=true
It enables sub-sampling, which is good for debugging.
Let me explain how it works: the JMeter WebDriver sampler is just an extension of the HTTP sampler (not an alternative) with a script editor. When it runs, it actually drives Firefox through WebDriver. That means it sends its instructions mainly to WebDriver, and WebDriver does everything. Now, you might be wondering how the code reaches WebDriver. Like JMeter's other scripting support, the WebDriver script runs as external code following the JSR-223 specification; by default it is JavaScript execution. And, as you will see, it looks just like WebDriver Java code with some small modifications for the JMeter adoption. I will provide a separate blog post on the coding.

After you write the steps as a WebDriver script, use listeners to get the timings. As with other samplers, you use listeners to debug sensibly.
Browser support:
Just follow this link, which lists the configurable browser names supported by the WebDriver sampler. You can also see this in JMeter itself, in the driver configuration elements.

Time Measurement:

The WebDriver sampler measures time from this line of code
WDS.sampleResult.sampleStart()
to this line of code
WDS.sampleResult.sampleEnd()

So, for debugging, we need sub-samples, which are shown as children of the main sample. For this purpose we need to set sampleresult_class to true. After activation we can do sub-sampling like this:

var pkg = JavaImporter(org.openqa.selenium)   // Selenium classes (By, etc.)

WDS.sampleResult.sampleStart()
// Browse to a URL (hypothetical address and element locators)
WDS.browser.get('http://example.com/login')
// Add user name and password
WDS.browser.findElement(pkg.By.name('username')).sendKeys('user1')
WDS.browser.findElement(pkg.By.name('password')).sendKeys('secret')
WDS.sampleResult.subSampleStart('Log In request')
// Click Log in
WDS.browser.findElement(pkg.By.id('loginButton')).click()
WDS.sampleResult.subSampleEnd(true)
// Check title, then do some other steps
WDS.log.info('Title after login: ' + WDS.browser.getTitle())
WDS.sampleResult.sampleEnd()
In the View Results Tree listener, you can see that the main sample contains a sub-sample named 'Log In request'. A main sample can have multiple sub-samples. That means we can read each timing separately from the results tree.

Note that sub-samples will not be shown separately in tabular listeners or graphs.
And, if we need to measure a particular transaction, we can split a single test among multiple WebDriver samplers: one sampler for log in, one for doing some work on the home page, one for messaging. In this way we can see the results in reports. Usually each business transaction is measured in a separate sampler, while the detailed steps are sub-sampled.

Writing first script :

To write a WebDriver sampler script, you need to add a WebDriver sampler (Sampler –> WebDriver Sampler) along with a browser (driver) configuration. See the image in the browser support section for the driver configuration elements.
I will provide a separate post on how to write a WebDriver test script with an example. You can also find some nice guidelines in the WebDriver sampler wiki.

Test Planning :

As we know from my previous post on client-side performance testing, this test should run with a single user or thread. As the JMeter sampler drives a real browser through WebDriver, it has a particular hardware requirement: each sampler occupies a full thread of a processor. That means, if you want to run a WebDriver sampler, you need at least a 2-core CPU. Why 2 cores? The other one is for JMeter itself. So, if you have an 8-core CPU, you can run only 7 threads for WebDriver samplers. For testing, we therefore have to add a separate thread group or test block for the WebDriver samplers.

We will measure client execution time
1. when there is not much user load,
2. when there is average load on the server,
3. when there is high load (considered peak load).

Sometimes it is also useful to test
1. beyond capacity, where errors may occur, or just after an error condition;
2. as continuous performance testing. Usually, people run a selected regression test with JMeter daily or weekly.

Again, the formula is simple: one CPU thread for each single-user client performance test.

And you run the test simply like any other JMeter test, that's all. More details will come in my blog with an example.

Thanks …:)

What is client-side performance testing in a client-server application?

In this article we are going to learn about client-side performance testing. Following engineering practice, we will also look at client-side analysis and helper tools.

In the old days (8-10 years ago), the web was totally different from what it is now. At that time there were fewer types of clients and less client-side processing. But nowadays, especially after the Web 2.0 boom, the client has become much smarter, with more functionality and constant innovation through new technology. Hardware has also become cheaper, so clients have started using local hardware instead of depending fully on the server as in the old days. So, in a performance engineering context, if we only do performance testing and monitoring of the server, it does not make much sense; we will miss a lot of performance issues rooted in client functionality. To measure the overall performance situation, it has become necessary to test both the server and the client of any client-server application. So,

What is Client side performance testing?

When we say client side, it means everything involving client-side activity. That means, when we do performance testing of an application based on its client activity, that is client-side performance testing. Example: if you consider a web application, client-side performance will include server execution time plus client-side browser rendering, JS/Ajax calls, socket responses, service data population, etc. So it can differ based on the operating system, browser version, environment settings, firewall and antivirus, higher-priority functions executing at the same time, and of course user activity.

So, the main targets of client-side performance testing are:

1. Measuring actual client-side timing for particular scenarios. It can be grouped into business transactions or measured as separate requests for a single user.

2. Measuring single-user time under different load and stress scenarios. This is actually partly usability, but it is included as a performance test activity.

3. Observing application behavior when a server is down. Especially under application stress, when one or more servers go down, what is the situation? This can be critical for data integrity. I have also tested server recovery time after going down.
This particular type is fully based on requirements. For example, in our current project we run this test every day to see the progress: we run the regression test scripts (critical items only) so that we can see where our business transaction timing is going.
As you know from my previous post on types of performance testing, this leads us to two basic parts: performance measurement and performance monitoring (or debugging).

Client side Performance Measurement :

This part is tricky. In the performance world, when we say performance tools, we usually refer to server-side performance measurement tools like LoadRunner, JMeter, etc. So what about the client side? As it was not popular before, it was mostly done manually. It is still a common practice to sit with a stopwatch, exercise the application's critical functionality and measure the time; I remember doing that back in 2008. It is handy: no automation needed, not much technical knowledge needed. But it is manual time measurement, and humans are not as precise as machines at measuring time, so it has error. Therefore, there should be a tool.

Usually, before the JMeter plug-ins, there was no notable tool for this kind of web application performance testing. We can use the JMeter WebDriver plug-in to drive the same functionality that a human does and measure the time accurately. And we can perform the same steps programmatically using browser simulation, like:

1. Selenium WebDriver running in Java/C#/Python/Ruby/Node.js with any supported test runner that measures time
2. Ruby + Watir + Gherkin/Cucumber
3. Java Robot simulation
4. Java/C# with Robot Framework
5. Native action simulation tools/scripts (AutoIt, Sikuli)
6. Robotium / MonkeyRunner / Appium for mobile client performance measurement

So, as JMeter has this WebDriver sampler in its plugin set, we can use that. I will provide a separate post with a WebDriver sampler example.
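In the meantime, here is a minimal sketch of option 1 from the list above: measuring a business transaction with plain Selenium WebDriver in Java. This is my own illustration; the URL, the element locators and the Firefox setup are assumptions, not details from this post.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class LoginTiming {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();   // assumes Firefox and its driver are on the PATH
        try {
            long start = System.nanoTime();
            driver.get("http://example.com/login");                        // hypothetical URL
            driver.findElement(By.name("username")).sendKeys("user1");     // hypothetical locators
            driver.findElement(By.name("password")).sendKeys("secret");
            driver.findElement(By.id("loginButton")).click();
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("Login transaction took " + elapsedMs + " ms");
        } finally {
            driver.quit();
        }
    }
}

The same stopwatch-style measurement can be wrapped in any test runner (JUnit, TestNG) and repeated under different server load levels.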

Client side performance monitoring :

This means we have to monitor our application as well as the client's resources.
Every operating system, Windows or Linux, has its own system-internal tools to monitor resources. And, as an open-source JMeter consultant, I should say we can use the PerfMon JMeter plug-in to monitor the client side (you may say localhost).

Now, for client-side application monitoring, it really depends on the application's client type. If it is a TCP client, you have to use a TCP monitoring tool on the port your application uses.

Let's see some monitoring and analysis tools for the web application protocol, HTTP(S).

1. Browser-based tools: Most modern browsers have built-in tools. In IE or Chrome, press F12 to open them (they follow the W3C navigation timing standard).
->I like YSlow with Firebug in Firefox (first install Firebug, then YSlow).
->Most popular: PageSpeed by Google.
->Tools from Chrome extensions, like Easy Website Optimizer; the developer tools; and, for REST web services, REST Console or Advanced REST Client, etc.

2. Third-party websites:
->GTmetrix, one of my favorites
->WebPageTest, very useful

3. Proxies: For traffic monitoring I use
->YATT with WinPcap
->MaxQ (not focused on monitoring, but you can use it for that)
->IEWatch

4. Third-party tools:
->Dynatrace
->SolarWinds
->AppDynamics
->Nagios (free)
->MS Message Analyzer
->SoapUI, for web service testing

And more and more... :)
Paid tools are good, but I guess you can use a skilled person with a set of other tools rather than paying for them... :)

Helper tools: In different web architectures, data comes to the client in different formats. So you should have (a small example for the first two follows the list):
1. a Base64 decoder that supports different character sets,
2. a URL decoder,
3. a decompression tool,
4. file/character format converters.
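As a minimal sketch of the first two helpers (my own illustration, using only the standard JDK), Base64 and URL-encoded data can be decoded like this; the sample strings are made up:

import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class DecodeHelpers {
    public static void main(String[] args) throws Exception {
        // Base64 decoding with an explicit character set (UTF-8 here)
        String base64 = "SGVsbG8gY2xpZW50IHNpZGU=";             // sample payload
        byte[] raw = Base64.getDecoder().decode(base64);
        System.out.println(new String(raw, StandardCharsets.UTF_8));

        // URL decoding of a query-string fragment
        String encoded = "name%3Dclient%20side%26type%3Dperf";   // sample encoded string
        System.out.println(URLDecoder.decode(encoded, StandardCharsets.UTF_8.name()));
    }
}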

Here we have the tools, but before using them we need to define what to monitor. Usually for an application we monitor:
1. Application rendering time
2. Specific JS/Ajax request/response/processing time
3. User-dependent request time
4. Client-side validation time
5. Loading time for scripts/styles/dynamic content
6. Total and request-specific data sent and received
7. How requests are queued and processed (the behavior)
8. Any exception-based (server/client) function or behavior
9. Business transaction or particular request time

And for client resources:
1. CPU/cache/memory/disk/IO/bandwidth occupied by the browser and the application
2. If the application interacts with any other service or application, we need to monitor that too.

For example, our application once used an export function that opened in MS Excel, and at some point it crashed because our application occupied so much memory that Excel did not have enough left to load the large data set.

Test plan for client-side execution:
Usually a separate thread or user is used to run the client-side performance test to get the timings. It is not like a server-side script that runs thousands of users in parallel; it is specifically made for single-user execution time for specific scenarios.

So, in the next post we are going to test a sample application using the JMeter WebDriver sampler and measure the time.
Thanks..:)

What is Performance Reporting? An example with a web application.

In this post we are going to learn what needs to be included in a report after performance test execution, results and analysis. It is essentially an interpretation of the performance results. This document communicates with the different types of people interested in performance testing.

What is a performance report?
After a successful performance test execution, we testers have to provide a report based on the performance results. I am referring to a report, not raw results. A report should be fully formatted based on the requirements and targeted at a specific person or group of people.
First of all, we need to know what reporting means. I have seen lots of performance reports full of technological terms and a lot of numbers. Yes, that is what we performance testers do.

But what I found is that not everyone is good with those numbers. Actually, I think those numbers do not mean anything without context. So, how can we get context? It's not that hard. Performance testing matters to certain types of people in the group, and a good performance engineer should add value to those results by analyzing them against the different goals and requirements. I have a separate blog post on goals and requirements; please read it before this one.

So, after getting the requirements we have to make the reports. I will give an example of a web application (financial domain) for better understanding. So, let's discuss the steps.

Step 1 : Analyze the results : 

This is fundamental. We have to analyze the results. I believe analysis is as important as the performance test run itself. It is what gives real value to the test execution and can pinpoint the problems. A performance tester should have that capability, otherwise where is the skill? He has to analyze and define the issues that may or may not be there.
So, based on the goals and requirements, we should gather information and categorize it.
Example: we have a financial application that handles transactions (debit, credit) with a lot of approval steps. So, a lot of people are interested in those transaction results (business, marketing and the software team). We therefore categorize the test results into those groups and show each report only to the related group.

Next, we match the requirements with the grouped data.
-What were the goals for this kind of people?
-What were the primary requirements and targets?
-What is the actual value and how far are we from what was expected?
-What are the impacts? Impact on revenue, client feedback, the company's reputation, interaction with other systems, etc.
-What are the causes? Architectural problems, database problems, application problems, skill problems, process problems, resource problems, deployment problems, etc.
-What is the evidence for those causes? What are the related values? How much can the project tolerate and how much can it not? We might need to use profilers along with the performance test tools, like ANTS Memory Profiler, YourKit profiler, or even the framework's built-in profiler (on the language platform you are using).
-What can be done to resolve them?
(This part is tricky; a new performance tester may not come up with this, but he can talk to his system architect or lead to come up with a solution. In my current project I am trying to do this: I propose possible solutions, discuss them with the dev lead, and we sit together and do some experiments to suggest the best solution that matches the existing architecture. By the way, there can be issues with the architecture itself. I saw this in 2009, when I was involved with a product that could support only a certain number of users (avoiding being specific). To scale up the software, our team had to change the full architecture of the application to support almost 400% more.)
So, we have done some analysis, and we could do a lot more. Usually, if you are analyzing performance results for a product for the first time, you will find a lot of issues and need a lot of time; but as time goes on, the issues get solved.

Step 2: Group Data :  

After the analysis, we need to select which data should go to each group. This is kind of tricky if you do not know the interested groups. The first step is to know those people and have some idea of how to communicate with them. I think a performance report is just a communication format for your performance results, so you have to be very careful with it. It should be based on the goals. Let me give an example from a project (a web app). We had three kinds of people interested in performance testing.

1. Business users or clients, the real users who interact with the system. They do the trading themselves. So, for them, the high priority is how fast we can complete a business transaction. A transaction includes multiple steps and approvals, so how fast can we do all of that? If we add anything like throughput, request size or bandwidth measures, it will not get as much attention as the total time for each business transaction. And, as they are paying, we have to ensure that performance does not degrade after each build/release; if it does degrade, there should be a proper explanation from the development team (a new feature, a DB migration, added security, etc.).

2. Product stakeholders (CxOs, BAs and managers): These people are not like business users; they know the basics of the inner system components, but most of the time I found they want to avoid technical details. They are interested in the same values as the users, but beyond the timing values they also want more detail about what is causing them. And if you include how to resolve the issues at minimal cost (with a cost estimate), believe me, you will be appreciated. If you add those working values and your findings, these people will be much more interested in performance reports.

3. Development team (DEVs and QAs): These are the people from the development team; we used to attach raw results to their report. For them, things are a little different. We start with the problems, explain them, provide evidence in a reproducible way (even teach them how to use the performance tools), and give some guidance on how to solve them, like best practices, code samples and tricks people have used so far. And, as graphs, we give them detailed timings: throughput, size, hits to the server, server processing time, DB request time, individual POST/GET request times, and resource timings against the expected measurements.

Step 3: Arrange the report (Drive shared example):

Like all other reports, a typical report contains (I am listing what is common for all groups):

->A first page with a heading containing the product name and "performance testing".
->A page for the table of contents.
->An introduction: this keeps people in context. Add the objective of the report in 2-3 sentences.
->Summary: it should contain the final outcomes, i.e. pass/fail/improvement areas.
->Test objectives: why are we testing? This should contain the requirements in bullet format.
->Test strategy: what was the plan, which tools were used, and what tools were used for analysis or debugging.
->Test overview: how it was tested, what the conditions were during testing, what was monitored, and what was observed.
->Test scenarios: which scenarios were involved in the test execution (they may be presented per group of users).
->Test conditions: test conditions based on the tool, environment, application settings and configuration, including delays.
->Load profile: how users generated load during the test. JMeter, LoadRunner and all the other tools provide this. You can take a screenshot of the graph and add it here. For example: 100 users for 1 hour, 500 users for 3 hours, and so on, with the graph.
->KPI: optional. KPI stands for Key Performance Indicator. Based on the requirements, each group needs to know a value that indicates the performance situation of the product. Usually it drives future investment and activity. I will provide a separate post on how to build a KPI from user requirements in case you don't have such a measurement.
->Results: tabular results, common to every tool. JMeter provides the Summary Report or Synthesis Report. Sometimes this can be optional, to hide detailed results from end users/business users; we used to hide them.
->Results graphs: all graphs based on the tabular results. We should be very careful in this area and put only related graphs here. We have to look at the goals and requirements and then decide. I mean, put the context with each graph; ask yourself why you are using this graph.
For example, in our project we include only the transaction comparison graph for business users.
But for stakeholders we added throughput/min, hits/sec, error % over time and users, etc.
And for developers we include almost all types of graphs that the JMeter listeners provide, plus graphs referencing raw requests rather than business transactions, so that each step can be shown.

Note:
->Sometimes we may have to change the unit of the results for a better graph, like throughput per second to per minute. It should be based on the range of your values.
->Please be careful to add at least one line describing each graph before putting it in the report.

->Product analysis: this part should be shown only to the DEV/QA team and, if they are interested, to stakeholders. It is a very important part if you care about the project. Put all the necessary parts of your analysis here, specifying issues with evidence.
This might include a separate detailed report.
It might have detailed screenshots from different tools.

->Suggestions: this part should be tailored to each group. Suggestions in the business user report should be more generic, at best referencing the UI. For stakeholders, they can reference product modules or resources (like the DB server or app server). But for the DEV team, they should be pinpointed solutions or best practices. This whole area depends on the project context, so use it sensibly. Try to be more generic and non-technical in language (I have learned this the hard way).

->Conclusion: this section should contain 3-4 sentences defining the performance status and the things that can be considered for testing in the future.

->Appendix: this section should have detailed definitions of the terms used in the report. Usually throughput, hits per second, transaction, average, min, median, 90% line, etc. should be defined here.

Step 4: Report Format :

Performance reports can be provided in PDF, DOC, Excel or even PowerPoint format. It really depends on your company or working group. If you don't have any standards, just follow any other report in your project. It is not that important unless your group maintains a system that reads the report and shows it to other people; in that case you have to be specific about the file specification. I personally like the PDF edition.

Notes:
->Sometimes we need to have a section with a report document summary.
->Some reports might have a section stating who will see the report.
->Some reports may have fewer sections than this example; just make sure it follows your requirements.

So, make your performance report with context, and give it priority with management in the performance test activity.

Thanks..:)

What are Performance requirements?

In this article we are going to see the details of performance requirements: what they are and how to deal with them.

What are performance requirements?
Like all testing, we need requirements before any performance activity. So, what is a performance requirement, what does it look like, and how do we deal with it?
As we know, performance testing is all about time and resources, so a performance requirement will be full of time and resource requirements related to the application. Basically they relate to the following questions:
How many users?
How fast?

And based on that we need to see:
What are the bottlenecks?
How can we solve them?

Before going deeper into requirements, please see the performance goals in this post. It is necessary for getting the context.

As we have seen in my previous post about performance test types, we get requirements based on the application and the infrastructure, and we need to relate them to time. Usually time is measured in milliseconds and size in bytes. So, performance requirements generally fall into the following types.

Application requirements:
Server side:
->Number of users the application can handle concurrently.
->Maximum request/transaction processing time.
->Maximum memory/CPU/disk space a request/transaction may use.
->Minimum memory/storage/disk space required to run the supported number of users.
->Application (request/transaction) behavior in case of errors or extreme peaks.

Client side:
->Request/transaction processing time. This involves the particular request time, browser rendering time (for browser-based applications), native app rendering time (for mobile/desktop applications), or even environment rendering (if it runs under a custom environment/OS).
->Maximum memory/CPU/disk space a request/transaction may use. Usually this depends on where the application is running. There should be a specification for the environment; for a web app, which OS and which browser version, with what settings and under what conditions (antivirus or other monitoring tools).
->Minimum client environment specification and its verification.
->Application (request/transaction) behavior in errors or extreme peaks.

And we also need application monitoring for all of those on the server side and client side. This may involve application monitoring and debugging tools as well as framework-based tools and environment-based tools (OS tools, browser tools, network tools).

And, before the project, please define those monitoring requirements. They are also based on the goals. That means, if you are targeting a server update, have special monitoring on the server. I had a chance to work with some performance testing requirements involving monitoring activity. That means those performance tests were designed to monitor not only the application but also the infrastructure. They had specifications for network/bandwidth consumption and resource consumption on the server and the client (PC/tablet/mobile). Example:

Server performance:
->Maximum CPU/memory/storage/virtual memory usage during certain scenarios.
->Fault/error recovery time (including reboot or initialization time).
->Resource temperature monitoring during extreme conditions (full load and beyond).
->GC and thread conditions at extreme peak.
->Maximum log size for app/DB/web server logs.

Client performance:
->Maximum CPU/memory/storage/virtual memory usage during certain scenarios.
->For web applications, browser behavior/time for responses (apps for mobile).
->Device temperature monitoring during high-data/complex operations (for mobile/tablet/device).
->GC and thread conditions at extreme peak.
->Maximum client log size and limit.
->For web apps, network and application resource (file/request) monitoring, especially the time and resources needed on the client.
->Nowadays web applications do a lot of processing on the client side, so depending on the application architecture we need to trace that in the client environment.

Network performance:
->Maximum/minimum bandwidth for the application/transaction/request.
->Behavior during extreme peaks.
->Fault or error recovery time and behavior.
->As this involves network devices, tracing their behavior is usually part of the specification. (For example, we had a hub and needed to test its temperature during multiple requests to the server connected to it. Usually this becomes complex when we send large data over the network through the lower network layers.)

Moreover, depending on the architecture, there may be additional performance requirements. For example, we had an ASP.NET application that followed the default view state and callback policies, which became the main cause of application slowness. We had to measure the default timing for a particular transaction, then test it with different browsers to verify we were going in the right direction. We also tested the resources and time needed on the client for a particular request. Then a new version was released after the view state implementation was modified, and we did the whole thing again to prove the application performed well and took fewer resources while running on the client.

Example: we had a small SOAP service that takes an XML document and, based on the XML data, processes it and returns XML. The client sold that service to other vendors. So our requirements were like:
->Maximum server processing time for a particular XML type (we had different types): 750 ms.
->Our server should serve 120 concurrent requests and at least 2000 users in a 1-hour block for a particular XML type.
->The client should not see more than 1200 ms response time in any case (request and response time = network time + server processing time).
->Find the maximum number of users our server supports.
->Beyond the maximum number of users, under extreme conditions, the server should not die and should return a "server busy" message as XML.
->During high or extreme load, clients should not get any 4xx or 5xx messages.
Note: as our server needs authentication tokens, we should not have any security issues during high or extreme load.
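A requirement like the 1200 ms client response limit above can be turned into a simple automated check. Here is a minimal sketch (my own illustration, not from the original project); the endpoint URL and the XML payload are hypothetical:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class SoapResponseTimeCheck {
    public static void main(String[] args) throws Exception {
        byte[] payload = "<request>...</request>".getBytes(StandardCharsets.UTF_8); // hypothetical XML
        HttpURLConnection conn = (HttpURLConnection) new URL("http://example.com/soap").openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "text/xml; charset=UTF-8");
        conn.setDoOutput(true);

        long start = System.nanoTime();
        try (OutputStream out = conn.getOutputStream()) {
            out.write(payload);                       // send the request body
        }
        int status = conn.getResponseCode();          // blocks until the response arrives
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.println("HTTP " + status + " in " + elapsedMs + " ms");
        if (elapsedMs > 1200) {
            System.err.println("FAILED: exceeds the 1200 ms client-side requirement");
        }
    }
}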

Analysis & Reporting  Requirements:
In real testing there are also requirements related to analysis and reporting. They are mainly based on the performance goals. I think it is mandatory to have goal-based reporting requirements so that we can find bugs during the analysis phase.

Analysis should be fully based on the performance test goals, so that we can pinpoint the issues, bugs or improvements and provide them in well-formatted reports for the different types of stakeholders. This is important because performance testing is not easy for everyone to understand. We need to format the results and issues for stakeholders, developers, other QA members, business people, CTO/CEO/COO, etc.

User stories: in agile projects, the performance tester should come up with performance scenarios. This is important because it makes them easy for other team members to understand. They are very easy to write; just follow the standard user story rules. From my example above, one story can be: a client (mobile/web) should be able to process a certain type of XML within 1200 ms while the server is processing 80 other requests at 70% resource load.

And if you don't have requirements, use my previous post on goals together with this one to come up with requirements of your own.

So, in summary (I do not go deeper into client/server specifics, just the principles):

The MUP link (open with MindMup)
This is an incremental post; I will add more later on. Please comment below if you have more ideas.

Thanks … :)

Why Performance Test? Performance test goals.

In this article we are going to see why we do performance tests and what the main performance test goals are.

It is really depressing that very few performance test projects start with goals (as far as I have seen); the goals get defined as the project gradually moves forward. It is necessary to have goals to save time. As we have a lot of parameters to take care of in performance testing (I will make a separate post on this), we should have goals. So,

What are performance test goals?
A performance goal refers to what exactly we want to accomplish from the performance testing activity, i.e. what we want to measure or see. This measurement has different perspectives. Let me give some real examples (easier to understand).
Usually different performance test goals come from different groups of people in the team, like:

Business goals (business people are involved):
->We want to jump up revenue, so what can we do to improve our performance (which is a major customer complaint)?
->We have an upcoming sales event/cycle. How many target clients can we handle? What point should we not exceed?
->We might get a good mention by important people in the media on a particular date. In which areas do we need to take the initiative to scale up the application?
->Specifically for online shopping applications: if Black Friday or Cyber Monday is coming, which areas need attention to keep the site up and running for selling?
->Similar things for ticket booking applications around holiday seasons and major events (concerts, sports).
->Marketing might need to know the application's tolerance: how large an offer can they make to customers, and when?
->How much budget do we need to scale our application for a certain number of users?
->How quickly are we back in business (functioning) if we crash?
 
Technical goals (DEVs & QAs):
->How fast is our application? How much data are we using?
->We want performance testing to measure our application's resource consumption and timings.
->We are getting a lot of user/QA feedback on certain functionality/transactions, so we need performance tests on those to debug the bottlenecks.
->How good is our application recovery process?
->We want a performance review to see how our application behaves when we have major request processing.
->How scalable is our application? Where are we failing?
->We are migrating/changing our application architecture, and we want to know the benchmarks.
->We are adopting a new language/framework/platform to make the application faster. Are we really faster than before?
->We are doing continuous delivery every week, so where are we in terms of performance? Are we improving or going down?
->In Scrum: let's have a performance cycle, so before that, let's have a performance test.
 
Usability goals (QAs, BAs, UXs):
->How many users are we supporting with usual behavior?
->What is the behavior when we are under high or extreme load?
->We want to see how our customers feel while using the application.
->What are the critical issues that may occur at the peak point?
->What prevention plans need to be in place when users face such bottlenecks? (Risk planning)
->How well does our application handle error conditions under high load? How is the recovery?
 
Operational goals (Admins, Net Admins, Ops):
->Does our infrastructure support the scalability provided by our application?
->We want performance testing to measure our application's resource consumption and timings from an operations/admin perspective.
->We are moving our whole physical resource system (DB/app server change, or a move to the cloud), so what is the benchmark? Are we really improving?
->We are trying to add more resources to the system. Does our application's performance improve when we add more resources, or is it just a waste of money?
->What is the application doing harmfully to our resources while running? (I faced this in practice with DB logs that grew so big while the app was running that they overflowed the C drive and the application crashed.)
->What should the recovery plans be? Which areas need to be taken care of?
 
This is an incremental post; I will add more as I encounter them. These are some primary ideas that I have worked with.
 
If you think there are more, please feel free to comment. I will add them with your name.
 
Thanks ..:)
 
 

What are Performance Tests? Basic Performance Test Types.

What is performance testing, and what are its types? In this article we are going to discuss performance test types from a technical perspective. In the previous two articles I explained load testing and stress testing, and there is another type called capacity testing. I will not repeat the generic criteria of performance testing; these notes are based on my own performance testing activity on web applications.
Nowadays web applications are more scalable and robust. The web architecture has changed from the old days, so performance testing a full web application has become more challenging, because performance testing is closely related to how the system works and where its weak points are.
So, I would say performance testing has mainly two types:
A. Application performance
B. Infrastructure performance
A. Application performance:
The name makes it easy to understand what I am referring to: the core performance of the application. Mainly measuring
-How the functionality works and what the timings are
-How much resource (CPU/RAM/disk space) it consumes
-What the bottlenecks and complexity are
-Whether there are resource/process dependency delays (3rd-party services, components, critical resources)
And how we expect it to behave, which comes from user feedback or the performance requirements.
Application performance testing itself has two types:
1. Client-side performance
2. Server-side performance

1. Client-side application performance: This refers to performance on the client side, that is, the performance of client activity. Nowadays web applications are more client-centric and use more client resources. Especially when you consider single-page applications, HTML5 applications, Angular.js-based applications, or even Ajax-toolkit applications, all of them execute functionality on the client. They talk to the server very often and fetch data up front or lazily. They try to make the application smarter, and that makes the situation more complex. If your client is processing something before sending requests to your server and there is an issue, the request won't even reach the server, and the user will experience a performance problem even though the servers remain idle. So, client-side performance testing actually refers to testing the code executing on the client, based on
a. Ajax performance
b. JavaScript performance (plain JS or different JS frameworks)
c. Application framework performance (ASP.NET/PHP/Ruby/Python or any language you use)
Now, the question is how to test that. I will provide a separate blog post with a small example. Primarily, it is very resource-hungry and depends on the browsers/clients you use. For browsers, I have used Selenium with Firefox/Chrome/IE/PhantomJS to run those tests (it is essentially a full single-PC application execution, so you need more PCs for more user activity).
By the way, nowadays business owners, stakeholders and even users are more concerned about this: mainly how the application behaves during high peaks, and how long a transaction takes (a transaction is a set of unit requests for a particular business goal). There are some key points for client-side performance testing (see the sketch after this list):
->It is all about client-side function execution, rendering and calls. So load and stress conditions are usually not applied as they are in server-side performance testing.
->Single-user test scripts are usually made to determine the time of every transaction.
->That single-user activity is monitored under different load situations.
For example: say your bank is transferring money and your server is idle. How much time does a transaction requested from the client take? In this condition the server response time will be deducted from the full transaction (as that part is a server concern). We do the same thing under average load (say 1000 transactions), high load (say 10000 transactions), or even overload (25000 transactions). We need to measure this incrementally because client-side code often depends on server-side response resources (like the next JavaScript/Ajax load). Under different conditions we might get different server response times, but the particular functionality/process involving local JavaScript/Ajax/application code should take the same time (because we are excluding server response time).
->Because of the arithmetic involved, most of the time we need to calculate the reports manually. (JMeter has WebDriver integration, so it is easier to get these values with built-in functions.)
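As a minimal sketch of that subtraction (my own illustration with made-up numbers, not measurements from any real test):

public class ClientSideTime {
    public static void main(String[] args) {
        // Hypothetical measurements for one business transaction under one load level
        long totalTransactionMs = 4200;   // measured end-to-end by the browser/WebDriver script
        long serverResponseMs = 1700;     // measured for the same requests on the server side

        // The client-side portion (rendering, JS, Ajax handling) is what should stay stable
        long clientSideMs = totalTransactionMs - serverResponseMs;
        System.out.println("Client-side portion: " + clientSideMs + " ms");
    }
}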

2. Server-side application performance: This is the most common in the performance world. Usually all performance test tools refer to server-side performance testing. Basically we need to test the application's performance on the server. That means
->Request send and receive time (which contains the request processing time on the server)
->The resources (CPU/RAM/disk space/network) a request needs on the server
->Bottlenecks or dependencies on the server caused by your application
So we need to test the server under
1. Regular-load user hits
2. High-load user hits
3. Extreme/overload user hits (to see how it fails, recovers, fails safe, and preserves application and data integrity)
And we perform load testing, stress testing and capacity testing (I hope I do not need to explain those).

B. Infrastructure performance:
The name says it: infrastructure performance refers to the physical and logical infrastructure consumption and performance caused by our application, that is, the effect of the application on the physical or logical resources. For example, if we need to deploy our application in IIS, what resources are needed? What will happen under different load conditions, complex scenarios, or even high volumes of data transfer? How much bandwidth does the application take for sending/receiving requests? Or, what should the minimum client resources be?
It is more related to monitoring, and to monitoring-based stress and load; it is the load and stress performance of the infrastructure.
In my opinion, there are three types:
1. Server performance
2. Network performance
3. Client performance

1. Server performance: This is the server's core performance while running and serving the application: the CPU/memory/disk space usage. We need to find out what happens when different types of requests hit the server, what the dependencies are, and what the bottlenecks are. These can even be related to hardware or software failure. For server performance testing we need to use different types of monitoring tools: some are server monitoring tools, some are application monitoring tools (like Dynatrace or PerfMon; Java- or .NET-based applications have monitoring tools built into the framework, and we may need to develop our own tools for monitoring).

2. Network performance: This is the network's core performance while our application is running on the server. That means
->Network latency (more routers and more address translation means more time)
->Request send and receive time
->Network dependency time (e.g. waiting on another network for some process or authentication)
->Network speed
->Network packet size and performance
->Network protocol dependencies
Basically, all the stuff related to network performance.

3. Client performance: As with the server, this is about monitoring the client device's resources while the application is running. In the past, applications used fewer client resources and did most of the work on servers. These days, with cheap hardware, applications use more client resources for data processing: applications like WPF browser applications or JavaScript-heavy client-side applications, which are more resource-hungry on the client. So we need to define the client configuration requirements for our application and how it behaves under different conditions. And nowadays (for web applications) we have different types of browsers/clients, each with its own resource usage (Chrome, for example, takes a lot of resources in the background, as do others). (Example: I have worked with a browser-based ERP-type application that used .NET components, and there were lots of parameters and preconditions for the client hardware: minimum PC configuration, Windows version, browser version, network speed, monitor resolution, minimum GFX.) In this category we need to monitor the client (CPU/RAM/disk/GFX) under different stress conditions like
->Large data requests
->Complex UI rendering and processing (for games: frame rates, triple buffering, anisotropic filtering, vertical synchronization)
->Multiple events firing (how the application and the client handle concurrent events)
->Complex business request processing (long JavaScript calls, DB procedure requests, serializing objects)
->Large cookies or data processing in memory
->Especially when we are running client-side application performance testing
We have to remember that, as this is related to the client, the following may affect the client while using the application:
->Plug-ins used in the application (Flex, Silverlight, Flash, rich text editors, etc.)
->Client-side compression methods (the client needs to compress/decompress the data it sends to and receives from the server)
->Client rendering hardware (nowadays networked, browser-based games are popular, and they use local GFX processing power for rendering)
->Client-side security measures (client-side authentication validators or spam protectors)
->Full-screen conditions and interaction with other running apps
So, in summary, the types:
Link to Drive (use MindMup)

Note: Modern applications (web or hybrid) are smarter nowadays. They behave according to the load on the server. For example, if the server is in an extreme condition, the application won't show all pages or won't perform its regular functionality; some show a "server busy" message to users (like Twitter). So it totally depends on the application's smartness how it handles extreme conditions. In those extreme conditions, most of the time the delay is caused by the databases, and people say it is a database issue; but in my opinion your application should be smart enough to handle all kinds of issues. There may be an issue in any area, so why should it be exposed? That is why I sometimes do security testing like injection/XSS/CSRF under load, to check data and application integrity in extreme conditions. If your application cannot handle extreme cases, how can it be scalable? And that is why smart applications these days have
->Risk planning
->Application behavior handling for all possible scenarios
->Data integrity and security policies and precautions
->Application fail-safe measures (that is why we do capacity testing)
->Server fail-safe precautions
->Simple and smart UI design
->Small server requests and responses
->Zipped/compressed data communication
->Load balancers for the database and the application
->A distributed architecture
->Application and server monitoring and debugging facilities (some frameworks also provide a web interface)
These are my initial thoughts; I will add more incrementally… Thanks…:)

How to get all Method information under a Class in Java Reflection?

This is a continuation of my previous post. In this article we will see how we can retrieve class-related information using Java reflection. We will look at method names.

Special note: I will make a separate reflector utility class where we pass a target class into its constructor and retrieve information using separate methods. In this way we can isolate our needs. Please see this before starting.
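The post does not show that utility class itself, so here is a minimal sketch of how I assume it looks (the class name ClassReflector and the field name myClass are my assumptions; the snippets below only rely on the myClass field):

import java.lang.reflect.Constructor;
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.util.ArrayList;

public class ClassReflector {
    // The target class whose members we inspect
    private final Class<?> myClass;

    public ClassReflector(Class<?> targetClass) {
        this.myClass = targetClass;
    }

    // The getAll...() methods shown in this and the following posts go here.
}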

How do we get all declared method names inside a class?
This means we will get the names of the methods declared inside the class itself (public, private, default, protected), not inherited methods.

public String[] getAllOwnMethodNames(){
    ArrayList<String> allMethods = new ArrayList<String>();
    for(Method aMethod : myClass.getDeclaredMethods()){
        allMethods.add("Method Name : "+aMethod.getName()+" , Full Name : "+aMethod.toString());
    }
    return allMethods.toArray(new String[allMethods.size()]);
}
How do we get all method names accessible from a class
(its own public methods plus the public methods inherited or implemented from its superclasses and interfaces)?
That means we will get the public methods declared in the class together with those taken from its parent classes
(including its parents and interfaces).
public String[] getAllPublicAndInheritedMethodNames(){
    ArrayList<String> allMethods = new ArrayList<String>();
    for(Method aMethod : myClass.getMethods()){
        allMethods.add("Method Name : "+aMethod.getName()+" , Full Name : "+aMethod.toString());
    }
    return allMethods.toArray(new String[allMethods.size()]);
}
 
Note: to get detailed information I use the getName() and toString() methods in particular.
 
In both cases, we can specify a method name to get that specific method:
myClass.getDeclaredMethod(<name of the method as a string>, <parameter types of that method>)
myClass.getMethod(<name of the method as a string>, <parameter types of that method>)

In both cases we need to know the name of the method.
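For example, a hypothetical lookup inside the reflector class, assuming the target class declares a method void setName(String):

// Hypothetical: assumes the target class declares void setName(String name)
public Method getSetNameMethod() throws NoSuchMethodException {
    return myClass.getDeclaredMethod("setName", String.class);
}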
In a class, we often need to know whether a method is a getter or a setter.
We can apply a small string filter while iterating, like the following:

To know if it is a getter method:
aMethod.getName().startsWith("get");

To know if it is a setter method:
aMethod.getName().startsWith("set");

This post is only about class-level method access; I will provide a separate post on method details.
Thanks..:)



How to get all Field info in a Class in Java Reflection?

This is a continuation of my previous post. In this article we will see how we can retrieve class-related information using Java reflection. We will look at field names.
Special note: I will make a separate reflector utility class where we pass a target class into its constructor and retrieve information using separate methods. In this way we can isolate our needs. Please see this before starting.

How do we get all fields declared inside a class? This means we will get the names of the fields declared inside the class itself (public, private, default, protected). This is similar to what we did for method name extraction.
public String[] getAllOwnFieldNames(){
    ArrayList<String> allFields = new ArrayList<String>();
    for(Field aField : myClass.getDeclaredFields()){
        allFields.add("Field Name : "+aField.getName()+" , Full Name : "+aField.toString());
    }
    return allFields.toArray(new String[allFields.size()]);
}
How do we get all fields accessible from a class (its own public fields plus the public fields inherited from its superclasses, over the full hierarchy)?
That means we will get the public fields declared in the class together with those taken from its parent classes (following the full hierarchy).
public String[] getAllAccessableFields(){
    ArrayList<String> allFields = new ArrayList<String>();
    for(Field aField : myClass.getFields()){
        allFields.add("Field Name : "+aField.getName()+" , Full Name : "+aField.toString());
    }
    return allFields.toArray(new String[allFields.size()]);
}

Note: in both cases I used getName() and toString() to get the full information. And, in both scenarios, we can get access to a field by its specific name:
myClass.getDeclaredField("StringName");
myClass.getField("StringName");
And we need to know the field name.
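For example, a hypothetical lookup inside the reflector class, assuming the target class declares a field named title:

// Hypothetical: assumes the target class declares a field named "title"
public Field getTitleField() throws NoSuchFieldException {
    return myClass.getDeclaredField("title");
}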

This is a class-level post about accessing fields; I will provide a field-specific post separately.

Thanks..:)

How to get all Constructor information under a Class in Java Reflection?

This is a continuation of my previous post. In this article we will see how we can retrieve class-related information using Java reflection. We will look at constructor names.

Special note: I will make a separate reflector utility class where we pass a target class into its constructor and retrieve information using separate methods. In this way we can isolate our needs. Please see this before starting.

How do we get all constructors of a class?
This is very simple. Unlike methods, a class has only its own constructors, so we can access them directly.
public String[] getAllConstructorNames(){
    ArrayList<String> allConstructors = new ArrayList<String>();
    for(Constructor aConstructor: myClass.getDeclaredConstructors()){
        allConstructors.add("Constructor Name : "+aConstructor.getName()+" , Full Name : "+aConstructor.toString());
    }
    return allConstructors.toArray(new String[allConstructors.size()]);
}
How do we get only the public constructors of a class?
This way we can see only the public constructors.
public String[] getAllPublicConstructorNames(){
    ArrayList<String> allConstructors = new ArrayList<String>();
    for(Constructor aConstructor: myClass.getConstructors()){
        allConstructors.add("Constructor Name : "+aConstructor.getName()+" , Full Name : "+aConstructor.toString());
    }
    return allConstructors.toArray(new String[allConstructors.size()]);
}
As with fields and methods, we can access a specific constructor by its parameter types, like this:
myClass.getConstructor(<parameter types>);
myClass.getDeclaredConstructor(<parameter types>);
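For example, a hypothetical lookup inside the reflector class, assuming the target class declares a constructor taking a String and an int:

// Hypothetical: assumes the target class declares a constructor (String, int)
public Constructor<?> getStringIntConstructor() throws NoSuchMethodException {
    return myClass.getDeclaredConstructor(String.class, int.class);
}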
I will provide a separate post on calling and manipulating those constructors.

Thanks..:)

How to get Class Info in Java Reflection? Class Name and Signer Names

This is a continuation of my previous post. In this article we will see how we can retrieve class-related information using Java reflection. We will look at the class name (all kinds of it) and the signer names (by which the class was signed at build time).

Special note: I will make a separate reflector utility class where we pass a target class into its constructor and retrieve information using separate methods. In this way we can isolate our needs. Please see this before starting.

How do we get class names?
Usually, when we talk about a class name, we mean the name we declare after the class keyword. But to the compiler, a class is represented with its namespace or package information. That is why the term "class name" can mean different kinds of information. Through reflection we get:

Full name / getName() = the full name of the class including package info, like java.util.ArrayList; for nested and anonymous classes it uses a $ separator (often $<number> for anonymous classes).
Canonical name / getCanonicalName() = the name as it would appear in source code or an import statement; anonymous and local classes have no canonical name.
Simple name / getSimpleName() = only the name itself (what we declare with the class keyword), like MyClass.
Type name / getTypeName() = an informative name for the type; for an ordinary class it is the same as the full name.
toString() = the class (or interface) keyword plus the full name, like "class <full name of the class>".
toGenericString() = the modifiers, the class keyword, the full name and any type parameters, like "<access modifier> class <full name of the class>".

I am using a single String variable and appending each piece of information with a separate method call.
public String getAllTypesOfClassNames(){
    String allTypeNames;
    allTypeNames = "Name : "+ myClass.getName()+"\n";
    allTypeNames += "Canonical Name : "+ myClass.getCanonicalName()+"\n";
    allTypeNames += "Simple Name : "+ myClass.getSimpleName()+"\n";
    allTypeNames += "Type Name : "+ myClass.getTypeName()+"\n";
    allTypeNames += "To String Name : "+ myClass.toString()+"\n";
    allTypeNames += "To Generic String Name : "+ myClass.toGenericString();
    return allTypeNames;
}

How do we get a class's signer names?
What is a signer? I will be very brief, as this is not a signer-specific post. When we sign a JAR file (which is essentially a class archive), the Java signing tool goes through every class inside the JAR and signs it with specific signing information. Each signing has a purpose, which is why a class may be signed by multiple signers with different signatures.

Through reflection we can get all the signer names for a particular class. A signer is represented as an Object by default, so I call its toString() method to retrieve the information.
public String[] getAllSignerNames(){
    Object[] signers = myClass.getSigners();       // may be null if the class is not signed
    if(signers == null){
        return new String[0];
    }
    String[] names = new String[signers.length];   // allocate the array before filling it
    int x = 0;
    for(Object aSigner : signers){
        names[x] = aSigner.toString();
        x++;
    }
    return names;
}

Thanks..:)