How to perform client-side web performance testing with JMeter?

In this article we will see how we can do client-side performance testing using JMeter plugins.

I will be using the JMeter WebDriver plugin. Before starting this topic, please get a basic idea of what client-side performance testing is from my previous post. So, let's start with

Installation:

1. Install JMeter and the plugins from the links below, following this post.
-> JMeter
-> Plugins (you may choose only the WebDriver plugins, but I prefer all of them)

2. Download Selenium Server from here (you need Java to run it).

3. Download Firefox 26 from the Mozilla archive. Why 26? Because the JMeter WebDriver plugin supports Firefox 26; here is the link with the supported-version details.
Note: This can be tricky if you have an updated Firefox version installed. In that case you can do what I did:
-> Disable Firefox update checking.
-> Install Firefox 26 into a new folder under a separate directory name.

-> When you run it for the first time, just cancel the initial update process. As you have disabled updates in your regular Firefox, make sure the update settings are disabled in this Firefox 26 too.

Note: This part is a little tricky; I have written a separate post on resolving it.
For JMeter execution, remote or local, it is better to have only one Firefox (version 26) with auto-update disabled, which minimizes complexity during test execution.

4. Put Firefox 26 and the Selenium Server on your PATH variable. To check, type firefox on the command line and run it; you should see Firefox 26 launch on the desktop.


5. Setting up JMeter: Usually we do not need anything extra for the WebDriver sampler. But since we will want to debug, we can add the following property to the user.properties file:

webdriver.sampleresult_class=true

This enables sub-sampling, which is useful for debugging.
Let me explain how it works: the JMeter WebDriver sampler is an extension of the HTTP sampler, not an alternative to it, with a script editor added. When it runs, it actually drives Firefox through WebDriver. That means it sends its instructions to WebDriver, and WebDriver does everything. Now, you might be wondering how the code reaches WebDriver. Like JMeter's other scripting support, the script runs as external code following the JSR-223 specification; by default it is JavaScript execution. And, as you will see, it looks just like WebDriver Java code with some small modifications for the JMeter adoption. I will provide a separate blog post on the coding.
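To make that concrete, the script editor takes plain JavaScript in which the WDS object wraps the underlying WebDriver instance, so the calls mirror the WebDriver Java API almost one-to-one (a tiny sketch, not a full script; the URL is a placeholder):

WDS.browser.get('http://example.com')                    // Java: driver.get(...)
WDS.log.info('Now at ' + WDS.browser.getCurrentUrl())    // logged through JMeter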

And after you write your steps as a WebDriver script, use listeners to get the timings. As with other samplers, you use listeners to debug sensibly.
Browser support:
Just follow this link, which lists the configurable browser names supported by the WebDriver sampler. You can also see this from JMeter:

Time Measurement:

The WebDriver sampler measures time from this line of code
WDS.sampleResult.sampleStart()
to this line of code
WDS.sampleResult.sampleEnd()

So, for debugging, we need sub-samples, which are shown as children of the main sample. For this purpose, we need to set the sampleresult_class property to true, as above. After activation, we can do sub-sampling like this (the URL and element locators below are placeholders):

var pkg = JavaImporter(org.openqa.selenium)

WDS.sampleResult.sampleStart()
WDS.browser.get('http://example.com/login')                          // browse to a URL
WDS.browser.findElement(pkg.By.name('username')).sendKeys('user')    // add user name
WDS.browser.findElement(pkg.By.name('password')).sendKeys('secret')  // and password
WDS.sampleResult.subSampleStart('Log In request')
WDS.browser.findElement(pkg.By.id('login')).click()                  // click Log In
WDS.sampleResult.subSampleEnd(true)
WDS.log.info('Title after login: ' + WDS.browser.getTitle())         // check title
// ...do some other steps...
WDS.sampleResult.sampleEnd()
In the View Results Tree listener, you can see that the main sample contains a sub-sample named 'Log In request'. And a main sample can have multiple sub-samples. That means we can read each timing separately from the results tree.

Note that sub-samples are not shown separately in tabular listeners or graphs.
And if we need to measure a particular transaction, we can split a single test across multiple WebDriver samplers: one sampler for log-in, one for doing some work on the home page, one for messaging. This way, we can see the results in the reports. Usually, each business transaction is measured in a separate sampler, while the detailed steps are sub-sampled.

Writing the first script:

To write a WebDriver sampler script, you add a WebDriver Sampler (Sampler -> WebDriver Sampler) together with a browser (driver) configuration. See the image in the browser support section for the driver configuration elements.
I will write a separate post on how to write a WebDriver test script with a full example; below is a minimal sketch to start from, and you can find some nice guidelines on the WebDriver sampler wiki.
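A minimal first script (a sketch, assuming the sampler's default JavaScript language setting; the URL is a placeholder):

var pkg = JavaImporter(org.openqa.selenium)        // gives access to By, Keys, etc.

WDS.sampleResult.sampleStart()                     // start the timer
WDS.browser.get('http://example.com')              // browse to the page under test
WDS.log.info('Title: ' + WDS.browser.getTitle())   // verify something and log it
WDS.sampleResult.sampleEnd()                       // stop the timer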

Test Planning:

As we know from my previous post on client-side performance testing, this test should run with a single user or thread. As the JMeter sampler drives a real browser through WebDriver, it has a particular hardware requirement: it occupies a single thread of a processor. That means if you want to run a WebDriver sampler, you need at least a 2-core CPU. Why 2 cores? The other one is for JMeter itself. So, if you have an 8-core CPU, you can run only 7 threads of WebDriver samplers. For testing, we therefore have to add a separate thread group or test block for the WebDriver sampler.

We will run it to measure client execution time:
1. When there is not much user load.
2. When there is average load on the server.
3. When there is high load (considered peak load).

Sometimes it is also better to test:
1. Beyond capacity, where errors can occur, or just after an error condition.
2. As continuous performance testing. Usually, people run a selected regression test with JMeter daily or weekly.

Again, the formula is simple: one CPU thread for a single client performance test.

And you run it simply like any other JMeter test, that's all. More details will come on my blog with an example.

Thanks …:)

What is client-side performance testing in a client-server application?

In this article we are going to learn about client-side performance testing. And, following engineering practice, we will look at client-side analysis and the tools that help.

Some 8-10 years ago, the web was totally different from now. At that time there were fewer types of clients and less client-side processing. But nowadays, especially since the Web 2.0 boom, the client has become much smarter, with more functionality and constant innovation through new technology. And we have seen hardware become cheaper, so clients have started using local hardware instead of depending fully on the server as in the old days. So, in a performance engineering context, if we only do performance testing and monitoring of the server, it does not make much sense: we will miss a lot of performance issues rooted in client functionality. To measure the overall performance situation, it has become necessary to test both the server and the client of any client-server application. So,

What is Client side performance testing?

When we say client side, it means everything involving client-side activity. So when we do performance testing of an application based on its client activity, that is client-side performance testing. Example: if you consider a web application, client-side performance will include the server execution time plus client-side browser rendering, JS/AJAX calls, socket responses, service data population, etc. So it can differ based on the operating system, browser version, environment settings, firewall and antivirus, other higher-priority processes running, and of course user activity.

So, the main targets of client-side performance testing are:

1. Measuring actual client timing for particular scenarios. It can be grouped into business transactions, or measured with separate requests for a single user.

2. Measuring single-user time under different load and stress scenarios. This is actually part of usability, but it is included as a performance test activity.

3. Observing application behavior when a server is down. Especially under application stress, when one or more servers go down, what is the situation? This can be critical for data integrity. I have tested server backup time after a server went down.
This particular type is fully based on requirements. In our current project, for example, we run this test every day to see the progress: we run a regression test script (critical items only) so that we can see where our business transaction timing is going.
As you know from my previous post on the types of performance testing, this leads us to two basic parts: performance measurement, and performance monitoring or debugging.

Client-side performance measurement:

This part is tricky. In the performance world, when we say performance tools, it usually refers to server-side performance measurement tools like LoadRunner, JMeter, etc. So what about client-side performance? As it was not popular before, it was mostly done manually. It is still one of the practices to sit with a stopwatch, exercise the application's critical functionality, and measure it; I remember doing that back in 2008. It is handy: no automation needed, and not much technical knowledge required. But it is manual time measurement, and humans are not as precise as machines at measuring time, so it has error. So there should be a tool.

Usually, before the JMeter plugins, there was no tool worth mentioning for this kind of web application performance testing. We can use the JMeter WebDriver plugin to perform the same actions a human does and measure the time accurately. And we can do the same steps programmatically using browser simulation, like:

1. Selenium WebDriver running in Java/C#/Python/Ruby/Node.js with any supported test runner that measures time (see the sketch after this list).
2. Ruby + Watir + Gherkin + Cucumber
3. Java Robot simulation
4. Java/C# + Robot Framework
5. Native action simulation tools/scripts (AutoIt, Sikuli)
6. Robotium / MonkeyRunner / Appium for mobile client performance measurement

So, as JMeter has this WebDriver sampler among its plugins, we can use that. I will provide a separate post with a WebDriver sampler example.
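To illustrate the first option above, a timing run outside JMeter could look like this minimal Node.js sketch (a sketch only: it assumes the selenium-webdriver npm package, a local Firefox, and a placeholder URL):

var webdriver = require('selenium-webdriver')   // npm install selenium-webdriver

var driver = new webdriver.Builder().forBrowser('firefox').build()
var start = Date.now()                          // the stopwatch a human would hold
driver.get('http://example.com')                // the step we want to measure
  .then(function () {
    console.log('Page load took ' + (Date.now() - start) + ' ms')
  })
  .then(function () { return driver.quit() })   // always release the browser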

Client-side performance monitoring:

This means we have to have monitoring for our application as well as the client's resources.
Every operating system, Windows or Linux, has its own system-internal tools to monitor resources. And, as an open source JMeter consultant, I should say we can use PerfMon as a JMeter plugin to monitor the client side (you may say localhost).

Now, client-side application monitoring really depends on the type of application client. If it is a TCP client, you have to use a TCP monitoring tool on the port your application works on.

Let's see some monitoring and analysis tools for the web application protocol, HTTP(S).

1. Browser-based tools: Most modern browsers have built-in tools; in IE or Chrome, press F12 to see them (they follow the W3C Navigation Timing standard; a console snippet after this list shows the raw values).
-> I like YSlow with Firebug in Firefox (first install Firebug, then YSlow).
-> Most popular: PageSpeed by Google.
-> Tools from Chrome extensions, like Easy Website Optimizer, the developer tools, and for REST web services, REST Console or Advanced REST Client, etc.

2. Third-party websites:
-> GTmetrix is one of my favorites.
-> WebPageTest is very useful.

3. Proxies: For traffic monitoring I use:
-> YATT with WinPcap
-> MaxQ (not focused on monitoring, but you can use it for that)
-> IEWatch

4. Third-party tools:
-> DynaTrace
-> SolarWinds
-> AppDynamics
-> Nagios (free)
-> MS Message Analyzer
-> For web service testing, SoapUI
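By the way, since the browser-based tools above follow the W3C Navigation Timing standard, you can read the same raw numbers yourself: press F12 and paste this small snippet into the console of any loaded page (a sketch; property support varies slightly by browser):

var t = performance.timing                      // W3C Navigation Timing values
console.log('DNS lookup : ' + (t.domainLookupEnd - t.domainLookupStart) + ' ms')
console.log('TCP connect: ' + (t.connectEnd - t.connectStart) + ' ms')
console.log('Server time: ' + (t.responseStart - t.requestStart) + ' ms')
console.log('DOM ready  : ' + (t.domContentLoadedEventEnd - t.navigationStart) + ' ms')
console.log('Full load  : ' + (t.loadEventEnd - t.navigationStart) + ' ms')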

And more and more... :)
Paid tools are good, but I guess you can use a skilled person with a set of other tools rather than paying for them... :)

Helper tools: In different web architectures, data comes to the client in different formats. So you should have:
1. A Base64 decoder that supports different character sets.
2. A URL decoder.
3. A decompression tool.
4. File/character format converters.

Now we have the tools, but before using them we need to define what to monitor. Usually, for an application, we monitor:
1. Application rendering time
2. Specific JS/AJAX request/response/processing time
3. User-dependent request time
4. Client-side validation time
5. Loading time for scripts/styles/dynamic content
6. Total and request-specific data sent and received
7. How requests are queued and processed (the behavior)
8. Any exception-based (server/client) function or behavior
9. Business transaction or particular request time

And for the client's resources:
1. The CPU/cache/memory/disk/IO/bandwidth occupied by the browser and the application
2. If the application interacts with any other service or application, we need to monitor that too.

For example, our application once used an export function that opened the data with MS Excel, and at one point Excel crashed because our application occupied so much memory that Excel did not have enough left to load the large data set.

Test plan for client-side execution:
Usually, a separate thread or user is used to run the client-side performance test to get the timing. It is not like a server-side script that runs thousands of users in parallel; it is specifically made for single-user execution time on specific scenarios.

So, in the next post we are going to test a sample application using the JMeter WebDriver sampler and measure the time.
Thanks..:)

What is performance reporting? An example with a web application.

In this post we are going to learn what needs to be included in a report of performance testing, results, and analysis. This is essentially the consolidation of the performance results: the document that communicates with the different types of people interested in the performance testing.

What is a performance report?
After a successful performance test execution, we testers have to provide a report based on the performance results. I am referring to a report, not the raw results. A report should be fully formatted based on the requirements and targeted at a specific person or group of people.
First of all, we need to know what reporting means. I have seen lots of performance reports full of technological terms and numbers. Yes, that is what we performance testers do.

But what I have found is that not everyone is good with those numbers. Actually, I think those numbers do not mean anything without a certain context. So how can we get context? It's not that hard. Performance testing concerns certain types of people in the group, and a good performance engineer should add value to those results by analyzing them against the different goals and requirements. I have a separate blog post on goals and requirements; please read it before this one.

So, after getting the requirements, we have to make the reports. I will give an example of a web application (financial domain) for better understanding. So, let's discuss the steps.

Step 1: Analyze the results:

This is fundamental: we have to analyze the results. I believe analysis is as important as the performance test run itself; it is what gives the test execution real value and can pinpoint the problems. And a performance tester should have that capability; he has to analyze the results and define the issues that may or may not be there.
So, based on the goals and requirements, we gather information and categorize it.
Example: we have financial software that performs transactions (debit, credit) with a lot of approval steps. So we have a lot of people interested in those transaction results (business, marketing, and the software team). We therefore categorize the test results into those groupings and show each report only to the related group.

Next, we match the requirements with the grouped data:
-What were the goals for this kind of people?
-What were the primary requirements and targets?
-What is the actual value, and how far are we from what was expected?
-What are the impacts? Impact on revenue, client feedback, the company's reputation, interaction with other systems, etc.
-What are the causes? Architectural problems, database problems, application problems, human resource skill problems, process problems, resource problems, deployment problems, etc.
-What is the evidence for those causes? What are the related values? How much can the project tolerate, and how much can it not? We might need to use profilers along with the performance test tools, like ANTS Memory Profiler, YourKit profiler, or even a framework's built-in profiler (on the language platform you are using).
-What can be done to resolve them?
(This part is tricky; a new performance tester may not come up with this, but he can talk to his system architect or lead to find a solution. That is what I am trying to do in my current project: I propose possible solutions, discuss them with the dev lead, and we sit together and run some experiments to suggest the best solution that matches the existing architecture. By the way, there can be issues with the architecture itself. I saw this in 2009, when I was involved with a product that could support only a certain number of users (avoiding specifics); to scale up the software, our team had to change the full architecture of the application to support almost 400% more.)
So, we have done some analysis, and we can do a lot more. Usually, if you are analyzing performance results for a product for the first time, you will find a lot of issues and need a large amount of time; but as time goes on, the issues get solved gradually.

Step 2: Group data:

After the analysis, we need to select which data should go to each group. This is kind of tricky if you do not know the interested groups. The first step is to know those people and have some idea of how to communicate with them. I think a performance report is just a communication format for your performance results, so you have to be very careful with it, and it should be based on the goals. Let me give an example from a project (a web app). We had three kinds of people interested in the performance testing.

1. Business users or clients: the real users who interact with the system. They do the trading themselves, so for them the high priority is how fast we can complete a business transaction. A transaction includes multiple steps and involves approval, so: how fast can we do all of that? If we add anything like throughput, request size, or bandwidth measures, it will not get as much attention as the total time of each business transaction. And as they are paying, we have to ensure that we do not get slower with each build/release; and if we do, there should be a proper explanation from the development team (a new feature, a DB migration, added security, etc.).

2. Product stakeholders (CxOs, BAs, and managers): These people are not like the business users; they know the basics of the inner system components, but most of the time I have found they want to avoid technical details. They are interested in the same values as the users, but they do not only need the time values; they also want to know more detail about what is causing them. And if you include how to resolve those issues at minimal cost (with a cost estimate), believe me, you will be appreciated: if you add those work values and your findings, these people will take much more interest in the performance reports.

3. Development team (DEVs and QAs): These are the people from the development team, and we used to attach the raw results to their report. In their report, things are a little different. We start with the problems and explain them, provide evidence in a reproducible way (and even teach them how to use the performance tools), and give some guidance on how to solve them: best practices, code samples, tricks people have used so far. And as graphs, we give them the detailed timings: throughput, size, hits to the server, server processing time, DB request time, individual POST/GET request times, and resource timings with the expected measurements.

Step 3: Arrange the report: Drive Shared Example

Like all other reports, a performance report typically contains the following (I am listing the parts common to all groups):

-> A first page with a heading: the product name and "performance testing".
-> A page for the table of contents.
-> An introduction: this keeps people in context. Add the objective of the report in 2-3 sentences.
-> Summary: it should contain the final outcomes. Pass/fail/improvement areas.
-> Test objectives: why are we testing? This should contain the requirements in bullet format.
-> Test strategy: this contains the plan, which tool was used, and what tools were used for analysis or debugging.
-> Test overview: how it was tested; what the conditions were during testing, what was monitored, and what was observed.
-> Test scenarios: the scenarios involved in test execution (they may be split based on the group of users).
-> Test conditions: test conditions based on the tool, environment, application settings, and configuration, including delays.
-> Load profile: how users generated load during the test. JMeter, LoadRunner, and all the other tools provide this; you can take a screenshot of the graph and add it here. For example, 100 users for 1 hour, then 500 users for 3 hours, and so on, with the graph.
-> KPI (optional): the Key Performance Indicator. Based on the requirements, each group needs a value that indicates the performance situation of the product; usually it drives future investment and activity. I will provide a separate post on how to make a KPI from user requirements in case you don't have such a measurement.
-> Results: tabular results, common to every tool. JMeter provides Summary Results and the Synthesis Report. Sometimes this part is left out to hide detailed results from end users/business users; we used to hide them.
-> Result graphs: all graphs based on the tabular results. We should be very careful in this area and put only related graphs here. Look at the goals and requirements, then decide; that is, put the context with each graph, and ask yourself why you are using it.
For example, in our project we included only the transaction comparison graph for business users.
But for stakeholders, we added throughput/min, hits/sec, error% over time and users, etc.
And for developers, we included almost all the graph types that JMeter listeners provide, and graphs referencing raw requests rather than business transactions, so that each step can be shown.

Note:
-> Sometimes we might have to change the unit of the results for a better graph, like throughput per second to per minute; base it on the range of your values.
-> Please be careful to add at least one line describing each graph before putting it in the report.

-> Product analysis: This part should be shown only to the DEV/QA team and, if they are interested, to stakeholders. It is a very important part from the project's perspective. Put all the necessary findings from your analysis here, specifying the issues with evidence.
This might include a separate detailed report.
It might have detailed screenshots from different tools.

-> Suggestions: This part should be per group. Suggestions in the business-user report should be worded generically, at best referencing the UI. For stakeholders, they can reference product modules or resources (like the DB server or the app server). But for the DEV team, they should be pinpointed solutions or best practices. This whole area depends on the project context, so use it sensibly, and try to be more generic and non-technical in language (I have learned this the hard way...).

-> Conclusion: This section should contain 3-4 sentences defining the performance status and the things to consider for testing in the future.

-> Appendix: This section should have detailed definitions of the terms used in the whole report. Usually throughput, hits per second, transaction, average, min, median, the 90% line, etc. should be defined here (the sketch below shows how those values are computed).
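For reference, here is a small sketch (with made-up sample times and simple index conventions) of how those aggregate values come out of the raw sample timings; the 90% line is the 90th percentile of the sorted times:

function summarize(samples) {                    // samples: response times in ms
  var sorted = samples.slice().sort(function (a, b) { return a - b })
  var sum = 0
  for (var i = 0; i < sorted.length; i++) { sum += sorted[i] }
  return {
    min: sorted[0],
    max: sorted[sorted.length - 1],
    average: sum / sorted.length,
    median: sorted[Math.floor(sorted.length / 2)],
    line90: sorted[Math.ceil(sorted.length * 0.9) - 1]   // the 90% line
  }
}

console.log(summarize([120, 150, 180, 200, 230, 300, 450, 600, 700, 1200]))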

Step 4: Report format:

Performance reports can be provided in PDF, DOC, Excel, or even PowerPoint format. It really depends on your company or working group. If you don't have any standards, just follow any other report of your project. It is not that important, unless your group maintains a system that reads the report and shows it to other people; in that case you have to be specific about the file specification. I personally like the PDF edition.

Notes:
-> Sometimes we need a section with a summary of the report document itself.
-> Some reports might have a section listing who will see the report.
-> Some reports may have fewer sections than this example; just make sure yours follows your requirements.

So, produce a performance report with context, and it will help the performance test activity get priority with management.

Thanks..:)

What are Performance requirements?

In this article we are going to see the details of performance requirements: what they are and how to deal with them.

What are performance requirements?
Like all testing, we need requirements before the performance activity. So what is a performance requirement? What does it look like? How do we deal with it?
As we know, performance testing is all about time and resources, so performance requirements will be full of time and resource requirements related to the application. Basically, they relate to the following questions:
How many users?
How fast?

And based on that, we need to see:
What are the bottlenecks?
How can we solve them?

Before going deeper into requirements, please see the performance goals in this post; it is necessary for getting the context.

As we have seen from my previous post about performance test types, we get requirements based on the application and the infrastructure, and we need to relate them to time. Usually, time is measured in milliseconds and size in bytes. Based on that, performance requirements fall into the following types.

Application requirements:
Server side:
-> Number of users the application can handle concurrently.
-> Maximum request/transaction processing time.
-> Maximum memory/CPU/disk space usage per request/transaction.
-> Minimum memory/storage/disk space required for running the supported number of users.
-> Application (request/transaction) behavior in case of errors or extreme peaks.

Client side:
-> Request/transaction processing time. This involves the particular request time plus browser rendering time (for a browser-based application), native app rendering time (for a mobile or desktop application), or even environment rendering (if it runs in a custom environment/OS).
-> Maximum memory/CPU/disk space usage per request/transaction. Usually this depends on where the application is running; there should be a specification of the environment, e.g. for a web app: which OS, which browser version, with what settings, under what conditions (antivirus or monitoring tools running).
-> Minimum client environment specification and its verification.
-> Application (request/transaction) behavior in errors or extreme peaks.

And we also need application monitoring for all of those, on the server side and the client side. This may involve application monitoring and debugging tools as well as framework-based tools and environment-based tools (OS tools, browser tools, network tools).

And please define those monitoring requirements before the project starts. They are also based on the goals: if you are targeting a server upgrade, have special monitoring on the server. I once had the chance to work with performance testing requirements that involved monitoring activity; those performance tests were designed to monitor not only the application but also the infrastructure, with specifications for network/bandwidth consumption and resource consumption on the server and the client (PC/tablet/mobile). Example:

Server performance:
-> Maximum CPU/memory/storage/virtual memory usage during certain scenarios.
-> Fault/error recovery time (including reboot or initialization time).
-> Resource temperature monitoring during extreme conditions (full load and climbing).
-> GC and thread conditions at extreme peak.
-> Maximum log size for app logs, DB logs, and web server logs.

Client performance:
-> Maximum CPU/memory/storage/virtual memory usage during certain scenarios.
-> For web applications, browser behavior/time for responses (apps for mobile).
-> Device temperature monitoring during heavy data/complex operations (for mobile/tablet/device).
-> GC and thread conditions at extreme peak.
-> Maximum client log size and limit.
-> For web apps, network and application resource (file/request) monitoring, especially the time and resources needed on the client.
-> Nowadays web applications do a lot of processing on the client side, so depending on the application's architecture, we need to trace that in the client environment.

Network performance:
-> Maximum/minimum bandwidth per application/transaction/request.
-> Behavior during extreme peaks.
-> Fault or error recovery time and behavior.
-> As this involves network devices, tracing their behavior is usually part of the specification. (For example, we had a hub and needed to test its temperature during multiple requests to the server connected to it. Usually this becomes complex when we send large data over the network through the lower network layers.)

Moreover, there should be performance requirements that depend on the architecture. For example, we had an ASP.NET application that followed the default view state and callback policy, which became the main cause of the application's slowness. We had to measure the default timing for a particular transaction, then test it with different browsers to verify we were going in the right direction, and we also measured the resources and time needed on the client for that particular request. Then a new version was released with a modified view state implementation, and we did the whole thing again to prove that the application performed well and consumed fewer resources while running on the client.

Example: We had a small SOAP service. It took an XML document and, based on the XML data, processed it and returned XML. The client used to sell this service to other vendors. So our requirements were like:
-> Maximum server processing time for a particular XML type (we had different types): 750 ms.
-> Our server should serve 120 concurrent requests, and at least 2000 users in a 1-hour block, for a particular XML type.
-> A client should not see more than 1200 ms response time in any case (request and response time = network time + server processing time).
-> Find the maximum number of users our server supports.
-> Past the maximum number of users, in extreme conditions, the server should not die; it should return a server-busy message as XML.
-> During high or extreme load, clients should not get any 4xx or 5xx messages.
Note: As our server needs authentication tokens, we should not have any security issues during high or extreme load.
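As a quick illustration, the 1200 ms client budget could be checked with a script like this minimal Node.js sketch (the endpoint and payload are hypothetical; it assumes a recent Node.js where fetch is built in):

var xml = '<request type="A">...</request>'      // hypothetical payload

async function check() {
  var start = Date.now()
  var res = await fetch('https://example.com/soap-endpoint', {   // hypothetical URL
    method: 'POST',
    headers: { 'Content-Type': 'text/xml' },
    body: xml
  })
  var elapsed = Date.now() - start               // network time + server process time
  console.log('Status ' + res.status + ' in ' + elapsed + ' ms')
  if (elapsed > 1200) {
    console.log('FAIL: response exceeded the 1200 ms requirement')
  }
}

check()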

Analysis & reporting requirements:
In real testing, there are also requirements related to analysis and reporting. They are mainly goal-based requirements, and I think it is mandatory to have them so that we can find bugs during the analysis phase.

Analysis should be fully based on the performance test goals, so that we can pinpoint the issues, bugs, or improvements and provide well-formatted reports for the different types of stakeholders. This is important because performance testing is not easy for everyone to understand; we need to format the results and issues for stakeholders, developers, other QA members, business people, the CTO/CEO/COO, etc.

User stories: In agile projects, the performance tester should come up with performance scenarios. This is important because it makes them easy for the other members of the team to understand. They are also very easy to write: just follow the standard user story rules. From my example above, one story could be: a client (mobile/web) should be able to process a certain type of XML in 1200 ms while the server is processing 80 other requests at 70% resource load.

And if you don't have requirements, use my previous post on goals together with this one to come up with the requirements for your project.

So, in summary (I am not going deeper into client-server specifics, just the principles):

The MindMup link (open with MindMup)
This is an incremental post; I will add more later on. Please comment below if you have more ideas.

Thanks … :)

Why Performance Test? Performance test goals.

In this article we are going to see why we do performance tests and what the main performance test goals are.

It is really depressing that very few performance test projects start with goals (as far as I have seen); the goals get defined as the project gradually moves forward. Generally, having goals up front saves time: since we have a lot of parameters to take care of in performance testing (I will make a separate post on this), we should have goals. So,

Why performance testing? What are the goals:
A performance goal refers to what exactly we want to accomplish from the performance testing activity: what we want to measure or see. This measurement has different perspectives, so let me come up with real examples (easier to understand).
Usually, the different performance test goals come from the different groups of people in a team, like:

Business goals (business people are involved):
-> We want to jump up revenue, so what can we do to improve our performance? (the major customer complaint)
-> We have an upcoming sales event/cycle. How many target clients can we drive to it? What point should we not exceed?
-> We might get a good mention by important people in the media on a particular date. In which areas do we need to take the initiative to scale up the application?
-> Specifically for an online shopping application: if a Black Friday or Cyber Monday event is coming, which areas need care to keep the site up and running for selling?
-> Similar things for a ticket booking application in the holiday season or around any major event (like a concert or sports).
-> Marketing might need to know the application's tolerance: how big an offer can they provide to customers, and when?
-> How much budget do we need to scale our application for a certain number of users?
-> How quickly are we back in business (functioning) if we crash?
 
Technical goals (DEVs & QAs):
-> How fast is our application? How much data are we using?
-> We want performance testing to measure our application's resource consumption and timings.
-> We are getting a lot of user/QA feedback on certain functionality/transactions, so we need performance tests on those to debug the bottlenecks.
-> How good is our application recovery process?
-> We want a performance review to see how our application behaves when we have major request processing.
-> How scalable is our application? Where does it fail?
-> We are migrating/changing our application architecture, and we want to know the benchmarks.
-> We are applying a new language/framework/platform to make the application faster. Are we really faster than before?
-> We provide continuous delivery every week; so, performance-wise, where are we? Are we improving or going down?
-> In Scrum: let's have a performance cycle, and before that, let's have a performance test.
 
Usability goals (QAs, BAs, UXs):
-> How many users do we support under usual behavior?
-> What is the behavior when we are under high or extreme load?
-> We want to see how our customers feel while using the application.
-> What critical issues may occur at the peak point?
-> What prevention plans need to be taken for when users face such bottlenecks? (risk planning)
-> How well does our application handle error conditions under high load? How is the recovery?
 
Operational goals (Admins, Net Admins, Ops):
-> Does our infrastructure support the scalability our application provides?
-> We want performance testing to measure our application's resource consumption and timings from an operations/admin perspective.
-> We are moving our whole physical resource system (DB/app server change, or a move to the cloud); what is the benchmark, and are we really improving?
-> We are trying to add more resources to the system; does our application's performance improve when we add them, or is it just a waste of money?
-> What harm is the application doing to our resources while running? (I faced this in practice with DB logs that grew so big while the app was running that they overflowed the C drive and the application crashed.)
-> What should the recovery plans be? Which areas need care?
 
This is an incremental post; I will add more as soon as I can. These are some primary ideas that I have worked with.
 
If you think there are more, please feel free to comment; I will add them with your name.
 
Thanks ..:)