2013-04-26

Outsourcing Software Development to Bangladesh

Today's article is a bit different from my usual QA-related articles. Today I will make a case for outsourcing software projects to Bangladesh.

Bangladesh is fast becoming a powerhouse in software outsourcing. Over the last few years, the volume of offshored/outsourced software projects has almost doubled every year. More than 20,000 people are now directly employed in software and IT services, and a large portion of that workforce is exporting services to European, North American and East Asian clients. We regularly participate in the Netherlands Trade Fair and CeBIT.

The Bangladeshi freelancer community further supplemented IT exports by close to US$7 million in 2010. Bangladesh consistently appears in the top 3 positions on freelancing platforms (oDesk, Elance, Freelancer, etc.). Here is an interesting article about this phenomenon.

Bangladesh has a huge supply of young, trained, skilled and English-speaking resources, available at costs almost 40% lower than in countries like India or the Philippines. If other costs like infrastructure, support and management are considered, the price advantage over those competing outsourcing destinations becomes even more prominent. The cost of living in India (for example) is fast becoming comparable to countries like Malaysia or Thailand, so India's outsourcing cost advantage is shrinking every year. This is where Bangladesh shines: with very similar educational standards and processes, the Bangladeshi resource pool is not yet fully tapped, and it is a strong candidate to become a major software outsourcing destination, much like India back in the 90s. KPMG published a strategic whitepaper some time back highlighting this potential, and it is a very good read to get an idea of what is happening.

Apart from the freelancing community, the more formal software organizations are also maturing in the field. A very good example is my previous employer, Kaz Software, which has been doing software development outsourcing from Bangladesh since 2004. More such companies are being set up with the proper development process and management required for outsourcing. The creative software development culture of such organizations is also noteworthy.
Recently, the software association of Bangladesh, BASIS (Bangladesh Association of Software and Information Services), recognized top freelancers and freelancing companies with its Outsourcing Award 2013, and Kaz Software is one of the winners.

So we technology workers are looking at a great future for software outsourcing in Bangladesh. I am adding a minimum rate sheet to give some idea about the costing. [Sources: different freelancing platforms]. The rates will vary from company to company.

2013-04-15

How to make reports (and comments) in JMeter?

In this article we are going to get an idea of how to make a report, and how to comment on it, based on JMeter listener results. For a primer on JMeter listeners, see this.

We have to remember that a performance report, or any report, may be sent to any level of user and can have usability or comprehension issues. Here, the level of a user is defined by technical knowledge. I have worked on projects where performance reports were sent to technical, non-technical, managerial and business development people.
So, we have to write the comments along with the report in such a way that every kind of person (mentioned before) has at least some understanding of the report. You may ask why this is important:
- It helps the whole software team inside a company understand the report, which raises awareness of performance.
- It helps the SQA team set standards based on target clients.
- It helps management set up the timeline and milestones of the project.
- It helps the support team plan their efforts and answer client feedback.
- It helps stakeholders know the actual stability of the product in production.
[A performance test is normally performed after development, sometimes after beta.]

Normally we keep only a few listeners in the JMeter test plan while running, as most of them consume significant resources (memory and CPU). It is best practice to run JMeter without listeners and save the results to a CSV file; see the command sketch below. We will mostly use two listeners: Summary Report and Aggregate Report. After getting the results from these two listeners, we save them as CSV files, process them into a report, and then add comments.
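For example, a typical non-GUI run looks like this (the file names testplan.jmx and results.csv are placeholders for your own files):

jmeter -n -t testplan.jmx -l results.csv

Here -n runs JMeter in non-GUI mode, -t points to the test plan, and -l tells JMeter where to save the sample results; you can then open the CSV in a spreadsheet to build the report.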
Preparing reports: From the Summary Report and Aggregate Report, we will get these attributes:

Throughput: (requests/second; sometimes shown as requests/min or requests/hour, but when you save the CSV it is always in requests/second)
What is this? It indicates how many requests per second the server handled for JMeter. The more throughput your web pages have, the more responsive and faster they are. It includes any intervals between samples.
Sometimes we might get a higher throughput because a cache server is serving the same data again and again. To overcome this, try to avoid static data while requesting.
This is an ideal candidate for reporting.
Throughput = (number of requests) / (total time).
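A quick worked example (the numbers are made up for illustration): if a test sends 3,000 requests and the last sample finishes 120 seconds after the first one started, throughput = 3000 / 120 = 25 requests/second.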

Average: (milliseconds) It indicates the average time needed for one request among the samples JMeter measured. Example: suppose we are testing a log-in request under a 100-user load, and JMeter's listener recorded results for 86 of those users; the average time is the total time taken by those 86 samples divided by 86 (time per sample).
This is not an ideal candidate for reporting because the starting and ending threads often need some extra time, so the average may not represent the typical response time.

Sample: (number) It represents the number of sample requests measured by a JMeter listener. During execution it is normal for this to be less than the number of threads. Example: I am testing 100 users, but a listener could only record results for 86 samples out of that 100. So it defines the number of threads (samples) under measurement.
As it does not represent any state of the web pages, we can leave it out of the report. If asked how many samples were used for measurement, it can be mentioned.

Min: (milliseconds) The shortest time taken by a sample among the samples with the same name. It can be left out of the report.

Max: (milliseconds) The longest time taken by a sample among the samples with the same name. It is one of the ideal candidates for the report.

Std. Dev: (milliseconds) The standard deviation of the sample elapsed times. JMeter calculates the population standard deviation (the same as the STDEVP function in a spreadsheet), not the sample standard deviation.
Depending on the client, it can be mentioned in the report; usually it is not.
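A small made-up example of the difference: for three elapsed times of 100 ms, 200 ms and 300 ms, the mean is 200 ms. The population standard deviation (STDEVP) is sqrt((100^2 + 0^2 + 100^2) / 3) ≈ 81.6 ms, while the sample standard deviation (STDEV) divides by 2 instead of 3 and gives 100 ms; JMeter reports the former.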

Error: (%) Percent of requests with errors.
It means that out of 100 samples, some failed: they returned a non-success response (not HTTP 200), did not respond, or failed an assertion (for example, a Duration Assertion when a sample takes more time than it should).
This is an ideal candidate for reporting as it represents errors.
Sometimes we may see errors (non-HTTP response codes) caused by a non-responsive site or by Apache/Java socket exceptions on the load generator side rather than by the application itself, so we should check the log before mentioning this in the report.

Bandwidth: (KB/sec) The throughput calculated in kilobytes per second.
Normally it is mentioned beside throughput in the report. It is optional; it just adds visibility alongside throughput.

Size: (avg. bytes) Average size of the sample responses (shown in bytes). It can be mentioned in the report but is not mandatory. It is useful when refactoring the solution, as it shows which requests are heavy.

Median: (milliseconds) It represents the time in the middle of the set of results: 50% of the samples took less time than this and the other 50% took more. The median is the same as the 50th percentile.
This may be mentioned in the report; it gives a robust overall figure for a request.

90% Line: (milliseconds) It represents the time within which 90% of the samples finished. In other words, 90% of the samples took no more than this time, and the other 10% took at least this time. It is the same as the 90th percentile.
It is an ideal candidate for reporting as it represents the maximum time needed by most (90%) of the requests.
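A small made-up example to show why percentiles are robust: for ten sorted response times of 100, 120, 130, 150, 180, 200, 220, 250, 400 and 900 ms, the median is about 190 ms (between the 5th and 6th values) and the 90% line is 400 ms. The single 900 ms outlier inflates the average to 265 ms but barely affects the median or the 90% line.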

So, we have the measurements. Now, reporting. In this section, we will see different representations of reports.
1. Compare graph: This is a comparison among all GET and POST requests on one measurement under a single test run, i.e. a side-by-side comparison of requests under a single measurement. Example: in a log-in test, if we compare the log-in page load and the log-in request side by side, it will show which has more throughput (one measurement) or needs less average time (another measurement).
2. Progressive graph: This is a comparison of progressive test runs of one GET/POST request on one measurement. A progressive test run means increasing/decreasing the number of users or the duration of the test. For example, we run the same test with 50, 100, 150, 200... users. When we compare (say) the log-in request's response time at 50, 100, 150 and 200 users, we get a progressive graph for the log-in request's response time. The same applies to a time-driven approach, like running the same test with a constant number of users for 30 min, 1 hr, 2 hr, 4 hr, 8 hr, etc.
3. Mixture (ultimate) graph: This covers all GET and POST requests on one measurement under progressive test runs. It is basically a mixture of the previous two.

Tips:
- Change the unit to make the graph more understandable to the user, e.g. milliseconds to seconds, or req/sec to req/min. This is important for better visibility of the graphs.
- Change the unit to get a well-sized graph. Sometimes graphs become too small because of a small unit; changing the unit makes them more visible.
- Change the labels of the requests. This is a must when we use recording. For better understanding of a page/request, change the label so that everyone can understand it, e.g. change the page name to "Log in page" instead of domain\login.html.
- Point out the problems in the graph. (You can use the standards mentioned below to identify problems within the graph report.)

So, when we have the reports, we may comment on them as follows.
A. Using a compare graph:
   1. Which request is taking the most time of all. Based on this we can apply refactoring, implement caching and identify bottlenecks.
   2. Which page size is bigger, so we can restructure or re-engineer the page (optimization).
   3. We can identify the Ajax/JS time dependencies.
   4. We can also show which pages have high error rates.
   5. We can state the maximum throughput of a page/request and identify which need improvement.

B. Using a progressive graph:
   1. We can show which pages/requests start failing/generating errors as users/time increase.
   2. We can show which pages/requests take more time as users/time increase.
   3. We can determine the maximum users/duration supported by the application.
   4. We can also find the server's breaking point.

If we have a chance to compare results among multiple servers, we can comment on:
1. Which server requires less time (performs better) on which pages/requests
2. Which server needs to improve (performs poorly) on which pages/requests
3. Which server has bottlenecks
4. Which server is busy most of the time (using a server agent)

So, now we know the comments for a test on a web application. But there are other things we should mention alongside the comments; these depend entirely on the client. I am adding some from my previous projects:
1. Server configuration & bandwidth where the test was performed
2. Server configuration & bandwidth of the host where the tested application runs
3. JMeter settings and configuration (JMeter properties, test thread configuration, ramp-ups, delay time, plugin configuration, etc.)
4. Test scenario settings
5. Notes: on dependencies, blocking issues, known issues, etc.
6. Suggestions: based on what we find at the measurement points
7. Good areas: based on what we find at the measurement points
8. Bad areas: based on what we find at the measurement points

Note: It is best to set standards before starting the test; this is one of the best practices. So, when can we set standards? It should be at the beginning, before the test plan is approved. First we should find out what types of requests there are, then set the standard.
To give you more of an idea, let me list some areas.
Let's say our web application under test has the following types of pages/requests:
1.    Page Load Get
2.    Ajax
3.    JS
4.    Page Post with 10 parameters

So, what should the standard be? Actually, this part depends entirely on:
- The robustness of the application
- The client's targets and standards
- The standards most used in similar applications around the world
- The development timeline

Based on my previous experience, I used to allow 2000 ms for a Page Load GET, 3000 ms for an Ajax/JS request, and 500 ms per parameter for a Page POST request. These are figures I set from my own project experience, and they will vary from project to project.
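Applying those budgets to the request types listed above: the Page POST with 10 parameters would get 10 × 500 ms = 5000 ms, while a simple Page Load GET would be expected to finish within 2000 ms; any sample exceeding its budget gets flagged as a problem in the graph.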

I will try to add more ideas from time to time.

Thank you...:)...


2013-04-12

How to parameterize JMeter?

In this article we are going to see how we can parameterize JMeter, that is, how we can pass values through variables in JMeter.
Mainly, this article is for novices like me. There are plenty of resources you can find via Google; I am just writing this for quick learning.

So, first we have to:
1. Declare the variable somewhere in the test plan
2. Use the variable in the test plan
As a JMeter test step needs the value during execution, we can supply the value in 3 ways:
1. Directly provide the value (static)
2. Provide the value from a txt/csv file
3. Provide the value from a previously performed step (e.g. from the response data of a previous test/step run)

1. Direct value insertion (static value):
Step 1: Go to the Test Plan.
Step 2: Add a user-defined variable.
Step 3: Provide a variable name and its value.
The Name column is the name of the variable, and the Value column holds the value to use.

Step 4: Go to your request (I have used an HTTP POST request) and use it in the following way:
${NameOfTheVariable}. For example, I used NAME as the name of my variable, so ${NAME}.
What will happen? JMeter will send the request with NAME replaced by its value (in this case shantonu).
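As a minimal sketch (the parameter names username and action are assumptions for illustration), a POST body defined as:

username=${NAME}&action=login

would actually be sent as:

username=shantonu&action=login

because JMeter substitutes ${NAME} with the user-defined value at run time.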


Standard uses:
- Variables for sending static requests
- A variable for the domain name (to control the main domain centrally)
- Variables holding predefined data for comparison or validation
- Variables for debugging test cases

We can also assign user-defined variables via a config element inside the test plan tree:
Add > Config Element > User Defined Variables

This does the same thing as defining them on the Test Plan. But we have to ensure the assignment happens before the step where the values are used; that is, the variable should be defined before use, either in a parent element or in an earlier step.

Google plugin: jp@gc - Parameterized Controller does the same thing.

To add this: Add > Logic Controller > jp@gc - Parameterized Controller.
This plugin is mainly used for referring to other variables. We can use multiple jp@gc - Parameterized Controllers and build a tree of references between variables (one variable pointing to another, and so on). But we need to be careful about the order in which the controllers appear: in a test plan, these controllers work sequentially (values pass from one to the next).

2. Value from CSV/TXT:
Step 1: Add a Config Element > CSV Data Set Config under the test plan.
Step 2: Define the file from which the data will be fetched (variable names and values); see the sketch after these steps.
Step 3: We can also use functions like __CSVRead() and __StringFromFile().
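As a minimal sketch (the file name users.csv and its contents are assumptions for illustration), suppose users.csv contains:

shantonu,shantonu@example.com
rahim,rahim@example.com

In CSV Data Set Config we would set Filename to users.csv, Variable Names to NAME,EMAIL and Delimiter to a comma; each thread then picks up the next line of the file, and ${NAME} and ${EMAIL} can be used in any sampler.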

I will make a separate post on how to work with CSV Data Set Config; here I am just showing a screenshot.

There is one issue with this: it can pass values to only one other config element, the Authentication Manager; it does not pass values to other config elements.
So mostly we use the Google plugin.

Google plugin: jp@gc - Variables From CSV File

I get the variables and their values from the text file mentioned in the right-hand portion. We can verify them by clicking "Test CSV". Here I used a comma "," as the separator; you can use ";", ":" or other symbols.
To add this: Add > Config Element > jp@gc - Variables From CSV File.
It is best to use the Google plugins, as we can test the values while building the test case.

We can also use the Function Helper. It is in the Options menu.


Step 1: Open the Function Helper.
Step 2: Click __CSVRead; you will get the helper's name & value columns.
Step 3: Add the path to your file (if you don't want to provide the full path, keep the file in the bin folder and provide the file name only).
Step 4: Click Generate, and you will get the function reference string. Copy that.
Step 5: Add a pre-processor "User Parameters" where you will use the variable.
[Keep an eye on my blog, as I am going to post details on pre-processors.]
Step 6: Add a variable.
Step 7: Add the variable name and paste the copied function string as the value of user_1.
This will take the value from the file and assign it to NAME.
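For example (assuming the same hypothetical users.csv as above, kept in JMeter's bin folder), the Function Helper would generate a string like ${__CSVRead(users.csv,0)} for column 0; pasting that as the value in User Parameters means each thread reads its NAME from the first column of the file.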
Note: This is used in specific cases where we need to define variables separately per request as well as per thread.
I will make a separate post on the Function Helper and its capabilities.

3. Value from a previously performed step:
The main concept is that we fetch the variable dynamically at run time (depending on previous response data). The main reason is that some values change over time, like tokens, cookies, JSON fields, sessions, etc. If we use dynamic parameterization, we can send our requests properly. To do this:
Step 1: Add a suitable post-processor to the request whose response contains the value.
Step 2: Declare a variable in the post-processor.
Step 3: Use the variable in the new requests; see the sketch below.
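As a minimal sketch using the Regular Expression Extractor post-processor (the token format here is an assumption for illustration): suppose the log-in response body contains token=abc123. We would set Reference Name to TOKEN, Regular Expression to token=(\w+), Template to $1$ and Match No. to 1; the extracted value can then be sent in a later request as ${TOKEN}.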
To know more about post-processors, see my previous post.

I will try to add more examples in the future.

Thanks...:)