2013-04-30

How to re-sign an Android app (APK)?

In this article we are going to see how to re-sign a signed app with the debug key. This is required when we test an application APK file with Robotium. For a general idea of Robotium, I will provide a separate post. First, let's have the APK file (the package format for an Android app), e.g. game.apk.

We can do this in two ways.

A. Manually:
Step 1: Rename the APK to .zip, e.g. game.zip

(To see file extensions in Windows: Explorer > Tools > Folder Options > View > uncheck "Hide extensions for known file types".)

Step 2: Delete the META-INF folder (it holds the existing signature).
Step 3: Re-zip the whole folder into a ZIP file, e.g. game.zip

Step 4: Rename it back to the .apk extension, e.g. game.apk
Note: there is no certificate now, so installation will fail if you try at this point.
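
(A quicker alternative to steps 1-4, assuming you have the Info-ZIP zip tool on your PATH, e.g. via Cygwin: zip can delete the signature entries directly from the APK, with no renaming or re-zipping:
zip -d game.apk "META-INF/*"
This removes the old certificate in place; then continue from Step 5.)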
Step 5: Open a command prompt and run:
jarsigner -keystore [source keystore with path] -storepass android -keypass android [source apk with path] androiddebugkey
Example: jarsigner -keystore C:\Users\shantonu\.android\debug.keystore -storepass android -keypass android D:\Android\android-sdk\platform-tools\game.apk androiddebugkey

Note: on a Windows PC, jarsigner lives in the JDK's bin folder, e.g. "C:\Program Files (x86)\Java\jdk1.6.0_06\bin", so you may need to browse there (or add it to PATH) before running the command.
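
To double-check that the signing worked before installing, jarsigner can verify the archive:
jarsigner -verify -verbose D:\Android\android-sdk\platform-tools\game.apk
It should print "jar verified." at the end.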

Now you should be able to install the APK.

Step 6 (optional): necessary if the APK does not install properly; zipalign aligns the content inside the APK. From the command prompt, navigate to \Android\android-sdk\tools and type:
zipalign 4 [source apk with path] [destination apk with path]
Example:
zipalign 4 D:\Android\android-sdk\platform-tools\game.apk D:\Android\android-sdk\platform-tools\AlignedApkName.apk

Now use that new APK and install it on the device. Your APK is re-signed with the debug key.
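
(If the device is connected with USB debugging enabled, you can also install from the command prompt using adb, found in platform-tools; -r reinstalls, keeping existing app data:
adb install -r D:\Android\android-sdk\platform-tools\AlignedApkName.apk
)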

B. Using a tool:
Step 1: Download the tool from this link.
Step 2: Run the jar and drag the APK onto the tool's UI.
Step 3: When it prompts to save, save with a new name.
Step 4: Press OK on the confirmation message.

Now your APK is re-signed with the debug key. You may install it on the device.

Note: on a Windows PC, you need to add two system variables:
My Computer > Properties > Advanced system settings > Environment Variables
Then, under System variables, press New.

Variable Name: ANDROID_HOME
Variable Value: e.g. D:\Android\android-sdk\

Variable Name: JAVA_HOME
Variable Value: e.g. C:\Program Files (x86)\Java\jdk1.6.0_06\
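
(Alternatively, the same variables can be set from a command prompt with setx, a sketch using the example paths above; note setx writes user-level variables by default, and you need a new command prompt for them to take effect:
setx ANDROID_HOME "D:\Android\android-sdk"
setx JAVA_HOME "C:\Program Files (x86)\Java\jdk1.6.0_06"
)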


Thanks...:)

2013-04-26

Outsourcing Software Development to Bangladesh

Today's article is a bit different from my usual QA-related articles. Today I will make a case for outsourcing software projects to Bangladesh.

Bangladesh is fast becoming a powerhouse in software outsourcing. Over the last few years, the volume of offshored/outsourced software projects has grown almost twofold yearly. More than 20,000 people are now directly engaged in software and IT services, and a large portion of them are involved in exporting services to European, North American and East Asian clients. We regularly participate in the Netherlands Trade Fair and CeBIT.

The Bangladeshi freelancer community further supplemented IT exports by close to US$7 million in 2010. Bangladesh consistently appears in the top 3 positions on freelancing platforms (oDesk, Elance, Freelancer etc.). Here is an interesting article about this phenomenon.

Bangladesh has a huge supply of young, trained, skilled, English-speaking resources available at costs almost 40% lower than in countries like India or the Philippines. When other costs like infrastructure, support and management are considered, the price advantage over those competing outsourcing destinations becomes even more prominent. The cost of living in India (for example) is fast becoming comparable with countries like Malaysia or Thailand, so India's outsourcing advantage is decreasing every year. This is where Bangladesh shines: with very similar educational standards and processes, the Bangladeshi resource pool is not yet fully tapped, and it is a potential candidate to become a major software outsourcing destination, much like India back in the 90s. KPMG published a strategic whitepaper some time back highlighting this potential; it is a very good read to get an idea of what is happening.

Apart from the freelancing community, the more formal software organizations are also maturing in the field. A very good example is my previous employer, Kaz Software, which has been doing software development outsourcing from Bangladesh since 2004. More such companies are being set up with the proper development processes and management required for outsourcing, and the creative software development culture of such organizations is also noteworthy.
Recently, BASIS (Bangladesh Association of Software and Information Services), the software association of Bangladesh, recognized top freelancers and freelancing companies with its Outsourcing Award 2013, and Kaz Software is one of them.

So we, technology workers, are looking at a great future for software outsourcing in Bangladesh. I am adding a minimum rate sheet to give some idea of the costing [sources: different freelancing platforms]; rates vary from company to company.

2013-04-15

How to make reports (and comments) in JMeter?

In this article we are going to get an idea of how to make a report, and comment on it, based on JMeter listener results. For a primer on JMeter listeners, see this.

We have to remember that a performance report (or any report) may be sent to users of any level, and it can have usability or comprehension issues; here, "level" is defined by technical knowledge. I have worked on projects where performance reports were sent to technical, non-technical, managerial and business development people.
So we have to write the comments along with the report in such a way that every kind of reader mentioned above has at least some understanding of it. You may ask why this is important:
- It helps the whole software team inside a company understand the report, which raises awareness of performance.
- It helps the SQA team set standards based on target clients.
- It helps management set the timeline and milestones of the project.
- It helps the support team plan their effort and answer client feedback.
- It helps stakeholders know the actual stability of the product in production.
[A performance test is normally performed after development, sometimes after beta.]

Normally we keep only a few listeners in the JMeter test plan for a run, as most of them consume a lot of resources (memory and CPU). The best practice is to run JMeter without listeners and save the results to a CSV file. We will mostly use two listeners: Summary Report and Aggregate Report. After getting the results from these two listeners, we save them as CSV, process them into the report, and then add comments.
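
(For reference, a listener-free run can be done from the command line; a sketch, assuming a test plan saved as test_plan.jmx and JMeter's bin directory on the PATH. -n is non-GUI mode, -t names the test plan, -l the file results are written to:
jmeter -n -t test_plan.jmx -l results.csv
The resulting CSV can then be opened in a spreadsheet to build the graphs described below.)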
Preparing reports: from the Summary Report and Aggregate Report, we get these attributes.

Throughput: (requests/second; sometimes shown as requests/minute or requests/hour, but when you save the CSV it is always requests/second)
What is this? It indicates how many requests per second JMeter got through. The more throughput your web pages have, the more responsive and faster they are. It includes any intervals between samples.
Sometimes we might get a higher throughput because a cache server serves the same data again and again. To overcome this, try to avoid static data in the requests.
This is an ideal candidate for reporting.
Throughput = (number of requests) / (total time)
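For example (hypothetical numbers): a run that completes 6,000 requests in 5 minutes (300 seconds) has a throughput of 6000 / 300 = 20 requests/second.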

Average: (milliseconds) The average time needed for one request among the samples JMeter measured. E.g., suppose we are testing a 100-user load on a login request, and the JMeter listener sees results for 86 of those users and gives an average time. Here the average means the total time needed by the 86 samples divided by 86 (time per sample).
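With hypothetical numbers: if those 86 samples took 21,500 ms in total, the average is 21,500 / 86 = 250 ms.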
This is not an ideal candidate for reporting: the starting and ending threads often need some extra time, so the average may not represent the typical request time.

Sample: (count) The number of sample requests measured by a JMeter listener. During execution it is normal for this to be lower than the thread count. E.g., I am testing 100 users, but a listener may only record or measure results for 86 samples out of that 100. So it states the number of threads (samples) under measurement.
As it doesn't represent any state of the web pages, we can leave it out of the report. If it is asked how many samples were used for measurement, then mention it in the report.

Min: (milliseconds) The shortest time needed for a sample among same-named samples. It can be ignored in the report.

Max: (milliseconds) The longest time needed for a sample among same-named samples. It is one of the ideal candidates for the report.

Std. Dev: (milliseconds) The standard deviation of the sample elapsed times. JMeter calculates the population standard deviation (same as the STDEVP spreadsheet function), not the sample standard deviation; that is, it is computed over the results shown in the summary report data.
Depending on the client, it can be mentioned in the report; usually it is not.

Error: (%) The percentage of requests with errors.
It means, e.g., out of 100 samples, 10 samples took more time than allowed (a limit can be set via the sampler's assertions), did not respond, or returned a failure (not HTTP 200): a 10% error rate.
This is an ideal candidate for reporting, as it represents errors.
Sometimes we may get 0% even for a non-responsive site, or due to an Apache error or a Java socket exception; check the log before mentioning this figure in the report.

Bandwidth: (KB/sec) The throughput calculated in kilobytes per second.
Normally it is mentioned beside throughput in the report. It is optional; it just adds visibility alongside throughput.

Size: (avg. bytes) The average size of the sample response, in bytes. It can be mentioned in the report but is not mandatory. It is useful when refactoring the solution, as it shows which requests are heavy.
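
(The two are roughly related: KB/sec ≈ throughput in requests/sec × average size in bytes / 1024. With hypothetical numbers, 20 requests/sec × 25,600 bytes ≈ 512,000 bytes/sec = 500 KB/sec.)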

Median: (milliseconds) The time in the middle of the set of results: 50% of the samples took less time than this and the other 50% took more. The median is the same as the 50th percentile.
This may be mentioned in the report; it shows an overall midpoint for requests.

90% Line: (milliseconds) The time needed by 90% of the samples. In other words, 90% of all samples took no more than this time, and the other 10% took at least this time. It is the same as the 90th percentile.
It is an ideal candidate for reporting, as it represents the maximum time needed by most (90%) of the pages.
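A small hypothetical example: for the 10 sorted sample times 120, 150, 180, 200, 220, 240, 280, 310, 350, 900 ms, the median is about 230 ms (midway between the 5th and 6th values) and the 90% line is about 350 ms; the single 900 ms outlier sits in the slowest 10%, which is why the 90% line is usually more representative than the max.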

So, we have the measurements. Now, the reporting. In this section we will see different representations of reports.
1.    Compare graph: a side-by-side comparison of all GET and POST requests on one measurement within a single test run. E.g., in a login test, comparing the login page load and the login request side by side shows which has more throughput (one measurement) or needs less average time (another measurement).
2.    Progressive graph: a comparison of one GET/POST request on one measurement across progressive test runs. A progressive test run means increasing/decreasing the number of users or the duration of the test. For example, run the same test with 50, 100, 150, 200 ... users; comparing (say) the login request's response time at 50, 100, 150 and 200 users gives a progressive graph for the login request's response time. The same goes for the time-driven approach: running the same test with a constant number of users for 30 min, 1 hr, 2 hr, 4 hr, 8 hr, etc.
3.    Mixture (ultimate) graph: all GET and POST requests on one measurement across progressive test runs; basically a mixture of the previous two.

Tips:
- Change the unit to make the graph more understandable to the reader, e.g. milliseconds to seconds, or req/sec to req/min. This gives more visibility over the graphs.
- Change the unit to get a well-sized graph. Sometimes a graph comes out too small because of a small unit; changing the unit makes it more visible.
- Change the labels of the requests. This is a must when recording is used. For better understanding of a page/request, change the label so that everyone can understand it, e.g. "Login page" instead of domain/login.html.
- Point out the problems in the graph (see the standards mentioned below to identify problems within a graph report).

So, when we have the reports, we may comment based on them, like the following.
A.    Using the compare graph:
   1. Which request is taking the most time of all; accordingly we can refactor, implement caching, and identify bottlenecks.
   2. Which page size is biggest, so we can restructure or re-engineer the page (optimization).
   3. We can identify the Ajax/JS time dependencies.
   4. We can also show which pages have high error rates.
   5. We can state the max throughput of a page/request and which needs improvement.

B.    Using the progressive graph:
   1. We can show which pages/requests start failing or generating errors as users/duration increase.
   2. We can show which pages/requests take more time as users/duration increase.
   3. We can state the maximum users/duration supported by the application.
   4. We can also find the server's breaking point.

If we have a chance to compare results among multiple servers, we can comment on:
1. Which server needs less time (performs better) on which pages/requests
2. Which server needs to improve (performs poorly) on which pages/requests
3. Which server has bottlenecks
4. Which server is busy most of the time (using a server agent)

So, now we know the comments for a test on a web application. But there are other things we should mention in the comments; these fully depend on the client. I am adding some from my previous projects:
1.    Server configuration & bandwidth where the test was performed
2.    Server configuration & bandwidth on which the tested application is hosted
3.    JMeter settings and configuration (JMeter properties, test thread configuration, ramp-ups, delay time, plugin configuration, etc.)
4.    Test scenario settings
5.    Notes: dependencies, blocking issues, known issues, etc.
6.    Suggestions: based on what we get from the measurement points
7.    Good areas: based on what we get from the measurement points
8.    Bad areas: based on what we get from the measurement points

Note: it is best to set standards before starting the test; this is one of the best practices. So when can we set standards? At the beginning, or before the test plan is approved. First find out what types of requests there are, then set the standard.
To give you more of an idea, here are some areas.
Let's say the web application under test has the following types of pages/requests:
1.    Page Load Get
2.    Ajax
3.    JS
4.    Page Post with 10 parameters

So, what should the standard be? Actually, this part fully depends on:
- The robustness of the application
- The client's targets and standards
- The standards most used in similar applications worldwide
- The development timeline

Based on my previous experience, I used to allow 2000 ms for a page-load GET, 3000 ms for an Ajax/JS request, and 500 ms per parameter for a page POST request. These figures come from my own project experience and will vary from project to project.
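For example, applying that rule of thumb to the list above, the "Page Post with 10 parameters" would get 10 × 500 ms = 5000 ms as its threshold.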

I will try to add more ideas from time to time.

Thank you...:)...