Saturday, April 28, 2012

Test Automation ROI with COFFEe (at) Coffee Day

Most of the customers and managers I work with are interested in getting test automation implemented for their application or product, and sometimes for a whole portfolio of applications. The reason they give for implementing automation and planning a budget for it is that they believe, or in some cases are made to believe, one or more of the following.

  1. Automation is the way to save testing costs
  2. Automation is the only way to improve the quality of testing
  3. Automation is the way to improve the testing process
  4. Automation is for regression testing
  5. Once automation is implemented, it is possible to cut billing and reduce some headcount for the project
  6. Automation can do magic by testing the application and piling up bugs
  7. Automation can reduce the testing effort

So, are all the statements above TRUE in every case, or are they FALSE in every case? There is no immediate answer to this question; it has to be decided based on the context. We all know, and most of the time the customer also understands (at least in my experience so far), that automation is an investment and, unless done with care, it can lead to failure and loss of money. So the question is how to make such a critical decision about implementing or not implementing automation for any application. We as TESTERs are responsible for providing information to the stakeholders so that they can make an informed decision about it.

Management and customers are most interested in knowing how much cost saving can be expected and achieved by implementing automation. However, cost savings are not always achieved; the value of automation should be presented in terms of the much broader benefits that we can gain from it. Doug Hoffman, Michael D Kelly and many others have already written quite a lot about ROI for test automation, and I thank them for such wonderful information. In this blog post I make an effort to explain how I have presented test automation benefits to customers with the heuristic COFFE CD (easily remembered as COFFEe at Coffee Day). (Earlier I used the mnemonic FCCOFED for the same purpose.)
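
To ground the cost-saving part of that discussion, here is a minimal break-even sketch; all the effort figures are illustrative assumptions, not data from any real engagement.

```python
# A minimal break-even sketch for test automation ROI.
# All figures below are illustrative assumptions, not real project data.

automation_build_cost = 400      # hours: tool setup, framework, initial scripting
maintenance_per_cycle = 10       # hours: keeping scripts in sync with the application
manual_effort_per_cycle = 60     # hours: executing the same regression suite manually
automated_effort_per_cycle = 8   # hours: running and triaging the automated suite

saving_per_cycle = manual_effort_per_cycle - (automated_effort_per_cycle + maintenance_per_cycle)

if saving_per_cycle <= 0:
    print("Automation never pays back under these assumptions.")
else:
    cycles_to_break_even = automation_build_cost / saving_per_cycle
    print(f"Saving per regression cycle: {saving_per_cycle} hours")
    print(f"Regression cycles needed to break even: {cycles_to_break_even:.1f}")
```

Even when this arithmetic looks favourable, the broader benefits captured by the heuristic are usually the stronger part of the story presented to the customer.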

The mind map below answers some of the basic questions that one may have about Test Automation ROI.

[Mind map: Test/Check Automation ROI]

Below is the heuristic/mnemonic that I use while analyzing automation ROI and, further, for presenting it to the customer and management. Thanks to Pradeep Soundararajan for correcting the method by helping me with the "Coverage" (the second C) in COFFE CD.

[Image: the COFFE CD heuristic]

Let me know if this helps you in some manner and also share your experiences with presenting ROI of Automation.

Wednesday, December 29, 2010

Feedbacks – A Lesson Learnt

Recently, I had an opportunity to sit with a customer and take customer satisfaction feedback. During this meeting, I learnt a good lesson, which is the reason for this post.

Usually a Customer Satisfaction Review (CSR) is conducted quarterly. In a collaborative project, where the customer is also part of the project, CSR means feedback from the customer about the Project Manager, the Project Team and the Organization on different aspects.

We both (my customer contact and I) sat together in a quiet room and started the discussion. He explained all the good achievements we had made and the concerns that he had. That is all the usual stuff. The interesting part was when he said: it is easy for me to say that you have not done this and need to improve on that, and so on. But for me, as a manager to you, all this feedback raises the question "How did you help the team to achieve this?".

Instantly, my thoughts went back to "How have I helped my team members, for whom I have given comments like 'you have not done this', 'you need to improve on that', and so on?"

This looks very basic, but it is an impressive question that many of us as managers never think about. This kind of retrospection is very important for both the manager and the receiver of the feedback.

Further, I thought about the reaction of a manager to a reportee saying, "I agree with you that I could not do this (some task or goal …), but how did you help me to achieve it?" Most managers would feel uncomfortable with such questions.

This was a good lesson learnt for me. After some thinking around it, I have decided to apply it every time I need to give feedback.

Let me know if you have different thoughts or comments.

Regards,

LN

Sunday, June 6, 2010

Low Cost Performance Testing with free tools - Front-End Performance Testing

In this article I will share information on conducting Front-End Performance Testing (FPT), which is low cost yet very effective and does not need any commercial tools. Before starting with the core topic, I should thank Steve Souders for writing the great books High Performance Web Sites and Even Faster Web Sites, and Scott Barber for making it easier by bringing in the heuristic approach.

So, what is Front-End Performance Testing?

Front-End Performance Testing is monitoring and measuring the user experience from the performance point of view, and validating the front-end design against the practices for high-performance websites, without any focus on server-side load. FPT is conducted to find client-side performance problems in the application.

You may ask: why do we need to conduct FPT?

FPT is important for the reasons listed below (a few of these points are taken from Scott Barber's article on FPT):

  • 50-90% of a web page's response time results from front-end design and implementation (taken from Scott Barber's article)
  • The front-end design and development of websites is conducted with little to no thought given to performance (other than, possibly, reducing the size of graphics)
  • It is usually neglected by development, on the grounds that the client system is not in their control, and also neglected by the test team during testing
  • It checks/tests for the potentially largest and cheapest performance improvements
  • It is relatively easy to conduct the tests and investigate front-end performance issues (in comparison with back-end performance tests)
  • Minimal learning curve; it can be tested by developers or testers [capturing the data may not need any experience, but effective analysis of the captured data needs reasonable performance testing skills and understanding]
  • The client-side processing and activities done by JavaScript and Silverlight are not captured by LoadRunner-like performance test tools that work at the HTTP/HTTPS layer (e.g. you would have seen that Google suggests search criteria as you type every single letter into the search box; this is a JSON request that should be quick enough that the user does not notice the response comes from the server. Measuring the response time and user experience of this type of scenario could be really hard without FPT; a small sketch of timing such a request follows this list)
  • It needs less effort and can be done along with functional testing, and it can be automated as well. [Testing and reporting on a website would take about 4 hours based on my experience (it also matters how big the website is) :-)]. I usually combine it with manual exploration during the initial navigation through the application, while learning it. In fact, the information captured by HTTP sniffers is worth taking a look at, as you can understand the application much better and devise your tests (not only performance tests but also others such as security, functional and validation tests, etc.)
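
As a minimal sketch of the point about background JSON requests, the snippet below times such a request from the client's point of view; the URL and the 200 ms threshold are purely illustrative assumptions.

```python
# Minimal sketch: time a background JSON/XHR-style request from the client side.
# The URL and the threshold are illustrative assumptions only.
import time
import urllib.request

SUGGEST_URL = "http://example.com/suggest?q=perf"  # hypothetical autocomplete endpoint
THRESHOLD_SECONDS = 0.2  # a response slower than this starts to feel "from the server"

start = time.perf_counter()
with urllib.request.urlopen(SUGGEST_URL) as response:
    status = response.status
    body = response.read()
elapsed = time.perf_counter() - start

print(f"Status: {status}, bytes: {len(body)}, time: {elapsed * 1000:.0f} ms")
if elapsed > THRESHOLD_SECONDS:
    print("The suggestion request is slow enough for the user to notice.")
```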

The objectives of FPT are to

  • Find client-side performance problems without any focus on server load or resource utilization on the server
  • Understand the end-user experience

A few considerations to be made while conducting FPT include the following

  • It is important to consider all the web pages in the application including any alternative flows
  • Use Free HTTP Sniffer tools and Browser plug-ins to gather information about the end user experience
  • Document any functional defects that you may find during FPT

Now, let us see how to do the real test. The strategy that I follow for FPT is based on the heuristic created by Scott Barber.

Test Strategy

Use the heuristic SCORN for comprehensive Front-End Performance Testing. The heuristic can be defined as below (I have changed a few points based on my understanding, but you can see the original heuristic as defined by Scott in Use "SCORN" to test the front end of a website for performance); a small scripted sketch of a few of these checks follows the list:

  • Size – Focus on testing for
    • Uncompressed graphics and media
    • Object or code duplication
    • Script and styles living outside of the base HTML
    • Code "minification."
  • Caching – Focus on testing for
    • Expires settings,
    • Etags
  • Order – Focus on testing the sequence in which components are loaded
    • Styles/style sheets
    • Critical content (i.e. what the user came to the page to see)
    • Relevant media (i.e. graphics related to the critical content)
    • Incidental content (i.e. non-critical graphics)
    • Scripts
  • Response Codes – Test for
    • Requests for objects that don't actually exist (Any exception is costly - MSDN)
    • Superfluous redirects
    • Errors that are not obvious from the browser
    • Redundant Requests
    • Invalid URL references (E.g. http://:)
    • Unused requests
  • Number – Ask questions related to
    • Number of requests
    • One heavy graphic vs. many small graphics
    • Inline scripts vs. one external script
    • Inline styles vs. one external style sheet

Testing Procedure

  • Use browser plug-ins or online tools to capture page load times.
    • YSLOW – FireFox FireBug Add-on
    • Episodes – YSLOW Add-on
    • HammerHead – YSLOW Add-on
    • HTTPWatch Basic
    • Page Speed – FireBug Add-on
    • Microsoft Visual Round Trip Analyzer
    • Fiddler with nXpert – Performance Analysis Plug-in
  • Conduct FPT with functional tests and/or user acceptance tests
  • Devise the tests based on the project context and criteria – pick the tool best suited to the need and the context
  • Additionally, for monitoring resource utilization during the test, you can leverage Perfmon (for Windows-based systems) and nmon (for Unix-based systems). PAL is a good tool for analyzing Perfmon logs.

My favorite tools for FPT are the Net tab of Firebug, YSlow, Episodes, HTTPWatch and VRTA.

Reporting

The performance errors and slow pages need to be analyzed and reported for both executive and technical audiences.

Although I call it a low-cost performance test in the title of this article, FPT is not a substitute for a comprehensive performance test. In fact, FPT is part of a complete performance testing strategy. When the customer is not ready to invest in performance testing, or in situations where you have customer complaints about the performance of the application, FPT can be the first step of performance testing.

Hope this information helps you to devise your performance tests. In the next post I plan to write about automation of FPT with different tools such as QTP, TestPartner and TestComplete, and also without any tools, using just VBScript.

--LN

Thursday, May 13, 2010

Impact of Microsoft Updates on Testing – Compatibility Testing

I should thank Santhosh Tuppad who prompted me to write this post. This is quite related to his post Virus or Trojan or Malware or Adware – Follow up testing.

Many of us (TESTERS) know that Microsoft releases one or another security patch, hot fix or service pack on Tuesdays. In this post, I will share information about how Microsoft updates can impact testing and how we have been able to manage this problem. I do not intend to use this post to describe compatibility testing, but as it is related, I will have a note on it at the end.

What is the Impact of MS updates on Product/Application?
The application starts behaving weirdly. Users complain, affecting the credibility of the product. Testers and developers cycle through simulating and debugging. In short, the impact is: User Complaints –> Customer Escalations –> Pressure from Management (follow-up) –> Frustrating night-outs/weekends for testers and developers

How do MS updates get into the user's machine?
Most product users' computers will have the Windows auto-update option enabled, and hence MS updates get onto the machine (keeping it enabled is good practice, to avoid security implications). The other way is to manually install some of the hot-fixes to solve a specific problem. It is worth saying that most of these hot-fixes are not completely tested, and Microsoft does warn along the lines of "Do not install the hot-fix if you are unsure of it; installation of an incorrect hot-fix can cause problems on your machine", etc.

How does it impact testing?
A security patch on IE has all the potential to block the application from working. Any patch or hot-fix can restrict or change the behavior of a normally working application. Usually we do not anticipate problems from these patches, as we (at least most of us, if not all) trust MS.
We spend our energy and time proving that the bug is not from the application but from something else. To find out what that something else is, you spend some more time. Then you learn it could be because of the patches, and finally you have to find out which patch is causing the problem among so many patches (MS does not release one patch at a time). This process of bug isolation becomes critical and challenging. The challenge is not only the effort put into simulating the bug but also other factors such as pressure from management, pressure from the customer, time constraints, etc.

How to handle this situation, and the compatibility of the product?
It is easier to use virtual PCs for sorting out these problems.
Have two separate Virtual Machines (VMs): one with patches fully updated (including the latest one) and the other updated only up to the previous patch.
Conduct all your tests on the fully updated VM, while running your smoke test on the VM without the latest patch. This way you can stay ahead of the problem, giving information/bugs to the customer rather than receiving a complaint/escalation.

We implemented this for a leading Scandinavian content management and web publishing product that had to support 46 different combinations of Windows OS, IE, SQL Server, Office, etc. We had set up virtual machines with automation and continuous integration using QTP, Quality Center and Team Foundation Server. The continuous integration takes care of running the automated tests on most of the environments, while manual exploration of the product and bug hunting is done on the latest-patch VM.

You cannot practically test all combinations of OS, web server, browser and other components under the common project constraints such as time, budget, etc. It is good to consider the Pairwise Independent Combinatorial Testing (PICT) tool or ALLPAIRS to get an effective, minimal number of combinations; a small sketch of the pairwise idea follows.
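
As a minimal sketch of what those tools do, the snippet below greedily builds a pairwise-covering subset from a small, made-up environment matrix; the parameter values are illustrative assumptions, and PICT or ALLPAIRS will generally do a better job on real matrices.

```python
# Minimal sketch of pairwise (all-pairs) reduction over a made-up environment matrix.
# The parameter values are illustrative assumptions; use PICT or ALLPAIRS for real work.
from itertools import combinations, product

parameters = {
    "OS": ["Windows XP", "Windows Vista", "Windows 7"],
    "Browser": ["IE6", "IE7", "IE8"],
    "SQL Server": ["2005", "2008"],
    "Office": ["2003", "2007"],
}
names = list(parameters)

def pairs_of(combo):
    """All parameter-value pairs covered by one full combination."""
    return {
        ((names[i], combo[i]), (names[j], combo[j]))
        for i, j in combinations(range(len(names)), 2)
    }

all_combos = list(product(*parameters.values()))
uncovered = set().union(*(pairs_of(c) for c in all_combos))

suite = []
while uncovered:
    # Greedily pick the combination that covers the most still-uncovered pairs.
    best = max(all_combos, key=lambda c: len(pairs_of(c) & uncovered))
    suite.append(best)
    uncovered -= pairs_of(best)

print(f"All combinations: {len(all_combos)}, pairwise suite: {len(suite)}")
for combo in suite:
    print(dict(zip(names, combo)))
```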

--LN

Tuesday, April 20, 2010

OWASP Top Ten 2010 Released

 

On April 19, 2010, the final version of the OWASP Top 10 for 2010 was released. You can find more information about it at the OWASP Top 10 2010 Press Release and the OWASP Top Ten Project.

The OWASP Top 10 Web Application Security Risks for 2010 are:

  • A1: Injection
  • A2: Cross-Site Scripting (XSS)
  • A3: Broken Authentication and Session Management
  • A4: Insecure Direct Object References
  • A5: Cross-Site Request Forgery (CSRF)
  • A6: Security Misconfiguration
  • A7: Insecure Cryptographic Storage
  • A8: Failure to Restrict URL Access
  • A9: Insufficient Transport Layer Protection
  • A10: Unvalidated Redirects and Forwards

The Web Application Security Consortium provides Threat Classification Taxonomy Cross Reference View which gives a clear mapping between WASC Threat Classification, MITRE's Common Weakness Enumeration, SANS Top 25 and OWASP Top Ten.

Regards,

LN

Saturday, January 16, 2010

Evaluating Test Tools for Automation – Criteria to be considered

Before starting with any automation, it is very common to evaluate the various commercial and free tools available in the market and choose the one best suited to the application under test. This article describes the criteria to be considered while evaluating tools for automation, and a methodology for evaluating them. It does not aim to provide a comparison of all these tools; however, it clearly indicates the criteria to be considered when evaluating testing tools.

The following are the most common criteria to be considered while evaluating automation tools for any application. In addition to these, there may be specific criteria to consider based on the application under test.

Evaluation Criteria:

      1. Technology Support - Support for the various controls and technologies used in the application, such as iFrames, AJAX controls, PDF forms, tree views, ColdFusion controls, etc.
      2. Ease of Script Development/Enhancement
      3. Reporting Results - The tool under evaluation should produce a result log that is easy to analyze and that helps pinpoint the defect.
      4. Test Independence - The failure of one test script should not have any impact on the rest of the test suite.
      5. Maintenance of Scripts - As automated test scripts carry a very high maintenance overhead, the tool should provide ease of script maintenance.
      6. Multi-Browser Support - The tool under evaluation should support different flavors of the Windows OS and multiple browsers (at least IE6, IE7 and IE8).
      7. Data-Driven Capability - The tool should provide a means to use an external data store for all the test data and to read from / write to that store.
      8. Ease of Object Store Maintenance - There should be a means for easy maintenance of the object store. The object store is the repository of all the objects captured by the tool.
      9. Ease of Continuous Integration for Nightly Regression Tests - The tool under evaluation should provide an easy means to integrate the tests into the build environment so that nightly regression tests can run in a continuous integration environment.
      10. Limitations - Limitations of the tool with respect to the application under test.
      11. Advantages - Advantages of the tool with respect to the application under test.
      12. Cost of Licensing - The tool should not be expensive and should have flexible licensing options.

Evaluation Procedure:

  1. Select the tools that you want to consider for evaluation. The tools commonly considered for automation evaluation include QTP, TestPartner, TestComplete, Visual Studio Team Edition, Selenium, Microsoft Lightweight Test Automation Framework, ArtOfTest WebAii, etc.
  2. For each of the tools considered for evaluation, identify the pros and cons with respect to each of the criteria listed above
  3. Give a score to each of the tools for each of the criteria. The scoring scale is
    • 1 - Below Average
    • 2 - Average
    • 3 - Good
    • 4 - Very Good
    • 5 - Outstanding
  4. Prepare a score card of all the tools for each of the criteria considered. A sample score card is shown below. Please do not treat the data provided as actual comparison data; it is provided only as an example.

    | Evaluation Criterion \ Tool      | QuickTest Pro | Test Partner | Test Complete | VSTS |
    |----------------------------------|---------------|--------------|---------------|------|
    | Technology Support               | 4             | 3            | 3             | 2    |
    | Ease of Script development       | 4             | 3.5          | 3             | 3    |
    | Reporting                        | 4             | 3.5          | 3             | 3    |
    | Test Independence                | 4             | 3            | 4             | 4    |
    | Script Maintenance               | 4             | 3.5          | 3             | 4    |
    | Cross-Browser Support            | 4             | 3            | 2             | 2    |
    | Data Driven Capability           | 4             | 4            | 4             | 4    |
    | Ease of Object Store Maintenance | 4             | 3            | 3             | NA   |
    | License Cost                     | 2             | 3            | 4             | 3    |
    | Final Score                      | 3.78          | 3.28         | 3.22          | 3.13 |

  5. Provide a rank for each of the tools considered based on the score computed earlier. Below is a sample tool ranking. This ranking does not represent an actual comparison of the tools; rather, it represents the suitability of the tools for the specific application for which they were evaluated. (A small sketch of the scoring and ranking computation follows this procedure.)

    | Tool                                                 | Final Score | Tool Rank |
    |------------------------------------------------------|-------------|-----------|
    | QuickTest Pro                                        | 3.78        | 1         |
    | TestPartner                                          | 3.28        | 2         |
    | TestComplete                                         | 3.22        | 3         |
    | Visual Studio 2008 Team Edition for Software Testers | 3.13        | 4         |

  6. Recommend the best-suited tool based on the rank
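
A minimal sketch of steps 3 to 5 is shown below: the criterion scores are the sample values from the score card above, the final score is a simple average that ignores NA, and the rank follows from sorting. This is only an illustration of the computation, not a real comparison.

```python
# Minimal sketch of steps 3-5: average the criterion scores (ignoring NA) and rank the tools.
# The scores are the sample values from the score card above, not a real comparison.

scores = {
    "QuickTest Pro": [4, 4, 4, 4, 4, 4, 4, 4, 2],
    "Test Partner":  [3, 3.5, 3.5, 3, 3.5, 3, 4, 3, 3],
    "Test Complete": [3, 3, 3, 4, 3, 2, 4, 3, 4],
    "VSTS":          [2, 3, 3, 4, 4, 2, 4, None, 3],  # None = NA (criterion not applicable)
}

final_scores = {
    tool: sum(s for s in values if s is not None) / len([s for s in values if s is not None])
    for tool, values in scores.items()
}

ranked = sorted(final_scores.items(), key=lambda item: item[1], reverse=True)
for rank, (tool, score) in enumerate(ranked, start=1):
    print(f"{rank}. {tool}: {score:.2f}")
```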

Although you can make a recommendation for a specific tool based on technical analysis against specific criteria, it is not always true that your recommendation will win the race. Most of the time it is a business decision based on cost and budget, so be open to working with any tool and try to find workarounds for its issues.

--LN

Sunday, January 10, 2010

A Checklist for Performance Testing – Requirement Questionnaire

A Checklist for elicitation of the Performance Testing Requirements.

Performance testing of an application involves the various phases outlined below. This article is an attempt to provide a questionnaire that can help test leaders/managers and testers elicit the performance testing requirements. This information is most important to know before starting a performance test. Note that the questionnaire provided here is a generic list of questions common to most applications; however, it needs to be tailored based on the nature of the application under test. I usually send this questionnaire to the stakeholders of the application for answering, and I have been using it successfully for conducting performance tests.


[Diagram: Performance Testing phases]


Performance Testing Requirement Questionnaire

  1. Please provide the URIs and credentials of the application for testing
  2. Please provide the test environment (Hardware and Software) configuration.
  3. Where is the test environment set up? – inside the firewall in an isolated LAN environment, or outside the firewall?
  4. What technologies have been used to develop the application?
  5. What are the interfaces of the application? e.g., Payment gateways, web services etc.,
  6. Briefly describe the business/domain of the application.
  7. Is the application already in production? - Is this performance testing being conducted pre-production or post-production?
  8. Are the web server logs for the application available? Applicable only if the application is already in production.
  9. What are the critical workflows of the application to be considered as candidates for performance testing?
  10. What is the expected workload (number of simultaneous virtual users) to be tested? (One way to estimate this from the other answers is sketched after this list.)
  11. What is the average session duration of a user? - The average time a user would be logged into the application.
  12. For how many hours in a day will the application be available to and accessed by users?
  13. Do you have any specific performance objectives (SLAs) for this test? E.g. 1000 invoices to be processed in a day
  14. Is the test data required for performance testing available in adequate quantity and in the required format?
  15. Do the test team members have the necessary privileges on the test server machines?
  16. Are you aware of any performance problems in the application experienced by users or observed by other stakeholders?
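
As one hedged way to turn the answers to questions 10-12 into a load model, the sketch below applies Little's Law; the sessions-per-day figure is an illustrative assumption, and arrivals are assumed to be spread evenly over the day (peak hours would need a higher figure).

```python
# Minimal sketch (Little's Law) of turning business numbers into a virtual-user target.
# The sessions-per-day figure is an illustrative assumption; questions 11-12 supply the rest.

sessions_per_day = 12_000          # assumed number of user sessions per day
hours_available_per_day = 10       # from question 12
avg_session_duration_min = 15      # from question 11

# Assumes arrivals are spread evenly across the available hours.
arrival_rate_per_sec = sessions_per_day / (hours_available_per_day * 3600)
concurrent_users = arrival_rate_per_sec * (avg_session_duration_min * 60)

print(f"Arrival rate: {arrival_rate_per_sec:.3f} sessions/sec")
print(f"Approximate concurrent virtual users: {concurrent_users:.0f}")
```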

--LN