Saturday, January 16, 2010

Evaluating Test Tools for Automation – Criteria to be considered

Before starting any automation effort, it is common to evaluate the various commercial and free tools available in the market and choose the one best suited to the application under test. This article describes the criteria to consider while evaluating automation tools, along with a methodology for the evaluation. It does not aim to provide a comparison of the tools themselves; rather, it lays out the criteria against which testing tools can be evaluated.

The following are the most common criteria to consider while evaluating automation tools for an application. In addition to these, there may be specific criteria to consider based on the application under test.

Evaluation Criteria:

      1. Technology Support - Support for the controls and technologies used in the application, such as iFrames, AJAX controls, PDF forms, tree views, ColdFusion controls, etc.
      2. Ease of Script Development/Enhancement
      3. Reporting Results - The tool should produce a result log that is easy to analyze and that helps pinpoint the defect.
      4. Test Independence - The failure of one test script should not have any impact on the rest of the test suite.
      5. Maintenance of Script - Automated test scripts carry a high maintenance overhead, so the tool should make script maintenance easy.
      6. Multi-Browser Support - The tool should support different flavors of the Windows OS and multiple browsers (at least IE6, IE7 and IE8).
      7. Data Driven Capability - The tool should support an external data store for test data and provide a means to read from and write to it (a minimal data-driven sketch follows this list).
      8. Ease of Object Store Maintenance - The object store, i.e. the repository of all the objects captured by the tool, should be easy to maintain.
      9. Ease of Continuous Integration for Nightly Regression Tests - The tool should provide an easy means of integrating the tests with the build environment so that nightly regression tests can be run in a continuous integration setup.
      10. Limitations - Limitations of the tool with respect to the application under test.
      11. Advantages - Advantages of the tool with respect to the application under test.
      12. Cost of Licensing - The tool should not be prohibitively expensive and should offer flexible licensing options.
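
To make the data-driven capability in criterion 7 concrete, here is a minimal, tool-independent sketch of the idea in VBScript: test inputs are read from, and results written back to, an external Excel data store. The file name, sheet name and column layout (C:\TestData.xls, LoginData, username/password/result columns) are assumptions made only for this example.

' Minimal data-driven sketch: read test inputs from an external Excel data store
Set oExcel = CreateObject("Excel.Application")
Set oBook = oExcel.Workbooks.Open("C:\TestData.xls")   ' hypothetical data store
Set oSheet = oBook.Sheets("LoginData")                 ' hypothetical sheet name

iRow = 2   ' row 1 holds the column headers
Do While oSheet.Cells(iRow, 1).Value <> ""
    sUser = oSheet.Cells(iRow, 1).Value
    sPassword = oSheet.Cells(iRow, 2).Value
    ' ... drive the application under test with sUser/sPassword here ...
    ' and write the outcome back into the data store
    oSheet.Cells(iRow, 3).Value = "Executed"
    iRow = iRow + 1
Loop

oBook.Save
oBook.Close
oExcel.Quit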

Evaluation Procedure:

  1. Select the tools that you want to consider for evaluation. The tools commonly considered for automation include QTP, TestPartner, TestComplete, Visual Studio Team Edition, Selenium, Microsoft Lightweight Test Automation Framework, ArtOfTest WebAii, etc.
  2. For each tool under evaluation, identify the pros and cons with respect to each of the criteria listed above.
  3. Give each tool a score against each criterion. The scoring scale is:
    • 1 - Below Average
    • 2 - Average
    • 3 - Good
    • 4 - Very Good
    • 5 - Outstanding
  4. Prepare a score card of all the tools against each criterion. A sample score card is shown below, and a short sketch of how the final score can be derived appears just after this procedure. Please do not treat the data below as actual comparison data; it is provided only as an example.

    Evaluation Criterion \ Tool      | QuickTest Pro | Test Partner | Test Complete | VSTS
    Technology Support               | 4             | 3            | 3             | 2
    Ease of Script Development       | 4             | 3.5          | 3             | 3
    Reporting                        | 4             | 3.5          | 3             | 3
    Test Independence                | 4             | 3            | 4             | 4
    Script Maintenance               | 4             | 3.5          | 3             | 4
    Cross-Browser Support            | 4             | 3            | 2             | 2
    Data Driven Capability           | 4             | 4            | 4             | 4
    Ease of Object Store Maintenance | 4             | 3            | 3             | NA
    License Cost                     | 2             | 3            | 4             | 3
    Final Score                      | 3.78          | 3.28         | 3.22          | 3.13

  5. Provide a rank for each of the tools based on the scores above. Below is a sample tool ranking. This ranking does not represent an actual comparison of the tools; rather, it reflects their suitability for the specific application against which they were evaluated.

    Tool                                                  | Final Score | Tool Rank
    QuickTest Pro                                         | 3.78        | 1
    TestPartner                                           | 3.28        | 2
    TestComplete                                          | 3.22        | 3
    Visual Studio 2008 Team Edition for Software Testers  | 3.13        | 4

  6. Recommend the best-suited tool based on the rank.
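
In the sample score card above, a tool's final score is the simple average of its per-criterion scores, with NA entries left out (VSTS, for example, is averaged over eight criteria). Below is a minimal sketch of that calculation; the function name and the array-based input are just conveniences for the example.

' Average the per-criterion scores, skipping "NA" entries
Function FinalScore(arrScores)
    dSum = 0
    iCount = 0
    For Each vScore In arrScores
        If vScore <> "NA" Then
            dSum = dSum + CDbl(vScore)
            iCount = iCount + 1
        End If
    Next
    FinalScore = Round(dSum / iCount, 2)
End Function

' Example: the QuickTest Pro column from the sample score card
MsgBox FinalScore(Array(4, 4, 4, 4, 4, 4, 4, 4, 2))   ' displays 3.78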

Although you can recommend a specific tool based on a technical analysis against specific criteria, it is not always true that your recommendation will win the race. Most of the time the final choice is a business decision driven by cost and budget, so be open to working with any tool and to finding workarounds for its issues.

--LN

Sunday, January 10, 2010

A Checklist for Performance Testing – Requirement Questionnaire

A checklist for eliciting performance testing requirements.

Performance testing of an application involves several phases, and eliciting the requirements is the first of them. This article provides a questionnaire that can help test leads/managers and testers elicit performance testing requirements; this information is essential to gather before starting the performance test. Note that the questionnaire is a generic list of questions common to most applications and needs to be tailored to the nature of the application under test. I usually send this questionnaire to the stakeholders of the application for answers, and I have been using it successfully for conducting performance tests.




Performance Testing Requirement Questionnaire

  1. Please provide the URIs and credentials of the application to be tested.
  2. Please provide the test environment (hardware and software) configuration.
  3. Where is the test environment set up? Is it inside the firewall in an isolated LAN environment, or outside the firewall?
  4. What technologies have been used to develop the application?
  5. What are the interfaces of the application? E.g., payment gateways, web services, etc.
  6. Briefly describe the business/domain of the application.
  7. Is the application already in production? That is, is this performance testing being conducted pre-production or post-production?
  8. Are the web server logs for the application available? (Applicable only if the application is already in production.)
  9. What are the critical workflows of the application to be considered as candidates for performance testing?
  10. What is the expected workload (number of simultaneous virtual users) to be tested? (See the sketch after this list.)
  11. What is the average session duration of a user, i.e. the average time a user stays logged into the application?
  12. For how many hours a day is the application available to and accessed by users?
  13. Do you have any specific performance objectives (SLAs) for this test? E.g., 1,000 invoices to be processed in a day.
  14. Is the test data required for performance testing available in adequate quantity and in the required format?
  15. Do the test team members have the necessary privileges on the test server machines?
  16. Are you aware of any performance problems in the application experienced by the users or observed by other stakeholders?
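
Questions 10-12 are related: one rough way to sanity-check the expected number of simultaneous virtual users is to multiply the expected session arrival rate by the average session duration (Little's Law). The sketch below shows that arithmetic; the input numbers are purely illustrative and are not taken from any real application.

' Rough concurrency estimate: concurrent users = arrival rate * average session duration (Little's Law)
dSessionsPerHour = 600          ' illustrative: expected user sessions per hour at peak
dAvgSessionMinutes = 12         ' illustrative: average session duration in minutes

dArrivalRatePerMinute = dSessionsPerHour / 60
dConcurrentUsers = dArrivalRatePerMinute * dAvgSessionMinutes

MsgBox "Estimated simultaneous virtual users: " & dConcurrentUsers   ' 10 * 12 = 120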

--LN

Thursday, December 24, 2009

Estimating Effort for Automation Test in Agile/Scrum

Effort estimation has long been an art, and it will continue to be one. Estimating testing effort in particular is usually hard because of various uncertainties, and it is harder still in an Agile/Scrum environment, where change is natural. Here is an estimation methodology that I have used with a good degree of success.

This methodology is specifically suited to distributed agile teams where development and manual testing of the application are done by one team while test automation is done by another (though it can be used elsewhere as well). In this scenario, the Product Owner of the automation team is a member of the development sprint team who is a subject matter expert.

Although in Scrum the estimation is usually done during sprint planning, here the team is asked to estimate before the sprint planning meeting, during the Product Backlog Item (PBI) grooming sessions. This is because the automation team develops scripts independently of the development sprints, has a reasonably good understanding of the application, and does not need the Product Owner's (PO's) presence to estimate or understand the PBIs. However, if the PO's involvement is needed to understand a PBI, it is recommended to estimate in the grooming meeting or after it, but always only after the PBIs are understood.

Note that story point estimation does not provide the effort estimate as such; rather, it represents the size of a task in terms of relative complexity. Using the sprint velocity (rate of productivity), one can then derive the effort in terms of time and cost.

For example, if you estimate 100 story points and the velocity is 4 story points per hour, then your team would need about 25 hours to complete them, which can then be planned against the team's capacity (available time) for the sprint. The sketch below illustrates this calculation.
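
Below is a minimal sketch of the same calculation, so it can be repeated for any backlog size and velocity; the capacity figure used here is only an illustrative assumption.

' Effort in hours = story points / velocity (story points per hour)
dStoryPoints = 100
dVelocityPerHour = 4
dTeamCapacityHours = 30        ' illustrative: available team hours for the sprint

dEffortHours = dStoryPoints / dVelocityPerHour
MsgBox "Estimated effort: " & dEffortHours & " hours"   ' 100 / 4 = 25 hours

If dEffortHours > dTeamCapacityHours Then
    MsgBox "The committed PBIs exceed the sprint capacity - re-plan the sprint backlog."
End If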

Story Point Estimation

  • PBIs are estimated in terms of story points, following Agile/Scrum estimation principles
  • PBIs are estimated during the grooming sessions
  • PBIs are estimated by all the team members, who arrive at an agreed number that is taken as the actual estimate
  • Story points use the Fibonacci series of numbers 1, 2, 3, 5, 8, 13, 21
  • A PBI with 1 story point is the least complex, while one with 21 is relatively the most complex
  • Story points are assigned to reflect the complexity and size of the PBI based on relative complexity
  • When estimating the story points for a PBI, the criteria below need to be considered
    1. Available (already developed) Business Utility Libraries that can be reused – the more BULs available for reuse, the less complex the PBI
    2. BULs to be developed – the number of BULs to be developed and their complexity are considered
    3. General Utility Libraries to be developed/changed – the number of GULs to be developed/changed and their complexity are considered
    4. Data preparation – the amount of test data to be prepared and its complexity are considered
    5. Unit testing – the complexity of unit testing the script is considered
    6. Checkpoints – the number of checkpoints to be implemented based on the COAs and their complexity are considered
    7. Reviews – the effort for reviewing the automation script is considered
  • The following rules are to be followed while estimating story points for automation.
    1. All the team members estimate individually for a given PBI
    2. To avoid team members influencing one another, Planning Poker is played. Planning Poker is an estimation game in which all the team members estimate separately and disclose their estimates at the same time, on the Scrum Master's (SM) instruction
    3. Most of the time the highest estimate wins, but only if all the team members and the SM are in agreement
    4. The SM is responsible for handling discrepancies in estimates between team members
    5. Discrepancies in estimates within the team are handled by poker votes, reasoning and discussion as applicable, as decided by the SM
    6. Sprint Backlog Items (SBIs) are to be created for the PBIs – one SBI for each script, BUL and GUL to be developed or maintained. This essentially decomposes the PBI into multiple components, called SBIs, for automation
    7. Time estimates are provided for the SBIs, based on the velocity
    8. The estimate should be set once and not adjusted
    9. SBIs should not exceed 8 hours. If an SBI cannot be broken down any further without losing its logic, its estimate may exceed 8 hours
    10. The previous sprint's velocity (rate of productivity) and the capacity of the team are to be considered when projecting the velocity of the current sprint


-- LN

TESTER by INSTINCT, not by CHANCE.

Sunday, December 20, 2009

Testing for Exceptions in Web Application – An Approach

Last week, when I was in Norway, a colleague test manager casually asked a question about testing a requirement for a banking application. Conducting such a test is an interesting challenge, and this article is an attempt to provide a testing approach for the situation.

Requirement: The application must log every exception it encounters and fail gracefully (and, of course, securely, without disclosing any sensitive information).

Question: How do you plan for and test exception handling in a banking and finance application?

Solution:

This testing is not easy, as simulating exceptions is not always possible.

Before thinking about the solution, it is pivotal to understand the requirement clearly. To do so, we need to answer the questions below.

  • How are exception stack traces logged by the application? Usually they are logged to a secure database or along with the audit trail logs.
  • Which exceptions are caught as specific exceptions in the code, and which are caught as generic exceptions? This is important for understanding what needs to be done to simulate an exception.

Now, having understood the requirement reasonably well, we can think about testing.

It is hard to anticipate the different types of exceptions that can be thrown for a specific event. However, two approaches to testing this situation are described below.

Black box approach: This is best suited to manual functional testers with limited programming competency. As seen by a black box tester, an exception is thrown only on an event such as a page submission or an HTTP request, either GET or POST. (A minimal sketch of this approach follows the list below.)

  • List all the scenarios in the application where there is a GET or POST request, predominantly form submissions.
  • For each request, list the possible exceptions based on the functionality being processed and the input data. For example, on a page that accepts integer inputs, it is necessary to consider integer overflow, index-out-of-bounds and null-pointer exceptions.
  • Design test cases that simulate each of these exceptions using relevant test input data, along with the expected results. It is important to consider the post-conditions or follow-ups after the exception.
  • Execute the test cases and record the results, logging defects as needed.
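
As a minimal sketch of the black box idea above, the snippet below sends a few boundary and invalid integer values to a form-handling URL and scans each response for signs of an unhandled exception. The URL, the parameter, the input values and the marker strings are all assumptions made for the example; a real test would use the application's own pages and its documented error behavior as the expected result.

' Probe a hypothetical integer-input page with boundary values and look for leaked exception details
sBaseUrl = "http://testserver/invoice/search?invoiceId="     ' hypothetical URL and parameter
arrInputs = Array("0", "-1", "2147483648", "", "abc")        ' boundary and invalid integer inputs

Set oHttp = CreateObject("MSXML2.XMLHTTP")
For Each sInput In arrInputs
    oHttp.Open "GET", sBaseUrl & sInput, False
    oHttp.Send
    If InStr(oHttp.responseText, "Exception") > 0 Or InStr(oHttp.responseText, "Stack Trace") > 0 Then
        MsgBox "Possible unhandled exception leaked for input: " & sInput
    ElseIf oHttp.Status >= 500 Then
        MsgBox "Server error (HTTP " & oHttp.Status & ") for input: " & sInput
    End If
Next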

The advantage of this approach is that the tester does not need programming skills to conduct the tests, although basic programming knowledge is required to design them. The drawback is that it does not ensure coverage, as many exceptions may not be practical to simulate.

Grey box approach: This approach ensures greater coverage and is best suited to testers with programming knowledge; it is the one I recommend. It is good to plan and estimate for this testing during the test design and planning phase. (A small review-helper sketch follows the list below.)

  • Review the code, focusing on the catch clauses.
  • Verify that all the specific exceptions relevant to the scenario under test are handled in dedicated catch clauses.
  • Verify that there is a generic catch clause that takes care of any exception not handled by the specific ones.
  • It is important to validate the exception reports for accuracy and consistency. Usually the exceptions are reported to a secure database or to a securely stored log file.
  • It is also important to check the exception report for sensitive information that could be valuable to an attacker.
  • The exception stack trace should not be transferred as clear text in HTTP requests and responses; it should be encrypted with a strong cipher.
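
As a small aid to the review steps above, the sketch below scans a source file and lists its catch clauses, flagging the generic ones so a reviewer knows where to look first. It is only a crude, text-based helper; the file path is a hypothetical example, and it is no substitute for an actual code review.

' Crude helper for the code review: list catch clauses in a source file and flag generic ones
sSourceFile = "C:\Source\InvoiceController.java"       ' hypothetical path to a source file

Set oFso = CreateObject("Scripting.FileSystemObject")
Set oFile = oFso.OpenTextFile(sSourceFile, 1)           ' 1 = ForReading

iLine = 0
Do While Not oFile.AtEndOfStream
    sLine = oFile.ReadLine
    iLine = iLine + 1
    If InStr(sLine, "catch") > 0 Then
        If InStr(sLine, "catch (Exception") > 0 Or InStr(sLine, "catch(Exception") > 0 Then
            WScript.Echo "Line " & iLine & ": generic catch clause - review what it logs and reports"
        Else
            WScript.Echo "Line " & iLine & ": specific catch clause - " & Trim(sLine)
        End If
    End If
Loop
oFile.Close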

This testing approach is based on my experience with such scenarios and is described here in the hope that it will be of some help to the testing community. Comments and feedback are appreciated.

-- LN

TESTER by INSTINCT, not by CHANCE.

Wednesday, December 16, 2009

Blogging after a long time

It’s been a long time since I have written any article on the net. Now I think it is time to blog actively again. From next year you will see more frequent articles sharing my experience and knowledge. I plan to dedicate some of my time to blogging about testing, especially automation, performance and security testing.

Regards,

LN  

Thursday, February 28, 2008

How is CreateObject different from GetObject?

One of my junior colleagues asked me the question "What is the difference between CreateObject and GetObject, and in what situations should GetObject be used?". This post is the answer I gave him.

  1. CreateObject creates a new instance of a class (a new Automation object).
  2. GetObject gets a reference to an already running object.
  3. Always use CreateObject, except for Windows Management Instrumentation (WMI) and Active Directory Service Interfaces (ADSI) (see the WMI illustration after the script below).

The script below demonstrates the use of these two VBScript functions.

' Start a new Excel instance with CreateObject and add a workbook to it
Set objExcel = CreateObject("Excel.Application")
objExcel.Visible = True
Set objWorkbook = objExcel.Workbooks.Add

' Attach to the already running Excel instance with GetObject
' (this raises a run-time error if no Excel instance is running)
Set objExcel2 = GetObject(, "Excel.Application")
Set objWorkbook = objExcel2.Workbooks(1)
Set objWorksheet = objWorkbook.Worksheets(1)
objExcel2.Cells(1, 1).Value = "In god I TRUST everything else I TEST."
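
For the WMI and ADSI cases mentioned in point 3, GetObject is typically called with a moniker string rather than with an empty first argument. A one-line illustration for WMI (the hotfix enumerator script further below uses the same pattern):

' Bind to the local WMI service via a moniker string instead of a running-instance lookup
Set objWMIService = GetObject("winmgmts:\\.\root\cimv2")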



 



-- Lakshminarasimha Manjunatha Mohan


In God I TRUST everything else I TEST.

Windows Hotfixes and Patches Enumerator

Below is a simple VBScript developed using Windows WMI that can be executed from Windows Script Host to enumerate the installed hotfixes, patches and security updates on any given machine.

You might be wondering what this script is useful for. When testing a product that supports different environments, it can happen that one fine day you suddenly notice misbehavior or unusual defects in the application. These misbehaviors or defects are not always caused by the application itself; they may be compatibility issues with one of the patches or security updates installed by Windows automatic updates.

As a solution to this problem, I use this script to list all the patches, hotfixes and security updates on a server where the product works fine and compare them with those on another server where the misbehavior or defects are seen, and thus distinguish a genuine defect from a compatibility issue. This is a very handy script for compatibility testing. Let me know your comments on it. I intend to extend the script to compare two different lists of hotfixes and patches so as to make it easier to find the differences; a minimal sketch of that comparison idea follows the script.

To execute this script, an Excel file named test.xls must exist at C:\.

'========================================================
'Description: This script enumerates the Windows hotfixes
' and other security updates
'Author: Lakshminarasimha M.
'=========================================================
strComputer = "."

Set objWMIService = GetObject("winmgmts:{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")
Set colQuickFixes = objWMIService.ExecQuery ("Select * from Win32_QuickFixEngineering")

sDataTable = "C:\test.xls"
sDataSheet = "Sheet1"

iReqRow = 2
Dim oXLApp 'As Excel.Application
Dim oWorkBook 'As Excel.Workbook
Dim oWorkSheet 'As Excel.Worksheet
Dim iRowsCount 'As Integer
Dim iColsCount 'As Integer
Dim sColHeader 'As String
Dim sReqColumn 'As String

Set oXLApp = CreateObject("Excel.Application")
'oXLApp.Visible = True
Set oWorkBook = oXLApp.Workbooks.Open(sDataTable)
Set oWorkSheet = oWorkBook.Sheets(sDataSheet)
oWorkSheet.Activate
iRowsCount = oWorkSheet.UsedRange.Rows.Count
iColsCount = oWorkSheet.UsedRange.Columns.Count
' Set Column Headers
oWorkSheet.Range("A1").Value = "Caption"
oWorkSheet.Range("B1").Value = "CSName"
oWorkSheet.Range("C1").Value = "Description"
oWorkSheet.Range("D1").Value = "FixComments"
oWorkSheet.Range("E1").Value = "HotFixID"
oWorkSheet.Range("F1").Value = "InstallDate"
oWorkSheet.Range("G1").Value = "InstalledBy"
oWorkSheet.Range("H1").Value = "InstalledOn"
oWorkSheet.Range("I1").Value = "Name"
oWorkSheet.Range("J1").Value = "ServicePackInEffect"
oWorkSheet.Range("K1").Value = "Status"

' Write one row per installed hotfix/update
For Each objQuickFix in colQuickFixes
    oWorkSheet.Range("A" & iReqRow).Value = objQuickFix.Caption
    oWorkSheet.Range("B" & iReqRow).Value = objQuickFix.CSName
    oWorkSheet.Range("C" & iReqRow).Value = objQuickFix.Description
    oWorkSheet.Range("D" & iReqRow).Value = objQuickFix.FixComments
    oWorkSheet.Range("E" & iReqRow).Value = objQuickFix.HotFixID
    oWorkSheet.Range("F" & iReqRow).Value = objQuickFix.InstallDate
    oWorkSheet.Range("G" & iReqRow).Value = objQuickFix.InstalledBy
    oWorkSheet.Range("H" & iReqRow).Value = objQuickFix.InstalledOn
    oWorkSheet.Range("I" & iReqRow).Value = objQuickFix.Name
    oWorkSheet.Range("J" & iReqRow).Value = objQuickFix.ServicePackInEffect
    oWorkSheet.Range("K" & iReqRow).Value = objQuickFix.Status
    iReqRow = iReqRow + 1
Next
oWorkBook.Save
oWorkBook.Close
oXLApp.Quit
Set oWorkSheet = Nothing
Set oWorkBook = Nothing
Set oXLApp = Nothing
MsgBox("---Hotfixes Enumerated Sucessfully.---")




-- Lakshminarasimha Manjunatha Mohan



In God I TRUST everything else I TEST.