Thursday, December 24, 2009

Estimating Effort for Test Automation in Agile/Scrum

Effort estimation has long been an art, and it will continue to be one. Estimating testing effort is especially hard because of the many uncertainties involved, and it is harder still in an Agile/Scrum environment, where change is natural. Here is an estimation methodology that I have used with good success.

This methodology is particularly suited to distributed agile teams where one team develops and manually tests the application while another team builds the test automation (though it can be used elsewhere as well). In this scenario, the Product Owner of the automation team is a member of the development sprint team and a subject matter expert.

Although in Scrum estimation is usually done during sprint planning, I insist on estimating before the sprint planning meeting, during the Product Backlog Item (PBI) grooming sessions. The automation team develops its scripts independently of the development sprints, has a reasonably good understanding of the application, and does not need the Product Owner (PO) present to understand or estimate the PBIs. However, if the PO's involvement is needed to understand a PBI, it is better to estimate during the grooming meeting or after it, but always only after the PBIs are understood.

Note that a story point estimate is not an effort estimate as such; it represents the relative complexity and size of the task. Given the sprint velocity (the team's rate of productivity), you can then derive the effort in terms of time and cost.

For example, if you estimate 100 story points and your velocity is 4 story points per hour, your team needs about 25 hours to complete those 100 story points, which can then be planned against the team's capacity (available time) for the sprint.
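The conversion is simple arithmetic. Here is a minimal sketch in Java; the class name, method name, and numbers are mine, purely for illustration, not part of any standard Scrum tooling.

```java
// Minimal sketch: convert story points to hours and compare against capacity.
// All names and figures here are illustrative assumptions.
public class EffortEstimator {

    /** Hours needed to burn the given story points at the given velocity. */
    static double hoursNeeded(int storyPoints, double velocityPointsPerHour) {
        return storyPoints / velocityPointsPerHour;
    }

    public static void main(String[] args) {
        int storyPoints = 100;
        double velocity = 4.0;        // story points per hour, from past sprints
        double capacityHours = 30.0;  // team's available time for the sprint

        double needed = hoursNeeded(storyPoints, velocity);
        System.out.printf("Need %.1f hours; capacity is %.1f hours -> %s%n",
                needed, capacityHours,
                needed <= capacityHours ? "fits in the sprint" : "re-plan or split");
    }
}
```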

Story Point Estimation

  • PBIs are estimated in terms of story points, following Agile/Scrum estimation principles
  • PBIs are estimated during the grooming sessions
  • All team members estimate each PBI and converge on an agreed number, which is taken as the actual estimate
  • Story points use the Fibonacci series of numbers 1, 2, 3, 5, 8, 13, 21
  • A PBI with 1 story point is the least complex, while one with 21 is relatively the most complex
  • Story points are assigned to reflect the complexity and size of the PBI relative to other PBIs
  • When estimating the story points for a PBI, the criteria below need to be considered
    1. Available (already developed) Business Utility Libraries (BULs) that can be reused – the more BULs available for reuse, the less complex the PBI
    2. BULs to be developed – the number of BULs to be developed and their complexity
    3. General Utility Libraries (GULs) to be developed or changed – the number of GULs and their complexity
    4. Data preparation – the amount of test data to be prepared and its complexity
    5. Unit testing – the complexity of unit testing the script
    6. Checkpoints – the number of checkpoints to be implemented based on the COAs, and their complexity
    7. Reviews – the effort to review the automation script
  • The following rules are to be followed when estimating story points for automation.
    1. All the team members estimate each PBI individually
    2. To avoid team members influencing one another, Planning Poker is played. Planning Poker is an estimation game in which all the team members estimate separately and disclose their estimates simultaneously, on the Scrum Master's (SM) instruction
    3. Most of the time the highest estimate wins, but only if all the team members and the SM are in agreement
    4. The SM is responsible for handling discrepancies in estimates between team members
    5. Discrepancies in estimates within the team are resolved by poker votes, reasoning, and discussion, as applicable and as decided by the SM
    6. Sprint Backlog Items (SBIs) are created for the PBIs – one SBI for each script, BUL, and GUL to be developed or maintained. This decomposes the PBI into multiple components, called SBIs, for automation (see the sketch after this list)
    7. Time estimates are provided for the SBIs, based on the velocity
    8. An estimate should be set once and not adjusted
    9. An SBI should not exceed 8 hours. If an SBI cannot be broken down any further without losing its logic, its estimate may exceed 8 hours
    10. The previous sprint's velocity (rate of productivity) and the capacity of the team are considered when projecting the velocity of the current sprint
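To make rules 6, 7, and 9 concrete, here is one way to sketch the decomposition in Java. The SBI names, their point breakdown, and the idea of apportioning points per SBI are my assumptions for the example, not prescribed by the methodology.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch: decompose a PBI into SBIs (one per script, BUL, or
// GUL), convert each SBI's points to hours using the previous sprint's
// velocity, and flag any SBI estimated above 8 hours. Names and numbers
// are made up for the example.
public class PbiDecomposition {
    public static void main(String[] args) {
        double velocity = 4.0; // story points per hour, from the previous sprint

        Map<String, Integer> sbis = new LinkedHashMap<>();
        sbis.put("Script: login flow", 5);
        sbis.put("BUL: account lookup", 8);
        sbis.put("GUL: date formatting change", 3);

        for (Map.Entry<String, Integer> sbi : sbis.entrySet()) {
            double hours = sbi.getValue() / velocity;
            // Rule 9: an SBI should normally not exceed 8 hours.
            String flag = hours > 8 ? "  <-- exceeds 8h, try to split further" : "";
            System.out.printf("%-30s %4.1f h%s%n", sbi.getKey(), hours, flag);
        }
    }
}
```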


-- LN

TESTER by INSTINCT, not by CHANCE.

Sunday, December 20, 2009

Testing for Exceptions in a Web Application – An Approach

Last week, when I was in Norway, a colleague test manager casually asked a question about testing a requirement for a banking application. Conducting such a test is an interesting challenge, and this article is an attempt to provide a testing approach for the situation.

Requirement: The application must log every exception it encounters and fail gracefully (and, of course, securely, without disclosing any sensitive information).

Question: How do you plan for and test exception handling in a banking and finance application?

Solution:

This testing is not easy, as simulating exceptions is not always possible.

Before thinking about the solution, it is pivotal to understand the requirement clearly. To do so, we need to answer the questions below.

  • How are the exception stack traces logged by the application? Usually they are logged to a secure database or along with the audit trail logs.
  • Which exceptions are caught as specific exceptions in the code, and which are caught as generic exceptions? This is important for understanding what needs to be done to simulate an exception.

Now, having understood the requirement reasonably well, we can think about testing.

It is hard to anticipate all the types of exceptions that can be thrown on a specific event. However, two approaches to testing this situation are described below.

Black box approach: This is best suited to manual functional testers with limited programming competency. As the black box tester sees it, an exception is thrown only on an event such as a page submission or a GET or POST request.

  • List all the scenarios in the application where a GET or POST occurs, predominantly form submissions.
  • For each request, list the possible exceptions based on the functionality being processed and the data input. For example, on a page that takes integer inputs, it is necessary to consider integer overflow, IndexOutOfBoundsException, and NullPointerException.
  • Design test cases that simulate each exception through relevant test input data, with expected results. It is important to consider the post-conditions, or follow-ups, after the exception.
  • Execute the test cases and record the results, raising defects as needed.

The advantage of this approach is that executing the tests does not require programming skills, although designing them does require basic programming knowledge. The drawback is that it does not ensure coverage, since many exceptions may not be practically possible to simulate.
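As a sketch of one such black-box test, the Java code below posts an out-of-range integer to a form endpoint and checks that the response fails gracefully. The URL, the form field name, and the leak-detection heuristic are hypothetical assumptions for illustration, not from any real application.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch of a black-box exception test: send an amount that overflows any
// integer field and verify the application does not leak a stack trace.
// The endpoint and field name are hypothetical placeholders.
public class GracefulFailureTest {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://bank.example.com/transfer"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "amount=99999999999999999999")) // overflows any int field
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // Graceful failure means a controlled error page, never a raw stack
        // trace. This string check is a crude heuristic, good enough here.
        boolean leaked = response.body().contains("Exception")
                || response.body().contains("at java.");
        System.out.println("Status: " + response.statusCode()
                + (leaked ? " -- FAIL: stack trace leaked" : " -- OK: no stack trace"));
    }
}
```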

Grey box approach: This approach ensures greater coverage and is best suited to testers with programming knowledge. I recommend it, and it is good to plan and estimate for this testing during the test design and planning phase.

  • Review the code, focusing on the catch clauses
  • Verify that all the specific exceptions relevant to the scenario under test are handled in catch clauses
  • Verify that there is a generic catch clause that takes care of all the exceptions not handled by the specific ones
  • Here it is important to validate the exception reports for accuracy and consistency. Usually the exceptions are reported to a secure database or to a securely stored log file.
  • It is also important to inspect the exception report for sensitive information that could be juicy for a hacker.
  • The stack trace of an exception should not be transferred as clear text in HTTP requests and responses; it should be encrypted with a strong cipher. The sketch after this list shows the catch-clause pattern a reviewer looks for.
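Here is a minimal Java sketch of that pattern: specific catch clauses first, a generic one last, full stack traces going only to a secure log, and a neutral message going back to the user. The class, method, and logger names are illustrative placeholders, not from any real banking application.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch of the catch-clause pattern a grey-box reviewer looks for.
// All names (processTransfer, lookupAccount, the logger) are made up.
public class TransferHandler {

    private static final Logger SECURE_LOG = Logger.getLogger("audit.exceptions");

    String processTransfer(String accountId, String amountText) {
        try {
            int amount = Integer.parseInt(amountText); // may throw NumberFormatException
            return lookupAccount(accountId) + " debited " + amount;
        } catch (NumberFormatException e) {
            // Specific exception: full detail to the secure log only;
            // the user sees nothing sensitive.
            SECURE_LOG.log(Level.WARNING, "Bad amount input", e);
            return "Invalid amount. Please try again.";
        } catch (RuntimeException e) {
            // Generic catch-all for anything not handled above.
            SECURE_LOG.log(Level.SEVERE, "Unexpected failure", e);
            return "We could not process your request. Please try again later.";
        }
    }

    private String lookupAccount(String accountId) {
        if (accountId == null) throw new NullPointerException("accountId");
        return "Account " + accountId;
    }

    public static void main(String[] args) {
        // Trigger the specific catch clause with bad input.
        System.out.println(new TransferHandler().processTransfer("12345", "not-a-number"));
    }
}
```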

I have described this testing approach based on my experience testing such scenarios, in the hope that it is of some help to the tester community. Comments and feedback are appreciated.

-- LN

TESTER by INSTINCT, not by CHANCE.

Wednesday, December 16, 2009

Blogging after a long time

It’s been a long time since I have written any article here. Now I think it is time to blog actively again. From next year you will see more frequent articles sharing my experience and knowledge. I plan to dedicate some of my time to blogging about testing, especially automation, performance, and security testing.

Regards,

LN