Sunday, December 14, 2014

Ant in Selenium

Let's understand Ant terminology (a sample build file follows the definitions below):

1. <project>:
   It is the top-level element in an Ant script.
   It has three optional attributes:
   a. name  b. default  c. basedir
   a. name: defines the name of the project
   b. default: the default target to use when no other target is supplied
   c. basedir: the base directory from which all path calculations are done

2. Target:
   A target is a set of tasks which you want to be executed.
   Each project can define zero or more targets, and a target can depend on another target.
   When no target is given, the project uses the default target.
   When starting Ant, you can choose whichever target you want executed.
   Typical targets are:
   1. clean: clean up
   2. compile: compile the source code
   3. dist: generate the distribution

3. Task:
   A task is a piece of code which can be executed, and it can have multiple attributes.
   There are many tasks already written for Ant:
   1. File tasks: copy, concat, mkdir, delete, get, replace, move, tempfile, etc.
   2. Compile tasks: javac, apt, rmic
   3. Archive tasks: zip/unzip, jar/unjar, war/unwar
   4. Testing tasks: junit, junitreport
   5. Property tasks: dirname, loadfile, propertyfile, uptodate
   6. Other tasks such as echo, javadoc, sql

4. Properties:
   A property has a name and a value, and property names are case-sensitive.
   A property is referenced by placing its name between ${ and }, e.g. ${property-name}, including inside attribute values.
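To tie the terminology together, here is a minimal build.xml sketch. The project name, directory layout, and target names are illustrative assumptions, not a prescribed structure:

```xml
<project name="selenium-tests" default="dist" basedir=".">
    <!-- Properties: case-sensitive name/value pairs referenced as ${name} -->
    <property name="src.dir" value="src"/>
    <property name="build.dir" value="build"/>
    <property name="dist.dir" value="dist"/>

    <!-- Targets: named sets of tasks; "depends" chains one target to another -->
    <target name="clean" description="Clean up">
        <delete dir="${build.dir}"/>
        <delete dir="${dist.dir}"/>
    </target>

    <target name="compile" description="Compile the source code">
        <mkdir dir="${build.dir}"/>
        <!-- javac is a compile task; delete/mkdir above are file tasks -->
        <javac srcdir="${src.dir}" destdir="${build.dir}" includeantruntime="false"/>
    </target>

    <target name="dist" depends="compile" description="Generate the distribution">
        <mkdir dir="${dist.dir}"/>
        <!-- jar is an archive task -->
        <jar destfile="${dist.dir}/tests.jar" basedir="${build.dir}"/>
    </target>
</project>
```

Running `ant` with no arguments executes the default target (dist), which first runs compile because of the depends attribute.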

Pareto Principle - the 80-20 rule of automation testing


The 80-20 rule says that 80% of the results come from 20% of the causes.
 
This is a good rule of thumb for determining how much testing is enough: you can typically cover approximately 80% of the functionality that people actually use by focusing on 20% of the possible test combinations.

This can be applied to any field as follows:

- 80% of the revenue of a company comes from 20% of the clients

- 80% of the donations to a charity come from 20% of the people

- 80% of the books from a bookstore are purchased by 20% of the clients

- 80% of the clients are using 20% of the functionality

- 80% of the bugs are caused by 20% of the functionality

Microsoft noted that by fixing the top 20% of the most-reported bugs, 80% of the related errors and crashes in a given system would be eliminated.
In load testing, it is common practice to estimate that 80% of the traffic occurs during 20% of the time.

How does this apply to testing?

Well, the project release date is fixed, so you cannot test everything well.

So, test thoroughly only the 20% of the application that the majority of the users will use.

Select the 20% of the application's functionalities that have the highest risk and test them well.

Test the remaining 80% of the functionalities by just taking the happy paths.

You think you did a good job, and the project manager is happy with the results.

And after the release, the support team receives lots of issues from the clients about the 80% of the application that was not tested well.

What's more, the senior management of the company starts noticing problems all over the application too.

The "solution" is, of course, applying an endless number of patches with bug fixes for the issues discovered by the customers, frustrating the customers as much as possible and wasting as much time as possible for both the development and testing teams.

Severity and Priority

  • Severity is about the risk a bug poses; it relates to the technical aspect of the defect.

  • Priority relates to the business aspect: it determines the order in which bugs should be fixed.

    Most organizations have standard criteria for classifying bug severity, such as:

  • Severity 1 – Catastrophic bug or showstopper. Causes system crash, data corruption, irreparable harm, etc.
  • Severity 2 – Critical bug in important function. No reasonable workaround.
  • Severity 3 – Major bug, but with a viable workaround.
  • Severity 4 – Minor bug with trivial impact.


Typically, Severity 1 and 2 bugs must be fixed before release, while 3's and 4's might not be, depending on how many there are and on plans for their subsequent disposition.



Testing Models

Let us discuss and explore a few of the well-known models.

The following models are addressed: 
  1. Waterfall Model.
  2. Spiral Model. 
  3. 'V' Model. 
  4. 'W' Model, and 
  5. Butterfly Model. 

The Waterfall Model 
This is one of the earliest models of software development, commonly attributed to Winston W. Royce. The Waterfall model is a step-by-step method of achieving tasks: using this model, one can move on to the next phase of development activity only after completing the current phase, and one can go back only to the immediate previous phase.
In the Waterfall model, each phase of the development activity is followed by verification and validation activities. Once a phase is completed with its testing activities, the team proceeds to the next phase. At any point of time, one can move only one step back, to the immediate previous phase; for example, one cannot move from the Testing phase to the Design phase.

Spiral Model 
In the Spiral model, a cyclical and prototyping view of software development is presented. Tests are explicitly mentioned (risk analysis, validation of requirements and of the development) and the test phase is divided into stages. The test activities include module, integration and acceptance tests. However, in this model testing still follows coding; the exception is that the test plan should be constructed after the design of the system. The Spiral model also identifies no activities associated with the removal of defects.

'V' Model

Many of the process models currently used can be more generally connected by the 'V' model, where the 'V' describes the graphical arrangement of the individual phases. The 'V' is also a synonym for Verification and Validation.
By ordering the activities in time sequence and by abstraction level, the connection between development and test activities becomes clear. Activities lying opposite one another complement each other; i.e., the development activities serve as the basis for the corresponding test activities. For example, the system test is carried out on the basis of the results of the specification phase.

The 'W' Model
From the testing point of view, all of the models above are deficient in various ways:
The test activities first start only after implementation, and the connection between the various test stages and the basis for testing is not clear.
The tight link between test, debug and change tasks during the test phase is not clear.

Why 'W' Model? 
In the models presented above, testing usually appears as an unattractive task to be carried out after coding. In order to place testing on an equal footing, a second 'V' dedicated to testing is integrated into the model. Both 'V's put together give the 'W' of the 'W' model.

Butterfly Model of Test Development
Butterflies are composed of three pieces – two wings and a body. Each part represents a piece of software testing, as described hereafter.

Test Case

A test case defines the activities and steps which you will perform in your testing.

In manual testing, test cases are written manually, but in automation testing they are generated by testing tools and are known as test scripts. A test case and a test script are essentially the same, because both perform the same activities; the only difference is that a test case is written in simple sentences, while a test script is generated in the form of a script, i.e. in a lightweight programming language. A minimal example of such a script is sketched below.
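For example, here is a minimal Selenium WebDriver test script in Java for a hypothetical login page; the URL, element IDs, credentials, and expected title are assumptions made for the sketch:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver(); // assumes chromedriver is on the PATH
        try {
            // Step 1: open the application under test (URL is illustrative)
            driver.get("https://example.com/login");
            // Steps 2-4: the actions written as plain sentences in the test case
            driver.findElement(By.id("username")).sendKeys("testuser");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("loginButton")).click();
            // Expected result: the home page is displayed after login
            System.out.println(driver.getTitle().contains("Home")
                    ? "Status: Pass" : "Status: Fail");
        } finally {
            driver.quit(); // always release the browser
        }
    }
}
```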

Test Case Format:

Test Case ID: A unique ID for each test case. Follow some convention to indicate the type of test, e.g. 'TC_Module_SubModule_01'.

Test Priority (Low/Medium/High): Useful during test execution. The test priority should be set by the reviewer.

Module Name: Mention the name of the main module or sub-module.

Test Designed By: Name of the tester.

Test Designed Date: Date when the test was written.

Test Executed By: Name of the tester who executed this test. To be filled in after test execution.

Test Execution Date: Date when the test was executed.

Test Objective/Description: Describe the test objective in brief.

Pre-condition: Any prerequisites that must be fulfilled before execution of this test case. List all pre-conditions needed to successfully execute this test case.

Dependencies: Mention any dependencies on other test cases or test requirements.

Test Steps: Write every action to be performed; record the corresponding outcome under Expected Result.

Test Data: The test data to be used as input for this test case.

Expected Result: Describe the expected result in detail, including any message or error that should be displayed on screen for each action.

Post-condition: What should be the state of the system after executing this test case?


Actual Result: The actual test result, to be filled in after test execution.

Status (Pass/Fail): If the actual result is not as per the expected result, mark this test as Failed; otherwise mark it as Passed.

Notes/Comments/Questions: Supports the above fields: if there are special conditions which can't be described in any of the above fields, or questions related to expected or actual results, mention them here.

Add the following fields if necessary:

Defect ID/Link: If the test status is Fail, include a link to the defect log or mention the defect number.

Test Type/Keywords: This field can be used to classify tests based on test type, e.g. functional, usability, business rules, etc.

Requirements: The requirements for which this test case is written, preferably with the exact section number of the requirements document.

Attachments/References: This field is useful for complex test scenarios, e.g. to explain test steps or the expected result with a Visio diagram as a reference. Provide the link or path to the actual diagram or document.

Automation? (Yes/No): Whether this test case is automated or not. Useful for tracking automation status as test cases are automated.

Bug Life Cycle

Introduction:
A bug can be defined as abnormal behavior of the software. No software exists without bugs.

A mistake in coding is called an error; an error found by a tester is called a defect; a defect accepted by the development team is called a bug; and a build that does not meet the requirements is a failure.

Bug Life Cycle:
In the software development process, a bug has a life cycle, and it should go through this life cycle before being closed. A specific life cycle ensures that the process is standardized. The bug attains different states during the life cycle.


The different states of a bug can be summarized as follows:


1. New
2. Open
3. Assign
4. Test
5. Verified
6. Deferred
7. Reopened
8. Duplicate
9. Rejected and
10. Closed

Description of Various Stages:
1. New: When the bug is posted for the first time, its state will be “NEW”. This means that the bug is not yet approved.

2. Open: After a tester has posted a bug, the tester's lead approves that the bug is genuine and changes the state to “OPEN”.

3. Assign: Once the lead changes the state to “OPEN”, he assigns the bug to the corresponding developer or developer team. The state of the bug is now changed to “ASSIGN”.

4. Test: Once the developer fixes the bug, he has to assign it to the testing team for the next round of testing. Before releasing the build with the bug fixed, he changes the state of the bug to “TEST”, which indicates that the bug has been fixed and released to the testing team.

5. Deferred: A bug changed to the deferred state is expected to be fixed in a subsequent release. A bug may be deferred for many reasons: its priority may be low, there may be a lack of time before the release, or the bug may not have a major effect on the software.

6. Rejected: If the developer feels that the bug is not genuine, he rejects the bug. Then the state of the bug is changed to “REJECTED”.

7. Duplicate: If the bug is reported twice, or two bugs describe the same issue, then one bug's status is changed to “DUPLICATE”.

8. Verified: Once the bug is fixed and the status is changed to “TEST”, the tester retests the bug. If the bug is no longer present in the software, he approves that the bug is fixed and changes the status to “VERIFIED”.

9. Reopened: If the bug still exists even after the bug is fixed by the developer, the tester changes the status to “REOPENED”. The bug traverses the life cycle once again.

10. Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists in the software, he changes the status of the bug to “CLOSED”. This state means that the bug is fixed, tested and approved.

While defect prevention is much more effective and efficient in reducing the number of defects, most organizations conduct defect discovery and removal. Discovering and removing defects is an expensive and inefficient process; it is much more efficient for an organization to conduct activities that prevent defects.

Guidelines on Deciding the Severity of a Bug:
Indicate the impact each defect has on testing efforts, or on users and administrators of the application under test. This information is used by developers and management as the basis for assigning the priority of work on defects.

A sample guideline for the assignment of priority levels during the product test phase:

  1. Critical / Show Stopper — An item that prevents further testing of the product or function under test can be classified as a Critical bug. No workaround is possible for such bugs. Examples include a missing menu option or a security permission required to access a function under test.
  2. Major / High — A defect that does not function as expected/designed, or that causes other functionality to fail to meet requirements, can be classified as a Major bug. A workaround can be provided for such bugs. Examples include inaccurate calculations, the wrong field being updated, etc.
  3. Average / Medium — Defects which do not conform to standards and conventions can be classified as Medium bugs. Easy workarounds exist to achieve functionality objectives. Examples include matching visual and text links which lead to different end points.
  4. Minor / Low — Cosmetic defects which do not affect the functionality of the system can be classified as Minor bugs.

Roles and Responsibilities for Sr Quality Analyst (Automation)

·  Prepare reusable functions, which improve the robustness, re-usability, and maintainability of test scripts.
·  The framework should be designed in such a way that it increases and speeds up productivity.
·  The engineer must also support the framework(s), for example by helping dev/QA with issues in using the tool. The engineer will implement automation test scripts. Integration with the test management tool is also planned.
·  The Senior Test Automation Engineer must be able to take on leadership responsibilities and influence the direction of the automation effort, and its schedule and prioritization.
·  The engineer will work with management, developers, and quality assurance personnel, to meet these goals.
·  Additionally, the Test Automation Engineer will be involved in supporting the build master in implementing/improving build test processes, environments, and scripts. These build tests ensure that the code drops delivered to quality assurance are of the highest quality.

·  He will provide a practical approach to complex product testing, in the area of automating test cases for the purposes of regression testing.
·  He will be a creative and proactive thinker and will make use of current technologies to provide extensible automation infrastructures.
·  He will review product requirements and functional and design specifications to determine and prepare automated test cases.
·  He will work closely with other QC team members to automate the execution and verification of reports created by the various company products.
·  He will be part of a team focusing on the automation of an identified set of migration tests, checking that they run correctly and work within the infrastructure. The team will focus on developing and testing these automation buckets, which will be executed by other teams.

Tuesday, September 23, 2014

Test Automation Life Cycle

The following are the phases of the Test Automation Life Cycle. They can vary from project to project.
1.      Automation Feasibility Analysis
1.1.   Decision to Automate Test
1.2.   Overcoming False Expectations for Automated Testing
2.      Test Strategy
3.      Environment Set up
4.      Test Script Development
5.      Test Script Execution
6.      Test Result Generation and Analysis

1.      Automation Feasibility Analysis

Before kicking off test automation, it is mandatory to analyze the feasibility of the application under test (AUT): is the AUT the right candidate for test automation or not?

1.1 Decision to Automate Test

In this phase it is important for the test team to manage automated testing expectations and to outline the potential benefits of automated testing when implemented correctly. A test tool proposal needs to be outlined, which will be helpful in acquiring management support.

1.2 Overcoming False Expectations for Automated Testing


Although automated testing is valuable and can produce a return on investment, there isn't always an immediate payback.
Some myths regarding automation testing should be dispelled. People often see test automation as a silver bullet, but they have to understand that test automation requires a significant investment of time and energy to achieve a long-term return on investment (ROI) of faster and cheaper regression testing. Automation testing should be introduced correctly into a project to set valid expectations.
Also, a feasibility analysis should be done on the manual test case pack, which enables automation engineers to design the test scripts.
The following feasibility checks should be done before beginning test automation:
  • AUT automation feasibility
  • Test Case automation feasibility
  • Tool feasibility

1.3 Proof of Concept (POC) Phase


The objective of this phase is to do a POC for the customer after the customer is identified as a lead.
Activities –
         Identification and evaluation of a test automation tool.
         Identification of the framework (FW) and changes as needed.
         POC / test case development and scripting, as needed by the customer.
         Typically this takes one week.
         Arrange a demo for the customer.
         There is no guarantee that the project will come through.
Deliverables
         POC done.
         Demo to customer.

2. Test Strategy

This phase defines how to approach and accomplish the mission. The first step in the test strategy is the selection of a test automation framework (a minimal sketch follows the list below).
The following are common types of test automation framework:
  1. Record and Playback Framework
  2. Functional Decomposition Framework
  3. Keyword/Table Driven Framework
  4. Data Driven Framework
  5. Hybrid Framework
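As a minimal sketch of the data-driven idea: the test logic is written once and executed against a table of data rows. The class, record, and login stub below are hypothetical placeholders for real Selenium steps:

```java
import java.util.List;

public class DataDrivenExample {
    // One row of test data; in practice this often comes from Excel, CSV, or a database
    record Credentials(String user, String password, boolean shouldSucceed) {}

    public static void main(String[] args) {
        List<Credentials> data = List.of(
                new Credentials("validUser", "validPass", true),
                new Credentials("validUser", "wrongPass", false),
                new Credentials("", "", false));
        // The same steps run once per data row
        for (Credentials c : data) {
            boolean actual = attemptLogin(c.user(), c.password());
            System.out.printf("user=%-10s expected=%b actual=%b -> %s%n",
                    c.user(), c.shouldSucceed(), actual,
                    actual == c.shouldSucceed() ? "PASS" : "FAIL");
        }
    }

    // Stand-in for the real Selenium steps (driver.get, sendKeys, click, ...)
    static boolean attemptLogin(String user, String password) {
        return "validUser".equals(user) && "validPass".equals(password);
    }
}
```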
Other factors involved in the test strategy are as follows:
  1. Schedule
  2. Number of resources
  3. Mode of communication process
  4. Defining in-scope and out-of-scope
  5. Return on Investment analysis

3. Environment Set up

It is ideal to execute test automation scripts in the regression environment. The test environment set-up phase has the following tasks:
  1. Sufficient tool licenses
  2. Sufficient add-ins licenses
  3. Sufficient utilities like comparison tools, advance text editors etc.
  4. Implementation of automation framework
  5. AUT access and valid credentials

4. Test Script Development

The activities of automation test engineers are as follows (a sketch of a reusable function library follows the list):
  1. Object Identification
  2. Creating Function Libraries
  3. Building the scripts
  4. Unit testing the scripts
  5. Test execution
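A sketch of what a small reusable function library might look like with Selenium WebDriver; the class name and helper methods are illustrative, and explicit waits are one common way to make scripts robust:

```java
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class CommonFunctions {
    private final WebDriver driver;
    private final WebDriverWait wait;

    public CommonFunctions(WebDriver driver) {
        this.driver = driver;
        this.wait = new WebDriverWait(driver, Duration.ofSeconds(10));
    }

    // Reusable, self-waiting typing helper: scripts call type(...) instead of
    // repeating findElement/clear/sendKeys everywhere
    public void type(By locator, String text) {
        wait.until(ExpectedConditions.visibilityOfElementLocated(locator)).clear();
        driver.findElement(locator).sendKeys(text);
    }

    // Reusable click helper that waits until the element is actually clickable
    public void click(By locator) {
        wait.until(ExpectedConditions.elementToBeClickable(locator)).click();
    }
}
```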

5. Test Script Execution

The following tasks are involved for the test script execution team:
  1. Test script execution
  2. Defect Logging
  3. Modifying scripts as developer changes occur

6. Test Result Generation and Analysis

Result generation and analysis is the last phase, and its outputs are important deliverables in test automation. Results must be baselined and signed off. The following are the important activities in this phase:
  1. Result analysis
  2. Report generation
  3. Documenting the issues and knowledge gained
  4. Preparation of client presentation

Friday, August 8, 2014

Test Cases for a Text Box

Validation Criteria for Text/String Fields
While checking a text field, the following points should be taken into consideration:

1. Aesthetic (Visual) Conditions
2. Validation Conditions
3. Navigation Conditions
4. Usability Conditions
5. Data Integrity Conditions
6. Modes (Editable/Read-only) Conditions
7. General Conditions
8. Specific Field Tests
       8.1. Date Field Checks
       8.2. Numeric Fields
       8.3. Alpha Field Checks

1. Aesthetic Conditions

•Check that the text field has a caption.
•The label is not editable.
•Check the spelling of the label.
•Move the mouse cursor over all enterable text boxes. The cursor should change from an arrow to an insert bar.
•If it doesn't, then the text in the box should be grey or non-updateable.
•Are the field prompts the correct color?
•Are the field backgrounds the correct color?
•In read-only mode, are the field prompts the correct color?
•In read-only mode, are the field backgrounds the correct color?
•Are all the field prompts aligned perfectly on the screen?
•Are all the field edits boxes aligned perfectly on the screen?
•Are all the field prompts spelt correctly?
•Are all character or alpha-numeric fields left justified? This is the default unless otherwise specified.
•Are all numeric fields right justified? This is the default unless otherwise specified.
•Is all the micro help text spelt correctly on this screen?
•Is all the error message text spelt correctly on this screen?
•Is all user input captured in UPPER case or lower case consistently?
•Assure that any password entered is displayed in a masked/encrypted format.

2. Validation Conditions

•Does a failure of validation on every field cause a sensible user error message?
•Is the user required to fix entries which have failed validation tests?
•Have any fields got multiple validation rules and if so are all rules being applied?
•If the user enters an invalid value and clicks on the OK button (i.e. does not TAB off the field) is the invalid entry identified and highlighted correctly with an error message?
•Is validation consistently applied at screen level unless specifically required at field level?
•For all numeric fields, check whether negative numbers can and should be able to be entered.
•For all numeric fields, check the minimum and maximum values and also some allowable mid-range values.
•For all character/alphanumeric fields, check that a character limit is specified and that this limit exactly matches the specified database size (see the sketch after this list).
•Do all mandatory fields require user input?
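As one concrete example of the character-limit check above, here is a hedged Selenium sketch; the URL, locator, and the limit of 20 characters are assumptions for illustration:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;

public class TextFieldLimitCheck {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/form");
            WebElement field = driver.findElement(By.id("username"));
            // Try to enter one character more than the assumed limit of 20
            field.sendKeys("A".repeat(21));
            int length = field.getAttribute("value").length();
            System.out.println(length <= 20
                    ? "Pass: input truncated to the field limit"
                    : "Fail: field accepted " + length + " characters");
        } finally {
            driver.quit();
        }
    }
}
```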

3. Navigation Conditions

•Does the Tab Order specified on the screen go in sequence from Top Left to bottom right? This is the default unless otherwise specified.

4. Usability Conditions

•Is all date entry required in the correct format?
•Are all read-only fields avoided in the TAB sequence?
•Are all disabled fields avoided in the TAB sequence?
•Can the cursor be placed in the micro help text box by clicking on the text box with the mouse?
•Can the cursor be placed in read-only fields by clicking in the field with the mouse?
•Is the cursor positioned in the first input field or control when the screen is opened?
•SHIFT and the arrow keys should select characters. Selection should also be possible with the mouse. Double-click should select all text in the box.

5. Data Integrity Conditions

•Check the maximum field lengths to ensure that there are no truncated characters.
•Where the database requires a value (other than null), this should be defaulted into fields. The user must either enter an alternative valid value or leave the default value intact.
•Check maximum and minimum field values for numeric fields.
•If numeric fields accept negative values can these be stored correctly on the database and does it make sense for the field to accept negative numbers?
•If a particular set of data is saved to the database check that each value gets saved fully to the database. i.e. Beware of truncation (of strings) and rounding of numeric values.

6. Modes (Editable/Read-only) Conditions

•Are the screen and field colors adjusted correctly for read-only mode?
•Are all fields and controls disabled in read-only mode?
•Check that no validation is performed in read-only mode. 

7. General Conditions

•Assure that the Tab key sequence which traverses the screens does so in a logical way.
•Errors on continue should return the user to the tab containing the error, with the focus on the field causing the error (i.e. the tab is opened, highlighting the field with the error on it).
•All fonts should be the same.

8. Specific Field Tests

8.1. Date Field Checks

•Assure that leap years are validated correctly & do not cause errors/miscalculations
•Assure that month codes 00 and 13 are validated correctly & do not cause errors/miscalculations
•Assure that month codes 00 and 13 are reported as errors
•Assure that day values 00 and 32 are validated correctly & do not cause errors/miscalculations
•Assure that Feb. 28, 29, 30 are validated correctly & do not cause errors/miscalculations
•Assure that Feb. 30 is reported as an error
•Assure that century change is validated correctly & does not cause errors/miscalculations
•Assure that out-of-cycle dates are validated correctly & do not cause errors/miscalculations (a parsing sketch follows this list)
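These date checks can be exercised directly against a strict parser. The sketch below uses java.time as the reference validator; the date format is an assumption:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;
import java.time.format.ResolverStyle;

public class DateChecks {
    public static void main(String[] args) {
        // STRICT resolution rejects Feb 29 in non-leap years, month 00/13, day 00/32, etc.
        DateTimeFormatter f = DateTimeFormatter.ofPattern("uuuu-MM-dd")
                .withResolverStyle(ResolverStyle.STRICT);
        String[] inputs = {
                "2016-02-29",  // valid leap day
                "2015-02-29",  // invalid: 2015 is not a leap year
                "2014-13-01",  // invalid month 13
                "2014-00-10",  // invalid month 00
                "2014-01-32"   // invalid day 32
        };
        for (String s : inputs) {
            try {
                System.out.println(s + " -> valid (" + LocalDate.parse(s, f) + ")");
            } catch (DateTimeParseException e) {
                System.out.println(s + " -> rejected, as the checks above require");
            }
        }
    }
}
```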

8.2. Numeric Fields

•Assure that lowest and highest values are handled correctly
•Assure that invalid values are logged and reported
•Assure that valid values are handled by the correct procedure
•Assure that numeric fields with a blank in position 1 are processed or reported as an error
•Assure that fields with a blank in the last position are processed or reported as an error
•Assure that both + and - values are correctly processed
•Assure that division by zero does not occur
•Include the value zero in all calculations
•Include at least one in-range value
•Include maximum and minimum range values
•Include out-of-range values above the maximum and below the minimum
•Assure that upper and lower values in ranges are handled correctly (a boundary-value sketch follows this list)
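The range-related checks above boil down to boundary-value analysis. A minimal sketch, assuming a field that accepts 1 to 100:

```java
public class NumericBoundaries {
    public static void main(String[] args) {
        int min = 1, max = 100; // assumed valid range for the field under test
        // Boundary values plus zero and one mid-range value, as the list above suggests
        int[] candidates = {min - 1, min, min + 1, (min + max) / 2, max - 1, max, max + 1, 0};
        for (int v : candidates) {
            boolean valid = v >= min && v <= max;
            System.out.printf("value=%4d -> expect the field to %s it%n",
                    v, valid ? "accept" : "reject");
        }
    }
}
```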

8.3. Alpha Field Checks

•Use blank and non-blank data
•Include lowest and highest values
•Include invalid characters & symbols
•Include valid characters
•Include data items with first position blank
•Include data items with last position blank 
•Use HTML tags.