Sunday, December 14, 2014

Ant in Selenium

Let's understand Ant terminology:

1. <project>:
   It is the top-level element in an Ant build script.
   It has three optional attributes:
   a. name  b. default  c. basedir
   a. name: defines the name of the project.
   b. default: the default target used when no other target is supplied.
   c. basedir: the base directory from which all path calculations are done.

2. Target:
   It is a set of tasks that you want to be executed.
   A project can define zero or more targets, and a target can depend on other targets.
   When no target is given, the project uses the default target.
   When starting Ant, you can choose whichever target you want to execute.
   The main targets are:
   1. clean: cleans up the output directories
   2. compile: compiles the source code
   3. dist: generates the distribution

 3. Task:
    It is a piece of code that can be executed, and it can have multiple attributes.
    There are many tasks already written for Ant:
    1. File tasks: copy, concat, mkdir, delete, get, replace, move, tempfile, etc.
    2. Compile tasks: javac, apt, rmic
    3. Archive tasks: zip/unzip, jar/unjar, war/unwar
    4. Testing tasks: junit, junitreport
    5. Property tasks: dirname, loadfile, propertyfile, uptodate
    6. Other tasks: echo, javadoc, sql

4. Properties:
   A property has a name and a value; property names are case-sensitive.
   A property is referenced inside an attribute value by placing its name between "${" and "}", for example ${property-name}.
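
Putting these pieces together, a minimal build.xml might look like this. The project name, directory layout, and jar file name below are illustrative assumptions, not from the original post:

```xml
<!-- Illustrative build.xml: shows project, targets (with depends),
     tasks, and property references. All names are hypothetical. -->
<project name="SeleniumDemo" default="dist" basedir=".">
    <!-- Properties: case-sensitive name/value pairs, referenced as ${name} -->
    <property name="src.dir"   value="src"/>
    <property name="build.dir" value="build"/>
    <property name="dist.dir"  value="dist"/>

    <!-- clean: clean up -->
    <target name="clean">
        <delete dir="${build.dir}"/>
        <delete dir="${dist.dir}"/>
    </target>

    <!-- compile: compile the source code -->
    <target name="compile" depends="clean">
        <mkdir dir="${build.dir}"/>
        <javac srcdir="${src.dir}" destdir="${build.dir}" includeantruntime="false"/>
    </target>

    <!-- dist: generate the distribution; depends on compile -->
    <target name="dist" depends="compile">
        <mkdir dir="${dist.dir}"/>
        <jar destfile="${dist.dir}/selenium-demo.jar" basedir="${build.dir}"/>
    </target>
</project>
```

Running `ant` with no arguments would execute the default target (dist), which first runs its dependencies (clean, compile) in order.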

Pareto Principle: the 80-20 rule of automation testing


The 80-20 rule says that 80% of the results come from 20% of the causes.
 
This is a good rule of thumb for determining how much testing is enough:
you can typically cover approximately 80% of the functionality that people actually use by focusing on 20% of the possible test combinations.

This can be applied to any field as follows:

- 80% of a company's revenue comes from 20% of its clients

- 80% of the donations to a charity come from 20% of the donors

- 80% of the books in a bookstore are purchased by 20% of the clients

- 80% of the clients use 20% of the functionality

- 80% of the bugs are caused by 20% of the functionality

Microsoft noted that by fixing the top 20% of the most-reported bugs, 80% of the related errors and crashes in a given system would be eliminated.
In load testing, it is common practice to estimate that 80% of the traffic occurs during 20% of the time.

How does this apply to testing?

Well, the project release date is fixed, so you cannot test everything thoroughly.

So, concentrate on 20% of the application, as this is what the majority of users will use.

Select the 20% of the application's functionalities that have the highest risk and test them well.

Test the remaining 80% of the functionalities by just taking the happy paths.

You think you did a good job, and the project manager is happy with the results.

Then, after the release, the support team receives lots of issues from clients about the 80% of the application that was not tested well.

Moreover, the senior management of the company starts noticing problems all over the application too.

The "solution" is, of course, applying an endless stream of patches with bug fixes for the issues discovered by the customers, frustrating the customers as much as possible and wasting as much time as possible for both the development and testing teams.

Severity and Priority



  • Severity is about the risk a bug poses. It is related to the technical aspect of the bug.

  • Priority is related to the business aspect: how soon the bug should be fixed.

    Most organizations have standard criteria for classifying bug severity, such as:

  • Severity 1 – Catastrophic bug or showstopper. Causes system crash, data corruption, irreparable harm, etc.
  • Severity 2 – Critical bug in important function. No reasonable workaround.
  • Severity 3 – Major bugs but has viable workaround.
  • Severity 4 – Minor bug with trivial impact.


Typically, Severity 1 and 2 bugs must be fixed before release, while 3's and 4's might not be, depending on how many there are and on plans for their subsequent disposition.



Testing Models

Let us discuss and explore a few of the well-known models.

The following models are addressed: 
  1. Waterfall Model
  2. Spiral Model
  3. 'V' Model
  4. 'W' Model
  5. Butterfly Model

The Waterfall Model 
This is one of the first models of software development, presented by Winston W. Royce. The Waterfall model is a step-by-step method of accomplishing tasks: one can move on to the next phase of development only after completing the current phase, and one can go back only to the immediately previous phase.
In the Waterfall model, each phase of the development activity is followed by verification and validation activities. One phase is completed with its testing activities, and then the team proceeds to the next phase. At any point in time, one can move back only one step, to the immediately previous phase. For example, one cannot move from the Testing phase back to the Design phase.

Spiral Model 
In the Spiral model, a cyclical and prototyping view of software development is presented. Tests are explicitly mentioned (risk analysis, validation of requirements and of the development), and the test phase is divided into stages. The test activities include module, integration, and acceptance tests. However, in this model testing still follows coding, the exception being that the test plan should be constructed after the design of the system. The Spiral model also identifies no activities associated with the removal of defects.

'V' Model

Many of the process models currently in use can be more generally connected by the 'V' model, where the 'V' describes the graphical arrangement of the individual phases. The 'V' is also a synonym for Verification and Validation.
By ordering the activities in time sequence and by abstraction level, the connection between development and test activities becomes clear. Activities lying opposite one another complement each other, i.e., they serve as the basis for the test activities. For example, the system test is carried out on the basis of the results of the specification phase.

The 'W' Model
From the testing point of view, all of the models above are deficient in various ways:
- The test activities first start after the implementation.
- The connection between the various test stages and the basis for the tests is not clear.
- The tight link between test, debug, and change tasks during the test phase is not clear.

Why 'W' Model? 
In the models presented above, testing usually appears as an unattractive task to be carried out after coding. In order to place testing on an equal footing, a second 'V' dedicated to testing is integrated into the model. Both 'V's put together give the 'W' of the 'W' model.

Butterfly Model of Test Development
Butterflies are composed of three pieces, two wings and a body, and each part represents a piece of software testing: one wing stands for test analysis, the other for test design, and the body for test execution.

Test Case

A test case describes the defined activities and steps that you will perform in your testing.

In manual testing, a test case is written manually, while in automation testing it is produced with testing tools and is known as a test script. A test case and a test script are essentially the same, because both of them perform the same activities; the only difference is that a test case is written in simple sentences while a test script is written in script form, i.e., in a lightweight programming language.

Test case Format:

Test case ID: Unique ID for each test case. Follow some convention to indicate the type of test. E.g. 'TC_Module_SubModule_01'

Test priority (Low/Medium/High): This is useful during test execution. Test priority should be set by the reviewer.

Module Name – Mention name of main module or sub module.

Test Designed By: Name of tester

Test Designed Date: Date when the test was written.

Test Executed By: Name of tester who executed this test. To be filled after test execution.

Test Execution Date: Date when the test was executed.

Test Objective/Description: Describe test objective in brief.

Pre-condition: Any prerequisite that must be fulfilled before execution of this test case. List all pre-conditions in order to successfully execute this test case.

Dependencies: Mention any dependencies on other test cases or test requirement.

Test Steps: List every action to perform; the corresponding result goes in the Expected Result field.

Test Data: The test data used as input for this test case.

Expected Result: Describe the expected result in detail, including any message/error that should be displayed on screen, for each of your actions.

Post-condition: What should be the state of the system after executing this test case?


Actual result: Actual test result should be filled after test execution.

Status (Pass/Fail): If actual result is not as per the expected result mark this test as failed. Otherwise update as passed.

Notes/Comments/Questions: Used to support the above fields. If there are special conditions that can't be described in any of the above fields, or questions related to the expected or actual results, mention them here.

Add following fields if necessary:

Defect ID/Link: If test status is fail, then include the link to defect log or mention the defect number.

Test Type/Keywords: This field can be used to classify tests based on test types. E.g. functional, usability, business rules etc.

Requirements: Requirements for which this test case is being written. Preferably the exact section number of the requirement doc.

Attachments/References: This field is useful for complex test scenarios, e.g., to explain test steps or expected results using a Visio diagram as a reference. Provide the link or path to the actual diagram or document.


Automation? (Yes/No): Whether this test case is automated or not. Useful to track automation status when test cases are automated.
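
As a sketch, the fields above could be captured in a simple data holder, with the Pass/Fail status derived by comparing the actual and expected results. The class and field names here are hypothetical, not a standard test management API:

```java
// Hypothetical sketch of a test case record holding the fields above.
// None of these names come from a real test management tool.
class TestCaseRecord {
    String id;             // e.g. a "TC_Module_SubModule_01"-style ID
    String priority;       // Low / Medium / High
    String module;
    String objective;
    String precondition;
    String[] steps;
    String expectedResult;
    String actualResult;   // filled in after test execution
    String status;         // Pass / Fail

    // If the actual result is not as per the expected result, mark the
    // test as failed; otherwise mark it as passed.
    void evaluate() {
        status = expectedResult.equals(actualResult) ? "Pass" : "Fail";
    }
}
```

The evaluate() method mirrors the Status field rule above: fail on any mismatch, pass otherwise.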

Bug Life Cycle

Introduction:
A bug can be defined as abnormal behavior of the software. No software exists without bugs.

A mistake in coding is called an error; an error found by a tester is called a defect; a defect accepted by the development team is called a bug; and when the build does not meet the requirements, it is a failure.

Bug Life Cycle:
In the software development process, a bug has a life cycle, and it should go through this life cycle to be closed. A specific life cycle ensures that the process is standardized. The bug attains different states during the life cycle.


The different states of a bug can be summarized as follows:


1. New
2. Open
3. Assign
4. Test
5. Verified
6. Deferred
7. Reopened
8. Duplicate
9. Rejected and
10. Closed

Description of Various Stages:
1. New: When the bug is posted for the first time, its state will be “NEW”. This means that the bug is not yet approved.

2. Open: After a tester has posted a bug, the tester's lead approves that the bug is genuine and changes the state to "OPEN".

3. Assign: Once the lead changes the state to "OPEN", he assigns the bug to the corresponding developer or developer team. The state of the bug is now changed to "ASSIGN".

4. Test: Once the developer fixes the bug, he has to assign it to the testing team for the next round of testing. Before he releases the software with the bug fixed, he changes the state of the bug to "TEST". It specifies that the bug has been fixed and released to the testing team.

5. Deferred: A bug in the deferred state is expected to be fixed in a future release. A bug may be deferred for many reasons: its priority may be low, there may be a lack of time before the release, or it may not have a major effect on the software.

6. Rejected: If the developer feels that the bug is not genuine, he rejects the bug. Then the state of the bug is changed to “REJECTED”.

7. Duplicate: If the bug is reported twice, or two bug reports describe the same issue, then one bug's status is changed to "DUPLICATE".

8. Verified: Once the bug is fixed and the status is changed to “TEST”, the tester tests the bug. If the bug is not present in the software, he approves that the bug is fixed and changes the status to “VERIFIED”.

9. Reopened: If the bug still exists even after the bug is fixed by the developer, the tester changes the status to “REOPENED”. The bug traverses the life cycle once again.

10. Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists in the software, he changes the status of the bug to “CLOSED”. This state means that the bug is fixed, tested and approved.
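
The state transitions described above can be sketched as a simple enum with an allowed-transition check. This is a hypothetical illustration of the cycle in this post, not any real bug tracker's workflow engine:

```java
import java.util.EnumSet;
import java.util.Set;

// Illustrative sketch of the bug life cycle states described above.
enum BugState {
    NEW, OPEN, ASSIGN, TEST, DEFERRED, REOPENED, DUPLICATE, REJECTED, VERIFIED, CLOSED;

    // Allowed next states, following the descriptions in the post.
    Set<BugState> next() {
        switch (this) {
            case NEW:      return EnumSet.of(OPEN, REJECTED, DUPLICATE); // lead approves or rejects
            case OPEN:     return EnumSet.of(ASSIGN, DEFERRED);          // assigned to a developer
            case ASSIGN:   return EnumSet.of(TEST, DEFERRED);            // fixed, released to testing
            case TEST:     return EnumSet.of(VERIFIED, REOPENED);        // retested by the tester
            case VERIFIED: return EnumSet.of(CLOSED);                    // approved as fixed
            case REOPENED: return EnumSet.of(ASSIGN);                    // traverses the cycle again
            case DEFERRED: return EnumSet.of(ASSIGN);                    // picked up in a later release
            default:       return EnumSet.noneOf(BugState.class);        // terminal states
        }
    }

    boolean canMoveTo(BugState target) {
        return next().contains(target);
    }
}
```

Modeling the workflow as data like this makes the "standardized process" point concrete: an invalid jump such as CLOSED to OPEN is simply not in the transition set.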

While defect prevention is much more effective and efficient in reducing the number of defects, most organizations conduct defect discovery and removal. Discovering and removing defects is an expensive and inefficient process; it is much more efficient for an organization to conduct activities that prevent defects.

Guidelines on deciding the severity of a bug:
Indicate the impact each defect has on the testing effort or on users and administrators of the application under test. This information is used by developers and management as the basis for assigning the priority of work on defects.

A sample guideline for assigning severity levels during the product test phase:

  1. Critical / Show Stopper: An item that prevents further testing of the product or function under test can be classified as a Critical bug. No workaround is possible for such bugs. Examples include a missing menu option or a security permission required to access a function under test.

  2. Major / High: A defect that does not function as expected/designed, or causes other functionality to fail to meet requirements, can be classified as a Major bug. A workaround can be provided for such bugs. Examples include inaccurate calculations, the wrong field being updated, etc.

  3. Average / Medium: Defects which do not conform to standards and conventions can be classified as Medium bugs. Easy workarounds exist to achieve the functionality objectives. Examples include matching visual and text links which lead to different end points.

  4. Minor / Low: Cosmetic defects which do not affect the functionality of the system can be classified as Minor bugs.

Roles and Responsibilities for Sr Quality Analyst (Automation)


·  Prepare reusable functions, which improve the robustness, re-usability, and maintainability of test scripts.
·  Design the framework in such a way that it increases and speeds up the team's productivity.
·  The engineer also must support the framework(s), for example, supporting dev/QA with issues in using the tool. The engineer will implement automation test scripts. Integration with the test management tool is also planned.
·  The Senior Test Automation Engineer must be able to take on leadership responsibilities and influence the direction of the automation effort, as well as its schedule and prioritization.
·  The engineer will work with management, developers, and quality assurance personnel to meet these goals.
·  Additionally, the Test Automation Engineer will be involved in supporting the build master in implementing/improving build test processes, environments, and scripts. These build tests ensure that the code drops delivered to quality assurance are of the highest quality.

·  He will provide a practical approach to complex product testing, in the area of automating test cases for the purposes of regression testing.
·  He will be a creative and proactive thinker and will make use of current technologies to provide extensible automation infrastructures.
·  He will review product requirements and functional and design specifications to determine and prepare automated test cases.
·  He will work closely with other QC team members to automate the execution and verification of reports created by the various company products.
·  He will be part of a team focusing on the automation of an identified set of migration tests, checking that they run correctly and work within the infrastructure. The team will focus on developing and testing these automation buckets, which will then be executed by other teams.