Friday, July 30, 2010


White-Box Testing
  • White-box test design allows one to peek inside the "box"
  • Synonyms for White-box are Structural, Glass-box and Clear-box
  • White-box Testing assumes that the path of logic in a unit or program is known
  • White-box Testing consists of testing paths, branch by branch, to produce predictable results
  • Focuses specifically on using internal knowledge of the software to guide the selection of test data
White-Box Testing Techniques
Statement Coverage
execute all statements at least once
Decision Coverage
execute each decision direction at least once
Condition Coverage
execute each condition in a decision with all possible outcomes at least once
Decision/Condition Coverage
execute each decision direction and each condition outcome at least once
Multiple Condition Coverage
execute all possible combinations of condition outcomes in each decision

 Choose the combinations of techniques appropriate for the application
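As a sketch of how the criteria above differ, consider a single decision with two conditions (the function and its test values are illustrative assumptions, not taken from the original text):

```python
# Illustrative sketch: one decision, "if age_ok and member", containing
# two conditions, showing what each coverage criterion demands.

def grant_discount(age_ok: bool, member: bool) -> bool:
    if age_ok and member:   # one decision with two conditions
        return True
    return False

# Statement coverage: every statement runs at least once.
# Both return statements need a test, so two tests suffice:
statement_tests = [(True, True), (False, False)]

# Decision coverage: the decision must evaluate True once and False once;
# the same two tests achieve this.
decision_tests = [(True, True), (False, False)]

# Condition coverage: each condition (age_ok, member) must be True and
# False at least once. (True, False) and (False, True) achieve this,
# yet the decision itself is never True - the gap that
# decision/condition coverage closes by combining both requirements.
condition_tests = [(True, False), (False, True)]

# Multiple condition coverage: all 2^2 = 4 combinations of the conditions.
multiple_condition_tests = [(a, b) for a in (True, False) for b in (True, False)]
assert len(multiple_condition_tests) == 4
```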

Statement Coverage
Necessary but not sufficient; it does not exercise all outcomes of decisions
For example
…….
Begin()
If (function1())
 OpenFile1()
Else
 Shutdown();
………
Here a test in which the IF condition is true executes OpenFile1() but never Shutdown(); covering every statement still requires a second test that drives the decision false
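The pseudocode above can be made runnable as a sketch in Python (function1, OpenFile1 and Shutdown are placeholder names from the example, not real APIs):

```python
# Runnable sketch of the pseudocode above. The set 'executed' acts as a
# crude coverage log recording which statements have run.

executed = set()

def open_file1():
    executed.add("open_file1")

def shutdown():
    executed.add("shutdown")

def begin(function1_result: bool):
    executed.add("begin")
    if function1_result:
        open_file1()
    else:
        shutdown()

# A single test where function1() returns True leaves shutdown() uncovered:
begin(True)
assert "shutdown" not in executed

# A second test driving the decision false is needed for full statement coverage:
begin(False)
assert {"begin", "open_file1", "shutdown"} <= executed
```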

Decision Coverage
Validates the branch statements in software
Overcomes the drawbacks of statement coverage
Each decision is tested for both a True and a False value
Each branch direction must be traversed at least once
Branches such as if…else, while, for and do…while are evaluated for both true and false
Test cases are derived with the help of a decision table

Decision Table
A table that helps derive the test cases
Steps
Identify the variables that are responsible for each decision
Identify the total number of decisions (number them, e.g. IF1, IF2… While1, While2, etc.)
Put the variables as rows and the decisions as columns
Map the values of each variable corresponding to each decision
An Example
Procedure liability (age, sex, married, premium);
begin
  premium := 500;
  if ((age < 25) and (sex = male) and (not married))    { IF-1 }
  then
    premium := premium + 1500
  else
    if (married or (sex = female))                      { IF-2 }
    then
      premium := premium - 200;
  if ((age > 45) and (age < 65))                        { IF-3 }
  then
    premium := premium - 100;
end;
Here the variables are age, sex and married
The decisions are IF-1, IF-2 and IF-3
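The procedure can be sketched in Python together with decision-table rows that drive each decision both ways (the concrete test values are illustrative choices, not taken from the original text):

```python
# Python translation of the liability procedure, plus decision-coverage
# test values for IF-1, IF-2 and IF-3. The specific ages are assumptions.

def liability(age: int, sex: str, married: bool) -> int:
    premium = 500
    if age < 25 and sex == "male" and not married:   # IF-1
        premium += 1500
    elif married or sex == "female":                 # IF-2
        premium -= 200
    if 45 < age < 65:                                # IF-3
        premium -= 100
    return premium

# Decision table rows: (age, sex, married) with the decision outcomes noted.
# Row 1: IF-1 true (IF-2 skipped), IF-3 false   -> 500 + 1500 = 2000
assert liability(23, "male", False) == 2000
# Row 2: IF-1 false, IF-2 true, IF-3 true       -> 500 - 200 - 100 = 200
assert liability(50, "male", True) == 200
# Row 3: all decisions false                    -> 500
assert liability(30, "male", False) == 500
```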



Black-Box Testing
• Black Box Testing is a testing technique that requires no knowledge of the internal functionality/structure of the system
• Synonyms for Black-Box are Behavioral, Functional, Opaque-Box, Closed-Box etc.
• Black-box Testing focuses on testing the function of the program or application against its specification
• Determines whether combinations of inputs and operations produce expected results
• When black box testing is applied to software engineering, the tester would only know the "legal" inputs and what the expected outputs should be, but not how the program actually arrives at those outputs
Focus of Black-Box Testing
In this technique, we do not use the code to determine a test suite; rather, knowing the problem that we're trying to solve, we come up with four types of test data:
• Easy-to-compute data
• Typical data
• Boundary / extreme data
• Bogus data
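As a sketch, here are the four types of test data for a hypothetical function that computes the integer square root of a non-negative number (the function and the chosen values are illustrative assumptions):

```python
# Hypothetical black-box test data for an integer square root function.
import math

def isqrt(n: int) -> int:
    if n < 0:
        raise ValueError("n must be non-negative")
    return math.isqrt(n)

easy_data     = [0, 1, 4]            # results are obvious by inspection
typical_data  = [10, 144, 1000]      # ordinary mid-range inputs
boundary_data = [0, 2**31 - 1]       # extremes of the expected input range
bogus_data    = [-1, -100]           # invalid inputs that must be rejected

assert isqrt(4) == 2
assert isqrt(144) == 12
for bad in bogus_data:
    try:
        isqrt(bad)
        assert False, "expected ValueError"
    except ValueError:
        pass
```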
Black-Box Testing Techniques

Equivalence Partitioning
Boundary Value Analysis
Error Guessing
Cause-Effect Graphing 

Equivalence Partitioning
An equivalence class is a subset of data that is representative of a larger class
Equivalence partitioning is a technique for testing equivalence classes rather than undertaking exhaustive testing of each value of the larger class
Example - EP

For example, a program which edits credit limits within a given range ($10,000 - $15,000) would have three equivalence classes:
< $10,000 (invalid)
Between $10,000 and $15,000 (valid)
> $15,000 (invalid)
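A sketch of this partitioning, with one representative value standing in for each class (the validator is an assumed implementation of the stated rule):

```python
# Equivalence partitioning for the credit-limit example. One representative
# per class is tested instead of every possible value.

def credit_limit_valid(amount: int) -> bool:
    return 10_000 <= amount <= 15_000

representatives = {
    "below range (invalid)": (5_000, False),
    "in range (valid)":      (12_000, True),
    "above range (invalid)": (20_000, False),
}

for label, (value, expected) in representatives.items():
    assert credit_limit_valid(value) == expected, label
```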

Boundary Analysis

This technique consists of developing test cases and data that focus on the input and output boundaries of a given function
In the same credit limit example, boundary analysis would test:
Lower boundary +/- one ($9,999 and $10,001)
On the boundary ($10,000 and $15,000)
Upper boundary +/- one ($14,999 and $15,001)
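These boundary values can be sketched directly as test data (the validator is the same assumed implementation of the credit-limit rule):

```python
# Boundary value analysis for the credit-limit rule ($10,000 - $15,000):
# test on each boundary and one step to either side of it.

def credit_limit_valid(amount: int) -> bool:
    return 10_000 <= amount <= 15_000

boundary_cases = [
    (9_999,  False),  # lower boundary - 1
    (10_000, True),   # on the lower boundary
    (10_001, True),   # lower boundary + 1
    (14_999, True),   # upper boundary - 1
    (15_000, True),   # on the upper boundary
    (15_001, False),  # upper boundary + 1
]

for value, expected in boundary_cases:
    assert credit_limit_valid(value) == expected, value
```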

Error Guessing
• Test cases can be developed based upon the intuition and experience of the tester
• For example, where one of the inputs is a date, a tester may try February 29, 2001 (an invalid date, since 2001 is not a leap year)
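A sketch of this probe using Python's standard date type, which rejects impossible dates:

```python
# Error-guessing probe: feed the suspicious date February 29, 2001
# (2001 is not a leap year) to a date constructor and expect rejection.
from datetime import date

def parse_date(year: int, month: int, day: int) -> date:
    return date(year, month, day)

# A genuine leap-day date is accepted...
assert parse_date(2000, 2, 29) == date(2000, 2, 29)

# ...but February 29, 2001 must be rejected:
try:
    parse_date(2001, 2, 29)
    assert False, "expected ValueError"
except ValueError:
    pass
```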
Cause-Effect Graphing
Cause-effect graphing is a technique for developing test cases for programs from the high-level specifications (A high-level specification states desired characteristics of the system)
These characteristics can be used to derive test data
Example – Cause Effect
For example, a program that has specified responses to eight characteristic stimuli (called causes) given some input has 2^8 = 256 "types" of input (i.e., those with characteristics 1 & 3; 5, 7 & 8; etc.).
A poor approach is to generate 256 test cases.
A more methodical approach is to use the program specifications to analyze the program's effect on the various types of inputs.
The program's output domain can be partitioned into various classes called effects.
For example, inputs with characteristic 2 might be subsumed by those with characteristics 3 & 4. Hence, it would not be necessary to test inputs with characteristic 2 and characteristics 3 & 4, for they cause the same effect.
This analysis results in a partitioning of the causes according to their corresponding effects
A limited entry decision table is then constructed from the directed graph reflecting these dependencies (i.e., causes 2 & 3 result in effect 4; causes 2, 3 & 5 result in effect 6 etc.)
The decision table is then reduced and test cases chosen to exercise each column of the table.
Since many aspects of the cause-effect graphing can be automated, it is an attractive tool for aiding in the generation of Functional Test cases.
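A minimal sketch of such a limited-entry decision table (the causes, effects and "don't care" entries here are invented for illustration, not taken from the original text):

```python
# Illustrative limited-entry decision table for cause-effect graphing.
# Causes: C1 = "user is logged in", C2 = "item in stock", C3 = "payment ok"
# Effects: "order placed" or "error shown". None marks a "don't care"
# entry, where the cause cannot influence the effect.

decision_table = {
    #  (C1,    C2,    C3)
    (True,  True,  True):  "order placed",
    (True,  True,  False): "error shown",
    (True,  False, None):  "error shown",   # payment is irrelevant here
    (False, None,  None):  "error shown",   # nothing else matters
}

# One test case per column of the reduced table, instead of 2^3 = 8 cases:
assert len(decision_table) == 4
assert decision_table[(True, True, True)] == "order placed"
```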

Advantages of Black-Box Testing
• More effective on larger units of code than glass box testing
• Tester needs no knowledge of implementation, including specific programming languages
• Tester and programmer are independent of each other
• Tests are done from a user's point of view
• Will help to expose any ambiguities or inconsistencies in the specifications
• Test cases can be designed as soon as the specifications are complete

Disadvantages of Black-Box Testing

• Only a small number of possible inputs can actually be tested; testing every possible input stream would take nearly forever
• Without clear and concise specifications, test cases are hard to design
• There may be unnecessary repetition of test inputs if the tester is not informed of test cases the programmer has already tried
• May leave many program paths untested
• Cannot be directed toward specific segments of code which may be very complex (and therefore more error prone)

Smoke & Sanity Software Testing


Smoke Testing: Software testing done to determine whether the build can be accepted for thorough software testing. Basically, it is done to check the stability of the build received for software testing.

Sanity testing: After receiving a build with minor changes in the code or functionality, a subset of regression test cases is executed to check whether the reported software bugs have been fixed and whether the changes have introduced no new bugs. Sometimes, when multiple cycles of regression testing are executed, sanity testing of the software can be done in later cycles, after thorough regression test cycles. When a build is moved from the staging/testing server to the production server, sanity testing of the software application can be done to check whether the build is sound enough to move further to the production server.

Difference between Smoke & Sanity Software Testing:

    * Smoke testing is a wide approach in which all areas of the software application are tested without going too deep. However, sanity testing is a narrow regression test with a focus on one or a small set of areas of functionality of the software application.
    * The test cases for smoke testing of the software can be either manual or automated. However, a sanity test is generally run without test scripts or test cases.
    * Smoke testing is done to check whether the main functions of the software application are working or not; during smoke testing we do not go into finer details. Sanity testing, by contrast, is a cursory type of software testing, done whenever a quick round of testing can show that the application is functioning according to business/functional requirements.
    * Smoke testing of the software application is done to check whether the build can be accepted for thorough software testing. Sanity testing of the software is done to check whether the requirements are met or not.
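The "wide but shallow" character of a smoke suite can be sketched as follows (the checked functions are hypothetical stand-ins, not part of any real build):

```python
# Hypothetical minimal smoke suite: a few shallow checks across the whole
# application that decide whether a build is stable enough for thorough
# testing. Each check is a stand-in for a real probe.

def app_starts() -> bool:
    return True   # stand-in for "the application launches"

def login_works() -> bool:
    return True   # stand-in for "a known user can log in"

def main_page_loads() -> bool:
    return True   # stand-in for "the landing page renders"

def smoke_test() -> bool:
    """Accept the build only if every core area passes its shallow check."""
    checks = [app_starts, login_works, main_page_loads]
    return all(check() for check in checks)

assert smoke_test() is True  # build accepted for thorough testing
```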