Saturday, July 31, 2010

Testing Definitions

Maintainability
A test focus area defined as the ability to locate and fix an error in the system. Can also be the ability to make dynamic changes to the system environment without making system changes.

Master Test Plan
A plan that addresses testing from a high-level system viewpoint. It ties together all levels of testing (unit test, integration test, system test, acceptance test, systems integration, and operability). It includes test objectives, test team organization and responsibilities, high-level schedule, test scope, test focus, test levels and types, test facility requirements, and test management procedures and controls.

Operability
A test focus area defined as the effort required (of support personnel) to learn and operate a manual or automated system. Contrast with Usability.

Operability Testing
A level of dynamic testing in which the operations of the system are validated in the real or closely simulated production environment. This includes verification of production JCL, installation procedures, and operations procedures. Operability Testing considers such factors as performance, resource consumption, adherence to standards, etc. Operability Testing is normally performed by Operations to assess the readiness of the system for implementation in the production environment.

Operational Testing
A structural type of test that verifies the ability of the application to operate at an acceptable level of service in the production-like environment.

Parallel Testing
A functional type of test which verifies that the same input produces the same results on the “old” and “new” systems. It is more of an implementation strategy than a testing strategy.

Path Testing
A white box testing technique that requires all code or logic paths to be executed once. Complete path testing is usually impractical and often uneconomical.

Performance
A test focus area defined as the ability of the system to perform certain functions within a prescribed time.

Performance Testing
A structural type of test which verifies that the application meets the expected level of performance in a production-like environment.

Portability
A test focus area defined as the ability of a system to operate in multiple operating environments.

Problem
(1) A call or report from a user. The call or report may or may not be defect oriented. (2) A software or process deficiency found during development. (3) The inhibitors and other factors that hinder an organization’s ability to achieve its goals and critical success factors. (4) An issue that a project manager has the authority to resolve without escalation. Compare to ‘defect’ or ‘error’.

Quality Plan
A document which describes the organization, activities, and project factors that have been put in place to achieve the target level of quality for all work products in the application domain. It defines the approach to be taken when planning and tracking the quality of the application development work products to ensure conformance to specified requirements and to ensure the client’s expectations are met.

Regression Testing
A functional type of test, which verifies that changes to one part of the system have not caused unintended adverse effects to other parts.

Reliability
A test focus area defined as the extent to which the system will provide the intended function without failing.

Requirement
(1) A condition or capability needed by the user to solve a problem or achieve an objective. (2) A condition or capability that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document. The set of all requirements forms the basis for subsequent development of the system or system component.

Review
A process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users or other interested parties for comment or approval.

Root Cause Analysis
See Causal Analysis.

Scaffolding
Temporary programs may be needed to create or receive data from the specific program under test. This approach is called scaffolding.

Security
A test focus area defined as the assurance that the system/data resources will be protected against accidental and/or intentional modification or misuse.

Security Testing
A structural type of test which verifies that the application provides an adequate level of protection for confidential information and data belonging to other systems.

Software Quality
(1) The totality of features and characteristics of a software product that bear on its ability to satisfy given needs; for example, conform to specifications. (2) The degree to which software possesses a desired combination of attributes. (3) The degree to which a customer or user perceives that software meets his or her composite expectations. (4) The composite characteristics of software that determine the degree to which the software in use will meet the expectations of the customer.

Software Reliability
(1) The probability that software will not cause the failure of a system for a specified time under specified conditions. The probability is a function of the inputs to and use of the system as well as a function of the existence of faults in the software. The inputs to the system determine whether existing faults, if any, are encountered. (2) The ability of a program to perform a required function under stated conditions for a stated period of time.

Statement Testing
A white box testing technique that requires all code or logic statements to be executed at least once.

Static Testing
(1) The detailed examination of a work product's characteristics against an expected set of attributes, experiences, and standards. The product under scrutiny is static, not exercised, and therefore its behaviour under changing inputs and environments cannot be assessed. (2) The process of evaluating a program without executing the program. See also desk checking, inspection, walk-through.

Stress / Volume Testing
A structural type of test that verifies that the application has acceptable performance characteristics under peak load conditions.

Structural Function
Structural functions describe the technical attributes of a system.

Structural Test Types
Those kinds of tests that may be used to assure that the system is technically sound.

Stub
(1) A dummy program element or module used during the development and testing of a higher level element or module. (2) A program statement substituting for the body of a program unit and indicating that the unit is or will be defined elsewhere.
The inverse of scaffolding.

Sub-system
(1) A group of assemblies or components or both combined to perform a single function. (2) A group of functionally related components that are defined as elements of a system but not separately packaged.

System
A collection of components organized to accomplish a specific function or set of functions.

Systems Integration Testing
A dynamic level of testing which ensures that the systems integration activities appropriately address the integration of application subsystems, integration of applications with the infrastructure, and impact of change on the current live environment.

System Testing
A dynamic level of testing in which all the components that comprise a system are tested to verify that the system functions together as a whole.

Test Bed
(1) A test environment containing the hardware, instrumentation tools, simulators, and other support software necessary for testing a system or system component. (2) A set of test files (including databases and reference files), in a known state, used with input test data to test one or more test conditions, measuring against expected results.

Test Case
(1) A set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement. (2) The detailed objectives, data, procedures and expected results to conduct a test or part of a test.

Test Condition
A functional or structural attribute of an application, system, network, or component thereof to be tested.
Test Conditions Matrix
A worksheet used to formulate the test conditions that, if met, will produce the expected result. It is a tool used to assist in the design of test cases.

Test Conditions Coverage Matrix
A worksheet that is used for planning and for illustrating that all test conditions are covered by one or more test cases. Each test set has a Test Conditions Coverage Matrix. Rows are used to list the test conditions and columns are used to list all test cases in the test set.

Test Coverage Matrix
A worksheet used to plan and cross check to ensure all requirements and functions are covered adequately by test cases.

Test Data
The input data and file conditions associated with a specific test case.

Test Environment
The external conditions or factors that can directly or indirectly influence the execution and results of a test. This includes the physical as well as the operational environments. Examples of what is included in a test environment are: I/O and storage devices, data files, programs, JCL, communication lines, access control and security, databases, reference tables and files (version controlled), etc.

Test Focus Areas
Those attributes of an application that must be tested in order to assure that the business and structural requirements are satisfied.
Test Level
See Level of Testing.

Test Log
A chronological record of all relevant details of a testing activity.

Test Matrices
A collection of tables and matrices used to relate functions to be tested with the test cases that do so. Worksheets used to assist in the design and verification of test cases.

Test Objectives
The tangible goals for assuring that the Test Focus areas previously selected as being relevant to a particular Business or Structural Function are being validated by the test.

Test Plan
A document prescribing the approach to be taken for intended testing activities. The plan typically identifies the items to be tested, the test objectives, the testing to be performed, test schedules, entry / exit criteria, personnel requirements, reporting requirements, evaluation criteria, and any risks requiring contingency planning.

Test Procedure
Detailed instructions for the setup, operation, and evaluation of results for a given test. A set of associated procedures is often combined to form a test procedures document.

Test Report
A document describing the conduct and results of the testing carried out for a system or system component.

Test Run
A dated, time-stamped execution of a set of test cases.

Test Scenario
A high-level description of how a given business or technical requirement will be tested, including the expected outcome; later decomposed into sets of test conditions, each in turn, containing test cases.

Test Script
A sequence of actions that executes a test case. Test scripts include detailed instructions for set up, execution, and evaluation of results for a given test case.


Test Set
A collection of test conditions. Test sets are created for purposes of test execution only. A test set is created such that its size is manageable to run and its grouping of test conditions facilitates testing. The grouping reflects the application build strategy.

Test Sets Matrix
A worksheet that relates the test conditions to the test set in which the condition is to be tested. Rows list the test conditions and columns list the test sets. A checkmark in a cell indicates the test set will be used for the corresponding test condition.

Test Specification
A set of documents that define and describe the actual test architecture, elements, approach, data and expected results. Test Specification uses the various functional and non-functional requirement documents along with the quality and test plans. It provides the complete set of test cases and all supporting detail to achieve the objectives documented in the detailed test plan.

Test Strategy
A high level description of major system-wide activities which collectively achieve the overall desired result as expressed by the testing objectives, given the constraints of time and money and the target level of quality. It outlines the approach to be used to ensure that the critical attributes of the system are tested adequately.
Test Type
See Type of Testing.

Testability
(1) The extent to which software facilitates both the establishment of test criteria and the evaluation of the software with respect to those criteria. (2) The extent to which the definition of requirements facilitates analysis of the requirements to establish test criteria.
Testing
The process of exercising or evaluating a program, product, or system, by manual or automated means, to verify that it satisfies specified requirements or to identify differences between expected and actual results.

Testware
The elements that are produced as part of the testing process. Testware includes plans, designs, test cases, test logs, test reports, etc.

Top-down
Approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.

Transaction Flow Testing
A functional type of test that verifies the proper and complete processing of a transaction from the time it enters the system to the time of its completion or exit from the system.
Type of Testing
Tests a functional or structural attribute of the system. E.g. Error Handling, Usability. (Also known as test type.)

Unit Testing
The first level of dynamic testing: the verification of new or changed code in a module to determine whether all new or modified paths function correctly.

Usability
A test focus area defined as the end-user effort required to learn and use the system. Contrast with Operability.

Usability Testing
A functional type of test which verifies that the final product is user-friendly and easy to use.
User Acceptance Testing
See Acceptance Testing.
Validation
(1) The act of demonstrating that a work item is in compliance with the original requirement. For example, the code of a module would be validated against the input requirements it is intended to implement. Validation answers the question "Is the right system being built?” (2) Confirmation by examination and provision of objective evidence that the particular requirements for a specific intended use have been fulfilled. See "Verification".

Variance
A mismatch between the actual and expected results occurring in testing. It may result from errors in the item being tested, incorrect expected results, invalid test data, etc. See "Error".

Verification
(1) The act of demonstrating that a work item is satisfactory by using its predecessor work item. For example, code is verified against module level design. Verification answers the question "Is the system being built right?” (2) Confirmation by examination and provision of objective evidence that specified requirements have been fulfilled. See "Validation".

Walkthrough
A review technique characterized by the author of the object under review guiding the progression of the review. Observations made in the review are documented and addressed. Less formal evaluation technique than an inspection.

White Box Testing
Evaluation techniques that are executed with the knowledge of the implementation of the program. The objective of white box testing is to test the program's statements, code paths, conditions, or data flow paths.
Work Item
A software development lifecycle work product.
Work Product
(1) The result produced by performing a single task or many tasks. A work product, also known as a project artifact, is part of a major deliverable that is visible to the client. Work products may be internal or external. An internal work product may be produced as an intermediate step for future use within the project, while an external work product is produced for use outside the project as part of a major deliverable. (2) As related to test, a software deliverable that is the object of a test, a test work item.

Friday, July 30, 2010


White-Box Testing
  • White-box test design allows one to peek inside the "box"
  • Synonyms for White-box are Structural, Glass-box and Clear-box
  • White-box Testing assumes that the path of logic in a unit or program is known
  • White-box Testing consists of testing paths, branch by branch, to produce predictable results
  • Focuses specifically on using internal knowledge of the software to guide the selection of test data
White-Box Testing Techniques
Statement Coverage
execute all statements at least once
Decision Coverage
execute each decision direction at least once
Condition Coverage
execute each condition within a decision with all possible outcomes at least once
Decision/Condition Coverage
execute each decision, and each condition within it, with all possible outcomes at least once
Multiple Condition Coverage
execute all possible combinations of condition outcomes in each decision at least once

 Choose the combinations of techniques appropriate for the application

Statement Coverage
Necessary but not sufficient; it does not exercise all outcomes of decisions.
For example:
.......
Begin()
If (function1())
    OpenFile1()
Else
    Shutdown();
.......
Here, a test in which function1() returns true never executes Shutdown(); a second test in which it returns false is needed to reach that statement.
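
A minimal Python sketch of the same idea, with hypothetical open_file/shutdown stand-ins for OpenFile1() and Shutdown(); it shows that one test per decision outcome is needed to execute every statement.

def run(check):                    # 'check' plays the role of function1()
    if check():
        return "open_file"         # corresponds to OpenFile1()
    return "shutdown"              # corresponds to Shutdown()

# One test with check() true never reaches "shutdown"; a second test with
# check() false is required for full statement (and decision) coverage.
assert run(lambda: True) == "open_file"
assert run(lambda: False) == "shutdown"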

Decision Coverage
Validates the Branch Statements in software
Overcomes the drawbacks of statement coverage
Each decision is tested for a True & False value
Each branch direction must be traversed at least once
Branches such as if…else, while, for, and do…while are evaluated for both true and false
Test cases are derived with the help of a decision table

Decision Table
A table that helps to derive the test cases
Steps
Identify the variables which are responsible for decisions
Identify the total number of decisions (give numbers to them like IF1, IF2… While1, While2, etc.)
Put the variables as rows and the decisions as columns
Map the values of each variable corresponding to each decision
An Example
Procedure liability (age, sex, married, premium);
begin
  premium = 500;
  If ((age < 25) and (sex = male) and (not married))    { IF-1 }
  then
    premium = premium + 1500;
  else
    If (married or (sex = female))                      { IF-2 }
    then
      premium = premium - 200;
  If ((age > 45) and (age < 65))                        { IF-3 }
  then
    premium = premium - 100;
end;
Here the variables are age, sex, and married.
The decisions are IF-1, IF-2, and IF-3.
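
A minimal Python sketch of the same example, assuming IF-3 sits outside the else branch as in the pseudocode above; the four test cases give each decision both a true and a false outcome (decision coverage).

def liability(age, sex, married):
    premium = 500
    if age < 25 and sex == "male" and not married:    # IF-1
        premium += 1500
    else:
        if married or sex == "female":                # IF-2
            premium -= 200
    if 45 < age < 65:                                 # IF-3
        premium -= 100
    return premium

# (age, sex, married) -> expected premium
tests = [
    ((23, "male", False), 2000),    # IF-1 true
    ((30, "female", True), 300),    # IF-1 false, IF-2 true
    ((50, "male", False), 400),     # IF-2 false, IF-3 true
    ((70, "male", False), 500),     # all decisions false
]
for (age, sex, married), expected in tests:
    assert liability(age, sex, married) == expected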



Black-Box Testing
• Black-box testing is a testing technique that requires no knowledge of the internal functionality/structure of the system
• Synonyms for Black-Box are Behavioral, Functional, Opaque-Box, Closed-Box etc.
• Black-box Testing focuses on testing the function of the program or application against its specification
• Determines whether combinations of inputs and operations produce expected results
• When black box testing is applied to software engineering, the tester would only know the "legal" inputs and what the expected outputs should be, but not how the program actually arrives at those outputs
Focus of Black-Box Testing
In this technique, we do not use the code to determine a test suite; rather, knowing the problem that we're trying to solve, we come up with four types of test data:
• Easy-to-compute data
• Typical data
• Boundary / extreme data
• Bogus data
Black-Box Testing Techniques

Equivalence Partitioning
Boundary Value Analysis
Error Guessing
Cause-Effect Graphing 

Equivalence Partitioning
An equivalence class is a subset of data that is representative of a larger class
Equivalence partitioning is a technique for testing equivalence classes rather than undertaking exhaustive testing of each value of the larger class
Example - EP

For example, a program which edits credit limits within a given range ($10,000 - $15,000) would have three equivalence classes
< $10,000 (invalid)
Between $10,000 and $15,000 (valid)
> $15,000 (invalid)
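
A small Python sketch of the same partitioning, using a hypothetical is_valid_limit validator; one representative value is tested from each class instead of every possible amount.

def is_valid_limit(amount):
    return 10_000 <= amount <= 15_000

# one representative per equivalence class
assert not is_valid_limit(5_000)     # below the range (invalid)
assert is_valid_limit(12_500)        # within the range (valid)
assert not is_valid_limit(20_000)    # above the range (invalid)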

Boundary Analysis

This technique consists of developing test cases and data that focus on the input and output boundaries of a given function
In the same credit limit example, boundary analysis would test:
Low boundary -/+ one ($9,999 and $10,001)
On the boundary ($10,000 and $15,000)
Upper boundary -/+ one ($14,999 and $15,001)
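
A sketch of the same boundary cases in Python, reusing the hypothetical is_valid_limit validator for the $10,000 – $15,000 range.

def is_valid_limit(amount):
    return 10_000 <= amount <= 15_000

boundary_cases = [
    (9_999, False), (10_000, True), (10_001, True),     # lower boundary
    (14_999, True), (15_000, True), (15_001, False),    # upper boundary
]
for amount, expected in boundary_cases:
    assert is_valid_limit(amount) == expected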

Error Guessing
• Test cases can be developed based upon the intuition and experience of the tester
• For example, where one of the inputs is the date, a tester may try February 29, 2001
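
A quick Python sketch of that guess: the standard-library datetime module rejects February 29, 2001 because 2001 is not a leap year, so a date validator built on it should fail that input.

from datetime import date

def is_valid_date(year, month, day):
    try:
        date(year, month, day)
        return True
    except ValueError:
        return False

assert is_valid_date(2000, 2, 29)         # 2000 is a leap year
assert not is_valid_date(2001, 2, 29)     # the guessed "bad" input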
Cause-Effect Graphing
Cause-effect graphing is a technique for developing test cases for programs from the high-level specifications (A high-level specification states desired characteristics of the system)
These characteristics can be used to derive test data
Example – Cause Effect
For example, a program that has specified responses to eight characteristic stimuli (called causes) given some input has 256 "types" of input (i.e., those with characteristics 1 & 3; 5, 7 & 8 etc.).
A poor approach is to generate 256 test cases.
A more methodical approach is to use the program specifications to analyze the program's effect on the various types of inputs.
The program's output domain can be partitioned into various classes called effects.
For example, inputs with characteristic 2 might be subsumed by those with characteristics 3 & 4. Hence, it would not be necessary to test inputs with characteristic 2 and characteristics 3 & 4, for they cause the same effect.
This analysis results in a partitioning of the causes according to their corresponding effects
A limited entry decision table is then constructed from the directed graph reflecting these dependencies (i.e., causes 2 & 3 result in effect 4; causes 2, 3 & 5 result in effect 6 etc.)
The decision table is then reduced and test cases chosen to exercise each column of the table.
Since many aspects of the cause-effect graphing can be automated, it is an attractive tool for aiding in the generation of Functional Test cases.
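
A toy Python sketch of a limited-entry decision table built from the dependencies mentioned above (causes 2 & 3 giving effect 4; causes 2, 3 & 5 giving effect 6); the third row is purely illustrative. One test case is chosen to exercise each row of the table.

decision_table = [
    # (cause_2, cause_3, cause_5) -> effect
    ((True,  True,  False), "effect_4"),
    ((True,  True,  True),  "effect_6"),
    ((False, False, False), "effect_0"),   # illustrative default row
]
for (c2, c3, c5), effect in decision_table:
    print({"cause_2": c2, "cause_3": c3, "cause_5": c5}, "->", effect)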

Advantages of Black-Box Testing
• More effective on larger units of code than glass box testing
• Tester needs no knowledge of implementation, including specific programming languages
• Tester and programmer are independent of each other
• Tests are done from a user's point of view
• Will help to expose any ambiguities or inconsistencies in the specifications
• Test cases can be designed as soon as the specifications are complete

Disadvantages of Black-Box Testing

• Only a small number of possible inputs can actually be tested; testing every possible input stream would take nearly forever
• Without clear and concise specifications, test cases are hard to design
• There may be unnecessary repetition of test inputs if the tester is not informed of test cases the programmer has already tried
• May leave many program paths untested
• Cannot be directed toward specific segments of code which may be very complex (and therefore more error prone)

Smoke & Sanity Software Testing


Smoke Testing: Software testing done to determine whether the build can be accepted for thorough software testing. Basically, it is done to check the stability of the build received for software testing.

Sanity testing: After receiving a build with minor changes in the code or functionality, a subset of regression test cases is executed to check whether the reported software bugs or issues have been rectified and no new bugs have been introduced by the changes. Sometimes, when multiple cycles of regression testing are executed, sanity testing of the software can be done in later cycles, after thorough regression test cycles. If we are moving a build from the staging/testing server to the production server, sanity testing of the software application can be done to check whether the build is sane enough to move further to the production server.

Difference between Smoke & Sanity Software Testing:

    * Smoke testing is a wide approach in which all areas of the software application are tested without going into too much depth. Sanity testing, however, is a narrow regression test that focuses on one or a small set of areas of functionality of the software application.
    * The test cases for smoke testing of the software can be either manual or automated. However, a sanity test is generally performed without test scripts or test cases.
    * Smoke testing is done to check whether the main functions of the software application are working. During smoke testing of the software, we do not go into finer details. Sanity testing, however, is a cursory type of software testing; it is done whenever a quick round of testing can show that the software application is functioning according to business / functional requirements.
    * Smoke testing of the software application is done to check whether the build can be accepted for thorough software testing. Sanity testing of the software is done to check whether the requirements are met.
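
A sketch of how a broad-but-shallow smoke subset might be tagged if the project used pytest; the check itself is a placeholder.

import pytest

def build_starts():
    return True        # placeholder for "the build deploys and launches"

@pytest.mark.smoke
def test_build_starts():
    assert build_starts()

# Run only the smoke subset before deeper testing:
#   pytest -m smoke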

Wednesday, July 28, 2010

think before you sleep!!!!!!!!

A flower that blooms in the morning, before it withers in the evening, serves as a garland for God, as an adornment for women, and as honey for the bee. But what of us, who live for so many years?

 
    Written by: மணிகண்டன்.கா

Tuesday, July 27, 2010

Debugging

• Debugging (removal of a defect) occurs as a consequence of successful testing.
• Some people are better at debugging than others.
• Is the cause of the bug reproduced in another part of the program?
• What “next bug” might be introduced by the fix that is being proposed?
• What could have been done to prevent this bug in the first place?


Debugging Approaches
• Brute force
– memory dumps and run-time traces are examined for clues to error causes
• Backtracking
– source code is examined by looking backwards from symptom to potential causes of errors
• Cause elimination
– uses binary partitioning to reduce the number of potential locations where errors can exist
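
A small Python sketch of cause elimination by binary partitioning: halve the list of suspected changes until the one that introduces the failure is isolated (the same idea behind tools such as git bisect).

def find_breaking_change(changes, still_works):
    lo, hi = 0, len(changes) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if still_works(changes[:mid + 1]):   # first half is clean
            lo = mid + 1
        else:                                # failure lies in the first half
            hi = mid
    return changes[lo]

# Example: change #6 is the one that breaks the build.
changes = list(range(10))
print(find_breaking_change(changes, lambda applied: 6 not in applied))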

Test Documentation Needed

• Requirement being tested.
• Design verification methodology.
• Code verification methodology.

Document Each Test Case
• Requirement tested.
• Facet / feature / path tested.
• Person & date.
• Tools & code needed.
• Test data & instructions.
• Expected results.
• Actual test results & analysis
• Correction, schedule, and signoff.

Test Team Members

• Professional testers.
• Analysts.
• System designers.
• Configuration management specialists.
• Users.

Testing Tools


Simulators.
Monitors.
Analyzers.
Test data generators.


Testing Life Cycle

• Establish test objectives.
• Design criteria (review criteria).
– Correct.
– Feasible.
– Coverage.
– Demonstrate functionality.
• Write test cases.
• Test the test cases.
• Execute test cases.
• Evaluate test results.

Performance Testing

• Stress test.
• Volume test.
• Configuration test (hardware & software).
• Compatibility.
• Regression tests.
• Security tests.
• Timing tests.
• Environmental tests.
• Quality tests.
• Recovery tests.
• Maintenance tests.
• Documentation tests.
• Human factors tests.

System Testing

• Recovery testing
– checks system’s ability to recover from failures
• Security testing
– verifies that system protection mechanism prevents improper penetration or data alteration
• Stress testing
– program is checked to see how well it deals with abnormal resource demands
• Performance testing
– tests the run-time performance of software

Acceptance Testing

• Making sure the software works correctly for the intended user in his or her normal work environment.
• Alpha test
– version of the complete software is tested by customer under the supervision of the developer at the developer’s site
• Beta test
– version of the complete software is tested by customer at his or her own site without the developer being present

Acceptance Testing Approaches
• Benchmark test.
• Pilot testing.
• Parallel testing.

Validation Testing


Ensure that each function or performance characteristic conforms to its specification.
Deviations (deficiencies) must be negotiated with the customer to establish a means for resolving the errors.
Configuration review or audit is used to ensure that all elements of the software configuration have been properly developed, cataloged, and documented to allow its support during its maintenance phase.

Smoke Testing

• Software components already translated into code are integrated into a build.
• A series of tests designed to expose errors that will keep the build from performing its functions are created.
• The build is integrated with the other builds and the entire product is smoke tested daily using either top-down or bottom-up integration.

Integration Testing


Bottom - up testing (test harness).
Top - down testing (stubs).
Modified top - down testing - test levels independently.
Big Bang.
Sandwich testing.

Top-Down Integration Testing
Main program used as a test driver and stubs are substitutes for components directly subordinate to it.
Subordinate stubs are replaced one at a time with real components (following the depth-first or breadth-first approach).
Tests are conducted as each component is integrated.
On completion of each set of tests, another stub is replaced with a real component.
Regression testing may be used to ensure that new errors are not introduced.

Bottom-Up Integration Testing
Low level components are combined in clusters that perform a specific software function.
A driver (control program) is written to coordinate test case input and output.
The cluster is tested.
Drivers are removed and clusters are combined moving upward in the program structure.
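
A small Python sketch of such a driver for a cluster of two illustrative low-level components; the driver feeds test-case input to the cluster and checks the combined output.

def parse_record(line):                  # low-level component 1
    return [field.strip() for field in line.split(",")]

def total_amount(fields):                # low-level component 2
    return sum(float(f) for f in fields)

def driver():
    # coordinates test-case input and expected output for the cluster
    cases = [("10, 20, 30", 60.0), ("1.5, 2.5", 4.0)]
    for line, expected in cases:
        assert total_amount(parse_record(line)) == expected

driver()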




Regression Testing

• Check for defects propagated to other modules by changes made to the existing program
– A representative sample of existing test cases is used to exercise all software functions.
– Additional test cases focusing on software functions likely to be affected by the change.
– Test cases that focus on the changed software components.

Generating Test Data

• Ideally we want to test every permutation of valid and invalid inputs
• Equivalence partitioning is often required to reduce an otherwise infinite set of test cases
– Every possible input belongs to one of the equivalence classes.
– No input belongs to more than one class.
– Each point is representative of its class.

Unit Testing Details

• Interfaces tested for proper information flow.
• Local data are examined to ensure that integrity is maintained.
• Boundary conditions are tested.
• Basis path testing should be used.
• All error handling paths should be tested.
• Drivers and/or stubs need to be developed to test incomplete software.
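
A short unittest sketch showing a stub (here a unittest.mock.Mock) standing in for an incomplete dependency, so that a boundary condition and an error-handling path can be exercised; the credit_check unit and the 700 threshold are hypothetical.

import unittest
from unittest.mock import Mock

def credit_check(customer_id, lookup):
    # unit under test: approve customers whose score is at least 700
    score = lookup(customer_id)
    if score is None:
        raise ValueError("unknown customer")
    return score >= 700

class CreditCheckTest(unittest.TestCase):
    def test_boundary_score(self):
        self.assertTrue(credit_check(1, Mock(return_value=700)))
        self.assertFalse(credit_check(1, Mock(return_value=699)))

    def test_error_handling_path(self):
        with self.assertRaises(ValueError):
            credit_check(1, Mock(return_value=None))

if __name__ == "__main__":
    unittest.main()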

Unit Testing

• Program reviews.
• Formal verification.
• Testing the program itself.
– black box and white box testing.

Stages of Testing

• Module or unit testing.
• Integration testing.
• Function testing.
• Performance testing.
• Acceptance testing.
• Installation testing

Strategic Testing Issues

• Specify product requirements in a quantifiable manner before testing starts.
• Specify testing objectives explicitly.
• Identify the user classes of the software and develop a profile for each.
• Develop a test plan that emphasizes rapid cycle testing.
• Build robust software that is designed to test itself (e.g. use anti-bugging).
• Use effective formal reviews as a filter prior to testing.
• Conduct formal technical reviews to assess the test strategy and test cases.

Strategic Approach to Testing


• Testing begins at the component level and works outward toward the integration of the entire computer-based system.
• Different testing techniques are appropriate at different points in time.
• The developer of the software conducts testing and may be assisted by independent test groups for large projects.
• The role of the independent tester is to remove the conflict of interest inherent when the builder is testing his or her own product.
• Testing and debugging are different activities.
• Debugging must be accommodated in any testing strategy.
• Need to consider verification issues: are we building the product right?
• Need to consider validation issues: are we building the right product?

The heart should never break

The heart should never, ever break.
The thought "what is this life?" should never arise.
Tell me, which human heart
...carries no wound?
With the passage of time, all wounds
vanish like passing illusions.
 

Only the stones that bear the chisel become statues upon the earth.
Only the heart that bears pain finds lasting happiness. Who is without struggle?
Why, then, the stream of tears in your eyes?
If you dream a dream
and strive for it every day,
...one day it will come true.

No fear, no fear

No fear, no fear, there is no fear at all. Even if everyone in this world rises against us, there is no fear, no fear, there is no fear at all. Even if they scorn us and slander us as worthless, there is no fear, no fear, there is no fear at all. Even if ours becomes a life of begging for food, there is no fear, no fear, there is no fear at all. Even if we lose everything we ever desired, there is no fear, no fear, there is no fear at all.

Thursday, July 8, 2010

Shakira : Waka Waka Lyrics

Oooeeeeeeeeeeeeeeeehh

You're a good soldier
Choosing your battles
Pick yourself up
And dust yourself off
Get back in the saddle

You're on the front line
Everyone's watching
You know it's serious
We are getting closer
This isn't over

The pressure is on
You feel it
But you got it all
Believe it

When you fall get up, oh oh
If you fall get up, eh eh
Tsamina mina zangalewa
Cuz this is Africa
Tsamina mina, eh eh
Waka waka, eh eh
Tsamina mina zangalewa
This time for Africa

Listen to your God
This is our motto
Your time to shine
Don't wait in line
Y vamos por todo

People are raising
Their expectations
Go on and feed them
This is your moment
No hesitations

Today's your day
I feel it
You paved the way
Believe it

If you get down get up, oh oh
When you get down get up, eh eh
Tsamina mina zangalewa
This time for Africa
Tsamina mina, eh eh
Waka waka, eh eh
Tsamina mina zangalewa
Anawa a a
Tsamina mina, eh eh
Waka waka, eh eh
Tsamina mina zangalewa
This time for Africa

Awela Majoni Biggie Biggie Mama One A To Zet
Athi sithi LaMajoni Biggie Biggie Mama From East To West
Bathi . . . Waka Waka Ma Eh Eh Waka Waka Ma Eh Eh
Zonke zizwe mazi buye
Cuz this is Africa

Voice: Tsamina mina, Anawa a a
Tsamina mina
Tsamina mina, Anawa a a

Tsamina mina, eh eh
Waka waka, eh eh
Tsamina mina zangalewa
Anawa a a
Tsamina mina, eh eh
Waka waka, eh eh
Tsamina mina zangalewa
This time for Africa

Django eh eh
Django eh eh
Tsamina mina zangalewa
Anawa a a

Django eh eh
Django eh eh
Tsamina mina zangalewa
Anawa a a

(2x) This time for Africa

(2x) We're all Africa