
Testing Strategies

Testing helps us measure the quality of software in terms of the number of defects found, the tests run, and the portion of the system covered by the tests. Software testing can be defined as the process of exercising an application with the intention of finding errors. Another popular definition is: the process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products, to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose, and to detect defects.

What is software testing?
The process of finding evidence of defects in software systems and establishing confidence that a program does what it is supposed to do.

Objectives of Testing:
Glen Myers [MYE79] states a number of rules that can serve well as testing objectives:
  • Testing is a process of executing a program with the intent of finding an error.
  • A good test case is one that has a high probability of finding an as yet undiscovered error.
  • A successful test is one that uncovers an as-yet-undiscovered error.

Testing attributes
  1. A good test has a high probability of finding an error. The test engineer should understand the software and attempt to develop a mental model of how the software might fail.
  2. A good test is not redundant. Software testing is bound by cost, resource and time limitations, so the tester has to ensure that no two tests serve the same purpose.
  3. A good test should be “best of breed”. In a group of tests that have a similar intent, time and resource limitations may permit the execution of only a subset; the test with the highest likelihood of uncovering a whole class of errors should be chosen.
  4. A good test should be neither too simple nor too complex. Although it is sometimes possible to combine a series of tests into one test case, the possible side effects of this approach may mask errors. Each test should be executed separately.

Causes of software defects:
Error (Mistake):
A human action that produces an incorrect result is known as Error or Mistake.

Defect (Bug/Fault):
A flaw in a component or system that can cause the component or system to fail to perform its required function, for example an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.

Failure:
Deviation of the component or system from its expected delivery, service or result.
Root causes of failure:
  • Lack of understanding of end user requirements.
  • Late discovery of serious project flaws.
  • Untrustworthy build & release process.
  • Disorganization and chaos within the implementation team.

It is to be noted that the cost of fixing a failure grows as the software moves through successive stages of the development life cycle. This is demonstrated in the figure below:

[Figure: cost of fixing a defect at each phase of the software development life cycle]

Importance of testing:
The significance of testing can be best understood from failed projects of the past. Illustrating the point, a Computerworld report from Washington stated: “Software bugs are costing the U.S. economy an estimated $59.5 billion each year, with more than half of the cost borne by end users and the remainder by developers and vendors, according to a new federal study. Improvements in testing could reduce this cost by about a third, or $22.5 billion, but it won't eliminate all software errors, the study said. Of the total $59.5 billion cost, users incurred 64% of the cost and developers 36%.”

Case study 1:
The Lion King Animated Storybook, Disney’s first multimedia CD-ROM game for children, was released for the Christmas season. It turned out to be a disaster for Disney because the CD had been tested on only a specific PC platform; it had not been properly tested on the range of platforms in actual use before its release.

Case study 2:
NASA Mars Polar Lander, 1999: On 3 December 1999, the Mars Polar Lander disappeared during its landing attempt. The Failure Review Board concluded that the most likely cause of failure was the unexpected setting of a single data bit, a defect that inadequate internal testing had failed to catch.

Case study 3:
Malaysia Airlines Jetliner, August 2005: A flight between Perth, Australia and Kuala Lumpur, Malaysia zoomed 3,000 feet upward because a defective software program had supplied incorrect data about the aircraft’s speed and acceleration, confusing the flight computers.

Case study: NASA Mars Climate Orbiter crashing
Incident Date: 9/23/1999
Price Tag: $125 million


An Associated Press report from Washington described the incident as follows:
“For nine months, the Mars Climate Orbiter was speeding through space and speaking to NASA in metric. But the engineers on the ground were replying in non-metric English.

It was a mathematical mismatch that was not caught until after the $125-million spacecraft, a key part of NASA's Mars exploration program, was sent crashing too low and too fast into the Martian atmosphere. The craft has not been heard from since.

Noel Henners of Lockheed Martin Astronautics, the prime contractor for the Mars craft, said at a news conference it was up to his company's engineers to assure the metric systems used in one computer program were compatible with the English system used in another program. The simple conversion check was not done, he said.”

Case Study: Ariane 5 Explosion
Incident Date: 6/4/1996
Price Tag: $500 million
Reporting on the causes of the explosion, James Gleick wrote:
“It took the European Space Agency 10 years and $7 billion to produce Ariane 5, a giant rocket capable of hurling a pair of three-ton satellites into orbit with each launch and intended to give Europe overwhelming supremacy in the commercial space business.

All it took to explode that rocket less than a minute into its maiden voyage last June, scattering fiery rubble across the mangrove swamps of French Guiana, was a small computer program trying to stuff a 64-bit number into a 16-bit space.

This shutdown occurred 36.7 seconds after launch, when the guidance system's own computer tried to convert one piece of data -- the sideways velocity of the rocket -- from a 64-bit format to a 16-bit format. The number was too big, and an overflow error resulted. When the guidance system shut down, it passed control to an identical, redundant unit, which was there to provide backup in case of just such a failure. But the second unit had failed in the identical manner a few milliseconds before. And why not? It was running the same software.”
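
The failure mode is easy to reproduce in miniature. The sketch below (Python; the helper name and values are illustrative, not from the Ariane software) converts a 64-bit floating-point value into a 16-bit signed field; once the value exceeds 32,767 the conversion overflows, just as the horizontal-velocity conversion did:

    import struct

    def to_int16(value: float) -> int:
        # ">h" = big-endian signed 16-bit; struct.error is raised on overflow
        return struct.unpack(">h", struct.pack(">h", int(value)))[0]

    print(to_int16(20_000.0))   # fits in 16 bits: returns 20000
    print(to_int16(40_000.0))   # raises struct.error: exceeds 32,767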



Testing Principles
  1. All tests should be traceable to customer requirements: From the customer’s point of view, the most severe defects are those that cause the program to fail to meet its requirements.
  2. Tests should be planned long before testing begins: Test planning can begin as soon as the requirements model is complete, and detailed test cases can be designed before any code has been written.
  3. The Pareto principle applies to software testing: The Pareto principle implies that 80 percent of all errors uncovered during testing will likely be traceable to 20 percent of all program components. The problem, of course, is to isolate these suspect components and to thoroughly test them.
  4. Testing should begin “in the small” and progress toward testing “in the large.”
  5. Exhaustive testing is not possible.
  6. To be most effective, testing should be conducted by an independent third party.

Black Box Testing:
Black box testing is a software testing technique in which the functionality of the software under test is tested without looking at the internal code structure, implementation details or knowledge of internal paths of the software. This type of testing is based entirely on the software requirements and specifications.

Types of Black Box Testing
These tests can be functional or non-functional, though usually functional.
  • Functional testing: This type focuses on the functional requirements of a system and is performed by software testers.
  • Non-functional testing: Not tied to any specific functionality; it addresses non-functional requirements such as performance, scalability and usability.
  • Regression testing: Regression testing is done after code fixes, upgrades or any other system maintenance to verify that the new code has not broken the existing code.
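
As a minimal sketch of a regression test (Python; the function and the defect it pins down are hypothetical), the test below encodes a previously fixed bug so that later maintenance cannot silently reintroduce it:

    def discount(price: float, percent: float) -> float:
        # the fix: clamp the result so an over-100% discount never goes negative
        return max(price * (1 - percent / 100), 0.0)

    def test_discount_never_negative():
        # regression test for the old defect: percent > 100 gave negative prices
        assert discount(50.0, 150.0) == 0.0

    def test_discount_normal_case_still_works():
        # guards the ordinary behaviour alongside the fix
        assert discount(100.0, 25.0) == 75.0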

Test Design Techniques:
Typical black-box test design techniques include:
  • Decision table testing
  • All-pairs testing
  • State transition analysis
  • Equivalence partitioning
  • Boundary value analysis
  • Cause–effect graph
  • Error guessing

Of the techniques above, the following are the most prominent in practice:
Equivalence Class Testing:
It is used to minimize the number of possible test cases to an optimum level while maintaining reasonable test coverage. The input domain is divided into classes of values that the system should treat equivalently, and one representative from each class is tested.
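
For example (a sketch; the 18–60 age rule is an assumed specification, not from the source), a field accepting ages 18 through 60 yields three equivalence classes, and one representative value per class suffices:

    def is_eligible(age: int) -> bool:
        # assumed specification: valid ages are 18 through 60 inclusive
        return 18 <= age <= 60

    def test_below_valid_class():
        assert not is_eligible(10)   # representative of the class age < 18

    def test_valid_class():
        assert is_eligible(35)       # representative of the class 18..60

    def test_above_valid_class():
        assert not is_eligible(70)   # representative of the class age > 60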

Boundary Value Testing: Boundary value testing focuses on the values at the boundaries of input ranges, where defects tend to cluster. This technique determines whether a certain range of values is acceptable to the system or not. It is very useful in reducing the number of test cases and is most suitable for systems whose inputs fall within defined ranges.
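
Continuing the same assumed 18–60 rule, boundary value testing probes the edges of each range rather than values from the middle (a sketch using pytest's parametrize):

    import pytest

    def is_eligible(age: int) -> bool:
        return 18 <= age <= 60       # assumed specification, as above

    @pytest.mark.parametrize("age, expected", [
        (17, False),  # just below the lower boundary
        (18, True),   # on the lower boundary
        (60, True),   # on the upper boundary
        (61, False),  # just above the upper boundary
    ])
    def test_age_boundaries(age, expected):
        assert is_eligible(age) == expected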

Decision Table Testing: A decision table puts causes (conditions) and their effects (actions) in a matrix. Each column represents a unique combination of conditions and the resulting actions.
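
A small assumed example: a login screen that grants access only when both the user ID and the password are correct. The decision table has one column per rule, and each column becomes one test case:

    Conditions            Rule 1   Rule 2   Rule 3   Rule 4
    User ID correct       T        T        F        F
    Password correct      T        F        T        F
    Action                grant    error    error    error

    import pytest

    def login(user_ok: bool, password_ok: bool) -> str:
        # assumed rule: access only when both conditions hold
        return "granted" if user_ok and password_ok else "error"

    @pytest.mark.parametrize("user_ok, password_ok, expected", [
        (True,  True,  "granted"),   # rule 1
        (True,  False, "error"),     # rule 2
        (False, True,  "error"),     # rule 3
        (False, False, "error"),     # rule 4
    ])
    def test_login_decision_table(user_ok, password_ok, expected):
        assert login(user_ok, password_ok) == expected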


Advantages
  • Testing is carried out from a user’s point of view and will assist in revealing discrepancies in the specifications.
  • Testing professionals need not know programming languages or how the software has been implemented.
  • Tests can be conducted by a body independent from the developers, allowing for an objective perspective and the avoidance of developer-bias.
  • Test cases can be designed as soon as the specifications are complete.

Disadvantages
  • Only a small number of possible inputs can be tested and many program paths will be left untested.
  • Without clear specifications, which is the situation in many projects, test cases are difficult to design.
  • Tests can be redundant if the software designer/ developer has already run a test case.
  • Ever wondered why a soothsayer closes their eyes when foretelling events? Much the same is true of black box testing: the tester works without seeing the internals.

White box testing:
White-box testing is testing that takes into account the internal mechanism of a system or component (IEEE, 1990). This technique allows one to test internal structures or workings of an application, as opposed to its functionality.

White-box testing, sometimes called glass-box testing, is a test case design method that uses the control structure of the procedural design to derive test cases. Using white-box testing methods, the software engineer can derive test cases that (Pressman, 2001):
  • Guarantee that all independent paths within a module have been exercised at least once,
  • Exercise all logical decisions on their true and false sides,
  • Execute all loops at their boundaries and within their operational bounds, and
  • Exercise internal data structures to ensure their validity.
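
As a sketch of the second item, exercising a logical decision on both its true and false sides (Python; the function and tests are illustrative, not from the source):

    def classify(n: int) -> str:
        if n >= 0:                  # the single logical decision under test
            return "non-negative"
        return "negative"

    def test_true_side():
        assert classify(5) == "non-negative"    # forces n >= 0 to be True

    def test_false_side():
        assert classify(-5) == "negative"       # forces n >= 0 to be False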


White-box test design techniques include:
  • Control flow testing
  • Data flow testing
  • Branch testing
  • Statement coverage
  • Decision coverage
  • Modified condition/decision coverage
  • Prime path testing
  • Path testing
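
These criteria differ in strength, which the sketch below (an assumed example) makes concrete: the first test alone executes every statement, yet only the second also exercises the decision's false branch, as decision coverage additionally requires:

    def withdraw(balance: float, amount: float) -> float:
        if amount <= balance:       # the decision
            balance -= amount       # executed only on the true branch
        return balance

    def test_sufficient_funds():
        # alone, this reaches every statement: 100% statement coverage
        assert withdraw(100.0, 30.0) == 70.0

    def test_insufficient_funds():
        # needed for decision coverage: false branch leaves balance unchanged
        assert withdraw(100.0, 300.0) == 100.0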

Disadvantages:
  • White-box testing adds complexity because the tester must have programming knowledge and understand the implementation; the depth of testing involved requires a highly skilled programmer.
  • On some occasions it is not realistic to test every single condition of the application, and some conditions will be left untested.
  • The tests focus on the software as it exists, and missing functionality may not be discovered.

Advantages
  • Having the knowledge of the source code is beneficial to thorough testing.
  • Optimization of code by revealing hidden errors and being able to remove these possible defects.
  • Gives the programmer introspection, because developers must carefully describe any new implementation.
  • Provides traceability of tests from the source, allowing future changes to the software to be easily captured in changes to the tests.
  • White box tests are easy to automate.
  • White box testing gives clear, engineering-based, rules for when to stop testing.

Unit Testing:
In unit testing, the software is divided into components, usually termed software units or modules, and each unit is tested individually. The unit test is white-box oriented, and the step can be conducted in parallel for multiple components. Using the component-level design description as a guide, important control paths are tested to uncover errors within the boundary of the module.
Unit testing consists of:
  • Unit Test Considerations
  • Unit Test Procedures

Unit Test Considerations
  • Module interface - information properly flows into and out of the program unit under test.
  • Local data structure - data stored temporarily maintains its integrity.
  • Boundary conditions - the module operates properly at boundaries established to limit or restrict processing.
  • Independent paths - all statements in a module have been executed at least once.
  • Finally, all error-handling paths are tested.
  • The module interface is tested before any other test is initiated, because if data do not enter and exit properly, all other tests are moot.
  • In addition, local data structures should be exercised and the local impact on global data should be ascertained during unit testing.
  • Selective testing of execution paths is an essential task during the unit test. Test cases should be designed to uncover errors due to
    • erroneous computations,
    • incorrect comparisons, or
    • improper control flow.

  • Basis path and loop testing are effective techniques for uncovering a broad array of path errors.
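
Loop testing conventionally exercises a loop with zero iterations, exactly one iteration, a typical count, and a large count near any upper bound. A sketch, assuming a simple summation loop:

    def total(values):
        result = 0.0
        for v in values:            # the loop structure under test
            result += v
        return result

    def test_zero_iterations():
        assert total([]) == 0.0                   # loop body never executes

    def test_one_iteration():
        assert total([4.0]) == 4.0                # loop body executes once

    def test_typical_iterations():
        assert total([1.0, 2.0, 3.0]) == 6.0      # a typical pass count

    def test_many_iterations():
        assert total([1.0] * 10_000) == 10_000.0  # stress toward an upper bound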

Errors commonly found during unit testing:
  • The more common errors in computation are
    • misunderstood or incorrect arithmetic precedence
    • mixed mode operations,
    • incorrect initialization,
    • precision inaccuracy,
    • Incorrect symbolic representation of an expression.

  • Comparison and control flow are closely coupled to one another. Errors to look for include
    • Comparison of different data types,
    • Incorrect logical operators or precedence,
    • Incorrect comparison of variables
    • Improper or nonexistent loop termination,
    • Failure to exit when divergent iteration is encountered
    • Improperly modified loop variables.
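
Two of these error classes are easy to demonstrate. In the sketch below, round-off inaccuracy makes an apparently correct computation fail an exact comparison; the remedy is to compare within a tolerance rather than for strict equality:

    import math

    # precision inaccuracy: 0.1 and 0.2 have no exact binary representation
    print(0.1 + 0.2 == 0.3)               # False -- looks like a defect, is not
    print(0.1 + 0.2)                      # 0.30000000000000004

    # correct comparison: closeness within a tolerance, not strict equality
    print(math.isclose(0.1 + 0.2, 0.3))   # True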

Unit Test Procedures

Because a component is not a standalone program, driver and/or stub software must usually be developed for each unit test. A driver is a “main program” that accepts test case data, passes the data to the component and prints the relevant results; a stub replaces a module that is subordinate to (called by) the component under test. Drivers and stubs represent overhead: both are software that must be written but that is not delivered with the final software product. When drivers and stubs are costly, complete testing can be postponed until the integration test step. Unit testing is simplified when a component with high cohesion is designed; when only one function is addressed by a component, the number of test cases is reduced and errors can be more easily predicted and uncovered.
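
A minimal sketch of both pieces (Python's built-in unittest; every name is hypothetical). The driver is the test class that feeds data to the unit; the stub stands in for a subordinate module, here an exchange-rate lookup, that the unit calls:

    import unittest

    def convert(amount: float, currency: str, fetch_rate) -> float:
        # unit under test: relies on a subordinate rate-lookup module
        return amount * fetch_rate(currency)

    def stub_fetch_rate(currency: str) -> float:
        # stub: replaces the real subordinate module with canned answers
        return {"EUR": 0.9, "JPY": 150.0}[currency]

    class ConvertDriver(unittest.TestCase):
        # driver: supplies test-case data to the unit and checks the results
        def test_euro_conversion(self):
            self.assertAlmostEqual(convert(100.0, "EUR", stub_fetch_rate), 90.0)

    if __name__ == "__main__":
        unittest.main()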

Validation Test
Validation testing succeeds when software functions in a manner that can be reasonably expected by the customer. Like all other testing steps, validation tries to uncover errors, but the focus is at the requirements level— on things that will be immediately apparent to the end-user.

Validation testing comprises:
  • Validation Test criteria
  • Configuration review
  • Alpha & Beta Testing

It is achieved through a series of tests that demonstrate agreement with requirements. A test plan outlines the classes of tests to be conducted and a test procedure defines specific test cases that will be used to demonstrate agreement with requirements.

Both the plan and procedure are designed to ensure that
  • all functional requirements are satisfied,
  • all behavioral characteristics are achieved,
  • all performance requirements are attained,
  • documentation is correct,
  • other requirements are met

After each validation test case has been conducted, one of two possible conditions exists:
  1. The function or performance characteristics conform to specification and are accepted
  2. A deviation from specification is uncovered and a deficiency list is created

THE ART OF DEBUGGING
Debugging is the orderly process of removing errors. Debugging is not testing, but it always occurs as a consequence of testing.

Debugging process:

The debugging process begins with the execution of a test case. Results are assessed, and a lack of correspondence between expected and actual performance is encountered (caused by some error). The debugging process attempts to match symptom with cause, thereby leading to error correction.

The debugging process always has one of two outcomes:
  • The cause will be found and corrected, or
  • The cause will not be found. In that case, the person performing debugging may suspect a cause, design a test case to help validate that suspicion, and work toward error correction in an iterative fashion.

Why is debugging so difficult?
  1. The symptom may disappear (temporarily) when another error is corrected.
  2. The symptom may actually be caused by non-errors (e.g., round-off inaccuracies).
  3. The symptom may be caused by human error that is not easily traced (e.g., wrong input, a wrongly configured system).
  4. The symptom may be a result of timing problems rather than processing problems (e.g., a result that takes a long time to appear).
  5. It may be difficult to accurately reproduce input conditions (e.g., a real-time application in which input ordering is indeterminate).
  6. The symptom may be intermittent (appearing and disappearing irregularly). This is particularly common in embedded systems that couple hardware and software.
  7. The symptom may be due to causes that are distributed across a number of tasks running on different processors

As the consequences of an error increase, so does the pressure to find its cause. That pressure sometimes forces a software developer to fix one error and, at the same time, introduce two more.

Debugging Approaches or strategies
Debugging approaches fall into three categories:
  1. Brute force
  2. Backtracking
  3. Cause elimination

  1. Brute Force:
    • Probably the most common and least efficient method for isolating the cause of a software error.
    • Apply brute force debugging methods when all else fails.
    • Using a "let the computer find the error" philosophy, memory dumps are taken, run-time traces are invoked, and the program is loaded with WRITE or PRINT statements
    • More often than not, it leads to wasted effort and time.

  2. Backtracking:
    • A common debugging approach that can be used successfully in small programs.
    • Beginning at the site where a symptom has been uncovered, the source code is traced backward (manually) until the site of the cause is found.

  3. Cause elimination
    • Proceeds by induction or deduction and introduces the concept of binary partitioning (i.e., splitting the set of possible causes in half and eliminating one half at a time).
    • A list of all possible causes is developed and tests are conducted to eliminate each.
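
Binary partitioning can be sketched as a bisection over an ordered list of suspects, in the spirit of tools such as git bisect (the code and names below are illustrative, not a standard API):

    def first_bad_version(versions, is_bad):
        # assumes versions before the culprit are good and all after it are bad
        lo, hi = 0, len(versions) - 1
        while lo < hi:
            mid = (lo + hi) // 2
            if is_bad(versions[mid]):
                hi = mid          # defect present: culprit is mid or earlier
            else:
                lo = mid + 1      # defect absent: culprit must be later
        return versions[lo]

    # each test run halves the remaining candidates: 100 suspects need ~7 runs
    print(first_bad_version(list(range(1, 101)), lambda v: v >= 73))   # -> 73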