Thursday, July 24, 2008

7 basic tips for testing multi-lingual web sites

These days a number of web sites are deployed in multiple languages. As companies perform more and more business in other countries, the number of such global multi-lingual web applications will continue to increase.

Testing web sites that support multiple languages has its own fair share of challenges. In this article, I will share seven tips that will help you test multi-lingual browser-based applications thoroughly:

Tip # 1 – Prepare and use the required test environment

If a web site is hosted in English and Japanese languages, it is not enough to simply change the default browser language and perform identical tests in both the languages. Depending on its implementation, a web site may figure out the correct language for its interface from the browser language setting, the regional and language settings of the machine, a configuration in the web application or other factors. Therefore, in order to perform a realistic test, it is imperative that the web site be tested from two machines – one with the English operating system and one with the Japanese operating system. You might want to keep the default settings on each machine since many users do not change the default settings on their machines.
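For web sites that pick the language from the request itself, a quick way to check this negotiation is to send different Accept-Language headers and see which page comes back. Here is a minimal Python sketch; the URL is a hypothetical placeholder, and a site that decides based on OS regional settings will still need the two real machines described above:

```python
# Minimal sketch: probe language negotiation via the Accept-Language header.
# The URL is hypothetical; adapt the checks to whatever markers your site emits.
import requests

SITE = "https://example.com/"  # hypothetical site under test

for lang in ("en-US", "ja-JP"):
    resp = requests.get(SITE, headers={"Accept-Language": lang})
    # If the site sets <html lang="...">, it hints at which language was served.
    if 'lang="ja"' in resp.text:
        served = "Japanese"
    elif 'lang="en"' in resp.text:
        served = "English"
    else:
        served = "unknown"
    print(f"Accept-Language={lang}: served a {served} page")
```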

Tip # 2 – Acquire correct translations

A native speaker of the language, belonging to the same region as the users, is usually the best resource to provide translations that are accurate in both meaning and context. If such a person is not available to provide you with the translations of the text, you might have to depend on automated web translations available on web sites like wordreference.com and dictionary.com. It is a good idea to compare automated translations from multiple sources before using them in the test.

Tip # 3 – Get really comfortable with the application

Since you might not know the languages supported by the web site, it is always a good idea for you to be very conversant with the functionality of the web site. Execute the test cases in the English version of the site a number of times. This will help you find your way easily within the other language version. Otherwise, you might have to keep the English version of the site open in another browser in order to figure out how to proceed in the other language version (and this could slow you down).

Tip # 4 – Start with testing the labels

You could start testing the other language version of the web site by first looking at all the labels. Labels are the more static items in the web site. English labels are usually short, and translated labels tend to expand. It is important to spot any issues related to label truncation, overlay on or under other controls, incorrect word wrapping, etc. It is even more important to compare the labels with their translations in the other language.
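If you already have the approved translations from Tip # 2, this label comparison can even be scripted. Below is a rough Selenium sketch; the URL, element IDs and Japanese strings are hypothetical examples standing in for your own translation sheet:

```python
# Rough sketch: compare on-screen labels against approved translations.
# The URL, element IDs and expected strings are hypothetical examples.
from selenium import webdriver
from selenium.webdriver.common.by import By

EXPECTED_JA_LABELS = {            # element id -> approved Japanese translation
    "login-label": "ログイン",
    "search-label": "検索",
}

driver = webdriver.Chrome()
driver.get("https://example.com/?lang=ja")   # hypothetical language switch

for element_id, expected in EXPECTED_JA_LABELS.items():
    actual = driver.find_element(By.ID, element_id).text
    status = "OK" if actual == expected else f"MISMATCH (got {actual!r})"
    print(f"{element_id}: {status}")

driver.quit()
```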

Tip # 5 – Move on to the other controls

Next, you could move on to checking the other controls for correct translations and any user interface issues. It is important that the web site provides correct error messages in the other language, so the test should include generating all the error messages. For any text that is not translated, there are usually three possibilities: the text will be missing, its English equivalent will be shown, or junk characters will appear in its place.
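A rough sketch of such an error message check is shown below; the endpoint, the form field and the expected Japanese message are hypothetical and would come from your own application and translation sheet:

```python
# Sketch: force a validation error and check that the message is localized.
# The endpoint, field name and expected message are hypothetical examples.
import requests

resp = requests.post(
    "https://example.com/ja/register",       # hypothetical endpoint
    data={"email": "not-an-email"},          # deliberately invalid input
)

EXPECTED_JA_ERROR = "メールアドレスが正しくありません"   # from the translation sheet
ENGLISH_FALLBACK = "Invalid email address"

assert EXPECTED_JA_ERROR in resp.text, "translated error message is missing"
assert ENGLISH_FALLBACK not in resp.text, "untranslated English text is shown"
assert "\ufffd" not in resp.text, "junk/replacement characters on the page"
```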

Tip # 6 – Do test the data

Usually, multi-lingual web sites store their data in the UTF-8 Unicode encoding, in which data in different languages can be represented easily. To check the character encoding of your web site in Mozilla Firefox, go to View -> Character Encoding; in IE, go to View -> Encoding. Make sure to check the input data: it should be possible to enter data in the other language into the web site. The data displayed by the web site should be correct, and the output data should be compared against its translation.
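One simple way to exercise this is to push a known non-English string through the application and read it back. The sketch below assumes a hypothetical profile form; the point is that the page declares UTF-8 and the text survives the round trip unchanged:

```python
# Sketch: confirm the page declares UTF-8 and Japanese input survives a round trip.
# The URL and field name are hypothetical examples.
import requests

JAPANESE_NAME = "山田太郎"

resp = requests.post("https://example.com/profile", data={"name": JAPANESE_NAME})
assert "charset=utf-8" in resp.headers.get("Content-Type", "").lower(), "page not declared as UTF-8"

# Read the data back and make sure it was neither mangled nor double-encoded.
resp = requests.get("https://example.com/profile")
assert JAPANESE_NAME in resp.text, "stored text came back corrupted"
```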

Tip # 7 – Be aware of cultural issues

A challenge in testing multi-lingual web sites is that each language version might be meant for users from a particular culture. Many things, such as preferred (and not preferred) colors, text direction (left to right, right to left or top to bottom), the format of salutations and addresses, measures, currency, etc., differ between cultures. Not only should the other language version of the web site provide correct translations; other elements of the user interface, e.g. text direction, currency symbol and date format, should also be correct.
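To know what "correct" should look like for each locale, a formatting library can serve as a reference point. The sketch below uses the Babel library as one such reference (an assumption on my part; your application's target regions may follow different conventions, so always confirm with a native speaker):

```python
# Sketch: generate locale-aware reference values for dates and currency.
# Uses the Babel library; expected outputs in the comments are approximate.
from datetime import date
from babel.dates import format_date
from babel.numbers import format_currency

d = date(2008, 7, 24)
print(format_date(d, format="long", locale="en_US"))   # e.g. July 24, 2008
print(format_date(d, format="long", locale="ja_JP"))   # e.g. 2008年7月24日
print(format_currency(1500, "USD", locale="en_US"))    # e.g. $1,500.00
print(format_currency(1500, "JPY", locale="ja_JP"))    # e.g. ￥1,500
```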

As you might have gathered from the tips given above, using the correct test environment and acquiring correct translations are critical to performing a successful test of the other language versions of a web site.

It would be interesting to hear about your experience testing multi-language web sites.

Tuesday, July 22, 2008

How to report a bug without getting the “invalid bug” label

I hate getting the “invalid bug” label from developers on the bugs I report, don’t you? I think every tester should try to get 100% of his/her bugs resolved. This requires bug reporting skill. See my previous post, “How to write a good bug report? Tips and Tricks”, to learn how to report bugs professionally and without any ambiguity.

The main reason a bug gets marked as invalid is “insufficient troubleshooting” by the tester before reporting it. In this post I will focus only on troubleshooting to find the root cause of the bug. Troubleshooting will help you decide whether the anomaly you found in your application under test is really a bug or just a test setup mistake.

Yes, 50% of bugs get marked as “invalid” purely because of the tester’s incomplete test setup. Let’s say you found an anomaly in the application under test. You are now preparing the steps to report it as a bug. But wait! Have you done enough troubleshooting before reporting this bug? Have you confirmed that it is really a bug?

What troubleshooting do you need to perform before reporting any bug?

Troubleshooting of:

  • What’s not working?
  • Why isn’t it working?
  • How can you make it work?
  • What are the possible reasons for the failure?

The answer to the first question, “what’s not working?”, is enough to report the bug steps in the bug tracking system. Then why answer the remaining three questions? Think beyond your responsibilities. Act smarter; don’t just follow your routine steps without ever thinking outside of them. You should be able to suggest all possible solutions to resolve the bug, along with the efficiency and drawbacks of each solution. This will increase your respect within your team and will also reduce the chance of your bugs getting rejected - not because of that respect, but because of your troubleshooting skill.

Before reporting any bug, make sure it isn’t a mistake on your part: that you haven’t missed an important flag that needed to be set, or misconfigured your test setup.

Troubleshoot the reasons for the failure in the application, and report the bug only after proper troubleshooting. I have compiled a troubleshooting list; check out the different possible reasons for failure below.

Reasons for failure:
1) If you are using a configuration file for testing your application, make sure the file is up to date as per the application requirements: Many times a global configuration file is used to read or set application flags. Failing to maintain this file as per your software requirements will make the application under test malfunction - and you can’t report that as a bug.

2) Check that your database is in a proper state: A missing table is a common reason why an application will not work properly.
I have a classic example of this: one of my projects queried many monthly user database tables to show user reports. First, the table’s existence was checked in a master table (which maintained only the monthly table names), and then data was queried from the individual monthly tables. Many testers selected a big date range to see the user reports, but this often crashed the application with an SQL query error because those tables were not present in the test server’s database. They reported it as a bug, and it was subsequently marked invalid by the developers (see the table-existence sketch after this list).

3) If you are working on a test automation project, debug your script twice before concluding that the application failure is a bug.

4) Check that you are not using invalid access credentials for authentication.

5) Check if software versions are compatible.

6) Check if there is any other hardware issue that is not related to your application.

7) Make sure your application hardware and software prerequisites are correct.

8) Check that all software components are installed properly on your test machine, and check whether the registry entries are valid.

9) For any failure, look into the system Event Viewer for details. You can trace many failure reasons from the system event log.

10) Before you start testing, make sure you have deployed all the latest-version files to your test environment.
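To illustrate reason 2 above, here is a rough sketch of the kind of pre-check that would have saved those invalid bug reports. It uses sqlite3 purely for illustration; the table naming scheme and the database file are assumptions based on my project, not a general recipe:

```python
# Sketch for reason 2: before filing a crash as a bug, confirm that the monthly
# report tables for the selected date range actually exist on the test server.
# sqlite3, the naming scheme and the file name are illustrative assumptions.
import sqlite3

def missing_monthly_tables(conn, months):
    """Return the monthly report tables that are absent from the test database."""
    existing = {
        row[0] for row in conn.execute("SELECT name FROM sqlite_master WHERE type='table'")
    }
    return [f"user_report_{m}" for m in months if f"user_report_{m}" not in existing]

conn = sqlite3.connect("test_env.db")   # hypothetical test database
gaps = missing_monthly_tables(conn, ["2008_05", "2008_06", "2008_07"])
if gaps:
    print("Test setup problem, not an application bug. Missing tables:", gaps)
```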

These are all small and common mistakes, but they can seriously impact your relations and credibility within your team. When you find that your bug has been marked invalid and the reason is one from the list above, it is a silly mistake and it will definitely hurt. (It hurts me, at least!)

Share the mistakes you have made while reporting bugs. This will help other readers learn from your experience!

Wednesday, July 16, 2008

Career in software testing - be confident!!

“What is the future of software testing business?” “Should I consider software testing as my career option?”

Infosys Chief Executive and Managing Director Mr. Kris Gopalakrishnan estimated that the global software testing business will reach $13 billion by 2010. Of this, approximately half of the testing work will be outsourced to India, according to Mr. Gopalakrishnan. That’s great news for Indian software testers.

Mr. Gopalakrishnan was speaking at the inauguration of the international software testing conference in Bangalore organized by STeP-IN. The conference aimed to discuss various software testing opportunities and the future of the Indian test community.

Currently the Indian software testing community is the largest in the industry, and hence there is tremendous business competition among Indian corporations. This competition ultimately leads to quality work, and this is helping India satisfy global customers. India has immense talent, and customers are coming to India for superior work quality.

Software testers and analysts are now a key part of any product team. Indian IT giants like Infosys derive up to 10 percent of their revenue from software testing services, and this share is growing significantly each year.

Irrespective of the global or Indian software testing business, I have always suggested that candidates choose a career according to their interest. You can make a good career in any field if you have the interest and the goal to pioneer in that field. Without interest, not a single career option will work for you!!!

How important is domain knowledge for testers?

First of all, I would like to introduce the three-dimensional testing career described by Danny R. Faught. There are three categories of skill that need to be judged before hiring any software tester. What are those three skill categories?
1) Testing skill
2) Domain knowledge
3) Technical expertise.

No doubt any tester should have the basic testing skills: manual testing and automation testing. A tester with common sense can find most of the obvious bugs in the software. But would you say that this much testing is sufficient? Would you release the product on the basis of this much testing? Certainly not. You will certainly want a domain expert to look at the product before it goes to market.

While testing any application you should think like an end user. But every human being has limitations, and no one can be an expert in all three of the dimensions mentioned above. (If you are an expert in all of the above skills, please let me know ;-)) So you can’t be sure that you will think 100% the way the end user is going to use your application. The user of your application may have a good understanding of the domain he is working in. You need to balance all these skills so that all product aspects get addressed.

Nowadays you can see that the professionals being hired by different companies are more often domain experts than people with purely technical skills. The software industry is also seeing a good trend of many professional developers and domain experts moving into software testing.

There is one more reason why domain experts are in demand: when you hire fresh engineers who are just out of college, you cannot expect them to compete with experienced professionals. Why? Because experienced professionals have the advantage of domain and testing experience; they have a better understanding of different issues and can deliver the application better and faster.

Here are some of the examples where you can see the distinct edge of domain knowledge:
1) Mobile application testing.
2) Wireless application testing
3) VoIP applications
4) Protocol testing
5) Banking applications
6) Network testing

How will you test such applications without knowledge of the specific domain? Are you going to test BFSI applications (Banking, Financial Services and Insurance) just for UI, functionality, security, load or stress? You should know the user requirements in banking, working procedures, the commerce background, exposure to brokerage, etc., and test the application accordingly; only then can you say that your testing is enough. This is where subject-matter experts come in.

Let’s take the example of my current project: I am working on a search engine application, where I need to know the basics of search engine terminology and concepts. Many times I see testers from other teams asking me questions like: what are ‘publishers’ and ‘advertisers’, what is the difference, and what do they do? Do you think they can test an application built around current online advertising and SEO? Certainly not - not unless and until they get familiar with these terminologies and functionalities.

When I know the functional domain better, I can write and execute more test cases and can effectively simulate end-user actions, which is distinctly a big advantage.

Here is the big list of the required testing knowledge:

  • Testing skill
  • Bug hunting skill
  • Technical skill
  • Domain knowledge
  • Communication skill
  • Automation skill
  • Some programming skill
  • Quick grasping
  • Ability to Work under pressure …

That is a huge list. So you will certainly ask: do I need to have all of these skills? It depends on you. You can stick to one skill, or be an expert in one skill with a good understanding of the others, or take a balanced approach to all of them. This is a competitive market, and you should definitely take advantage of it. Make sure to be an expert in at least one domain before making any move.

What if you don’t have enough domain knowledge?
You may be posted onto any project, and the company can assign any work to you. So what if you don’t have enough domain knowledge for that project? You need to quickly grasp as many concepts as you can. Try to understand the product as if you were the customer, and what the customer will do with the application. If possible, visit the customer site to see how they work with the product, read online resources about the domain of the application you want to test, participate in events addressing that domain, and meet the domain experts. Alternatively, the company may provide all of this as in-house training before assigning any domain-specific task to testers.

There is no specific stage at which you need this domain knowledge; you need to apply it at each and every stage of the software testing life cycle.

If you have read this article up to this point, I would like to hear which domain you are working in, so that our readers can get a better idea of different domains and projects. Comment with your domain below.

Update: As per requests from many readers, I have updated our software testing resource page with BFSI domain documents and articles for download.

Career options for software test professionals

Software testing career options
Of late, testing is looked upon as a good professional career by many aspiring youths. From being a test engineer, one can move to senior test engineer, test lead and test manager, or become a QA lead or QA manager. The options available on the testing tools side are enormous: there are a number of functional, performance and security testing tools, besides test management tools like Quality Center from HP, CQTM from IBM, etc.

The demand for niche skills like SOA testing and security testing is on the increase. There is a dearth of skills in test automation areas: scripting skills in tool languages like VB and Java, and in other scripting languages like Perl, shell and Python. Technical resources with the ability to evaluate automation tools and create automation frameworks and reusable components are in demand. There is always demand for good performance testers who can analyze performance test results, identify bottlenecks and suggest tuning techniques.

Specialization is here to stay in the testing career. The following are some of the key areas in which one needs to specialize to move ahead on the testing career path, apart from good knowledge of the software lifecycle testing process.

1) Domain knowledge - Good knowledge of the application’s domain area adds value to a testing professional. There are evergreen domains like BFSI, telecom, healthcare, manufacturing, embedded, etc. A number of certifications are available in each of these areas in which a tester can get certified.

2) Automation testing tools knowledge - There is great demand for automation and performance testers. Good skill in the scripting languages of these tools is a basic necessity for succeeding in test automation. Knowledge of creating, validating and enhancing a test automation framework is very much required.

3) Certifications - QAI, ASQ, ISTQB and several other institutes offer testing-specific certifications. These certifications improve clients’ confidence in testing professionals. CQTM and PMP are some managerial certifications that help testers scale up the professional ladder. Certifications on the testing tools offered by vendors like HP increase the technical competency of the individual.

4) Niche areas in testing - Experts predict that niche areas like SOA testing and security testing are gaining momentum in the testing space. Many tools are emerging in these areas. As testing professionals, we should be aware of where the industry is heading and update our knowledge in those areas.

Updating your knowledge is a continuous process. Several websites like StickyMinds and QAForums offer excellent insight into various facets of the testing arena. I always ask my team members to spend at least two hours a week on these selected websites to keep up with current happenings and events.

As the saying goes, “you need to keep running just to stay in the same place”; as testing professionals, we should always work towards sharpening our testing skills to succeed in this competitive environment.

Professional Web Test Plan

Web Test Plan Development
The objective of a test plan is to provide a roadmap so that the Web site can be evaluated against requirements or design statements. A test plan is a document that describes the objectives and the scope of a Web site project. When you prepare a test plan, you should think through the process of the Web site test. The plan should be written so that it successfully gives the reader a complete picture of the Web site project, and it should be thorough enough to be useful. Following are some of the items that might be included in a test plan. Keep in mind that the items may vary depending on the Web site project.
The Web testing process (diagram): Internet, Web browser, Web server.
PROJECT
  • Title of the project:
  • Date:
  • Prepared by:
PURPOSE OF DOCUMENT
  • Objective of testing: Why are you testing the application? Who, what, when, where, why, and how should be some of the questions you ask in this section of the test plan.
  • Overview of the application: What is the purpose of the application? What are the specifications of the project?
TEST TEAM
  • Responsible parties: Who is responsible and in charge of the testing?
  • List of test team: What are the names and titles of the people on the test team?
RISK ASSUMPTIONS
  • Anticipated risks: What types of risks are involved that could cause the test to fail?
  • Similar risks from previous releases: Have there been documented risks from previous tests that may be helpful in setting up the current test?
SCOPE OF TESTING
  • Possible limitations of testing: Are there any factors that may inhibit the test, such as resources and budget?
  • Impossible testing: Are there any considerations that could prevent the planned tests from being carried out?
  • Anticipated output: What are the anticipated outcomes of the test, and have they been documented for comparison?
  • Anticipated input: What inputs are required for the test, and have they been documented for comparison?
TEST ENVIRONMENT
Hardware:
  • What are the operating systems that will be used?
  • What is the compatibility of all the hardware being used?

Software:

  • What data configurations are needed to run the software?
  • Have all the required interfaces to other systems been considered?
  • Are the software and hardware compatible?
TEST DATA
  • Database setup requirements: Does test data need to be generated, or will specific data from production be captured and used for testing?
  • Setup requirements: Who will be responsible for setting up the environment and maintaining it throughout the testing process?
TEST TOOLS
  • Automated: Will automated tools be used?
  • Manual: Will manual testing be done?
DOCUMENTATION
  • Test cases: Are there test cases already prepared or will they need to be prepared?
  • Test scripts: Are there test scripts already prepared or will they need to be prepared?

PROBLEM TRACKING
  • Tools: What type of tools will be selected?
  • Processes: Who will be involved in the problem tracking process?
REPORTING REQUIREMENTS
  • Testing deliverables: What are the deliverables for the test?
  • Retests: How will the retesting reporting be documented?
PERSONNEL RESOURCES
  • Training: Will training be provided?
  • Implementation: How will training be implemented?
ADDITIONAL DOCUMENTATION
  • Appendix: Will samples be included?
  • Reference materials: Will there be a glossary, acronyms, and/or data dictionary?
Once you have written your test plan, you should address some of the following issues and questions:
  • Verify plan. Make sure the plan is workable, the dates are realistic, and that the plan is published. How will the test plan be implemented and what are the deliverables provided to verify the test?
  • Validate changes. Changes should be recorded by a problem tracking system and assigned to a developer to make revisions, retest, and sign off on changes that have been made.
  • Acceptance testing. Acceptance testing allows the end users to verify that the system works according to their expectation and the documentation. Certification of the Web site should be recorded and signed off by the end users, testers, and management.

  • Test reports. Reports should be generated, and the data should be checked and validated by the test team and users.


Recent Major computer system failures caused by Software bugs

  • In March of 2002 it was reported that software bugs in Britain’s national tax system resulted in more than 100,000 erroneous tax overcharges. The problem was partly attributed to the difficulty of testing the integration of multiple systems.
  • A newspaper columnist reported in July 2001 that a serious flaw was found in off-the-shelf software that had long been used in systems for tracking certain U.S. nuclear materials. The same software had been recently donated to another country to be used in tracking their own nuclear materials, and it was not until scientists in that country discovered the problem, and shared the information, that U.S. officials became aware of the problems.
  • According to newspaper stories in mid-2001, a major systems development contractor was fired and sued over problems with a large retirement plan management system. According to the reports, the client claimed that system deliveries were late, the software had excessive defects, and it caused other systems to crash.
  • In January of 2001 newspapers reported that a major European railroad was hit by the aftereffects of the Y2K bug. The company found that many of their newer trains would not run due to their inability to recognize the date ‘31/12/2000’; the trains were started by altering the control system’s date settings.
  • News reports in September of 2000 told of a software vendor settling a lawsuit with a large mortgage lender; the vendor had reportedly delivered an online mortgage processing system that did not meet specifications, was delivered late, and didn’t work.
  • In early 2000, major problems were reported with a new computer system in a large suburban U.S. public school district with 100,000+ students; problems included 10,000 erroneous report cards and students left stranded by failed class registration systems; the district’s CIO was fired. The school district decided to reinstate its original 25-year-old system for at least a year until the bugs were worked out of the new system by the software vendors.
  • In October of 1999 the $125 million NASA Mars Climate Orbiter spacecraft was believed to be lost in space due to a simple data conversion error. It was determined that spacecraft software used certain data in English units that should have been in metric units. Among other tasks, the orbiter was to serve as a communications relay for the Mars Polar Lander mission, which failed for unknown reasons in December 1999. Several investigating panels were convened to determine the process failures that allowed the error to go undetected.
  • Bugs in software supporting a large commercial high-speed data network affected 70,000 business customers over a period of 8 days in August of 1999. Among those affected was the electronic trading system of the largest U.S. futures exchange, which was shut down for most of a week as a result of the outages.
  • In April of 1999 a software bug caused the failure of a $1.2 billion military satellite launch, the costliest unmanned accident in the history of Cape Canaveral launches. The failure was the latest in a string of launch failures, triggering a complete military and industry review of U.S. space launch programs, including software integration and testing processes. Congressional oversight hearings were requested.
  • A small town in Illinois received an unusually large monthly electric bill of $7 million in March of 1999. This was about 700 times larger than its normal bill. It turned out to be due to bugs in new software that had been purchased by the local power company to deal with Y2K software issues.
  • In early 1999 a major computer game company recalled all copies of a popular new product due to software problems. The company made a public apology for releasing a product before it was ready.
  • The computer system of a major online U.S. stock trading service failed during trading hours several times over a period of days in February of 1999 according to nationwide news reports. The problem was reportedly due to bugs in a software upgrade intended to speed online trade confirmations.
  • In April of 1998 a major U.S. data communications network failed for 24 hours, crippling a large part of some U.S. credit card transaction authorization systems as well as other large U.S. bank, retail, and government data systems. The cause was eventually traced to a software bug.
  • January 1998 news reports told of software problems at a major U.S. telecommunications company that resulted in no charges for long distance calls for a month for 400,000 customers. The problem went undetected until customers called up with questions about their bills.
  • In November of 1997 the stock of a major health industry company dropped 60% due to reports of failures in computer billing systems, problems with a large database conversion, and inadequate software testing. It was reported that more than $100,000,000 in receivables had to be written off and that multi-million dollar fines were levied on the company by government agencies.
  • A retail store chain filed suit in August of 1997 against a transaction processing system vendor (not a credit card company) due to the software’s inability to handle credit cards with year 2000 expiration dates.
  • In August of 1997 one of the leading consumer credit reporting companies reportedly shut down their new public web site after less than two days of operation due to software problems. The new site allowed web site visitors instant access, for a small fee, to their personal credit reports. However, a number of initial users ended up viewing each others’ reports instead of their own, resulting in irate customers and nationwide publicity. The problem was attributed to “…unexpectedly high demand from consumers and faulty software that routed the files to the wrong computers.”
  • In November of 1996, newspapers reported that software bugs caused the 411 telephone information system of one of the U.S. RBOC’s to fail for most of a day. Most of the 2000 operators had to search through phone books instead of using their 13,000,000-listing database. The bugs were introduced by new software modifications and the problem software had been installed on both the production and backup systems. A spokesman for the software vendor reportedly stated that ‘It had nothing to do with the integrity of the software. It was human error.’
  • On June 4, 1996 the first flight of the European Space Agency’s new Ariane 5 rocket failed shortly after launching, resulting in an estimated uninsured loss of a half billion dollars. It was reportedly due to the lack of exception handling for a floating-point error in a conversion from a 64-bit floating-point number to a 16-bit signed integer.
  • Software bugs caused the bank accounts of 823 customers of a major U.S. bank to be credited with $924,844,208.32 each in May of 1996, according to newspaper reports. The American Bankers Association claimed it was the largest such error in banking history. A bank spokesman said the programming errors were corrected and all funds were recovered.
  • Software bugs in a Soviet early-warning monitoring system nearly brought on nuclear war in 1983, according to news reports in early 1999. The software was supposed to filter out false missile detections caused by Soviet satellites picking up sunlight reflections off cloud-tops, but failed to do so. Disaster was averted when a Soviet commander, based on what he said was a ‘…funny feeling in my gut’, decided the apparent missile attack was a false alarm. The filtering software code was rewritten.

A-Z in Testing

A

acceptance criteria: The exit criteria that a component or system must satisfy in order to be accepted by a user, customer, or other authorized entity. [IEEE 610]

acceptance testing: Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system. [After IEEE 610]

accessibility testing: Testing to determine the ease by which users with disabilities can use a component or system. [Gerrard]

accuracy: The capability of the software product to provide the right or agreed results or effects with the needed degree of precision. [ISO 9126] See also functionality testing.

actual result: The behavior produced/observed when a component or system is tested.

ad hoc testing: Testing carried out informally; no formal test preparation takes place, no recognized test design technique is used, there are no expectations for results and randomness guides the test execution activity.

adaptability: The capability of the software product to be adapted for different specified environments without applying actions or means other than those provided for this purpose for the software considered. [ISO 9126] See also portability testing.

agile testing: Testing practice for a project using agile methodologies, such as extreme programming (XP), treating development as the customer of testing and emphasizing the test-first design paradigm.

alpha testing: Simulated or actual operational testing by potential users/customers or an independent test team at the developers’ site, but outside the development organization. Alpha testing is often employed as a form of internal acceptance testing.

analyzability: The capability of the software product to be diagnosed for deficiencies or causes of failures in the software, or for the parts to be modified to be identified. [ISO 9126] See also maintainability testing.

anomaly: Any condition that deviates from expectation based on requirements specifications, design documents, user documents, standards, etc. or from someone’s perception or experience. Anomalies may be found during, but not limited to, reviewing, testing, analysis, compilation, or use of software products or applicable documentation. [IEEE 1044] See also defect, deviation, error, fault, failure, incident, problem.

attractiveness: The capability of the software product to be attractive to the user. [ISO 9126]

audit: An independent evaluation of software products or processes to ascertain compliance to standards, guidelines, specifications, and/or procedures based on objective criteria, including documents that specify:

(1) the form or content of the products to be produced

(2) the process by which the products shall be produced

(3) how compliance to standards or guidelines shall be measured. [IEEE 1028]

audit trail: A path by which the original input to a process (e.g. data) can be traced back through the process, taking the process output as a starting point. This facilitates defect analysis and allows a process audit to be carried out. [After TMap]

automated testware: Testware used in automated testing, such as tool scripts.

availability: The degree to which a component or system is operational and accessible when required for use. Often expressed as a percentage. [IEEE 610]

B

back-to-back testing: Testing in which two or more variants of a component or system are executed with the same inputs, the outputs compared, and analyzed in cases of discrepancies. [IEEE 610]

baseline: A specification or software product that has been formally reviewed or agreed upon, that thereafter serves as the basis for further development, and that can be changed only through a formal change control process. [After IEEE 610]

basic block: A sequence of one or more consecutive executable statements containing no branches.

basis test set: A set of test cases derived from the internal structure or specification to ensure that 100% of a specified coverage criterion is achieved.

behavior: The response of a component or system to a set of input values and preconditions.

benchmark test: (1) A standard against which measurements or comparisons can be made. (2) A test that is used to compare components or systems to each other or to a standard as in (1). [After IEEE 610]

bespoke software: Software developed specifically for a set of users or customers. The opposite is off-the-shelf software.

best practice: A superior method or innovative practice that contributes to the improved performance of an organization under given context, usually recognized as ‘best’ by other peer organizations.

beta testing: Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing in order to acquire feedback from the market.

big-bang testing: A type of integration testing in which software elements, hardware elements, or both are combined all at once into a component or an overall system, rather than in stages. [After IEEE 610] See also integration testing.

black box testing: Testing, either functional or non-functional, without reference to the internal structure of the component or system.

black box test design techniques: Documented procedure to derive and select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.

blocked test case: A test case that cannot be executed because the preconditions for its execution are not fulfilled.

bottom-up testing: An incremental approach to integration testing where the lowest level components are tested first, and then used to facilitate the testing of higher level components. This process is repeated until the component at the top of the hierarchy is tested. See also integration testing.

boundary value: An input value or output value which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range.

boundary value analysis: A black box test design technique in which test cases are designed based on boundary values.
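For illustration (my own example, not part of the definition): assuming a hypothetical input field that accepts quantities from 1 to 100, boundary value analysis picks the values at and just outside the edges of that range:

```python
# Illustrative boundary value analysis for a hypothetical field accepting 1-100.
def is_valid_quantity(n: int) -> bool:
    return 1 <= n <= 100

for n in [0, 1, 2, 99, 100, 101]:     # values at and around both boundaries
    print(n, "->", "accepted" if is_valid_quantity(n) else "rejected")
```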

boundary value coverage: The percentage of boundary values that have been exercised by a test suite.

branch: A basic block that can be selected for execution based on a program construct in which one of two or more alternative program paths are available, e.g. case, jump, go to, if-then-else.

branch coverage: The percentage of branches that have been exercised by a test suite. 100% branch coverage implies both 100% decision coverage and 100% statement coverage.

branch testing: A white box test design technique in which test cases are designed to execute branches.

business process-based testing: An approach to testing in which test cases are designed based on descriptions and/or knowledge of business processes.

C

Capability Maturity Model (CMM): A five level staged framework that describes the key elements of an effective software process. The Capability Maturity Model covers practices for planning, engineering and managing software development and maintenance. [CMM]

Capability Maturity Model Integration (CMMI): A framework that describes the key elements of an effective product development and maintenance process. The Capability Maturity Model Integration covers practices for planning, engineering and managing product development and maintenance. CMMI is the designated successor of the CMM. [CMMI]

capture/playback tool: A type of test execution tool where inputs are recorded during manual testing in order to generate automated test scripts that can be executed later (i.e. replayed). These tools are often used to support automated regression testing.

CASE: Acronym for Computer Aided Software Engineering.

CAST: Acronym for Computer Aided Software Testing. See also test automation.

cause-effect graph: A graphical representation of inputs and/or stimuli (causes) with their associated outputs (effects), which can be used to design test cases.

cause-effect graphing: A black box test design technique in which test cases are designed from cause-effect graphs. [BS 7925/2]

certification: The process of confirming that a component, system or person complies with its specified requirements, e.g. by passing an exam.

changeability: The capability of the software product to enable specified modifications to be implemented. [ISO 9126] See also maintainability.

classification tree method: A black box test design technique in which test cases, described by means of a classification tree, are designed to execute combinations of representatives of input and/or output domains. [Grochtmann]

code coverage: An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage or condition coverage.

co-existence: The capability of the software product to co-exist with other independent software in a common environment sharing common resources. [ISO 9126] See portability testing.

complexity: The degree to which a component or system has a design and/or internal structure that is difficult to understand, maintain and verify. See also cyclomatic complexity.

compliance: The capability of the software product to adhere to standards, conventions or regulations in laws and similar prescriptions. [ISO 9126]

compliance testing: The process of testing to determine the compliance of component or system.

component: A minimal software item that can be tested in isolation.

component integration testing: Testing performed to expose defects in the interfaces and interaction between integrated components.

component specification: A description of a component’s function in terms of its output values for specified input values under specified conditions, and required non-functional behavior (e.g. resource-utilization).

component testing: The testing of individual software components. [After IEEE 610]

compound condition: Two or more single conditions joined by means of a logical operator (AND, OR or XOR), e.g. ‘A>B AND C>1000’.

concurrency testing: Testing to determine how the occurrence of two or more activities within the same interval of time, achieved either by interleaving the activities or by simultaneous execution, is handled by the component or system. [After IEEE 610]

condition: A logical expression that can be evaluated as True or False, e.g. A>B. See also test condition.

condition coverage: The percentage of condition outcomes that have been exercised by a test suite. 100% condition coverage requires each single condition in every decision statement to be tested as True and False.

condition determination coverage: The percentage of all single condition outcomes that independently affect a decision outcome that have been exercised by a test case suite. 100% condition determination coverage implies 100% decision condition coverage.

condition determination testing: A white box test design technique in which test cases are designed to execute single condition outcomes that independently affect a decision outcome.

condition testing: A white box test design technique in which test cases are designed to execute condition outcomes.

condition outcome: The evaluation of a condition to True or False.

configuration: The composition of a component or system as defined by the number, nature, and interconnections of its constituent parts.

configuration auditing: The function to check on the contents of libraries of configuration items, e.g. for standards compliance. [IEEE 610]

configuration control: An element of configuration management, consisting of the evaluation, co-ordination, approval or disapproval, and implementation of changes to configuration items after formal establishment of their configuration identification. [IEEE 610]

configuration identification: An element of configuration management, consisting of selecting the configuration items for a system and recording their functional and physical characteristics in technical documentation. [IEEE 610]

configuration item: An aggregation of hardware, software or both, that is designated for configuration management and treated as a single entity in the configuration management process. [IEEE 610]

configuration management: A discipline applying technical and administrative direction and surveillance to: identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements. [IEEE 610]

consistency: The degree of uniformity, standardization, and freedom from contradiction among the documents or parts of a component or system. [IEEE 610]

control flow: An abstract representation of all possible sequences of events (paths) in the execution through a component or system.

conversion testing: Testing of software used to convert data from existing systems for use in replacement systems.

COTS: Acronym for Commercial Off-The-Shelf software.

coverage: The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.

coverage analysis: Measurement of achieved coverage to a specified coverage item during test execution referring to predetermined criteria to determine whether additional testing is required and if so, which test cases are needed.

coverage item: An entity or property used as a basis for test coverage, e.g. equivalence partitions or code statements.

coverage tool: A tool that provides objective measures of what structural elements, e.g. statements, branches have been exercised by the test suite.

cyclomatic complexity: The number of independent paths through a program. Cyclomatic complexity is defined as L - N + 2P, where L = the number of edges/links in the graph, N = the number of nodes in the graph, and P = the number of disconnected parts of the graph (e.g. a calling graph and a subroutine). [After McCabe]
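As a small worked example (my own illustration, not part of the definition): a single if/else statement gives a control flow graph with five nodes, five edges and one connected part, so its cyclomatic complexity is 2, matching the two independent paths through it:

```python
# Worked example of cyclomatic complexity = L - N + 2P for a single if/else:
# nodes: entry, decision, then-branch, else-branch, exit          -> N = 5
# edges: entry->decision, decision->then, decision->else,
#        then->exit, else->exit                                   -> L = 5
# the graph is one connected part                                 -> P = 1
L, N, P = 5, 5, 1
print("cyclomatic complexity:", L - N + 2 * P)   # prints 2
```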

D

data definition: An executable statement where a variable is assigned a value.

data driven testing: A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data driven testing is often used to support the application of test execution tools such as capture/playback tools. [Fewster and Graham] See also keyword driven testing.
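A minimal illustration of the idea (the table and the add() function below are made-up stand-ins for real test data and a real system under test):

```python
# Minimal data-driven sketch: one control loop executes every row of the table.
def add(a, b):
    return a + b

test_table = [            # (input a, input b, expected result)
    (1, 1, 2),
    (0, 5, 5),
    (-3, 3, 0),
]

for a, b, expected in test_table:
    actual = add(a, b)
    print(f"add({a}, {b}) = {actual}", "PASS" if actual == expected else "FAIL")
```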

data flow: An abstract representation of the sequence and possible changes of the state of data objects, where the state of an object is any of: creation, usage, or destruction. [Beizer]

data flow analysis: A form of static analysis based on the definition and usage of variables.

data flow coverage: The percentage of definition-use pairs that have been exercised by a test case suite.

data flow test: A white box test design technique in which test cases are designed to execute definition and use pairs of variables.

debugging: The process of finding, analyzing and removing the causes of failures in software.

debugging tool: A tool used by programmers to reproduce failures, investigate the state of programs and find the corresponding defect. Debuggers enable programmers to execute programs step by step, to halt a program at any program statement and to set and examine program variables.

decision: A program point at which the control flow has two or more alternative routes. A node with two or more links to separate branches.

decision condition coverage: The percentage of all condition outcomes and decision outcomes that have been exercised by a test suite. 100% decision condition coverage implies both 100% condition coverage and 100% decision coverage.

decision condition testing: A white box test design technique in which test cases are designed to execute condition outcomes and decision outcomes.

decision coverage: The percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies both 100% branch coverage and 100% statement coverage.

decision table: A table showing combinations of inputs and/or stimuli (causes) with their associated outputs and/or actions (effects), which can be used to design test cases.

decision table testing: A black box test design technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table. [Veenendaal]

decision testing: A white box test design technique in which test cases are designed to execute decision outcomes.

decision outcome: The result of a decision (which therefore determines the branches to be taken).

defect: A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.

defect density: The number of defects identified in a component or system divided by the size of the component or system (expressed in standard measurement terms, e.g. lines-of-code, number of classes or function points).

Defect Detection Percentage (DDP): the number of defects found by a test phase, divided by the number found by that test phase and any other means afterwards.

defect report: A document reporting on any flaw in a component or system that can cause the component or system to fail to perform its required function. [After IEEE 829]

defect management: The process of recognizing, investigating, taking action and disposing of defects. It involves recording defects, classifying them and identifying the impact. [After IEEE 1044]

defect masking: An occurrence in which one defect prevents the detection of another. [After IEEE 610]

definition-use pair: The association of the definition of a variable with the use of that variable. Variable uses include computational (e.g. multiplication) or to direct the execution of a path (“predicate” use).

deliverable: Any (work) product that must be delivered to someone other than the (work) product’s author.

design-based testing: An approach to testing in which test cases are designed based on the architecture and/or detailed design of a component or system (e.g. tests of interfaces between components or systems).

desk checking: Testing of software or specification by manual simulation of its execution.

development testing: Formal or informal testing conducted during the implementation of a component or system, usually in the development environment by developers. [After IEEE 610]

documentation testing: Testing the quality of the documentation, e.g. user guide or installation guide.

domain: The set from which valid input and/or output values can be selected.

driver: A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system. [After TMap]

dynamic analysis: The process of evaluating behavior, e.g. memory performance, CPU usage, of a system or component during execution. [After IEEE 610]

dynamic comparison: Comparison of actual and expected results, performed while the software is being executed, for example by a test execution tool.

dynamic testing: Testing that involves the execution of the software of a component or system.

E

efficiency: The capability of the software product to provide appropriate performance, relative to the amount of resources used under stated conditions. [ISO 9126]

efficiency testing: The process of testing to determine the efficiency of a software product.

elementary comparison testing: A black box test design technique in which test cases are designed to execute combinations of inputs using the concept of condition determination coverage. [TMap]

emulator: A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system. [IEEE 610] See also simulator.

entry criteria: the set of generic and specific conditions for permitting a process to go forward with a defined task, e.g. test phase. The purpose of entry criteria is to prevent a task from starting which would entail more (wasted) effort compared to the effort needed to remove the failed entry criteria. [Gilb and Graham]

entry point: The first executable statement within a component.

equivalence partition: A portion of an input or output domain for which the behavior of a component or system is assumed to be the same, based on the specification.

equivalence partition coverage: The percentage of equivalence partitions that have been exercised by a test suite.

equivalence partitioning: A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle test cases are designed to cover each partition at least once.

error: A human action that produces an incorrect result. [After IEEE 610]

error guessing: A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them.

error seeding: The process of intentionally adding known defects to those already in the component or system for the purpose of monitoring the rate of detection and removal, and estimating the number of remaining defects. [IEEE 610]

error tolerance: The ability of a system or component to continue normal operation despite the presence of erroneous inputs. [After IEEE 610].

exception handling: Behavior of a component or system in response to erroneous input, from either a human user or from another component or system, or to an internal failure.

executable statement: A statement which, when compiled, is translated into object code, and which will be executed procedurally when the program is running and may perform an action on data.

exercised: A program element is said to be exercised by a test case when the input value causes the execution of that element, such as a statement, decision, or other structural element.

exhaustive testing: A test approach in which the test suite comprises all combinations of input values and preconditions.

exit criteria: The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used by testing to report against and to plan when to stop testing. [After Gilb and Graham]

exit point: The last executable statement within a component.

expected result: The behavior predicted by the specification, or another source, of the component or system under specified conditions.

exploratory testing: Testing where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests. [Bach]

F

fail: A test is deemed to fail if its actual result does not match its expected result.

failure: Actual deviation of the component or system from its expected delivery, service or result. [After Fenton]

failure mode: The physical or functional manifestation of a failure. For example, a system in failure mode may be characterized by slow operation, incorrect outputs, or complete termination of execution.

Failure Mode and Effect Analysis (FMEA): A systematic approach to risk identification and analysis of identifying possible modes of failure and attempting to prevent their occurrence.

failure rate: The ratio of the number of failures of a given category to a given unit of measure, e.g. failures per unit of time, failures per number of transactions, failures per number of computer runs. [IEEE 610]

fault tolerance: The capability of the software product to maintain a specified level of performance in cases of software faults (defects) or of infringement of its specified interface. [ISO 9126] See also reliability.

fault tree analysis: A method used to analyze the causes of faults (defects).

feasible path: A path for which a set of input values and preconditions exists which causes it to be executed.

feature: An attribute of a component or system specified or implied by requirements documentation (for example reliability, usability or design constraints). [After IEEE 1008]

finite state machine: A computational model consisting of a finite number of states and transitions between those states, possibly with accompanying actions. [IEEE 610]

formal review: A review characterized by documented procedures and requirements, e.g. inspection.

frozen test basis: A test basis document that can only be amended by a formal change control process. See also baseline.

Function Point Analysis (FPA): Method aiming to measure the size of the functionality of an information system. The measurement is independent of the technology. This measurement may be used as a basis for the measurement of productivity, the estimation of the needed resources, and project control.

functional integration: An integration approach that combines the components or systems for the purpose of getting a basic functionality working early. See also integration testing.

functional requirement: A requirement that specifies a function that a component or system must perform. [IEEE 610]

functional test design technique: Documented procedure to derive and select test cases based on an analysis of the specification of the functionality of a component or system without reference to its internal structure. See also black box test design technique.

functional testing: Testing based on an analysis of the specification of the functionality of a component or system. See also black box testing.

functionality: The capability of the software product to provide functions which meet stated and implied needs when the software is used under specified conditions. [ISO 9126]

functionality testing: The process of testing to determine the functionality of a software product.

G

glass box testing: See white box testing.

H

heuristic evaluation: A static usability test technique to determine the compliance of a user interface with recognized usability principles (the so-called “heuristics”).

high level test case: A test case without concrete (implementation level) values for input data and expected results.

horizontal traceability: The tracing of requirements for a test level through the layers of test documentation (e.g. test plan, test design specification, test case specification and test procedure specification).

I

impact analysis: The assessment of change to the layers of development documentation, test documentation and components, in order to implement a given change to specified requirements.

incremental development model: A development life cycle where a project is broken into a series of increments, each of which delivers a portion of the functionality in the overall project requirements. The requirements are prioritized and delivered in priority order in the appropriate increment. In some (but not all) versions of this life cycle model, each subproject follows a ‘mini V-model’ with its own design, coding and testing phases.

incremental testing: Testing where components or systems are integrated and tested one or some at a time, until all the components or systems are integrated and tested.

incident: Any event occurring during testing that requires investigation. [After IEEE 1008]

incident management: The process of recognizing, investigating, taking action and disposing of incidents. It involves recording incidents, classifying them and identifying the impact. [After IEEE 1044]

incident management tool: A tool that facilitates the recording and status tracking of incidents found during testing. They often have workflow-oriented facilities to track and control the allocation, correction and re-testing of incidents and provide reporting facilities.

incident report: A document reporting on any event that occurs during the testing which requires investigation. [After IEEE 829]

independence: Separation of responsibilities, which encourages the accomplishment of objective testing. [After DO-178b]

infeasible path: A path that cannot be exercised by any set of possible input values.

informal review: A review not based on a formal (documented) procedure.

input: A variable (whether stored within a component or outside) that is read by a component.

input domain: The set from which valid input values can be selected. See also domain.

input value: An instance of an input. See also input.

inspection: A type of review that relies on visual examination of documents to detect defects, e.g. violations of development standards and non-conformance to higher level documentation. The most formal review technique and therefore always based on a documented procedure. [After IEEE 610, IEEE 1028]

installability: The capability of the software product to be installed in a specified environment [ISO 9126]. See also portability.

installability testing: The process of testing the installability of a software product. See also portability testing.

installation guide: Supplied instructions on any suitable media, which guides the installer through the installation process. This may be a manual guide, step-by-step procedure, installation wizard, or any other similar process description.

installation wizard: Supplied software on any suitable media, which leads the installer through the installation process. It normally runs the installation process, provides feedback on installation results, and prompts for options.

instrumentation: The insertion of additional code into the program in order to collect information about program behavior during execution.
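
For example (the probe counters and the instrumented function below are invented for this sketch), instrumentation amounts to inserting extra statements that record behavior while the program runs:

    # Original logic plus inserted probe statements that count how often
    # each branch executes; the 'hits' dictionary exists only for measurement.
    hits = {"positive": 0, "non_positive": 0}

    def classify(value):
        if value > 0:
            hits["positive"] += 1        # inserted probe
            return "positive"
        hits["non_positive"] += 1        # inserted probe
        return "non-positive"

    classify(5)
    classify(-1)
    print(hits)  # {'positive': 1, 'non_positive': 1}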

instrumenter: A software tool used to carry out instrumentation.

intake test: A special instance of a smoke test to decide if the component or system is ready for detailed and further testing. An intake test is typically carried out at the start of the test execution phase.

integration: The process of combining components or systems into larger assemblies.

integration testing: Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems. See also component integration testing, system integration testing.

interface testing: An integration test type that is concerned with testing the interfaces between components or systems.

interoperability: The capability of the software product to interact with one or more specified components or systems. [After ISO 9126] See also functionality.

interoperability testing: The process of testing to determine the interoperability of a software product. See also functionality testing.

invalid testing: Testing using input values that should be rejected by the component or system. See also error tolerance.
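
A minimal sketch of an invalid-testing case (the parse_age function and its contract are invented here): feed input that should be rejected and check that it really is rejected:

    def parse_age(text):
        # Hypothetical component: accepts whole numbers from 0 to 150.
        age = int(text)              # raises ValueError for non-numeric input
        if not 0 <= age <= 150:
            raise ValueError("age out of range")
        return age

    # Invalid-testing inputs: each of these should be rejected.
    for bad in ["-1", "200", "abc"]:
        try:
            parse_age(bad)
            print(f"FAIL: {bad!r} was accepted")
        except ValueError:
            print(f"PASS: {bad!r} rejected as expected")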

isolation testing: Testing of individual components in isolation from surrounding components, with surrounding components being simulated by stubs and drivers, if needed.

K

keyword driven testing: A scripting technique that uses data files to contain not only test data and expected results, but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts that are called by the control script for the test. See also data driven testing.
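
A minimal sketch of the idea (the keywords, table and supporting functions are invented; a real framework would read the table from a data file and drive the application instead of printing):

    # Keyword table as it might appear in a data file: keyword plus arguments.
    test_table = [
        ("open_page",   "/login"),
        ("enter_text",  "username", "alice"),
        ("enter_text",  "password", "secret"),
        ("click",       "submit"),
        ("verify_text", "Welcome, alice"),
    ]

    # Supporting scripts: one small function per keyword (stubbed with prints).
    def open_page(url):           print(f"open {url}")
    def enter_text(field, value): print(f"type {value!r} into {field}")
    def click(control):           print(f"click {control}")
    def verify_text(expected):    print(f"check page contains {expected!r}")

    KEYWORDS = {"open_page": open_page, "enter_text": enter_text,
                "click": click, "verify_text": verify_text}

    # Control script: interpret each row by dispatching to its keyword function.
    for keyword, *args in test_table:
        KEYWORDS[keyword](*args)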

L

LCSAJ: A Linear Code Sequence And Jump, consisting of the following three items (conventionally identified by line numbers in a source code listing): the start of the linear sequence of executable statements, the end of the linear sequence, and the target line to which control flow is transferred at the end of the linear sequence.
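
A hedged illustration (the loop and its line numbering exist only in this sketch):

    # 1: i = 0
    # 2: while i < limit:
    # 3:     process(i)
    # 4:     i = i + 1
    # 5: done()
    #
    # Some LCSAJs, written as (start of sequence, end of sequence, jump target):
    #   (1, 2, 5)  lines 1-2 execute, the loop body is skipped, control jumps to 5
    #   (1, 4, 2)  lines 1-4 execute, then control jumps back to the loop test on 2
    #   (2, 4, 2)  a further iteration of the loop
    #   (2, 2, 5)  the loop test fails after an iteration and control jumps to 5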

LCSAJ coverage: The percentage of LCSAJs of a component that have been exercised by a test suite. 100% LCSAJ coverage implies 100% decision coverage.

LCSAJ testing: A white box test design technique in which test cases are designed to execute LCSAJs.

learnability: The capability of the software product to enable the user to learn its application. [ISO 9126] See also usability.

load test: A test type concerned with measuring the behavior of a component or system with increasing load, e.g. number of parallel users and/or numbers of transactions to determine what load can be handled by the component or system.

low level test case: A test case with concrete (implementation level) values for input data and expected results.

M

maintenance: Modification of a software product after delivery to correct defects, to improve performance or other attributes, or to adapt the product to a modified environment. [IEEE 1219]

maintenance testing: Testing the changes to an operational system or the impact of a changed environment to an operational system.

maintainability: The ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment. [ISO 9126]

maintainability testing: The process of testing to determine the maintainability of a software product.

management review: A systematic evaluation of a software acquisition, supply, development, operation, or maintenance process, performed by or on behalf of management, that monitors progress, determines the status of plans and schedules, confirms requirements and their system allocation, or evaluates the effectiveness of management approaches to achieve fitness for purpose. [After IEEE 610, IEEE 1028]

maturity: (1) The capability of an organization with respect to the effectiveness and efficiency of its processes and work practices. See also Capability Maturity Model, Test Maturity Model. (2) The capability of the software product to avoid failure as a result of defects in the software. [ISO 9126] See also reliability.

measure: The number or category assigned to an attribute of an entity by making a measurement [ISO 14598].

measurement: The process of assigning a number or category to an entity to describe an attribute of that entity. [ISO 14598]

measurement scale: A scale that constrains the type of data analysis that can be performed on it. [ISO 14598]

memory leak: A defect in a program’s dynamic store allocation logic that causes it to fail to reclaim memory after it has finished using it, eventually causing the program to fail due to lack of memory.
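
A hedged Python sketch of the pattern (the cache and function are invented): every call adds an entry that is never removed, so a long-running process eventually runs out of memory:

    # Illustrative leak: results are stored under keys that are never reused
    # and never evicted, so the cache grows without bound.
    _cache = {}

    def render_report(request_id, rows):
        result = f"report {request_id}: {len(rows)} rows"
        _cache[request_id] = result   # entry added, never cleaned up
        return result

    for i in range(3):
        render_report(i, ["row"] * 10)
    print(len(_cache))  # keeps growing for the lifetime of the process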

metric: A measurement scale and the method used for measurement. [ISO 14598]

milestone: A point in time in a project at which defined (intermediate) deliverables and results should be ready.

moderator: The leader and main person responsible for an inspection or other review process.

monitor: A software tool or hardware device that runs concurrently with the component or system under test and supervises, records and/or analyses the behavior of the component or system. [After IEEE 610]

multiple condition coverage: The percentage of combinations of all single condition outcomes within one statement that have been exercised by a test suite. 100% multiple condition coverage implies 100% condition determination coverage.
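
For a statement with two single conditions, 100% multiple condition coverage requires all four outcome combinations, as in this sketch (the function is invented):

    def can_checkout(logged_in, cart_has_items):
        # One decision containing two single conditions.
        if logged_in and cart_has_items:
            return True
        return False

    # All 2^2 combinations of single-condition outcomes are needed for
    # 100% multiple condition coverage of the 'if' statement above.
    for case in [(True, True), (True, False), (False, True), (False, False)]:
        print(case, can_checkout(*case))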

multiple condition testing: A white box test design technique in which test cases are designed to execute combinations of single condition outcomes (within one statement).

mutation analysis: A method to determine test suite thoroughness by measuring the extent to which a test suite can discriminate the program from slight variants (mutants) of the program.
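
A hedged sketch of the idea (function and mutant invented): a mutant changes '>' to '>='; a test suite that produces different results for the original and the mutant is said to kill that mutant, which is evidence of the suite's thoroughness:

    def is_adult(age):           # original program
        return age > 17

    def is_adult_mutant(age):    # slight variant: '>' mutated to '>='
        return age >= 17

    # The boundary value 17 kills this mutant: original and mutant disagree,
    # so a test suite containing this input detects the change.
    for age in [16, 17, 18]:
        print(age, is_adult(age), is_adult_mutant(age))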

N

N-switch coverage: The percentage of sequences of N+1 transitions that have been exercised by a test suite. [Chow]

N-switch testing: A form of state transition testing in which test cases are designed to execute all valid sequences of N+1 transitions. [Chow] See also state transition testing.

negative testing: Tests aimed at showing that a component or system does not work. Negative testing is related to the testers’ attitude rather than a specific test approach or test design technique. [After Beizer].

non-conformity: Non fulfillment of a specified requirement. [ISO 9000]

non-functional requirement: A requirement that does not relate to functionality, but to attributes such as reliability, efficiency, usability, maintainability and portability.

non-functional testing: Testing the attributes of a component or system that do not relate to functionality, e.g. reliability, efficiency, usability, maintainability and portability.

non-functional test design techniques: Methods used to design or select tests for nonfunctional testing.

O

off-the-shelf software: A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.

operability: The capability of the software product to enable the user to operate and control it. [ISO 9126] See also usability.

operational environment: Hardware and software products installed at users’ or customers’ sites where the component or system under test will be used. The software may include operating systems, database management systems, and other applications.

operational profile testing: Statistical testing using a model of system operations (short duration tasks) and their probability of typical use. [Musa]

operational testing: Testing conducted to evaluate a component or system in its operational environment. [IEEE 610]

output: A variable (whether stored within a component or outside) that is written by a component.

output domain: The set from which valid output values can be selected. See also domain.

output value: An instance of an output. See also output.

P

pair programming: A software development approach whereby lines of code (production and/or test) of a component are written by two programmers sitting at a single computer. This implicitly means ongoing real-time code reviews are performed.

pair testing: Two testers work together to find defects. Typically, they share one computer and trade control of it while testing.

pass: A test is deemed to pass if its actual result matches its expected result.

pass/fail criteria: Decision rules used to determine whether a test item (function) or feature has passed or failed a test. [IEEE 829]

path: A sequence of events, e.g. executable statements, of a component or system from an entry point to an exit point.

path coverage: The percentage of paths that have been exercised by a test suite. 100% path coverage implies 100% LCSAJ coverage.

path sensitizing: Choosing a set of input values to force the execution of a given path.

path testing: A white box test design technique in which test cases are designed to execute paths.

performance: The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate. [After IEEE 610] See efficiency.

performance indicator: A high level metric of effectiveness and/or efficiency used to guide and control progressive development, e.g. Defect Detection Percentage (DDP) for testing. [CMMI]

performance testing: The process of testing to determine the performance of a software product. See efficiency testing.

performance testing tool: A tool to support performance testing and that usually has two main facilities: load generation and test transaction measurement. Load generation can simulate either multiple users or high volumes of input data. During execution, response time measurements are taken from selected transactions and these are logged. Performance testing tools normally provide reports based on test logs and graphs of load against response times.

phase test plan: A test plan that typically addresses one test level.

portability: The ease with which the software product can be transferred from one hardware or software environment to another. [ISO 9126]

portability testing: The process of testing to determine the portability of a software product.

postcondition: Environmental and state conditions that must be fulfilled after the execution of a test or test procedure.

post-execution comparison: Comparison of actual and expected results, performed after the software has finished running.

precondition: Environmental and state conditions that must be fulfilled before the component or system can be executed with a particular test or test procedure.

priority: The level of (business) importance assigned to an item, e.g. defect.

process cycle test: A black box test design technique in which test cases are designed to execute business procedures and processes. [TMap]

process: A set of interrelated activities, which transform inputs into outputs. [ISO 12207]

project: A project is a unique set of coordinated and controlled activities with start and finish dates undertaken to achieve an objective conforming to specific requirements, including the constraints of time, cost and resources. [ISO 9000]

project test plan: A test plan that typically addresses multiple test levels.

pseudo-random: A series which appears to be random but is in fact generated according to some prearranged sequence.
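
In Python, for instance, seeding the generator makes the apparently random series fully reproducible, which is what makes pseudo-random data usable in repeatable tests:

    import random

    gen_a = random.Random(42)    # same seed ...
    gen_b = random.Random(42)    # ... same prearranged sequence

    series_a = [gen_a.randint(0, 9) for _ in range(5)]
    series_b = [gen_b.randint(0, 9) for _ in range(5)]

    print(series_a)              # looks random
    print(series_a == series_b)  # True: the series is determined by the seed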

Q

quality: The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations. [After IEEE 610]

quality assurance: Part of quality management focused on providing confidence that quality requirements will be fulfilled. [ISO 9000]

quality attribute: A feature or characteristic that affects an item’s quality. [IEEE 610]

quality management: Coordinated activities to direct and control an organization with regard to quality. Direction and control with regard to quality generally includes the establishment of the quality policy and quality objectives, quality planning, quality control, quality assurance and quality improvement. [ISO 9000]

R

random testing: A black box test design technique where test cases are selected, possibly using a pseudo-random generation algorithm, to match an operational profile. This technique can be used for testing non-functional attributes such as reliability and performance.

recoverability: The capability of the software product to re-establish a specified level of performance and recover the data directly affected in case of failure. [ISO 9126] See also reliability.

recoverability testing: The process of testing to determine the recoverability of a software product. See also reliability testing.

regression testing: Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed.

release note: A document identifying test items, their configuration, current status and other delivery information delivered by development to testing, and possibly other stakeholders, at the start of a test execution phase. [After IEEE 829]

reliability: The ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations. [ISO 9126]

reliability testing: The process of testing to determine the reliability of a software product.

replaceability: The capability of the software product to be used in place of another specified software product for the same purpose in the same environment. [ISO 9126] See also portability.

requirement: A condition or capability needed by a user to solve a problem or achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document. [After IEEE 610]

requirements-based testing: An approach to testing in which test cases are designed based on test objectives and test conditions derived from requirements, e.g. tests that exercise specific functions or probe non-functional attributes such as reliability or usability.

requirements management tool: A tool that supports the recording of requirements, requirements attributes (e.g. priority, knowledge responsible) and annotation, and facilitates traceability through layers of requirements and requirements change management. Some requirements management tools also provide facilities for static analysis, such as consistency checking and violations to pre-defined requirements rules.

requirements phase: The period of time in the software life cycle during which the requirements for a software product are defined and documented. [IEEE 610]

resource utilization: The capability of the software product to use appropriate amounts and types of resources, for example the amounts of main and secondary memory used by the program and the sizes of required temporary or overflow files, when the software performs its function under stated conditions. [After ISO 9126] See also efficiency.

resource utilization testing: The process of testing to determine the resource utilization of a software product.

result: The consequence/outcome of the execution of a test. It includes outputs to screens, changes to data, reports, and communication messages sent out. See also actual result, expected result.

resumption criteria: The testing activities that must be repeated when testing is re-started after a suspension. [After IEEE 829]

re-testing: Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.

review: An evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements. Examples include management review, informal review, technical review, inspection, and walkthrough. [After IEEE 1028]

reviewer: The person involved in the review who shall identify and describe anomalies in the product or project under review. Reviewers can be chosen to represent different viewpoints and roles in the review process.

risk: A factor that could result in future negative consequences; usually expressed as impact and likelihood.

risk analysis: The process of assessing identified risks to estimate their impact and probability of occurrence (likelihood).

risk-based testing: Testing oriented towards exploring and providing information about product risks. [After Gerrard]

risk control: The process through which decisions are reached and protective measures are implemented for reducing risks to, or maintaining risks within, specified levels.

risk identification: The process of identifying risks using techniques such as brainstorming, checklists and failure history.

risk management: Systematic application of procedures and practices to the tasks of identifying, analyzing, prioritizing, and controlling risk.

robustness: The degree to which a component or system can function correctly in the presence of invalid inputs or stressful environmental conditions. [IEEE 610] See also error tolerance, fault tolerance.

root cause: An underlying factor that caused a non-conformance and possibly should be permanently eliminated through process improvement.

S

safety: The capability of the software product to achieve acceptable levels of risk of harm to people, business, software, property or the environment in a specified context of use. [ISO 9126]

safety testing: The process of testing to determine the safety of a software product.

scalability: The capability of the software product to be upgraded to accommodate increased loads. [After Gerrard]

scalability testing: Testing to determine the scalability of the software product.

scribe: The person who has to record each defect mentioned and any suggestions for improvement during a review meeting, on a logging form. The scribe has to ensure that the logging form is readable and understandable.

scripting language: A programming language in which executable test scripts are written, used by a test execution tool (e.g. a capture/replay tool).

security: Attributes of software products that bear on its ability to prevent unauthorized access, whether accidental or deliberate, to programs and data. [ISO 9126]

security testing: Testing to determine the security of the software product.

severity: The degree of impact that a defect has on the development or operation of a component or system. [After IEEE 610]

simulation: The representation of selected behavioral characteristics of one physical or abstract system by another system. [ISO 2382/1]

simulator: A device, computer program or system used during testing, which behaves or operates like a given system when provided with a set of controlled inputs. [After IEEE 610, DO178b] See also emulator.

smoke test: A subset of all defined/planned test cases that cover the main functionality of a component or system, to ascertain that the most crucial functions of a program work, without bothering with finer details. A daily build and smoke test is among industry best practices. See also intake test.

software quality: The totality of functionality and features of a software product that bear on its ability to satisfy stated or implied needs. [After ISO 9126]

specification: A document that specifies, ideally in a complete, precise and verifiable manner, the requirements, design, behavior, or other characteristics of a component or system, and, often, the procedures for determining whether these provisions have been satisfied. [After IEEE 610]

specification-based test design technique: See black box test design technique.

specified input: An input for which the specification predicts a result.

stability: The capability of the software product to avoid unexpected effects from modifications in the software. [ISO 9126] See also maintainability.

state diagram: A diagram that depicts the states that a component or system can assume, and shows the events or circumstances that cause and/or result from a change from one state to another. [IEEE 610]

state table: A grid showing the resulting transitions for each state combined with each possible event, showing both valid and invalid transitions.

state transition: A transition between two states of a component or system.

state transition testing: A black box test design technique in which test cases are designed to execute valid and invalid state transitions. See also N-switch testing.

statement: An entity in a programming language, which is typically the smallest indivisible unit of execution.

statement coverage: The percentage of executable statements that have been exercised by a test suite.
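
A small illustration (the function is invented): the single test below exercises three of the four executable statements, giving 75% statement coverage; a second test with is_member=True would raise it to 100%:

    def apply_discount(price, is_member):
        discount = 0               # statement 1
        if is_member:              # statement 2
            discount = 10          # statement 3 (only exercised for members)
        return price - discount    # statement 4

    # One test case: statements 1, 2 and 4 are exercised -> 3/4 = 75%.
    print(apply_discount(100, False))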

statement testing: A white box test design technique in which test cases are designed to execute statements.

static analysis: Analysis of software artifacts, e.g. requirements or code, carried out without execution of these software artifacts.

static analyzer: A tool that carries out static analysis.

static code analysis: Analysis of program source code carried out without execution of that software.

static code analyzer: A tool that carries out static code analysis. The tool checks source code for certain properties such as conformance to coding standards, quality metrics or data flow anomalies.

static testing: Testing of a component or system at specification or implementation level without execution of that software, e.g. reviews or static code analysis.

statistical testing: A test design technique in which a model of the statistical distribution of the input is used to construct representative test cases. See also operational profile testing.

status accounting: An element of configuration management, consisting of the recording and reporting of information needed to manage a configuration effectively. This information includes a listing of the approved configuration identification, the status of proposed changes to the configuration, and the implementation status of the approved changes. [IEEE 610]

stress testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements. [IEEE 610]

structural coverage: Coverage measures based on the internal structure of the component.

structural test design technique: See white box test design technique.

stub: A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component. [After IEEE 610]
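
A hedged sketch (the payment gateway and order function are invented): the stub replaces the called component so the calling component can be tested without the real dependency:

    class PaymentGatewayStub:
        # Skeletal replacement for the real payment gateway component.
        def charge(self, amount):
            return {"status": "approved", "amount": amount}  # canned response

    def place_order(amount, gateway):
        # Component under test: depends on a gateway that it calls.
        result = gateway.charge(amount)
        return "confirmed" if result["status"] == "approved" else "rejected"

    print(place_order(25, PaymentGatewayStub()))  # confirmed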

subpath: A sequence of executable statements within a component.

suspension criteria: The criteria used to (temporarily) stop all or a portion of the testing activities on the test items. [After IEEE 829]

suitability: The capability of the software product to provide an appropriate set of functions for specified tasks and user objectives. [ISO 9126] See also functionality.

Software Usability Measurement Inventory (SUMI): A questionnaire based usability test technique to evaluate the usability, e.g. user-satisfaction, of a component or system. [Veenendaal]

syntax testing: A black box test design technique in which test cases are designed based upon the definition of the input domain and/or output domain.

system: A collection of components organized to accomplish a specific function or set of functions. [IEEE 610]

system integration testing: Testing the integration of systems and packages; testing interfaces to external organizations (e.g. Electronic Data Interchange, Internet).

system testing: The process of testing an integrated system to verify that it meets specified requirements. [Hetzel]

T

technical review: A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken. A technical review is also known as a peer review. [Gilb and Graham, IEEE 1028]

test approach: The implementation of the test strategy for a specific project. It typically includes the decisions made on the basis of the (test) project’s goal and the risk assessment carried out, the starting points regarding the test process, the test design techniques to be applied, exit criteria and test types to be performed.

test automation: The use of software to perform or support test activities, e.g. test management, test design, test execution and results checking.

test basis: All documents from which the requirements of a component or system can be inferred. The documentation on which the test cases are based. If a document can be amended only by way of formal amendment procedure, then the test basis is called a frozen test basis. [After TMap]

test case: A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement. [After IEEE 610]

test case specification: A document specifying a set of test cases (objective, inputs, test actions, expected results, and execution preconditions) for a test item. [After IEEE 829]

test charter: A statement of test objectives, and possibly test ideas. Test charters are used, among other places, in exploratory testing. See also exploratory testing.

test comparator: A test tool to perform automated test comparison.

test comparison: The process of identifying differences between the actual results produced by the component or system under test and the expected results for a test. Test comparison can be performed during test execution (dynamic comparison) or after test execution.

test condition: An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, quality attribute, or structural element.

test data: Data that exists (for example, in a database) before a test is executed, and that affects or is affected by the component or system under test.

test data preparation tool: A type of test tool that enables data to be selected from existing databases or created, generated, manipulated and edited for use in testing.

test design specification: A document specifying the test conditions (coverage items) for a test item, the detailed test approach and identifying the associated high level test cases. [After IEEE 829]

test design tool: A tool that supports the test design activity by generating test inputs from a specification that may be held in a CASE tool repository, e.g. a requirements management tool, or from specified test conditions held in the tool itself.

test design technique: A method used to derive or select test cases.

test environment: An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test. [After IEEE 610]

test evaluation report: A document produced at the end of the test process summarizing all testing activities and results. It also contains an evaluation of the test process and lessons learned.

test execution: The process of running a test on the component or system under test, producing actual result(s).

test execution automation: The use of software, e.g. capture/playback tools, to control the execution of tests, the comparison of actual results to expected results, the setting up of test preconditions, and other test control and reporting functions.

test execution phase: The period of time in a software development life cycle during which the components of a software product are executed, and the software product is evaluated to determine whether or not requirements have been satisfied. [IEEE 610]

test execution schedule: A scheme for the execution of test procedures. The test procedures are included in the test execution schedule in their context and in the order in which they are to be executed.

test execution technique: The method used to perform the actual test execution, either manually or automated.

test execution tool: A type of test tool that is able to execute other software using an automated test script, e.g. capture/playback. [Fewster and Graham]

test harness: A test environment comprised of stubs and drivers needed to conduct a test.

test infrastructure: The organizational artifacts needed to perform testing, consisting of test environments, test tools, office environment and procedures.

test item: The individual element to be tested. There usually is one test object and many test items. See also test object.

test level: A group of test activities that are organized and managed together. A test level is linked to the responsibilities in a project. Examples of test levels are component test, integration test, system test and acceptance test. [After TMap]

test log: A chronological record of relevant details about the execution of tests. [IEEE 829]

test logging: The process of recording information about tests executed into a test log.

test manager: The person responsible for testing and evaluating a test object; the individual who directs, controls, administers, plans and regulates the evaluation of a test object.

test management: The planning, estimating, monitoring and control of test activities, typically carried out by a test manager.

Test Maturity Model (TMM): A five level staged framework for test process improvement, related to the Capability Maturity Model (CMM) that describes the key elements of an effective test process.

Test Process Improvement (TPI): A continuous framework for test process improvement that describes the key elements of an effective test process, especially targeted at system testing and acceptance testing.

test object: The component or system to be tested. See also test item.

test objective: A reason or purpose for designing and executing a test.

test oracle: A source to determine expected results to compare with the actual result of the software under test. An oracle may be the existing system (for a benchmark), a user manual, or an individual’s specialized knowledge, but should not be the code. [After Adrion]

test performance indicator: A metric, in general high level, indicating to what extent a certain target value or criterion is met. Often related to test process improvement objectives, e.g. Defect Detection Percentage (DDP).

test phase: A distinct set of test activities collected into a manageable phase of a project, e.g. the execution activities of a test level. [After Gerrard]

test plan: A document describing the scope, approach, resources and schedule of intended test activities. It identifies, amongst others, test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques and test measurement techniques to be used, and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process. [After IEEE 829]

test planning: The activity of establishing or updating a test plan.

test policy: A high level document describing the principles, approach and major objectives of the organization regarding testing.

test point analysis (TPA): A formula based test estimation method based on function point analysis. [TMap]

test procedure: See test procedure specification.

test procedure specification: A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test script. [After IEEE 829]

test process: The fundamental test process comprises planning, specification, execution, recording and checking for completion. [BS 7925/2]

test repeatability: An attribute of a test indicating whether the same results are produced each time the test is executed.

test run: Execution of a test on a specific version of the test object.

test script: Commonly used to refer to a test procedure specification, especially an automated one.

test specification: A document that consists of a test design specification, test case specification and/or test procedure specification.

test strategy: A high-level document defining the test levels to be performed and the testing within those levels for a programme (one or more projects).

test suite: A set of several test cases for a component or system under test, where the post condition of one test is often used as the precondition for the next one.

test summary report: A document summarizing testing activities and results. It also contains an evaluation of the corresponding test items against exit criteria. [After IEEE 829]

test target: A set of exit criteria.

test tool: A software product that supports one or more test activities, such as planning and control, specification, building initial files and data, test execution and test analysis. [TMap] See also CAST.

test type: A group of test activities aimed at testing a component or system regarding one or more interrelated quality attributes. A test type is focused on a specific test objective, i.e. reliability test, usability test, regression test etc., and may take place on one or more test levels or test phases. [After TMap]

testability: The capability of the software product to enable modified software to be tested. [ISO 9126] See also maintainability.

testability review: A detailed check of the test basis to determine whether the test basis is at an adequate quality level to act as an input document for the test process. [After TMap]

testable requirements: The degree to which a requirement is stated in terms that permit establishment of test designs (and subsequently test cases) and execution of tests to determine whether the requirements have been met. [After IEEE 610]

tester: A technically skilled professional who is involved in the testing of a component or system.

testing: The process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.

testware: Artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing. [After Fewster and Graham]

thread testing: A version of component integration testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by levels of a hierarchy.

traceability: The ability to identify related items in documentation and software, such as requirements with associated tests. See also horizontal traceability, vertical traceability.

top-down testing: An incremental approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.

U

understandability: The capability of the software product to enable the user to understand whether the software is suitable, and how it can be used for particular tasks and conditions of use. [ISO 9126] See also usability.

unreachable code: Code that cannot be reached and therefore is impossible to execute.

usability: The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions. [ISO 9126]

usability testing: Testing to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions. [After ISO 9126]

use case testing: A black box test design technique in which test cases are designed to execute user scenarios.

user test: A test whereby real-life users are involved to evaluate the usability of a component or system.

V

V-model: A framework to describe the software development life cycle activities from requirements specification to maintenance. The V-model illustrates how testing activities can be integrated into each phase of the software development life cycle.

validation: Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled. [ISO 9000]

variable: An element of storage in a computer that is accessible by a software program by referring to it by a name.

verification: Confirmation by examination and through the provision of objective evidence that specified requirements have been fulfilled. [ISO 9000]

vertical traceability: The tracing of requirements through the layers of development documentation to components.

volume testing: Testing where the system is subjected to large volumes of data. See also resource-utilization testing.

W

walkthrough: A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content. [Freedman and Weinberg, IEEE 1028]

white box test design technique: Documented procedure to derive and select test cases based on an analysis of the internal structure of a component or system.

white box testing: Testing based on an analysis of the internal structure of the component or system.

Wide Band Delphi: An expert based test estimation technique that aims at making an accurate estimation using the collective wisdom of the team members.