
Certified Tester
Foundation Level Syllabus

International Software Testing Qualifications Board

2. Testing Throughout the Software Life Cycle (K2)

115 minutes

Learning Objectives for Testing Throughout the Software Life Cycle

The objectives identify what you will be able to do following the completion of each module.

2.1 Software Development Models (K2)

LO-2.1.1 Explain the relationship between development, test activities and work products in the development life cycle, by giving examples using project and product types (K2)

LO-2.1.2 Recognize the fact that software development models must be adapted to the context of project and product characteristics (K1)

LO-2.1.3 Recall characteristics of good testing that are applicable to any life cycle model (K1)

2.2 Test Levels (K2)

LO-2.2.1 Compare the different levels of testing: major objectives, typical objects of testing, typical targets of testing (e.g., functional or structural) and related work products, people who test, types of defects and failures to be identified (K2)

2.3 Test Types (K2)

LO-2.3.1 Compare four software test types (functional, non-functional, structural and change-related) by example (K2)

LO-2.3.2 Recognize that functional and structural tests occur at any test level (K1)

LO-2.3.3 Identify and describe non-functional test types based on non-functional requirements (K2)

LO-2.3.4 Identify and describe test types based on the analysis of a software system’s structure or architecture (K2)

LO-2.3.5 Describe the purpose of confirmation testing and regression testing (K2)

2.4 Maintenance Testing (K2)

LO-2.4.1 Compare maintenance testing (testing an existing system) to testing a new application with respect to test types, triggers for testing and amount of testing (K2)

LO-2.4.2 Recognize indicators for maintenance testing (modification, migration and retirement) (K1)

LO-2.4.3 Describe the role of regression testing and impact analysis in maintenance (K2)


2.1 Software Development Models (K2)

20 minutes

Terms

Commercial Off-The-Shelf (COTS), iterative-incremental development model, validation, verification, V-model

Background

Testing does not exist in isolation; test activities are related to software development activities. Different development life cycle models need different approaches to testing.

2.1.1 V-model (Sequential Development Model) (K2)

Although variants of the V-model exist, a common type of V-model uses four test levels, corresponding to the four development levels.

The four levels used in this syllabus are:
o Component (unit) testing
o Integration testing
o System testing
o Acceptance testing

In practice, a V-model may have more, fewer or different levels of development and testing, depending on the project and the software product. For example, there may be component integration testing after component testing, and system integration testing after system testing.

Software work products (such as business scenarios or use cases, requirements specifications, design documents and code) produced during development are often the basis of testing in one or more test levels. References for generic work products include Capability Maturity Model Integration (CMMI) or ‘Software life cycle processes’ (IEEE/IEC 12207). Verification and validation (and early test design) can be carried out during the development of the software work products.

2.1.2 Iterative-incremental Development Models (K2)

Iterative-incremental development is the process of establishing requirements, designing, building and testing a system in a series of short development cycles. Examples are: prototyping, Rapid Application Development (RAD), Rational Unified Process (RUP) and agile development models. A system that is produced using these models may be tested at several test levels during each iteration. An increment, added to others developed previously, forms a growing partial system, which should also be tested. Regression testing is increasingly important on all iterations after the first one. Verification and validation can be carried out on each increment.

2.1.3 Testing within a Life Cycle Model (K2)

In any life cycle model, there are several characteristics of good testing:
o For every development activity there is a corresponding testing activity
o Each test level has test objectives specific to that level
o The analysis and design of tests for a given test level should begin during the corresponding development activity
o Testers should be involved in reviewing documents as soon as drafts are available in the development life cycle

Test levels can be combined or reorganized depending on the nature of the project or the system architecture. For example, for the integration of a Commercial Off-The-Shelf (COTS) software product into a system, the purchaser may perform integration testing at the system level (e.g., integration to the infrastructure and other systems, or system deployment) and acceptance testing (functional and/or non-functional, and user and/or operational testing).


2.2 Test Levels (K2)

40 minutes

Terms

Alpha testing, beta testing, component testing, driver, field testing, functional requirement, integration, integration testing, non-functional requirement, robustness testing, stub, system testing, test environment, test level, test-driven development, user acceptance testing

Background

For each of the test levels, the following can be identified: the generic objectives, the work product(s) being referenced for deriving test cases (i.e., the test basis), the test object (i.e., what is being tested), typical defects and failures to be found, test harness requirements and tool support, and specific approaches and responsibilities.

Testing a system’s configuration data shall be considered during test planning.

2.2.1 Component Testing (K2)

Test basis:
o Component requirements
o Detailed design
o Code

Typical test objects:
o Components
o Programs
o Data conversion/migration programs
o Database modules

Component testing (also known as unit, module or program testing) searches for defects in, and verifies the functioning of, software modules, programs, objects, classes, etc., that are separately testable. It may be done in isolation from the rest of the system, depending on the context of the development life cycle and the system. Stubs, drivers and simulators may be used.
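To make this concrete, here is a minimal illustrative sketch (all names are hypothetical, not from the syllabus) of a component test that uses a stub so the component can be tested in isolation:

import unittest
from unittest.mock import Mock

# Hypothetical component under test: computes an order total via a
# collaborator that would normally call a live pricing service.
def calculate_total(items, price_service):
    return sum(price_service.price_of(item) for item in items)

class CalculateTotalTest(unittest.TestCase):
    def test_total_uses_prices_from_service(self):
        # The Mock acts as a stub standing in for the real service,
        # isolating the component from the rest of the system.
        price_service = Mock()
        price_service.price_of.side_effect = {"a": 200, "b": 150}.get
        self.assertEqual(calculate_total(["a", "b"], price_service), 350)

if __name__ == "__main__":
    unittest.main()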

Component testing may include testing of functionality and specific non-functional characteristics, such as resource-behavior (e.g., searching for memory leaks) or robustness testing, as well as structural testing (e.g., decision coverage). Test cases are derived from work products such as a specification of the component, the software design or the data model.

Typically, component testing occurs with access to the code being tested and with the support of a development environment, such as a unit test framework or debugging tool. In practice, component testing usually involves the programmer who wrote the code. Defects are typically fixed as soon as they are found, without formally managing these defects.

One approach to component testing is to prepare and automate test cases before coding. This is called a test-first approach or test-driven development. This approach is highly iterative and is based on cycles of developing test cases, then building and integrating small pieces of code, and executing the component tests, correcting any issues and iterating until they pass.
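A minimal sketch of one such cycle (illustrative only; the leap-year rule is a stand-in example): the test is prepared and run first, then just enough code is built to make it pass.

import unittest

# Step 2 of the cycle: code written after the test below, just enough
# to make it pass.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Step 1 of the cycle: the automated test cases, prepared before coding.
class LeapYearTest(unittest.TestCase):
    def test_leap_year_rules(self):
        self.assertTrue(is_leap_year(2024))
        self.assertFalse(is_leap_year(1900))  # century years are usually not leap
        self.assertTrue(is_leap_year(2000))   # unless divisible by 400

if __name__ == "__main__":
    unittest.main()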


2.2.2 Integration Testing (K2)

Test basis:
o Software and system design
o Architecture
o Workflows
o Use cases

Typical test objects:
o Subsystems
o Database implementation
o Infrastructure
o Interfaces
o System configuration and configuration data

Integration testing tests interfaces between components, interactions with different parts of a system, such as the operating system, file system and hardware, and interfaces between systems.

There may be more than one level of integration testing and it may be carried out on test objects of varying size as follows:

1. Component integration testing tests the interactions between software components and is done after component testing
2. System integration testing tests the interactions between different systems or between hardware and software and may be done after system testing. In this case, the developing organization may control only one side of the interface. This might be considered as a risk.

Business processes implemented as workflows may involve a series of systems. Cross-platform issues may be significant.

The greater the scope of integration, the more difficult it becomes to isolate defects to a specific component or system, which may lead to increased risk and additional time for troubleshooting.

Systematic integration strategies may be based on the system architecture (such as top-down and bottom-up), functional tasks, transaction processing sequences, or some other aspect of the system or components. In order to ease fault isolation and detect defects early, integration should normally be incremental rather than “big bang”.

Testing of specific non-functional characteristics (e.g., performance) may be included in integration testing as well as functional testing.

At each stage of integration, testers concentrate solely on the integration itself. For example, if they are integrating module A with module B they are interested in testing the communication between the modules, not the functionality of the individual module, as that was done during component testing. Both functional and structural approaches may be used.
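A sketch of such a component integration test (hypothetical modules; each is assumed to be covered by its own component tests already):

import unittest

# Two hypothetical modules, each already tested separately.
class Parser:
    def parse(self, text):
        return [line.strip() for line in text.splitlines() if line.strip()]

class Loader:
    def __init__(self):
        self.records = []

    def load(self, rows):
        self.records.extend(rows)
        return len(rows)

class ParserLoaderIntegrationTest(unittest.TestCase):
    def test_parser_output_is_accepted_by_loader(self):
        # The test targets the communication between the modules:
        # the parser's output format must match the loader's input.
        loader = Loader()
        count = loader.load(Parser().parse(" a \n\n b \n"))
        self.assertEqual(count, 2)
        self.assertEqual(loader.records, ["a", "b"])

if __name__ == "__main__":
    unittest.main()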

Ideally, testers should understand the architecture and influence integration planning. If integration tests are planned before components or systems are built, those components can be built in the order required for most efficient testing.


2.2.3 System Testing (K2)

Test basis:
o System and software requirement specification
o Use cases
o Functional specification
o Risk analysis reports

Typical test objects:
o System, user and operation manuals
o System configuration and configuration data

System testing is concerned with the behavior of a whole system/product. The testing scope shall be clearly addressed in the Master and/or Level Test Plan for that test level.

In system testing, the test environment should correspond to the final target or production environment as much as possible in order to minimize the risk of environment-specific failures not being found in testing.

System testing may include tests based on risks and/or on requirements specifications, business processes, use cases, or other high level text descriptions or models of system behavior, interactions with the operating system, and system resources.

System testing should investigate functional and non-functional requirements of the system, and data quality characteristics. Testers also need to deal with incomplete or undocumented requirements. System testing of functional requirements starts by using the most appropriate specification-based (black-box) techniques for the aspect of the system to be tested. For example, a decision table may be created for combinations of effects described in business rules. Structure-based techniques (white-box) may then be used to assess the thoroughness of the testing with respect to a structural element, such as menu structure or web page navigation (see Chapter 4).
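As an illustration of executing such a decision table (the discount rule below is hypothetical, not from the syllabus), each combination of conditions becomes a test case:

import unittest

# Hypothetical business rules: members get a 10% discount; orders of
# 100 or more get a further 5%.
def discount_rate(is_member, order_total):
    rate = 0.10 if is_member else 0.0
    if order_total >= 100:
        rate += 0.05
    return rate

# Decision table: one entry per combination of conditions.
DECISION_TABLE = [
    # is_member, order_total, expected_rate
    (False, 50, 0.00),
    (False, 150, 0.05),
    (True, 50, 0.10),
    (True, 150, 0.15),
]

class DiscountDecisionTableTest(unittest.TestCase):
    def test_every_combination_of_conditions(self):
        for is_member, total, expected in DECISION_TABLE:
            with self.subTest(is_member=is_member, total=total):
                self.assertAlmostEqual(discount_rate(is_member, total), expected)

if __name__ == "__main__":
    unittest.main()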

An independent test team often carries out system testing.

2.2.4 Acceptance Testing (K2)

Test basis:
o User requirements
o System requirements
o Use cases
o Business processes
o Risk analysis reports

Typical test objects:
o Business processes on fully integrated system
o Operational and maintenance processes
o User procedures
o Forms
o Reports
o Configuration data

Acceptance testing is often the responsibility of the customers or users of a system; other stakeholders may be involved as well.

The goal in acceptance testing is to establish confidence in the system, parts of the system or specific non-functional characteristics of the system. Finding defects is not the main focus in acceptance testing. Acceptance testing may assess the system’s readiness for deployment and use, although it is not necessarily the final level of testing. For example, a large-scale system integration test may come after the acceptance test for a system.

Acceptance testing may occur at various times in the life cycle, for example:
o A COTS software product may be acceptance tested when it is installed or integrated
o Acceptance testing of the usability of a component may be done during component testing
o Acceptance testing of a new functional enhancement may come before system testing

Typical forms of acceptance testing include the following:

User acceptance testing
Typically verifies the fitness for use of the system by business users.

Operational (acceptance) testing
The acceptance of the system by the system administrators, including:
o Testing of backup/restore
o Disaster recovery
o User management
o Maintenance tasks
o Data load and migration tasks
o Periodic checks of security vulnerabilities

Contract and regulation acceptance testing
Contract acceptance testing is performed against a contract’s acceptance criteria for producing custom-developed software. Acceptance criteria should be defined when the parties agree to the contract. Regulation acceptance testing is performed against any regulations that must be adhered to, such as government, legal or safety regulations.

Alpha and beta (or field) testing
Developers of market, or COTS, software often want to get feedback from potential or existing customers in their market before the software product is put up for sale commercially. Alpha testing is performed at the developing organization’s site but not by the developing team. Beta testing, or field-testing, is performed by customers or potential customers at their own locations.

Organizations may use other terms as well, such as factory acceptance testing and site acceptance testing for systems that are tested before and after being moved to a customer’s site.


2.3 Test Types (K2)

40 minutes

Terms

Black-box testing, code coverage, functional testing, interoperability testing, load testing, maintainability testing, performance testing, portability testing, reliability testing, security testing, stress testing, structural testing, usability testing, white-box testing

Background

A group of test activities can be aimed at verifying the software system (or a part of a system) based on a specific reason or target for testing.

A test type is focused on a particular test objective, which could be any of the following:
o A function to be performed by the software
o A non-functional quality characteristic, such as reliability or usability
o The structure or architecture of the software or system
o Change related, i.e., confirming that defects have been fixed (confirmation testing) and looking for unintended changes (regression testing)

A model of the software may be developed and/or used in structural testing (e.g., a control flow model or menu structure model), non-functional testing (e.g., performance model, usability model, security threat modeling), and functional testing (e.g., a process flow model, a state transition model or a plain language specification).

2.3.1 Testing of Function (Functional Testing) (K2)

The functions that a system, subsystem or component are to perform may be described in work products such as a requirements specification, use cases, or a functional specification, or they may be undocumented. The functions are “what” the system does.

Functional tests are based on functions and features (described in documents or understood by the testers) and their interoperability with specific systems, and may be performed at all test levels (e.g., tests for components may be based on a component specification).

Specification-based techniques may be used to derive test conditions and test cases from the functionality of the software or system (see Chapter 4). Functional testing considers the external behavior of the software (black-box testing).

A type of functional testing, security testing, investigates the functions (e.g., a firewall) relating to detection of threats, such as viruses, from malicious outsiders. Another type of functional testing, interoperability testing, evaluates the capability of the software product to interact with one or more specified components or systems.

2.3.2 Testing of Non-functional Software Characteristics (Non-functional Testing) (K2)

Non-functional testing includes, but is not limited to, performance testing, load testing, stress testing, usability testing, maintainability testing, reliability testing and portability testing. It is the testing of “how” the system works.

Non-functional testing may be performed at all test levels. The term non-functional testing describes the tests required to measure characteristics of systems and software that can be quantified on a varying scale, such as response times for performance testing. These tests can be referenced to a quality model such as the one defined in ‘Software Engineering – Software Product Quality’ (ISO 9126). Non-functional testing considers the external behavior of the software and in most cases uses black-box test design techniques to accomplish that.
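For instance, a response-time check quantifies a non-functional characteristic against a stated requirement; in this illustrative sketch both the operation and the 0.1-second budget are assumptions:

import time
import unittest

# Stand-in for the operation whose response time is being measured;
# a real test would drive the actual system or service.
def operation_under_test():
    return sum(range(100_000))

class ResponseTimeTest(unittest.TestCase):
    def test_response_time_within_budget(self):
        start = time.perf_counter()
        operation_under_test()
        elapsed = time.perf_counter() - start
        # Assumed requirement: respond within 0.1 seconds.
        self.assertLess(elapsed, 0.1)

if __name__ == "__main__":
    unittest.main()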

2.3.3 Testing of Software Structure/Architecture (Structural Testing) (K2)

Structural (white-box) testing may be performed at all test levels. Structural techniques are best used after specification-based techniques, in order to help measure the thoroughness of testing through assessment of coverage of a type of structure.

Coverage is the extent that a structure has been exercised by a test suite, expressed as a percentage of the items being covered. If coverage is not 100%, then more tests may be designed to test those items that were missed to increase coverage. Coverage techniques are covered in Chapter 4.

At all test levels, but especially in component testing and component integration testing, tools can be used to measure the code coverage of elements, such as statements or decisions. Structural testing may be based on the architecture of the system, such as a calling hierarchy.
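As a sketch of what such a measurement targets (hypothetical code), the single decision below needs two test cases for 100% decision coverage; either case alone would leave coverage at 50%:

import unittest

# One decision (the if) with two possible outcomes.
def classify(value):
    if value < 0:
        return "negative"
    return "non-negative"

class DecisionCoverageTest(unittest.TestCase):
    def test_true_outcome(self):
        self.assertEqual(classify(-1), "negative")

    def test_false_outcome(self):
        # Without this case only the True outcome is exercised.
        self.assertEqual(classify(1), "non-negative")

if __name__ == "__main__":
    unittest.main()

A coverage tool (for example coverage.py, run as coverage run --branch -m unittest followed by coverage report) can then express the exercised statements or branches as a percentage.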

Structural testing approaches can also be applied at system, system integration or acceptance testing levels (e.g., to business models or menu structures).

2.3.4 Testing Related to Changes: Re-testing and Regression Testing (K2)

After a defect is detected and fixed, the software should be re-tested to confirm that the original defect has been successfully removed. This is called confirmation. Debugging (locating and fixing a defect) is a development activity, not a testing activity.

Regression testing is the repeated testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the change(s). These defects may be either in the software being tested, or in another related or unrelated software component. It is performed when the software, or its environment, is changed. The extent of regression testing is based on the risk of not finding defects in software that was working previously.

Tests should be repeatable if they are to be used for confirmation testing and to assist regression testing.

Regression testing may be performed at all test levels, and includes functional, non-functional and structural testing. Regression test suites are run many times and generally evolve slowly, so regression testing is a strong candidate for automation.
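A sketch of one automatable regression test (the function is hypothetical; the expected values are assumed to have been recorded while the software was known to work):

import unittest

# Hypothetical function under maintenance.
def format_price(cents):
    return f"${cents // 100}.{cents % 100:02d}"

class FormatPriceRegressionTest(unittest.TestCase):
    # Expected outputs pinned from previously working behavior;
    # re-run unchanged after every modification.
    PINNED = [(0, "$0.00"), (5, "$0.05"), (199, "$1.99"), (10000, "$100.00")]

    def test_previously_working_behavior_is_preserved(self):
        for cents, expected in self.PINNED:
            with self.subTest(cents=cents):
                self.assertEqual(format_price(cents), expected)

if __name__ == "__main__":
    unittest.main()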


2.4 Maintenance Testing (K2)

15 minutes

Terms

Impact analysis, maintenance testing

Background

Once deployed, a software system is often in service for years or decades. During this time the system, its configuration data, or its environment are often corrected, changed or extended. The planning of releases in advance is crucial for successful maintenance testing. A distinction has to be made between planned releases and hot fixes. Maintenance testing is done on an existing operational system, and is triggered by modifications, migration, or retirement of the software or system.

Modifications include planned enhancement changes (e.g., release-based), corrective and emergency changes, and changes of environment, such as planned operating system or database upgrades, planned upgrade of Commercial-Off-The-Shelf software, or patches to correct newly exposed or discovered vulnerabilities of the operating system.

Maintenance testing for migration (e.g., from one platform to another) should include operational tests of the new environment as well as of the changed software. Migration testing (conversion testing) is also needed when data from another application will be migrated into the system being maintained.

Maintenance testing for the retirement of a system may include the testing of data migration or archiving if long data-retention periods are required.

In addition to testing what has been changed, maintenance testing includes regression testing to parts of the system that have not been changed. The scope of maintenance testing is related to the risk of the change, the size of the existing system and to the size of the change. Depending on the changes, maintenance testing may be done at any or all test levels and for any or all test types. Determining how the existing system may be affected by changes is called impact analysis, and is used to help decide how much regression testing to do. The impact analysis may be used to determine the regression test suite.
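A sketch of that last step (the module-to-test mapping is hypothetical; in practice it would come from dependency or coverage data produced by the impact analysis):

# Impact analysis output: which regression tests exercise which modules.
TESTS_BY_MODULE = {
    "billing": ["test_invoice_totals", "test_tax_rounding"],
    "auth": ["test_login", "test_password_reset"],
    "reports": ["test_monthly_summary"],
}

def select_regression_suite(changed_modules):
    """Return the regression tests impacted by a set of changed modules."""
    selected = set()
    for module in changed_modules:
        selected.update(TESTS_BY_MODULE.get(module, []))
    return sorted(selected)

# A hot fix touching only the billing module:
print(select_regression_suite(["billing"]))
# -> ['test_invoice_totals', 'test_tax_rounding']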

Maintenance testing can be difficult if specifications are out of date or missing, or testers with domain knowledge are not available.

References

2.1.3 CMMI, Craig, 2002, Hetzel, 1988, IEEE 12207
2.2 Hetzel, 1988
2.2.4 Copeland, 2004, Myers, 1979
2.3.1 Beizer, 1990, Black, 2001, Copeland, 2004
2.3.2 Black, 2001, ISO 9126
2.3.3 Beizer, 1990, Copeland, 2004, Hetzel, 1988
2.3.4 Hetzel, 1988, IEEE STD 829-1998
2.4 Black, 2001, Craig, 2002, Hetzel, 1988, IEEE STD 829-1998
