
Certified Tester
Foundation Level Syllabus
International Software Testing Qualifications Board

5.3 Test Progress Monitoring and Control (K2)

20 minutes

 

Terms

Defect density, failure rate, test control, test monitoring, test summary report

5.3.1 Test Progress Monitoring (K1)

The purpose of test monitoring is to provide feedback and visibility about test activities. Information to be monitored may be collected manually or automatically and may be used to measure exit criteria, such as coverage. Metrics may also be used to assess progress against the planned schedule and budget. Common test metrics include:

o Percentage of work done in test case preparation (or percentage of planned test cases prepared)
o Percentage of work done in test environment preparation
o Test case execution (e.g., number of test cases run/not run, and test cases passed/failed)
o Defect information (e.g., defect density, defects found and fixed, failure rate, and re-test results)
o Test coverage of requirements, risks or code
o Subjective confidence of testers in the product
o Dates of test milestones
o Testing costs, including the cost compared to the benefit of finding the next defect or to run the next test
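As an informal illustration (not part of the syllabus), several of the metrics above reduce to simple arithmetic over raw counts; the function names and the KLOC size unit are assumptions made for this sketch:

```python
# Illustrative sketch of common test progress metrics. Names and units are
# assumptions for this example, not prescribed by the syllabus.

def preparation_progress(prepared, planned):
    """Percentage of planned test cases already prepared."""
    return 100.0 * prepared / planned

def defect_density(defects_found, size_kloc):
    """Defects found per thousand lines of code (KLOC)."""
    return defects_found / size_kloc

def pass_rate(passed, failed):
    """Percentage of executed test cases that passed."""
    return 100.0 * passed / (passed + failed)
```

For example, 40 of 80 planned test cases prepared gives 50% preparation progress.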

5.3.2 Test Reporting (K2)

Test reporting is concerned with summarizing information about the testing endeavor, including:

o What happened during a period of testing, such as dates when exit criteria were met
o Analyzed information and metrics to support recommendations and decisions about future actions, such as an assessment of defects remaining, the economic benefit of continued testing, outstanding risks, and the level of confidence in the tested software

The outline of a test summary report is given in ‘Standard for Software Test Documentation’ (IEEE Std 829-1998).

Metrics should be collected during and at the end of a test level in order to assess:

o The adequacy of the test objectives for that test level
o The adequacy of the test approaches taken
o The effectiveness of the testing with respect to the objectives

5.3.3 Test Control (K2)

Test control describes any guiding or corrective actions taken as a result of information and metrics gathered and reported. Actions may cover any test activity and may affect any other software life cycle activity or task.

Examples of test control actions include:

o Making decisions based on information from test monitoring
o Reprioritizing tests when an identified risk occurs (e.g., software delivered late)
o Changing the test schedule due to availability or unavailability of a test environment
o Setting an entry criterion requiring fixes to have been re-tested (confirmation tested) by a developer before accepting them into a build

Version 2011
Page 51 of 78
31-Mar-2011
© International Software Testing Qualifications Board


5.4 Configuration Management (K2)

10 minutes

Terms

Configuration management, version control

Background

The purpose of configuration management is to establish and maintain the integrity of the products (components, data and documentation) of the software or system through the project and product life cycle.

For testing, configur ation management may involve ensuring the following:

o All items of testware are identified, version controlled, tracked for changes, related to each other and related to development items (test objects) so that traceability can be maintained throughout the test process
o All identified documents and software items are referenced unambiguously in test documentation

For the tester, configuration management helps to uniquely identify (and to reproduce) the tested item, test documents, the tests and the test harness(es).

During test planning, the configuration management procedures and infrastructure (tools) should be chosen, documented and implemented.


5.5 Risk and Testing (K2)

30 minutes

Terms

Product risk, project risk, risk, risk-based testing

Background

Risk can be defined as the chance of an event, hazard, threat or situation occurring and resulting in undesirable consequences or a potential problem. The level of risk will be determined by the likelihood of an adverse event happening and the impact (the harm resulting from that event).
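The relationship between likelihood and impact can be sketched numerically; taking the product of the two ratings is a common convention, used here as an assumption rather than a definition from the syllabus:

```python
# Sketch: risk level from likelihood and impact, each rated on a simple
# ordinal scale (1 = very low ... 5 = very high). The scale and the use of
# multiplication are illustrative conventions, not mandated by the syllabus.

def risk_level(likelihood, impact):
    """Combine likelihood and impact into a single comparable risk level."""
    return likelihood * impact
```

A likely, severe risk (4 × 5 = 20) then ranks well above a rare, mild one (1 × 2 = 2).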

5.5.1 Project Risks (K2)

Project risks are the risks that surround the project’s capability to deliver its objectives, such as:

o Organizational factors:
  • Skill, training and staff shortages
  • Personnel issues
  • Political issues, such as:
    - Problems with testers communicating their needs and test results
    - Failure by the team to follow up on information found in testing and reviews (e.g., not improving development and testing practices)
  • Improper attitude toward or expectations of testing (e.g., not appreciating the value of finding defects during testing)
o Technical issues:
  • Problems in defining the right requirements
  • The extent to which requirements cannot be met given existing constraints
  • Test environment not ready on time
  • Late data conversion, migration planning and development and testing data conversion/migration tools
  • Low quality of the design, code, configuration data, test data and tests
o Supplier issues:
  • Failure of a third party
  • Contractual issues

When analyzing, managing and mitigating these risks, the test manager is following well-established project management principles. The ‘Standard for Software Test Documentation’ (IEEE Std 829-1998) outline for test plans requires risks and contingencies to be stated.

5.5.2 Product Risks (K2)

Potential failure areas (adverse future events or hazards) in the software or system are known as product risks, as they are a risk to the quality of the product. These include:

o Failure-prone software delivered
o The potential that the software/hardware could cause harm to an individual or company
o Poor software characteristics (e.g., functionality, reliability, usability and performance)
o Poor data integrity and quality (e.g., data migration issues, data conversion problems, data transport problems, violation of data standards)
o Software that does not perform its intended functions

Risks are used to decide where to start testing and where to test more; testing is used to reduce the risk of an adverse effect occurring, or to reduce the impact of an adverse effect.


Product risks are a special type of risk to the success of a project. Testing as a risk-control activity provides feedback about the residual risk by measuring the effectiveness of critical defect removal and of contingency plans.

A risk-based approach to testing provides proactive opportunities to reduce the levels of product risk, starting in the initial stages of a project. It involves the identification of product risks and their use in guiding test planning and control, specification, preparation and execution of tests. In a risk-based approach the risks identified may be used to:

o Determine the test techniques to be employed
o Determine the extent of testing to be carried out
o Prioritize testing in an attempt to find the critical defects as early as possible
o Determine whether any non-testing activities could be employed to reduce risk (e.g., providing training to inexperienced designers)
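As a sketch of the prioritization point above, tests covering the highest-risk product areas can simply be ordered by their risk scores; the area names and 1–5 ratings are invented for the example:

```python
# Sketch: risk-based ordering of tests -- highest likelihood x impact first.
# The areas and ratings below are hypothetical.

tests = [
    {"name": "login",    "likelihood": 2, "impact": 5},
    {"name": "reports",  "likelihood": 3, "impact": 2},
    {"name": "payments", "likelihood": 4, "impact": 5},
]

# Sort descending by risk score so critical defects are sought first
prioritized = sorted(tests, key=lambda t: t["likelihood"] * t["impact"],
                     reverse=True)
# Execution order: payments (20), login (10), reports (6)
```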

Risk-based testing draws on the collective knowledge and insight of the project stakeholders to determine the risks and the levels of testing required to address those risks.

To ensure that the chance of a product failure is minimized, risk management activities provide a disciplined approach to:

o Assess (and reassess on a regular basis) what can go wrong (risks)
o Determine what risks are important to deal with
o Implement actions to deal with those risks

In addition, testing may support the identification of new risks, may help to determine what risks should be reduced, and may lower uncertainty about risks.


5.6 Incident Management (K3)

40 minutes

Terms

Incident logging, incident management, incident report

Background

Since one of the objectives of testing is to find defects, the discrepancies between actual and expected outcomes need to be logged as incidents. An incident must be investigated and may turn out to be a defect. Appropriate actions to dispose incidents and defects should be defined. Incidents and defects should be tracked from discovery and classification to correction and confirmation of the solution. In order to manage all incidents to completion, an organization should establish an incident management process and rules for classification.

Incidents may be raised during development, review, testing or use of a software product. They may be raised for issues in code or the working system, or in any type of documentation including requirements, development documents, test documents, and user information such as “Help” or installation guides.

Incident reports have the following objectives:

o Provide developers and other parties with feedback about the problem to enable identification, isolation and correction as necessary
o Provide test leaders a means of tracking the quality of the system under test and the progress of the testing
o Provide ideas for test process improvement

Details of the incident report may include:

o Date of issue, issuing organization, and author
o Expected and actual results
o Identification of the test item (configuration item) and environment
o Software or system life cycle process in which the incident was observed
o Description of the incident to enable reproduction and resolution, including logs, database dumps or screenshots
o Scope or degree of impact on stakeholder(s) interests
o Severity of the impact on the system
o Urgency/priority to fix
o Status of the incident (e.g., open, deferred, duplicate, waiting to be fixed, fixed awaiting re-test, closed)
o Conclusions, recommendations and approvals
o Global issues, such as other areas that may be affected by a change resulting from the incident
o Change history, such as the sequence of actions taken by project team members with respect to the incident to isolate, repair, and confirm it as fixed
o References, including the identity of the test case specification that revealed the problem

The structure of an incident report is also covered in the ‘Standard for Software Test Documentation’ (IEEE Std 829-1998).
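A minimal sketch of how such fields might be captured as a record, with the status restricted to the states listed above; all field and status names here are illustrative assumptions, not taken from IEEE Std 829-1998:

```python
# Sketch: an incident report record with a tracked status life cycle.
# Field and status names are hypothetical examples.

from dataclasses import dataclass, field

STATUSES = {"open", "deferred", "duplicate", "waiting to be fixed",
            "fixed awaiting re-test", "closed"}

@dataclass
class IncidentReport:
    date_of_issue: str
    author: str
    test_item: str
    expected_result: str
    actual_result: str
    severity: str
    priority: str
    status: str = "open"
    change_history: list = field(default_factory=list)

    def transition(self, new_status):
        """Move to a new status, recording the step in the change history."""
        if new_status not in STATUSES:
            raise ValueError(f"unknown status: {new_status}")
        self.change_history.append((self.status, new_status))
        self.status = new_status
```

The change history then preserves the sequence of status moves, mirroring the "change history" item in the list above.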


References

5.1.1 Black, 2001, Hetzel, 1988
5.1.2 Black, 2001, Hetzel, 1988
5.2.5 Black, 2001, Craig, 2002, IEEE Std 829-1998, Kaner, 2002
5.3.3 Black, 2001, Craig, 2002, Hetzel, 1988, IEEE Std 829-1998
5.4 Craig, 2002
5.5.2 Black, 2001, IEEE Std 829-1998
5.6 Black, 2001, IEEE Std 829-1998


6. Tool Support for Testing (K2)

80 minutes

Learning Objectives for Tool Support for Testing

The objectives identify what you will be able to do following the completion of each module.

6.1 Types of Test Tools (K2)

LO-6.1.1 Classify different types of test tools according to their purpose and to the activities of the fundamental test process and the software life cycle (K2)
LO-6.1.3 Explain the term test tool and the purpose of tool support for testing (K2) 2

6.2 Effective Use of Tools: Potential Benefits and Risks (K2)

LO-6.2.1 Summarize the potential benefits and risks of test automation and tool support for testing (K2)
LO-6.2.2 Remember special considerations for test execution tools, static analysis, and test management tools (K1)

6.3 Introducing a Tool into an Organization (K1)

LO-6.3.1 State the main principles of introducing a tool into an organization (K1)
LO-6.3.2 State the goals of a proof-of-concept for tool evaluation and a piloting phase for tool implementation (K1)
LO-6.3.3 Recognize that factors other than simply acquiring a tool are required for good tool support (K1)

2 LO-6.1.2 Intentionally skipped


6.1 Types of Test Tools (K2)

45 minutes

Terms

Configuration management tool, coverage tool, debugging tool, dynamic analysis tool, incident management tool, load testing tool, modeling tool, monitoring tool, performance testing tool, probe effect, requirements management tool, review tool, security tool, static analysis tool, stress testing tool, test comparator, test data preparation tool, test design tool, test harness, test execution tool, test management tool, unit test framework tool

6.1.1 Tool Support for Testing (K2)

Test tools can be used for one or more activities that support testing. These include:

1. Tools that are directly used in testing such as test execution tools, test data generation tools and result comparison tools
2. Tools that help in managing the testing process such as those used to manage tests, test results, data, requirements, incidents, defects, etc., and for reporting and monitoring test execution
3. Tools that are used in reconnaissance, or, in simple terms: exploration (e.g., tools that monitor file activity for an application)
4. Any tool that aids in testing (a spreadsheet is also a test tool in this meaning)

Tool support for testing can have one or more of the following purposes depending on the context:

o Improve the efficiency of test activities by automating repetitive tasks or supporting manual test activities like test planning, test design, test reporting and monitoring
o Automate activities that require significant resources when done manually (e.g., static testing)
o Automate activities that cannot be executed manually (e.g., large scale performance testing of client-server applications)
o Increase reliability of testing (e.g., by automating large data comparisons or simulating behavior)

The term “test frameworks” is also frequently used in the industry, in at least three meanings:

o Reusable and extensible testing libraries that can be used to build testing tools (called test harnesses as well)
o A type of design of test automation (e.g., data-driven, keyword-driven)
o Overall process of execution of testing

For the purpose of this syllabus, the term “test frameworks” is used in its first two meanings as described in Section 6.1.6.
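The data-driven design mentioned above can be sketched as one test routine applied to a table of inputs and expected outcomes; the function under test and the data rows are invented for the example:

```python
# Sketch of data-driven test automation: one routine, many data rows.
# `add` is a stand-in for the real system under test.

def add(a, b):
    return a + b

# Each row: (input a, input b, expected result)
test_data = [
    (1, 2, 3),
    (0, 0, 0),
    (-1, 1, 0),
]

def run_data_driven(func, rows):
    """Run one check per data row; return (passed, failed) counts."""
    passed = failed = 0
    for a, b, expected in rows:
        if func(a, b) == expected:
            passed += 1
        else:
            failed += 1
    return passed, failed
```

Adding a test case then means adding a row of data, not writing a new script.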

6.1.2 Test Tool Classification (K2)

There are a number of tools that support different aspects of testing. Tools can be classified based on several criteria such as purpose, commercial / free / open-source / shareware, technology used and so forth. Tools are classified in this syllabus according to the testing activities that they support.

Some tools clearly support one activity; others may support more than one activity, but are classified under the activity with which they are most closely associated. Tools from a single provider, especially those that have been designed to work together, may be bundled into one package.

Some types of test tools can be intrusive, which means that they can affect the actual outcome of the test. For example, the actual timing may be different due to the extra instructions that are executed by the tool, or you may get a different measure of code coverage. The consequence of intrusive tools is called the probe effect.


Some tools offer support more appropriate for developers (e.g., tools that are used during component and component integration testing). Such tools are marked with “(D)” in the list below.

6.1.3 Tool Support for Management of Testing and Tests (K1)

Management tools apply to all test activities over the entire software life cycle.

Test Management Tools

These tools provide interfaces for executing tests, tracking defects and managing requirements, along with support for quantitative analysis and reporting of the test objects. They also support tracing the test objects to requirement specifications and might have an independent version control capability or an interface to an external one.

Requirements Management Tools

These tools store requirement statements, store the attributes for the requirements (including priority), provide unique identifiers and support tracing the requirements to individual tests. These tools may also help with identifying inconsistent or missing requirements.

Incident Management Tools (Defect Tracking Tools)

These tools store and manage incident reports, i.e., defects, failures, change requests or perceived problems and anomalies, and help in managing the life cycle of incidents, optionally with support for statistical analysis.

Configuration Management Tools

Although not strictly test tools, these are necessary for storage and version management of testware and related software, especially when configuring more than one hardware/software environment in terms of operating system versions, compilers, browsers, etc.

6.1.4 Tool Support for Static Testing (K1)

Static testing tools provide a cost effective way of finding more defects at an earlier stage in the development process.

Review Tools

These tools assist with review processes, checklists, review guidelines and are used to store and communicate review comments and report on defects and effort. They can be of further help by providing aid for online reviews for large or geographically dispersed teams.

Static Analysis Tools (D)

These tools help developers and testers find defects prior to dynamic testing by providing support for enforcing coding standards (including secure coding), analysis of structures and dependencies. They can also help in planning or risk analysis by providing metrics for the code (e.g., complexity).

Modeling Tools (D)

These tools are used to validate software models (e.g., physical data model (PDM) for a relational database), by enumerating inconsistencies and finding defects. These tools can often aid in generating some test cases based on the model.

6.1.5 Tool Support for Test Specification (K1)

Test Design Tools

These tools are used to generate test inputs or executable tests and/or test oracles from requirements, graphical user interfaces, design models (state, data or object) or code.


Test Data Preparation Tools

Test data preparation tools manipulate databases, files or data transmissions to set up test data to be used during the execution of tests to ensure security through data anonymity.
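The anonymity point can be sketched as a masking step applied to production-like records before they are used as test data; the field names and hashing scheme below are assumptions made for the example:

```python
# Sketch: anonymizing test data while keeping it usable. A stable hash of the
# name yields the same pseudonym for the same person across data sets, so
# relationships between records survive the masking.

import hashlib

def anonymize(record):
    """Return a copy of the record with identifying fields masked."""
    masked = dict(record)
    digest = hashlib.sha256(record["name"].encode()).hexdigest()[:8]
    masked["name"] = f"customer-{digest}"
    masked["email"] = f"{digest}@example.invalid"
    return masked
```

Non-identifying fields (amounts, dates, statuses) pass through unchanged, so the masked data still exercises the same code paths.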

6.1.6 Tool Support for Test Execution and Logging (K1)

Test Execution Tools

These tools enable tests to be executed automatically, or semi-automatically, using stored inputs and expected outcomes, through the use of a scripting language and usually provide a test log for each test run. They can also be used to record tests, and usually support scripting languages or GUI-based configuration for parameterization of data and other customization in the tests.

Test Harness/Unit Test Framework Tools (D)

A unit test harness or framework facilitates the testing of components or parts of a system by simulating the environment in which that test object will run, through the provision of mock objects as stubs or drivers.
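As a sketch of the stub idea, Python's standard unittest framework can exercise a component against a hand-written stand-in for its real collaborator; the component and stub here are hypothetical:

```python
# Sketch: unit test with a stub simulating part of the test object's
# environment. `price_with_tax` is an invented component under test; the
# stub stands in for a real tax-rate service.

import unittest

def price_with_tax(amount, rate_lookup):
    """Component under test: applies a tax rate obtained from a collaborator."""
    return amount * (1 + rate_lookup())

class PriceTest(unittest.TestCase):
    def test_applies_stubbed_rate(self):
        stub_rate = lambda: 0.25  # stub: fixed answer instead of a live service
        self.assertEqual(price_with_tax(100, stub_rate), 125.0)
```

A test runner would discover and execute PriceTest; the stub keeps the test fast and isolated from the real service.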

Test Comparators

Test comparators determine differences between files, databases or test results. Test execution tools typically include dynamic comparators, but post-execution comparison may be done by a separate comparison tool. A test comparator may use a test oracle, especially if it is automated.
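A post-execution comparator of the simplest kind just reports where actual output diverges from the expected (oracle) output; this line-based sketch is an illustration, not a description of any real tool:

```python
# Sketch: a minimal post-execution test comparator over line sequences.
# zip_longest pads the shorter side with None, so a length difference
# also counts as a mismatch.

from itertools import zip_longest

def first_difference(expected_lines, actual_lines):
    """Return (line number, expected, actual) of the first mismatch, or None."""
    pairs = zip_longest(expected_lines, actual_lines)
    for number, (expected, actual) in enumerate(pairs, start=1):
        if expected != actual:
            return (number, expected, actual)
    return None
```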

Coverage Measurement Tools (D)

These tools, through intrusive or non-intrusive means, measure the percentage of specific types of code structures that have been exercised (e.g., statements, branches or decisions, and module or function calls) by a set of tests.

Security Testing Tools

These tools are used to evaluate the security characteristics of software. This includes evaluating the ability of the software to protect data confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Security tools are mostly focused on a particular technology, platform, and purpose.

6.1.7 Tool Support for Performance and Monitoring (K1)

Dynamic Analysis Tools (D)

Dynamic analysis tools find defects that are evident only when software is executing, such as time dependencies or memory leaks. They are typically used in component and component integration testing, and when testing middleware.

Performance Testing/Load Testing/Stress Testing Tools

Performance testing tools monitor and report on how a system behaves under a variety of simulated usage conditions in terms of number of concurrent users, their ramp-up pattern, frequency and relative percentage of transactions. The simulation of load is achieved by means of creating virtual users carrying out a selected set of transactions, spread across various test machines commonly known as load generators.

Monitoring Tools

Monitoring tools continuously analyze, verify and report on usage of specific system resources, and give warnings of possible service problems.

6.1.8 Tool Support for Specific Testing Needs (K1)

Data Quality Assessment

Data is at the center of some projects such as data conversion/migration projects and applications like data warehouses, and its attributes can vary in terms of criticality and volume. In such contexts, tools need to be employed for data quality assessment to review and verify the data conversion and
