
Certified Tester
Foundation Level Syllabus
Version 2011 (31-Mar-2011)
International Software Testing Qualifications Board

3. Static Techniques (K2)

60 minutes

Learning Objectives for Static Techniques

The objectives identify what you will be able to do following the completion of each module.

3.1 Static Techniques and the Test Process (K2)

LO-3.1.1 Recognize software work products that can be examined by the different static techniques (K1)

LO-3.1.2 Describe the importance and value of considering static techniques for the assessment of software work products (K2)

LO-3.1.3 Explain the difference between static and dynamic techniques, considering objectives, types of defects to be identified, and the role of these techniques within the software life cycle (K2)

3.2 Review Process (K2)

LO-3.2.1 Recall the activities, roles and responsibilities of a typical formal review (K1)

LO-3.2.2 Explain the differences between different types of reviews: informal review, technical review, walkthrough and inspection (K2)

LO-3.2.3 Explain the factors for successful performance of reviews (K2)

3.3 Static Analysis by Tools (K2)

LO-3.3.1 Recall typical defects and errors identified by static analysis and compare them to reviews and dynamic testing (K1)

LO-3.3.2 Describe, using examples, the typical benefits of static analysis (K2)

LO-3.3.3 List typical code and design defects that may be identified by static analysis tools (K1)


3.1 Static Techniques and the Test Process (K2)

15 minutes

Terms

Dynamic testing, static testing

Background

Unlike dynamic testing, which requires the execution of software, static testing techniques rely on the manual examination (reviews) and automated analysis (static analysis) of the code or other project documentation without the execution of the code.

Reviews are a way of testing software work products (including code) and can be performed well before dynamic test execution. Defects detected during reviews early in the life cycle (e.g., defects found in requirements) are often much cheaper to remove than those detected by running tests on the executing code.

A review could be done entirely as a manual activity, but there is also tool support. The main manual activity is to examine a work product and make comments about it. Any software work product can be reviewed, including requirements specifications, design specifications, code, test plans, test specifications, test cases, test scripts, user guides or web pages.

Benefits of reviews include early defect detection and correction, development productivity improvements, reduced development timescales, reduced testing cost and time, lifetime cost reductions, fewer defects and improved communication. Reviews can find omissions, for example, in requirements, which are unlikely to be found in dynamic testing.

Reviews, static analysis and dynamic testing have the same objective – identifying defects. They are complementary; the different techniques can find different types of defects effectively and efficiently. Compared to dynamic testing, static techniques find causes of failures (defects) rather than the failures themselves.

Typical defects that are easier to find in reviews than in dynamic testing include: deviations from standards, requirement defects, design defects, insufficient maintainability and incorrect interface specifications.


3.2 Review Process (K2)

25 minutes

Terms

Entry criteria, formal review, informal review, inspection, metric, moderator, peer review, reviewer, scribe, technical review, walkthrough

Background

The different types of reviews vary from informal, characterized by no written instructions for reviewers, to systematic, characterized by team participation, documented results of the review, and documented procedures for conducting the review. The formality of a review process is related to factors such as the maturity of the development process, any legal or regulatory requirements or the need for an audit trail.

The way a review is carried out depends on the agreed objectives of the review (e.g., find defects, gain understanding, educate testers and new team members, or discussion and decision by consensus).

3.2.1 Activities of a Formal Review (K1)

A typical formal review has the following main activities:

1. Planning
• Defining the review criteria
• Selecting the personnel
• Allocating roles
• Defining the entry and exit criteria for more formal review types (e.g., inspections)
• Selecting which parts of documents to review
• Checking entry criteria (for more formal review types)

2. Kick-off
• Distributing documents
• Explaining the objectives, process and documents to the participants

3. Individual preparation
• Preparing for the review meeting by reviewing the document(s)
• Noting potential defects, questions and comments

4. Examination/evaluation/recording of results (review meeting)
• Discussing or logging, with documented results or minutes (for more formal review types)
• Noting defects, making recommendations regarding handling the defects, making decisions about the defects
• Examining/evaluating and recording issues during any physical meetings or tracking any group electronic communications

5. Rework
• Fixing defects found (typically done by the author)
• Recording updated status of defects (in formal reviews)

6. Follow-up
• Checking that defects have been addressed
• Gathering metrics
• Checking on exit criteria (for more formal review types)

3.2.2 Roles and Responsibilities (K1)

A typical formal review will include the roles below:

o Manager: decides on the execution of reviews, allocates time in project schedules and determines if the review objectives have been met.

Version 2 011

Page 33 of 78

31-Mar-2011

© Internationa l Software Testing Q ualifications Board

Certified Test er

International

Software Te sting

Foundation Level Syllabus

Q ualifications Board

 

 

o Moderator: the person who leads the review of the document or set of documents, including planning the review, running the meeting, and following up after the meeting. If necessary, the moderator may mediate between the various points of view and is often the person upon whom the success of the review rests.

o Author: the writer or person with chief responsibility for the document(s) to be reviewed.

o Reviewers: individuals with a specific technical or business background (also called checkers or inspectors) who, after the necessary preparation, identify and describe findings (e.g., defects) in the product under review. Reviewers should be chosen to represent different perspectives and roles in the review process, and should take part in any review meetings.

o Scribe (or recorder): documents all the issues, problems and open points that were identified during the meeting.

Looking at software products or related work products from different perspectives and using checklists can make reviews more effective and efficient. For example, a checklist based on various perspectives such as user, maintainer, tester or operations, or a checklist of typical requirements problems may help to uncover previously undetected issues.

3.2.3 Types of Reviews (K2)

A single software product or related work product may be the subject of more than one review. If more than one type of review is used, the order may vary. For example, an informal review may be carried out before a technical review, or an inspection may be carried out on a requirements specification before a walkthrough with customers. The main characteristics, options and purposes of common review types are:

Informal Review
o No formal process
o May take the form of pair programming or a technical lead reviewing designs and code
o Results may be documented
o Varies in usefulness depending on the reviewers
o Main purpose: inexpensive way to get some benefit

Walkthrough
o Meeting led by author
o May take the form of scenarios, dry runs, peer group participation
o Open-ended sessions
  • Optional pre-meeting preparation of reviewers
  • Optional preparation of a review report including list of findings
o Optional scribe (who is not the author)
o May vary in practice from quite informal to very formal
o Main purposes: learning, gaining understanding, finding defects

Technical Review
o Documented, defined defect-detection process that includes peers and technical experts with optional management participation
o May be performed as a peer review without management participation
o Ideally led by trained moderator (not the author)
o Pre-meeting preparation by reviewers
o Optional use of checklists
o Preparation of a review report which includes the list of findings, the verdict about whether the software product meets its requirements and, where appropriate, recommendations related to findings
o May vary in practice from quite informal to very formal
o Main purposes: discussing, making decisions, evaluating alternatives, finding defects, solving technical problems and checking conformance to specifications, plans, regulations, and standards


Inspection
o Led by trained moderator (not the author)
o Usually conducted as a peer examination
o Defined roles
o Includes metrics gathering
o Formal process based on rules and checklists
o Specified entry and exit criteria for acceptance of the software product
o Pre-meeting preparation
o Inspection report including list of findings
o Formal follow-up process (with optional process improvement components)
o Optional reader
o Main purpose: finding defects

Walkthroughs, technical reviews and inspections can be performed within a peer group, i.e., colleagues at the same organizational level. This type of review is called a "peer review".

3.2.4 Success Factors for Reviews (K2)

Success factors for reviews include:

o Each review has clear predefined objectives
o The right people for the review objectives are involved
o Testers are valued reviewers who contribute to the review and also learn about the product, which enables them to prepare tests earlier
o Defects found are welcomed and expressed objectively
o People issues and psychological aspects are dealt with (e.g., making it a positive experience for the author)
o The review is conducted in an atmosphere of trust; the outcome will not be used for the evaluation of the participants
o Review techniques are applied that are suitable to achieve the objectives and to the type and level of software work products and reviewers
o Checklists or roles are used if appropriate to increase effectiveness of defect identification
o Training is given in review techniques, especially the more formal techniques such as inspection
o Management supports a good review process (e.g., by incorporating adequate time for review activities in project schedules)
o There is an emphasis on learning and process improvement


3.3 Static Analysis by Tools (K2)

20 minutes

Terms

Compiler, complexity, control flow, data flow, static analysis

Background

The objective of static analysis is to find defects in software source code and software models. Static analysis is performed without actually executing the software being examined by the tool; dynamic testing does execute the software code. Static analysis can locate defects that are hard to find in dynamic testing. As with reviews, static analysis finds defects rather than failures. Static analysis tools analyze program code (e.g., control flow and data flow), as well as generated output such as HTML and XML.

The value of static analysis is:

o Early detection of defects prior to test execution
o Early warning about suspicious aspects of the code or design by the calculation of metrics, such as a high complexity measure (see the sketch after this list)
o Identification of defects not easily found by dynamic testing
o Detecting dependencies and inconsistencies in software models such as links
o Improved maintainability of code and design
o Prevention of defects, if lessons are learned in development
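
To make the metrics point concrete, here is a minimal sketch (an illustration, not part of the syllabus) of one such measure: McCabe's cyclomatic complexity, M = E - N + 2P, for a control flow graph with E edges, N nodes and P connected components (P = 1 for a single function).

    # Minimal illustrative sketch: compute cyclomatic complexity
    # M = E - N + 2P from a list of control flow graph edges.
    def cyclomatic_complexity(edges, components=1):
        nodes = {node for edge in edges for node in edge}
        return len(edges) - len(nodes) + 2 * components

    # Control flow graph of a function containing a single if/else:
    cfg = [("entry", "if"), ("if", "then"), ("if", "else"),
           ("then", "exit"), ("else", "exit")]
    print(cyclomatic_complexity(cfg))  # 2; unusually high values are an early warning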

Typical defects discovered by static analysis tools include:

o Referencing a variable with an undefined value
o Inconsistent interfaces between modules and components
o Variables that are not used or are improperly declared
o Unreachable (dead) code
o Missing and erroneous logic (potentially infinite loops)
o Overly complicated constructs
o Programming standards violations
o Security vulnerabilities
o Syntax violations of code and software models
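
For illustration, the short function below (hypothetical code, not from the syllabus) contains three of these defect types; a typical linter such as pylint reports each of them without ever running the code.

    # Hypothetical example of defects a static analysis tool flags
    # without executing the code.
    def classify(score):
        threshold = 10              # variable that is never used
        if score > 100:
            return "invalid"
            print("too high")       # unreachable (dead) code
        while score < 0:            # potentially infinite loop:
            pass                    # the body never changes 'score'
        return "valid"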

Static analysis tools are typically used by developers (checking against predefined rules or programming standards) before and during component and integration testing or when checking in code to configuration management tools, and by designers during software modeling. Static analysis tools may produce a large number of warning messages, which need to be well-managed to allow the most effective use of the tool.

Compilers may offer some support for static analysis, including the calculation of metrics.

References

3.2 IEEE 1028
3.2.2 Gilb, 1993; van Veenendaal, 2004
3.2.4 Gilb, 1993; IEEE 1028
3.3 van Veenendaal, 2004


4. Test Design Techniques (K4)

285 minutes

Learning Objectives for Test Design Techniques

The objectives identify what you will be able to do following the completion of each module.

4.1 The Test Development Process (K3)

LO-4.1.1 Differentiate between a test design specification, test case specification and test procedure specification (K2)

LO-4.1.2 Compare the terms test condition, test case and test procedure (K2)

LO-4.1.3 Evaluate the quality of test cases in terms of clear traceability to the requirements and expected results (K2)

LO-4.1.4 Translate test cases into a well-structured test procedure specification at a level of detail relevant to the knowledge of the testers (K3)

4.2 Categories of Test Design Techniques (K2)

LO-4.2.1 Recall reasons that both specification-based (black-box) and structure-based (white-box) test design techniques are useful and list the common techniques for each (K1)

LO-4.2.2 Explain the characteristics, commonalities, and differences between specification-based testing, structure-based testing and experience-based testing (K2)

4.3 Specification-based or Black-box Techniques (K3)

LO-4.3.1 Write test cases from given software models using equivalence partitioning, boundary value analysis, decision tables and state transition diagrams/tables (K3)

LO-4.3.2 Explain the main purpose of each of the four testing techniques, what level and type of testing could use the technique, and how coverage may be measured (K2)

LO-4.3.3 Explain the concept of use case testing and its benefits (K2)

4.4 Structure-based or White-box Techniques (K4)

LO-4.4.1 Describe the concept and value of code coverage (K2)

LO-4.4.2 Explain the concepts of statement and decision coverage, and give reasons why these concepts can also be used at test levels other than component testing (e.g., on business procedures at system level) (K2)

LO-4.4.3 Write test cases from given control flows using statement and decision test design techniques (K3)

LO-4.4.4 Assess statement and decision coverage for completeness with respect to defined exit criteria (K4)

4.5 Experience-based Techniques (K2)

LO-4.5.1 Recall reasons for writing test cases based on intuition, experience and knowledge about common defects (K1)

LO-4.5.2 Compare experience-based techniques with specification-based testing techniques (K2)

4.6 Choosing Test Techniques (K2)

LO-4.6.1 Classify test design techniques according to their fitness to a given context, for the test basis, respective models and software characteristics (K2)


4.1 The Test Development Process (K3)

15 minutes

Terms

Test case specification, test design, test execution schedule, test procedure specification, test script, traceability

Background

The test development process described in this section can be done in different ways, from very informal with little or no documentation, to very formal (as it is described below). The level of formality depends on the context of the testing, including the maturity of testing and development processes, time constraints, safety or regulatory requirements, and the people involved.

During test analysis, the test basis documentation is analyzed in order to determine what to test, i.e., to identify the test conditions. A test condition is defined as an item or event that could be verified by one or more test cases (e.g., a function, transaction, quality characteristic or structural element).

Establishing traceability from test conditions back to the specifications and requirements enables both effective impact analysis when requirements change, and determining requirements coverage for a set of tests. During test analysis the detailed test approach is implemented to select the test design techniques to use based on, among other considerations, the identified risks (see Chapter 5 for more on risk analysis).

During test design the test cases and test data are created and specified. A test case consists of a set of input values, execution preconditions, expected results and execution postconditions, defined to cover a certain test objective(s) or test condition(s). The 'Standard for Software Test Documentation' (IEEE STD 829-1998) describes the content of test design specifications (containing test conditions) and test case specifications.

Expected results should be produced as part of the specification of a test case and include outputs, changes to data and states, and any other consequences of the test. If expected results have not been defined, then a plausible, but erroneous, result may be interpreted as the correct one. Expected results should ideally be defined prior to test execution.
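
As a minimal sketch of the test case structure just described (the field names and sample values are illustrative, not the IEEE STD 829-1998 wording):

    from dataclasses import dataclass, field

    # Minimal sketch of a test case as described above.
    @dataclass
    class TestCase:
        identifier: str
        test_condition: str              # item or event being verified
        preconditions: list              # execution preconditions
        input_values: dict               # the set of input values
        expected_results: list           # defined before execution
        postconditions: list = field(default_factory=list)

    withdrawal = TestCase(
        identifier="TC-017",
        test_condition="Withdrawal within available balance",
        preconditions=["account exists", "balance is 500.00"],
        input_values={"amount": 200.00},
        expected_results=["balance is 300.00", "cash dispensed"],
    )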

During test implementation the test cases are developed, implemented, prioritized and organized in the test procedure specification (IEEE STD 829-1998). The test procedure specifies the sequence of actions for the execution of a test. If tests are run using a test execution tool, the sequence of actions is specified in a test script (which is an automated test procedure).

The various test procedures and automated test scripts are subsequently formed into a test execution schedule that defines the order in which the various test procedures, and possibly automated test scripts, are executed. The test execution schedule will take into account such factors as regression tests, prioritization, and technical and logical dependencies.
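
A minimal sketch of building such a schedule (the procedure names, priorities and dependencies are invented for illustration): run higher-priority procedures first, but never before the procedures they logically depend on.

    # Minimal sketch: order test procedures by priority while
    # respecting logical dependencies between them.
    import graphlib  # standard library (Python 3.9+)

    depends_on = {
        "TP-login": [],
        "TP-deposit": ["TP-login"],
        "TP-withdraw": ["TP-login", "TP-deposit"],
    }
    priority = {"TP-login": 1, "TP-withdraw": 2, "TP-deposit": 3}  # 1 = highest

    sorter = graphlib.TopologicalSorter(depends_on)
    sorter.prepare()
    schedule = []
    while sorter.is_active():
        # Among the procedures whose dependencies are satisfied,
        # schedule the highest-priority ones first.
        for proc in sorted(sorter.get_ready(), key=priority.get):
            schedule.append(proc)
            sorter.done(proc)
    print(schedule)  # ['TP-login', 'TP-deposit', 'TP-withdraw']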


4.2 Categories of Test Design Techniques (K2)

15 minutes

Terms

Black-box test design technique, experience-based test design technique, test design technique, white-box test design technique

Background

The purpose of a test design technique is to identify test conditions, test cases, and test data.

It is a classic distinction to denote test techniques as black-box or white-box. Black-box test design techniques (also called specification-based techniques) are a way to derive and select test conditions, test cases, or test data based on an analysis of the test basis documentation. This includes both functional and non-functional testing. Black-box testing, by definition, does not use any information regarding the internal structure of the component or system to be tested. White-box test design techniques (also called structural or structure-based techniques) are based on an analysis of the structure of the component or system. Black-box and white-box testing may also be combined with experience-based techniques to leverage the experience of developers, testers and users to determine what should be tested.

Some techniques fall clearly into a single category; others have elements of more than one category.

This syllabus refers to specification-based test design techniques as black-box techniques and structure-based test design techniques as white-box techniques. In addition, experience-based test design techniques are covered.

Common characteristics of specification-based test design techniques include:

o Models, either formal or informal, are used for the specification of the problem to be solved, the software or its components
o Test cases can be derived systematically from these models

Common characteristics of structure-based test design techniques include:

o Information about how the software is constructed is used to derive the test cases (e.g., code and detailed design information)
o The extent of coverage of the software can be measured for existing test cases, and further test cases can be derived systematically to increase coverage (see the sketch after these lists)

Common characteristics of experience-based test design techniques include:

o The knowledge and experience of people are used to derive the test cases
o The knowledge of testers, developers, users and other stakeholders about the software, its usage and its environment is one source of information
o Knowledge about likely defects and their distribution is another source of information
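
As a minimal illustration of the coverage-measurement point above (the statement counts are invented; coverage measures themselves are treated in Section 4.4):

    # Minimal sketch of measuring coverage for existing test cases:
    # statement coverage = statements exercised / total statements.
    # The statement ids are invented; real tools (e.g., coverage.py)
    # obtain them by tracing execution.
    total_statements = 40
    exercised = {1, 2, 3, 5, 8, 13, 21, 34}    # hit by the current test cases
    coverage = len(exercised) / total_statements
    print(f"{coverage:.0%}")                   # 20% -> derive further tests to increase it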


4.3 Specification-based or Black-box Techniques (K3)

150 minutes

Terms

Boundary value analysis, decision table testing, equivalence partitioning, state transition testing, use case testing

4.3.1 Equivalence Partitioning (K3)

In equivalence partitioning, inputs to the software or system are divided into groups that are expected to exhibit similar behavior, so they are likely to be processed in the same way. Equivalence partitions (or classes) can be found for both valid data, i.e., values that should be accepted, and invalid data, i.e., values that should be rejected. Partitions can also be identified for outputs, internal values, time-related values (e.g., before or after an event) and for interface parameters (e.g., integrated components being tested during integration testing). Tests can be designed to cover all valid and invalid partitions. Equivalence partitioning is applicable at all levels of testing.

Equivalence partitioning can be used to achieve input and output coverage goals. It can be applied to human input, input via interfaces to a system, or interface parameters in integration testing.
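
A minimal sketch, assuming (purely for illustration) a field that accepts integers from 1 to 100: one representative value per partition suffices, since all values in a partition are expected to be processed the same way.

    # Assumed system under test (illustrative): accepts integers 1..100.
    def accepts(value):
        return 1 <= value <= 100

    # One representative test value per equivalence partition:
    representatives = {
        "invalid - below range": 0,
        "valid - within range": 50,
        "invalid - above range": 150,
    }
    for partition, value in representatives.items():
        expected = partition.startswith("valid")
        assert accepts(value) == expected, partition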

4.3.2 Boundary Value Analysis (K3)

Behavior at the edge of each equivalence partition is more likely to be incorrect than behavior within the partition, so boundaries are an area where testing is likely to yield defects. The maximum and minimum values of a partition are its boundary values. A boundary value for a valid partition is a valid boundary value; the boundary of an invalid partition is an invalid boundary value. Tests can be designed to cover both valid and invalid boundary values. When designing test cases, a test for each boundary value is chosen.

Boundary value analysis can be applied at all test levels. It is relatively easy to apply and its defect-finding capability is high. Detailed specifications are helpful in determining the interesting boundaries.

This technique is often considered as an extension of equivalence partitioning or other black-box test design techniques. It can be used on equivalence classes for user input on screen as well as, for example, on time ranges (e.g., time out, transactional speed requirements) or table ranges (e.g., table size is 256*256).
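
Continuing the illustrative 1..100 field from the previous sketch, each boundary of the valid partition and the invalid value just beyond it gets a test:

    # Boundary value analysis for the assumed 1..100 field.
    VALID_MIN, VALID_MAX = 1, 100
    boundary_tests = [
        (VALID_MIN - 1, False),  # invalid boundary value (0)
        (VALID_MIN, True),       # valid lower boundary (1)
        (VALID_MAX, True),       # valid upper boundary (100)
        (VALID_MAX + 1, False),  # invalid boundary value (101)
    ]
    for value, should_accept in boundary_tests:
        assert accepts(value) == should_accept, value  # accepts() as sketched above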

4.3.3 Decision Table Testing (K3)

Decision tables are a good way to capture system requirements that contain logical conditions, and to document internal system design. They may be used to record complex business rules that a system is to implement. When creating decision tables, the specification is analyzed, and conditions and actions of the system are identified. The input conditions and actions are most often stated in such a way that they must be true or false (Boolean). The decision table contains the triggering conditions, often combinations of true and false for all input conditions, and the resulting actions for each combination of conditions. Each column of the table corresponds to a business rule that defines a unique combination of conditions which result in the execution of the actions associated with that rule. The coverage standard commonly used with decision table testing is to have at least one test per column in the table, which typically involves covering all combinations of triggering conditions.
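
As a minimal sketch (the business rule is invented for illustration): a two-condition decision table has four columns, and the coverage standard above calls for at least one test per column.

    # Assumed business rule (illustrative): free shipping applies only
    # to members whose order total exceeds 100.
    def free_shipping(member, order_over_100):
        return member and order_over_100

    # Columns of the decision table: each key is one business rule,
    # i.e., a unique combination of the Boolean conditions, mapped to
    # the expected action.
    decision_table = {
        (True, True): True,
        (True, False): False,
        (False, True): False,
        (False, False): False,
    }
    for conditions, expected_action in decision_table.items():
        assert free_shipping(*conditions) == expected_action  # one test per column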
