
Certified Tester Foundation Level Syllabus
International Software Testing Qualifications Board
Version 2011, 31-Mar-2011

1.1 Why is Testing Necessary? (K2)

20 minutes

Terms

Bug, defect, error, failure, fault, mistake, quality, risk

1.1.1 Software Systems Context (K1)

Software systems are an integral part of life, from business applications (e.g., banking) to consumer products (e.g., cars). Most people have had an experience with software that did not work as expected. Software that does not work correctly can lead to many problems, including loss of money, time or business reputation, and could even cause injury or death.

1.1.2 Causes of Software Defects (K2)

A human being can make an error (mistake), which produces a defect (fault, bug) in the program code, or in a document. If a defect in code is executed, the system may fail to do what it should do (or do something it shouldn’t), causing a failure. Defects in software, systems or documents may result in failures, but not all defects do so.
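As a minimal illustration (not taken from the syllabus; the function and values are hypothetical), the sketch below shows the chain described above: a programmer's error introduces a defect into the code, and the defect only leads to a failure when that code is executed.

```python
def average(values):
    """Return the arithmetic mean of a list of numbers.

    Defect: the divisor is off by one (len(values) + 1 instead of
    len(values)), the kind of slip (error/mistake) a person makes
    under time pressure.
    """
    return sum(values) / (len(values) + 1)

# The defect sits silently in the code; only execution turns it into a failure:
print(average([4, 4, 4]))   # expected 4.0, but the program prints 3.0
```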

Defects occur because human beings are fallible and because there is time pressure, complex code, complexity of infrastructure, changing technologies, and/or many system interactions.

Failures can be caused by environmental conditions as well. For example, radiation, magnetism, electronic fields, and pollution can cause faults in firmware or influence the execution of software by changing the hardware conditions.

1.1.3 Role of Testing in Software Development, Maintenance and Operations (K2)

Rigorous testing of systems and documentation can help to reduce the risk of problems occurring during operation and contribute to the quality of the software system, if the defects found are corrected before the system is released for operational use.

Software testing may also be required to meet contractual or legal requirements, or industry-specific standards.

1.1.4 Testing and Quality (K2)

With the help of testing, it is possible to measure the quality of software in terms of defects found, for both functional and non-functional software requirements and characteristics (e.g., reliability, usability, efficiency, maintainability and portability). For more information on non-functional testing see Chapter 2; for more information on software characteristics see ‘Software Engineering – Software Product Quality’ (ISO 9126).

Testing can give confidence in the quality of the software if it finds few or no defects. A properly designed test that passes reduces the overall level of risk in a system. When testing does find defects, the quality of the software system increases when those defects are fixed.

Lessons should be learned from previous projects. By understanding the root causes of defects found in other projects, processes can be improved, which in turn should prevent those defects from reoccurring and, as a consequence, improve the quality of future systems. This is an aspect of quality assurance.

Testing should be integrated as one of the quality assurance activities (i.e., alongside development standards, training and defect analysis).


1.1.5 How Much Testing is Enough? (K2)

Deciding how much testing is enough should take account of the level of risk, including technical, safety, and business risks, and project constraints such as time and budget. Risk is discussed further in Chapter 5.

Testing should provide sufficient information to stakeholders to make informed decisions about the release of the software or system being tested, for the next development step or handover to customers.


1.2 What is Testing? (K2)

30 minutes

Terms

Debugging, requirement, review, test case, testing, test objective

Background

A common perception of testing is that it only consists of running tests, i.e., executing the software. This is part of testing, but not all of the testing activities.

Test activities exist before and after test execution. These activities include planning and control, choosing test conditions, designing and executing test cases, checking results, evaluating exit criteria, reporting on the testing process and system under test, and finalizing or completing closure activities after a test phase has been completed. Testing also includes reviewing documents (including source code) and conducting static analysis.

Both dynamic testing and static testing can be used as a means for achieving similar objectives, and will provide information that can be used to improve both the system being tested and the development and testing processes.

Testing can have the following objectives:
o Finding defects
o Gaining confidence about the level of quality
o Providing information for decision-making
o Preventing defects

The thought process and activities involved in designing tests early in the life cycle (verifying the test basis via test design) can help to prevent defects from being introduced into code. Reviews of documents (e.g., requirements) and the identification and resolution of issues also help to prevent defects appearing in the code.

Different viewpoints in testing take different objectives into account. For example, in development testing (e.g., component, integration and system testing), the main objective may be to cause as many failures as possible so that defects in the software are identified and can be fixed. In acceptance testing, the main objective may be to confirm that the system works as expected, to gain confidence that it has met the requirements. In some cases the main objective of testing may be to assess the quality of the software (with no intention of fixing defects), to give information to stakeholders of the risk of releasing the system at a given time. Maintenance testing often includes testing that no new defects have been introduced during development of the changes. During operational testing, the main objective may be to assess system characteristics such as reliability or availability.

Debugging and testing are different. Dynamic testing can show failures that are caused by defects. Debugging is the development activity that finds, analyzes and removes the cause of the failure. Subsequent re-testing by a tester ensures that the fix does indeed resolve the failure. The responsibility for these activities is usually divided: testers test and developers debug.
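A hypothetical sketch (the leap-year function and test below are illustrative, not from the syllabus) of this division of responsibility: dynamic testing shows the failure, debugging locates and removes the defect, and re-testing confirms the fix.

```python
def is_leap_year(year):
    # Defect: the Gregorian century rule is missing, so 1900 is
    # wrongly reported as a leap year.
    return year % 4 == 0

def test_is_leap_year():
    assert is_leap_year(2000) is True
    assert is_leap_year(1900) is False   # fails: dynamic testing shows a failure

if __name__ == "__main__":
    test_is_leap_year()   # raises AssertionError until the defect is removed

# Debugging (a development activity) would change the return statement to:
#     return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
# Running test_is_leap_year again afterwards is confirmation testing (re-testing).
```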

The process of testing and the testing activities are explained in Section 1.4.


1.3 Seven Testing Principles (K2)

35 minutes

Terms

Exhaustive testing

Principles

A number of testing principles have been suggested over the past 40 years and offer general guidelines common for all testing.

Principle 1 – Testing shows presence of defects

Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness.

Principle 2 – Exhaustive testing is impossible

Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus testing efforts.
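A rough back-of-the-envelope sketch (the fields, value counts and execution rate are made-up figures) of why exhaustive testing is not feasible even for a small input form:

```python
# Hypothetical input screen with three independent fields.
distinct_values = {
    "age":       120,         # 1..120
    "country":   200,         # country codes
    "user_name": 26 ** 8,     # 8 lower-case letters only
}

combinations = 1
for count in distinct_values.values():
    combinations *= count     # every combination of inputs

print(f"{combinations:.2e} input combinations")           # ~5.01e15
years = combinations / 1_000 / (3600 * 24 * 365)          # at 1,000 tests/second
print(f"roughly {years:,.0f} years to execute them all")  # on the order of 10^5 years
```

Risk analysis and prioritization would instead select a small, representative subset of these combinations.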

Principle 3 – Early testing

To find defects early, testing activities shall be started as early as possible in the software or system development life cycle, and shall be focused on defined objectives.

Principle 4 – Defect clustering

Testing effort shall be focused proportionally to the expected and later observed defect density of modules. A small number of modules usually contains most of the defects discovered during pre-release testing, or is responsible for most of the operational failures.
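A small hypothetical sketch of using observed defect density to focus effort (module names and counts are invented):

```python
# Hypothetical defect counts from pre-release testing, used to rank modules
# so that further testing effort is focused where defects cluster.
defects_per_module = {"billing": 41, "auth": 7, "reporting": 3, "ui": 12}
kloc_per_module    = {"billing": 10, "auth": 8,  "reporting": 6, "ui": 15}

density = {m: defects_per_module[m] / kloc_per_module[m] for m in defects_per_module}

for module, d in sorted(density.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{module:10s} {d:4.1f} defects/KLOC")
# billing clearly clusters defects, so it receives proportionally more test effort.
```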

Principle 5 – Pesticide paradox

If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new defects. To overcome this “pesticide paradox”, test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to find potentially more defects.

Principle 6 – Testing is context dependent

Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.

Principle 7 – Absence-of-errors fallacy

Finding and fixing defects does not help if the system built is unusable and does not fulfill the users’ needs and expectations.


1.4 Fundamental Test Process (K1)

35 minutes

Terms

Confirmation testing, re-testing, exit criteria, incident, regression testing, test basis, test condition, test coverage, test data, test execution, test log, test plan, test procedure, test policy, test suite, test summary report, testware

Background

The most visible part of testing is test execution. But to be effective and efficient, test plans should also include time to be spent on planning the tests, designing test cases, preparing for execution and evaluating results.

The fundamental test process consists of the following main activities:
o Test planning and control
o Test analysis and design
o Test implementation and execution
o Evaluating exit criteria and reporting
o Test closure activities

Although logically sequential, the activities in the process may overlap or take place concurrently. Tailoring these main activities within the context of the system and the project is usually required.

1.4.1 Test Planning and Control (K1)

Test planning is the activity of defining the objectives of testing and the specification of test activities in order to meet the objectives and mission.

Test control is the ongoing activity of comparing actual progress against the plan, and reporting the status, including deviations from the plan. It involves taking actions necessary to meet the mission and objectives of the project. In order to control testing, the testing activities should be monitored throughout the project. Test planning takes into account the feedback from monitoring and control activities.

Test planning and control tasks are defined in Chapter 5 of this syllabus.

1.4.2 Test Analysis and Design (K1)

Test analysis and design is the activity during which general testing objectives are transformed into tangible test conditions and test cases.
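For illustration only – the requirement, boundaries and function below are hypothetical, and the formal design techniques behind such choices are covered later in the syllabus – a test condition such as “the password length must be between 8 and 64 characters” could be transformed into tangible test cases like this:

```python
# Hypothetical sketch: from one test condition to concrete, prioritized test cases.
MIN_LEN, MAX_LEN = 8, 64

def is_valid_password_length(password: str) -> bool:
    """Hypothetical test object implementing the condition."""
    return MIN_LEN <= len(password) <= MAX_LEN

# High-level test cases derived from the condition (boundary-oriented),
# as (input length, expected outcome) pairs:
test_cases = [
    (MIN_LEN - 1, False),   # just below the lower boundary
    (MIN_LEN,     True),    # lower boundary
    (MAX_LEN,     True),    # upper boundary
    (MAX_LEN + 1, False),   # just above the upper boundary
]

for length, expected in test_cases:
    actual = is_valid_password_length("x" * length)
    assert actual == expected, f"length {length}: expected {expected}, got {actual}"
print("all derived test cases passed")
```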

The test analysis and design activity has the following major tasks:
o Reviewing the test basis (such as requirements, software integrity level¹ (risk level), risk analysis reports, architecture, design, interface specifications)
o Evaluating testability of the test basis and test objects
o Identifying and prioritizing test conditions based on analysis of test items, the specification, behavior and structure of the software
o Designing and prioritizing high level test cases
o Identifying necessary test data to support the test conditions and test cases
o Designing the test environment setup and identifying any required infrastructure and tools
o Creating bi-directional traceability between test basis and test cases

¹ The degree to which software complies or must comply with a set of stakeholder-selected software and/or software-based system characteristics (e.g., software complexity, risk assessment, safety level, security level, desired performance, reliability, or cost) which are defined to reflect the importance of the software to its stakeholders.


1.4.3 Test Implementation and Execution (K1)

Test implementation and execution is the activity where test procedures or scripts are specified by combining the test cases in a particular order and including any other information needed for test execution, the environment is set up and the tests are run.
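A minimal sketch of such an automated test procedure, assuming a hypothetical add function as the test object and Python’s standard unittest and logging modules (the syllabus does not prescribe any particular tooling): test cases are combined into a suite, executed in a planned order, and the outcomes are logged together with the identity of the item under test.

```python
import logging
import unittest

SOFTWARE_UNDER_TEST = "calculator 1.2.0"   # hypothetical identifier and version

def add(a, b):                             # hypothetical function under test
    return a + b

class AddTests(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)     # compare actual with expected result

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    logging.basicConfig(filename="test_log.txt", level=logging.INFO)
    logging.info("executing suite against %s", SOFTWARE_UNDER_TEST)
    suite = unittest.TestLoader().loadTestsFromTestCase(AddTests)
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    logging.info("tests run=%d failures=%d errors=%d",
                 result.testsRun, len(result.failures), len(result.errors))
```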

Test implementation and execution has the following major tasks:
o Finalizing, implementing and prioritizing test cases (including the identification of test data)
o Developing and prioritizing test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts
o Creating test suites from the test procedures for efficient test execution
o Verifying that the test environment has been set up correctly
o Verifying and updating bi-directional traceability between the test basis and test cases
o Executing test procedures either manually or by using test execution tools, according to the planned sequence
o Logging the outcome of test execution and recording the identities and versions of the software under test, test tools and testware
o Comparing actual results with expected results
o Reporting discrepancies as incidents and analyzing them in order to establish their cause (e.g., a defect in the code, in specified test data, in the test document, or a mistake in the way the test was executed)
o Repeating test activities as a result of action taken for each discrepancy, for example, re-execution of a test that previously failed in order to confirm a fix (confirmation testing), execution of a corrected test and/or execution of tests in order to ensure that defects have not been introduced in unchanged areas of the software or that defect fixing did not uncover other defects (regression testing)

1.4.4 Evaluating Exit Criteria and Reporting (K1)

Evaluating exit criteria is the activity where test execution is assessed against the defined objectives. This should be done for each test level (see Section 2.2).

Evaluating exit criteria has the following major tasks:
o Checking test logs against the exit criteria specified in test planning
o Assessing if more tests are needed or if the exit criteria specified should be changed
o Writing a test summary report for stakeholders
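A small hypothetical sketch of the first two tasks – checking test-log figures against exit criteria agreed during test planning; the thresholds and numbers are illustrative only:

```python
# Exit criteria agreed in test planning (hypothetical values).
exit_criteria = {"min_pass_rate": 0.95, "max_open_critical_incidents": 0}

# Figures extracted from the test log (hypothetical values).
test_log = {"tests_run": 240, "tests_passed": 231, "open_critical_incidents": 1}

pass_rate = test_log["tests_passed"] / test_log["tests_run"]
met = (pass_rate >= exit_criteria["min_pass_rate"]
       and test_log["open_critical_incidents"] <= exit_criteria["max_open_critical_incidents"])

print(f"pass rate: {pass_rate:.1%}")
print("exit criteria met" if met else "more testing needed, or the criteria must be revisited")
```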

1.4.5 Test Closure Activities (K1)

Test closure activities collect data from completed test activities to consolidate experience, testware, facts and numbers. Test closure activities occur at project milestones such as when a software system is released, a test project is completed (or cancelled), a milestone has been achieved, or a maintenance release has been completed.


Test closure activities include the following major tasks:
o Checking which planned deliverables have been delivered
o Closing incident reports or raising change records for any that remain open
o Documenting the acceptance of the system
o Finalizing and archiving testware, the test environment and the test infrastructure for later reuse
o Handing over the testware to the maintenance organization
o Analyzing lessons learned to determine changes needed for future releases and projects
o Using the information gathered to improve test maturity


1.5 The Psychology of Testing (K2)

25 minutes

Terms

Error guessing, independence

Background

The mindset to be used while testing and reviewing is different from that used while developing software. With the right mindset developers are able to test their own code, but separation of this responsibility to a tester is typically done to help focus effort and provide additional benefits, such as an independent view by trained and professional testing resources. Independent testing may be carried out at any level of testing.

A certain degree of independence (avoiding the author bias) often makes the tester more effective at finding defects and failures. Independence is not, however, a replacement for familiarity, and developers can efficiently find many defects in their own code. Several levels of independence can be defined as shown here from low to high:

o Tests designed by the person(s) who wrote the software under test (low level of independence)
o Tests designed by another person(s) (e.g., from the development team)
o Tests designed by a person(s) from a different organizational group (e.g., an independent test team) or test specialists (e.g., usability or performance test specialists)
o Tests designed by a person(s) from a different organization or company (i.e., outsourcing or certification by an external body)

People and projects are driven by objectives. People tend to align their plans with the objectives set by management and other stakeholders, for example, to find defects or to confirm that software meets its objectives. Therefore, it is important to clearly state the objectives of testing.

Identifying failures during testing may be perceived as criticism against the product and against the author. As a result, testing is often seen as a destructive activity, even though it is very constructive in the management of product risks. Looking for failures in a system requires curiosity, professional pessimism, a critical eye, attention to detail, good communication with development peers, and experience on which to base error guessing.

If errors, defects or failures are communicated in a constructive way, bad feelings between the testers and the analysts, designers and developers can be avoided. This applies to defects found during reviews as well as in testing.

The tester and test leader need good interpersonal skills to communicate factual information about defects, progress and risks in a constructive way. For the author of the software or document, defect information can help them improve their skills. Defects found and fixed during testing will save time and money later, and reduce risks.

Communication problems may occur, particularly if testers are seen only as messengers of unwanted news about defects. However, there are several ways to improve communication and relationships between testers and others:


o Start with collaboration rather than battles – remind everyone of the common goal of better quality systems
o Communicate findings on the product in a neutral, fact-focused way without criticizing the person who created it, for example, write objective and factual incident reports and review findings
o Try to understand how the other person feels and why they react as they do
o Confirm that the other person has understood what you have said and vice versa


1.6 Code of Ethics

10 minutes

Involvement in software testing enables individuals to learn confidential and privileged information. A code of ethics is necessary, among other reasons to ensure that the information is not put to inappropriate use. Recognizing the ACM and IEEE code of ethics for engineers, the ISTQB states the following code of ethics:

PUBLIC - Certified software testers shall act consistently with the public interest

CLIENT AND EMPLOYER - Certified software testers shall act in a manner that is in the best interests of their client and employer, consistent with the public interest

PRODUCT - Certified software testers shall ensure that the deliverables they provide (on the products and systems they test) meet the highest professional standards possible

JUDGMENT - Certified software testers shall maintain integrity and independence in their professional judgment

MANAGEMENT - Certified software test managers and leaders shall subscribe to and promote an ethical approach to the management of software testing

PROFESSION - Certified software testers shall advance the integrity and reputation of the profession consistent with the public interest

COLLEAGUES - Certified software testers shall be fair to and supportive of their colleagues, and promote cooperation with software developers

SELF - Certified software testers shall participate in lifelong learning regarding the practice of their profession and shall promote an ethical approach to the practice of the profession

References

1.1.5 Black, 2001, Kaner, 2002
1.2 Beizer, 1990, Black, 2001, Myers, 1979
1.3 Beizer, 1990, Hetzel, 1988, Myers, 1979
1.4 Hetzel, 1988
1.4.5 Black, 2001, Craig, 2002
1.5 Black, 2001, Hetzel, 1988

