4. Functional Acceptance Testing
(1) Story-based (aka use-case based)
   - In the main books on XP, including Crispin's book on testing in XP, story-based tests are described as the primary vehicle for functional acceptance testing.

(2) Typical tests are fairly simple (a minimal sketch follows this list)
   a. Happy path:
      - The main sequence(s) that lead to the result that the actor is trying to achieve
   b. Sad paths
      - Error cases or other paths that don't achieve the desired result
   c. Alternate paths
      - Non-error sequences that lead to results other than the typical, main result
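
   To make the happy / sad / alternate split concrete, here is a minimal sketch of a story-level acceptance test in JUnit. The story ("a customer withdraws cash from her account"), the Account class, and its withdrawal rules are all invented for this illustration; they are not taken from the XP literature.

      import org.junit.Test;
      import static org.junit.Assert.*;

      // Hypothetical story: "A customer withdraws cash from her account."
      // Account is a stand-in class written only for this sketch.
      class Account {
          private int balance;
          Account(int openingBalance) { balance = openingBalance; }
          int getBalance() { return balance; }
          // Dispenses the requested amount; refuses overdrafts by dispensing nothing.
          int withdraw(int amount) {
              if (amount <= 0 || amount > balance) return 0;
              balance -= amount;
              return amount;
          }
      }

      public class WithdrawStoryTest {
          @Test
          public void happyPath_customerGetsHerCash() {
              Account account = new Account(100);
              assertEquals(40, account.withdraw(40));   // main sequence, desired result
              assertEquals(60, account.getBalance());
          }

          @Test
          public void sadPath_overdraftIsRefused() {
              Account account = new Account(100);
              assertEquals(0, account.withdraw(500));   // error case: nothing dispensed
              assertEquals(100, account.getBalance());  // balance unchanged
          }

          @Test
          public void alternatePath_accountEmptiedExactly() {
              Account account = new Account(100);
              assertEquals(100, account.withdraw(100)); // non-error, non-typical result
              assertEquals(0, account.getBalance());
          }
      }

   Concatenating stories, as in point (3) below, would amount to chaining several such steps inside one longer scenario test.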

(3) Might concatenate stories to develop more complex scenario tests
   - www.testingeducation.org/articles

(4) It's common for industrial test groups to use only one or two dominant test techniques. 

(5) We recommend that they balance their efforts by using several techniques
   - http://www.testingeducation.org/articles/blackbox_paradigms_tutorial.pdf

(6) The reason is that different tests are more effective for different types of bugs, or under different contexts.
   - http://www.satisfice.com/tools/satisfice-tsm-4p.pdf

(7) Scenario testing is just one style (or main technique) for functional acceptance testing. 

(8) Points of disagreement with the XP literature
   a. It's often said that all acceptance testing should be automated. I think this is a serious error.
      - There are significant cost/benefit tradeoffs associated with UI-level automated testing.
      - The choice of all-versus-some automation should be pragmatic, based on the project's context.
   b. I also disagree that all non-automated testing should be fully scripted, so that the person executing it behaves as if s/he were an automaton.
      - There are significant benefits to exploratory manual testing.
      - The documentation cost of scripting is high, and the cost of future change is high because that documentation must be maintained.
      - Fully scripted manual testing is an ineffective way to find bugs.


     
5. Parafunctional Testing
(1) Functional testing is only part of the story. Consider these other attributes, which we call para-functional (or non-functional):
Security, Accessibility, Supportability, Localizability, Compatibility (configurations), Interoperability, Installability and uninstallability, Usability, Performance, Scalability

(2) These are probably not well handled by customer stories.
   a. These aren't well defined as a set of features.
   b. They are (or aren't) built into every feature.
   c. These are also often very technical
      - the customer is not likely to be an authority on scalability or interoperability in the way that she is on her own business processes.

6. Human Development in XP
(1) The programming process advocated in XP automatically provides the following to the programmer:
   - Development and visibility of skill
   - Rapid feedback
   - Detailed review of the work product
   - Honesty
   - Peer support
   - Broadening experiences

(2) How will / should these be provided for the testing work?
   a. XP was not tailored to provide personal / career development to a person who works as a tester.
   b. How should the process in your company be adapted so that it supports the development of the testing skills and knowledge of the person doing testing, just as it supports the programming skills and knowledge of the person doing programming?

7. Role Alternatives
(1) The programmers are the testers.

(2) The customer organization supplies the testers.

(3) A third role, the tester, is introduced into XP.

(4) The programmers (or the customers) hire consultants (specialists) and dispose of them at the end of the project.
   - I think that the idea of significant reliance on specialist consultants for parafunctional testing is unrealistic and undesirable.

8. Avoid Repeating the Following Mistakes
(1) The purpose of testing is to find bugs
   a. The purpose of testing is to provide information, under one of several competing information-gathering missions:
      - Find defects
      - Maximize bug count
      - Block premature product releases
      - Help managers make ship / no-ship decisions
      - Minimize technical support costs
      - Assess conformance to specification
      - Conform to regulations
      - Minimize safety-related lawsuit risk
      - Find safe scenarios for use of the product
      - Assess quality
      - Verify correctness of the product
      - Assure quality

(2) The test group works independently of the programming group

(3) Tests are designed without knowledge of the underlying code
   - Think in terms of what knowledge the tester SHOULD have rather than what knowledge the tester should avoid

(4) Automated tests are developed at the user interface level, by non-programmers
   - These are inefficient and high-maintenance.
   - Many will be replaced with glass box tests or API-level tests (a sketch follows below).
   - Others can be avoided via exploratory testing.
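
   As a hedged illustration of the previous point, here is what moving a check below the GUI might look like. The DateParser class and its parsing rule are invented for this sketch; the idea is only that behavior a GUI-level script would exercise through a date field can often be tested directly at the API level, with far less maintenance.

      import org.junit.Test;
      import static org.junit.Assert.*;

      // Hypothetical API-level test: exercise the parsing component that sits
      // behind a GUI date field, instead of driving the field through the UI.
      public class DateParserApiTest {
          // Stand-in for the component under test, written only for this sketch.
          static class DateParser {
              // Accepts "YYYY-MM-DD"; returns null for anything it cannot parse.
              static int[] parse(String text) {
                  if (text == null || !text.matches("\\d{4}-\\d{2}-\\d{2}")) return null;
                  String[] parts = text.split("-");
                  return new int[] { Integer.parseInt(parts[0]),
                                     Integer.parseInt(parts[1]),
                                     Integer.parseInt(parts[2]) };
              }
          }

          @Test
          public void parsesWellFormedDate() {
              assertArrayEquals(new int[] {2004, 12, 31}, DateParser.parse("2004-12-31"));
          }

          @Test
          public void rejectsMalformedDate() {
              assertNull(DateParser.parse("31/12/2004"));
          }
      }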

(5) Tests are designed early in development
   - We should design tests as we need them

(6) Tests are designed to be reused time and time again, as regression tests
   - Change detectors, yes.
   - GUI level regression tests? Trade costs and benefits. What is the inertial result?

(7) Black box testers should design the build verification tests, even the ones to be run by programmers
   - Be cautious about replacing programmer regression with black box regression

(8) Testers should assume that the programmers did a light job of testing and so should extensively cover the basics
   - This assumption is obsolete in the context of XP

(9) The pool of tests should cover every line and branch in the program, or perhaps every basis path
   - Absurd in black box testing

(10) Manual tests are documented in great procedural detail so that they can be handed down to less experienced or less skilled testers

(11) There should be at least one thoroughly documented test for every requirement item or specification item
   - Is the emphasis on existence of one test (why only one?) or on the documentation? Does this focus us on the right issues?

(12) Test cases should be based on documented characteristics of the program
   - Hopefully, this is considered obsolete thinking in XP

(13) Test cases should be documented independently, ideally stored in a test case management system that describes the pre-conditions, procedural details, post-conditions, and basis (such as a trace to requirements) of each individual test case
   - Inertial expense is enormous. What are the advantages?

(14) Failures should be reported into a bug tracking system
   - This is often a good rule, but it is subordinate to the overall process.
   - The purpose of the bug tracking process is to get the right bugs fixed.
   - Other objectives are normally (not always) secondary.

(15) The test group can block release if product quality is too low
