Current category: Test Automation (61)


Programming Paradigms in Test Automation

May 14, 2009
Published in I.M.Testy

kojenchieh 發表在 痞客邦 留言(0) 人氣()


Automate This!

April 11, 2007
Posted by Michael
Published in Test Guide



Return on Investment for Automated Testing

Apr 02 2009
Posted by Amr Elssamadisy
Published in InfoQ



What are the Qualities of a Good Test?

Oct 03, 2008
Posted by Mark Levison
Published in InfoQ



When to use automated tests

March 31st, 2009
Published in Scary Tester

Test automation is a very useful and impressive tool that can make testing more efficient. However, it is not suitable for every project, whether because of a lack of time or because of technical limitations.


What skills does a test automation role require?

QA Automation Skill Matrices (reviewing your career or hiring a resource)

5 February 2009
Posted by Albert Gareev 
Published in Software Testing Club



Challenges in Implementation of Automation Testing

22 March 2009
Posted by artis
Published in Software Testing Club


How to get started with test automation

How to catch up on test automation

Jan 03, 2008
Posted by Henrik Kniberg


What are the possible organizational structures for test automation?

Recently someone asked me to sit down with their team and discuss how to get started with test automation. It is a big topic, and not an easy one to answer.

Here are some thoughts on common organizational structures for test automation:

Organization 1
A. Role
 - Only a limited number of dedicated test developers write the test automation programs



GUI test automation is not child's play

March 12, 2009
Posted by Bj Rollison
Published in I. M. Testy


Using Microsoft Hyper-V for automated testing

Automating Software Testing with Microsoft Hyper-V

Posted by Jani Jarvinen
Published in Developer.com

Every time we run tests, a large part of QA's effort goes into preparing environments: not just a clean environment, but all kinds of configurations. QA often wonders why life has to be wasted on such thankless work.

So when VMware appeared, software testing saw a ray of hope. In particular, once it offered an API that let you automate these workflows, QA could not have been happier.

Microsoft, not to be outdone, joined the fray with Virtual PC and later Hyper-V. It has now released Hyper-V Server 2008, which, much like VMware's ESXi, is free of charge.

So with Hyper-V Server 2008, plus Visual Studio Team Foundation Server (TFS) and Team Build, you can compile, deploy, and test in one click.

Here is an article introducing how to use Hyper-V for automated testing. Enjoy it!



10 Things You Might Not Know About Automation
By Linda Hayes, Better Software, Jan/Feb 2009

1. Test data is the hardest part
- Automation is all about repeatability, and if your data is unstable or unpredictable, then your tests can't be repeated.
- Consider using your automation tool to load data so you always know the state of your test environment.
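A minimal sketch of that advice, in C++ for illustration (the data store and names here are invented, not from the article): have the automation seed a known data set before every test, so each run starts from the same state.

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical in-memory data store, standing in for a real test database.
using TestDb = std::map<std::string, double>;

// The automation loads a known data set before every test, so the state
// of the environment is predictable rather than whatever the previous
// run left behind.
TestDb load_known_fixture() {
    return TestDb{{"account-1", 100.0}, {"account-2", 250.0}};
}

// Because every run starts from the same data, the test is repeatable.
bool test_withdraw_is_repeatable() {
    TestDb db = load_known_fixture();  // fresh, predictable state
    db["account-1"] -= 30.0;           // action under test
    return db["account-1"] == 70.0;    // deterministic expected result
}
```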

2. Record and playback is a marketing gimmick, and should not be your entire implementation strategy
- Designed to make it look easy long enough for your check to clear, recorded scripts are nothing more than bad, fragile code.
- The mere mention of this term by a vendor should be grounds for expulsion from serious consideration, as it has been proven responsible for untold failed automation efforts. Don't be fooled.

3. Don't write programs to test programs
- Don't try to replicate application logic in your tests or you will end up with more code than the application itself has.
- Write your automated test to expect the expected and use logic only to recover from the unexpected
(I honestly don't quite understand what the author is emphasizing here - if you know, please share.)
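One possible reading, sketched in C++ (the function and values are made up for illustration): a test that recomputes the expected result with the same formula as the application can never fail, while a test that asserts an independently known value can.

```cpp
#include <cassert>

// Function under test (illustrative only).
int apply_discount(int price) { return price * 90 / 100; }

// Anti-pattern: the test replicates the application's logic, so a bug
// in the formula would appear on both sides of the comparison and the
// test could never fail.
bool test_that_replicates_logic(int price) {
    return apply_discount(price) == price * 90 / 100;  // proves nothing
}

// "Expect the expected": assert a concrete value worked out independently
// (by hand, or from the specification).
bool test_that_expects_known_value() {
    return apply_discount(200) == 180;
}
```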

4. More is not necessarily better
- Just because automation can run more tests than you can perform manually does not mean more is better.
- Every test takes time to design, develop, execute, and analyze.
- Worry about coverage, not quantity.

5. Maintainability is a key consideration
- We test software because it changes.
- If it takes too long to update your tests, you can't keep up and you will have to revert to manual testing to meet schedules.
- If a vendor tells you it is easier to re-create the test than to maintain it, run for your life.
- Be sure that maintainability is a key quality of any test library design.

6. Don't assume automation can only be done at the end
- Anyone who says you can't automate until the application is complete and stable does not know how to design a proper test.
- Modern techniques allow tests to be automated before the code is even written, allowing automation to play a part in even the most agile of environments.

7. Don't assume automation removes the need for domain experts
- If an automation tool is so technical it cannot be used by the people who know your application best, keep looking.
- Programming prowess is not a substitute for domain expertise.
- Testing is only as good as the tester.

8. Establish internal development rules
- Establish and enforce naming conventions, design standards, and change control procedures.
- Without them, you will lose track of your test assets, resulting in duplication and omission.
- If you cannot find a test, you cannot use it, and if you cannot make sense of it, you cannot maintain it.

9. Document your design rationale
- The most elegant of all architectures has no value if you cannot understand it. Many a genius has labored over an approach only to leave confusion behind when he leaves.
- Insist on diagrams and documentation that describe the overall structure and the purpose of components.

10. Get the developers involved too
- Unless the developers cooperate by delivering testable code in the form of persistent object names, exposed methods and properties, and enough heads-up on changes for you to react, all of your efforts will be in vain.
- Make programmers your partners, educating them about what it takes to automate and supporting them with automated build tests and other time savers.


The value of unit testing is overrated

Testing is overrated
Posted by Luke Francl
on Friday, July 11

In this article the author makes the point that the value of testing is overrated. Agile emphasizes that developers should write unit tests and adopt TDD, but the author argues this is not enough: the effect has been oversold, and in practice developers still spend a lot of time debugging. So, as Steve McConnell (author of Code Complete) puts it, you must combine a variety of different methods to ensure software quality, because each method finds different kinds of bugs.

1. The problems with developer testing
First, the author explains why developer unit testing alone is not enough. Testing done by developers has some limitations; here is what the author has observed.

A. Testing is very hard... and most developers are not very good at it
- Programmers tend to write “clean” tests that verify the code works, not “dirty” tests that test error conditions.
- Steve McConnell reports, “Immature testing organizations tend to have about five clean tests for every dirty test.
- Mature testing organizations tend to have five dirty tests for every clean test.
- This ratio is not reversed by reducing the clean tests; it’s done by creating 25 times as many dirty tests.” (Code Complete 2, p. 504)
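The distinction can be sketched in C++ (the divide function is invented for illustration): a clean test exercises the happy path, while a dirty test deliberately feeds in an error condition.

```cpp
#include <cassert>
#include <optional>

// Function under test (illustrative): an empty result signals invalid input.
std::optional<double> divide(double a, double b) {
    if (b == 0.0) return std::nullopt;  // error condition handled safely
    return a / b;
}

// "Clean" test: verifies that the code works on valid input.
bool clean_test() { return divide(10.0, 2.0).value() == 5.0; }

// "Dirty" test: verifies that the error condition fails safely
// instead of crashing or returning garbage.
bool dirty_test() { return !divide(10.0, 0.0).has_value(); }
```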

B. You cannot test code that has not been written
- Robert L. Glass discusses this several times in his book Facts and Fallacies of Software Engineering.
- Missing requirements are the hardest errors to correct, because often times only the customer can detect them.
- Unit tests with total code coverage (and even code inspections) can easily fail to detect missing code.
- Therefore, these errors can slip into production (or your iteration release).
- Tests alone won’t solve this problem, but I have found that writing tests is often a good way to suss out missing requirements.

C. Test cases can contain errors too
- Numerous studies have found that test cases are as likely to have errors as the code they’re testing (see Code Complete 2, p. 522).
- So who tests the tests? Only review of the tests can find deficiencies in the tests themselves.

D. Developer testing is not very effective at finding bugs
- To cap it all off, developer testing isn’t all that effective at finding defects.
- Defect-Detection Rates of Selected Techniques (Code Complete 2, p. 470)
Removal Step                  Lowest Rate   Modal Rate   Highest Rate
Informal design reviews           25%           35%           40%
Formal design inspections         45%           55%           65%
Informal code reviews             20%           25%           35%
Modeling or prototyping           35%           65%           80%
Formal code inspections           45%           60%           70%
Unit test                         15%           30%           50%
System test                       25%           40%           55%

2. Don't put all your eggs in one basket
The author therefore argues against putting all your eggs in one basket. Different defect-detection techniques find different kinds of problems, so you cannot rely on just one of them: unit testing, manual testing, usability testing, and code review should all be used.

A. Manual testing
- As mentioned above, programmers tend to test the “clean” path through their code.
- A human tester can quickly make mincemeat of the developer’s fairy world.
- Good QA testers are worth their weight in gold.
- I once worked with a guy who was incredibly skilled at finding the most obscure bugs.
- He could describe exactly how to replicate the problem, and he would dig into the log files for a better error report, and to get an indication of the location of the defect.
- Joel Spolsky wrote a great article on the Top Five (Wrong) Reasons You Don’t Have Testers—and why you shouldn’t put developers on this task. We’re just not that good at it.

B. Code reviews
- Code reviews and formal code inspections are incredibly effective at finding defects (studies show they are more effective at finding defects than developer testing, and cheaper too), and the peer pressure of knowing your code will be scrutinized helps ensure higher quality right off the bat.
- I still remember my first code review. I was doing the ArsDigita Boot Camp which was a 2-week course on building web applications.
- At the end of the first week, we had to walk through our code in front of the group and face questions from the instructor.
- It was incredibly nerve-wracking! But I worked hard to make the code as good as I could.
- This stresses the importance of what Robert L. Glass calls the “sociological aspects” of peer review.
- Reviewing code is a delicate activity. Remember to review the code…not the author.

C. Usability tests
- Another huge problem with developer tests is that they won’t tell you if your software sucks.
- You can have 1500% test coverage and no known defects and your software can still be an unusable mess.
- Jeff Atwood calls this the ultimate unit test failure:

    I often get frustrated with the depth of our obsession over things like code coverage. Unit testing and code coverage are good things. But perfectly executed code coverage doesn’t mean users will use your program. Or that it’s even worth using in the first place. When users can’t figure out how to use your app, when users pass over your app in favor of something easier or simpler to use, that’s the ultimate unit test failure. That’s the problem you should be trying to solve.
    (This passage points out the weakness of coverage-driven testing: high coverage does not mean high quality. It may mean your program does little error handling, that some requirements were never implemented, or that the criterion you used is too weak [for example, you only looked at function coverage or statement coverage].)
- Fortunately, usability tests are easy and cheap to run. (Personally I have some reservations about this, though perhaps I just don't know how usability tests are carried out. If you have experience, please share.)
- Don’t Make Me Think is your Bible here (the chapters about usability testing are available online).
- For Tumblon, we’ve been conducting usability tests with screen recording software that costs $20.
- The problems we’ve found with usability tests have been amazing. It punctures your ego, while at the same time giving you the motivation to fix the problems.

So why does unit testing work?

The author believes unit testing works because it makes us think about whether the code we write has problems and where it could be improved.

To support this, the author also cites Michael Feathers' article, The Flawed Theory Behind Unit Testing:

    One very common theory about unit testing is that quality comes from removing the errors that your tests catch. Superficially, this makes sense….It’s a nice theory, but it’s wrong….

    In the software industry, we’ve been chasing quality for years. The interesting thing is there are a number of things that work. Design by Contract works. Test Driven Development works. So do Clean Room, code inspections and the use of higher-level languages.

    All of these techniques have been shown to increase quality. And, if we look closely we can see why: all of them force us to reflect on our code.

    That’s the magic, and it’s why unit testing works also. When you write unit tests, TDD-style or after your development, you scrutinize, you think, and often you prevent problems without even encountering a test failure.

So: adopt practices that make you think about your code; and supplement them with other defect detection techniques.

So never do something merely for the sake of doing it; think about what it does for you and why you are doing it.

If unit testing is not enough, why do we still have developers do only these things?

Most programmers can’t hire a QA person or conduct even a $50 usability test.
And perhaps most places don’t have a culture of code reviews.
But they can write tests. Unit tests! Specs! Mocks! Stubs! Integration tests! Fuzz tests!
In other words, these are the things within their control, so they just keep doing them.

Doesn't that sound ironic? Yet it is exactly what we tend to do in daily work: do what is easy, or what is possible, rather than what is correct or important.

No single technique is effective at detecting all defects.
We need manual testing, peer reviews, usability testing and developer testing (and that’s just the start) if we want to produce high-quality software.

* Robert L. Glass, Facts and Fallacies of Software Engineering.
* Steve McConnell, Code Complete 2nd ed, Chapters 20-22.
* Steve Krug, Don’t Make Me Think.


What is the right mix of exploratory testing, scripted testing, and test automation?

My theory of software testing - I

What's the right mix of exploratory testing, "planned" manual testing, and test automation?

The author's answer is "it depends": it depends on the kinds of problems you are facing.


The difference between successful and failed test automation

The Difference Between Test Automation Success and Failure
Elisabeth Hendrickson
Quality Tree Software, Inc.

In this article the author discusses what separates successful test automation from failed test automation. First she defines what failure and success look like.
What failed test automation looks like
‧ Wasted Time
‧ Wasted Money
‧ Inaccurate Results
‧ Demoralized Team
‧ Overall Reduced Productivity
‧ Lost Opportunity

What successful test automation looks like
‧ Overall Cost Savings
‧ Improved Testing
‧ Shortened Software Development Cycle
‧ Reliable Results
‧ Process in Place for Future Success

Next, at the project level, here is how the successful and failed projects she has experienced differed.
(1) Characteristics of the leadership
Failed Project
‧ Executives expected immediate payback.
‧ QA Manager had unrealistic expectations.
‧ Automation lead (me) inexperienced in leadership and automation.
Successful Project
‧ Different executives were open to having their expectations reset.
‧ Different QA manager with more automation experience.
‧ Automation lead (me) got a clue.

(2) The project's goals for test automation
Failed Project
‧ Stated goal: “Automate Everything”
‧ Unstated goal: “Reduce number of testers needed.”
‧ Goals not measurable.
Successful Project
‧ Stated goal: “Save manual testers time and improve testing coverage.”
‧ Unstated goal: “Reduce test cycle time.”
‧ Goals specifically designed to be measurable.

(3) Communication
Failed Project
‧ No consistent communication about project goals and status.
‧ Inadequate communication with executives.
‧ Inadequate communication with manual testers.
Successful Project
‧ Same detailed weekly status report sent to all. Status information available online at all times.
‧ Close communication with the VP of Development and Director of QA
‧ Verbal status reports delivered in weekly QA meeting.

(4) Readiness for automation
Failed Project
‧ Extremely limited test documentation; most testing ad hoc.
‧ No method of tracking test results.
‧ Testers lacked a strong understanding of how to test the product.
Successful Project
‧ Written test documentation. Each test case numbered individually.
‧ Test results tracked on spreadsheets that referenced the test case number.
‧ The test group as a whole had a much better understanding of how to test the product.

(5) The automated testing team's mindset
Failed Project
‧ “Bulldozer builders vs. ditch diggers”
‧ Automators didn’t appreciate the value of manual testing.
‧ Manual testers felt threatened.
Successful Project
‧ Service organization focused on building tools.
‧ Automators understood that automation cannot replace manual testing.
‧ Manual testers more involved in the process and therefore less threatened.

And from a technical perspective, here is how the successful and failed projects differed.
(1) Architecture of the automated test system
Failed Project
‧ Tool best-suited to creating individual scripts, not entire systems.
‧ Tool did not support creating a reusable library of functions
‧ No support for logical layer resulted in maintenance nightmare.
Successful Project
‧ Tool specifically designed to support creation of automation systems.
‧ Tool supported & encouraged creating a reusable library (“infrastructure”).
‧ Logical layer vastly improved portability & maintainability of scripts.

(2) Approach to script creation
Failed Project
‧ Primarily record & playback
‧ No automatic test case or test data generation
Successful Project
‧ Primarily data driven; record & playback used as a learning tool only.
‧ Used advanced features in the automation tool to support automated test data generation.

Elements of good script design
‧ Tests structured with setup, action, and result
‧ Tests are not order-dependent
‧ Test data is never hard coded
‧ Results are informative
‧ Pass/Fail determination is as automated as is practical
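Those design elements can be sketched in C++ (the function under test and the data rows are invented for illustration): one generic script walks a table of cases, each case is self-contained with no ordering dependency, the data is not hard-coded into the script logic, and pass/fail is determined automatically.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Function under test (illustrative only).
int char_count(const std::string& s) { return static_cast<int>(s.size()); }

// The test data lives in a table, not hard-coded inside the script.
struct Case {
    std::string input;  // setup
    int expected;       // expected result
};

// One generic, data-driven script: each row is independent of the others,
// and the pass/fail determination is fully automated.
int run_data_driven(const std::vector<Case>& cases) {
    int failures = 0;
    for (const auto& c : cases) {
        int actual = char_count(c.input);      // action
        if (actual != c.expected) ++failures;  // result
    }
    return failures;  // informative: how many rows failed
}
```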

(3) Approach to verification
Failed Project
‧ Bitmap comparisons to verify both window existence & contents of window.
Successful Project
‧ Logical window existence functions to verify window appeared.
‧ Logical comparison between expected fields in window and actual fields.
‧ Test data verification

(4) Automation Programming Practices
Failed Project
‧ Automation standards focused on file naming conventions.
‧ Extremely limited code reviews.
‧ No source control; test script management done exclusively through tool’s management module.
Successful Project
‧ Automation standards focused on what constitutes good code
‧ Both formal & informal code reviews on a regular basis.
‧ Commercial source control system used.

What the author learned about automation management
‧ Set realistic goals.
‧ Measure your progress toward those goals.
‧ Communicate goals and status clearly and consistently.
‧ Don’t let your management set their expectations based on vendor hype.
‧ Coordinate with manual testers.

What the author learned about automation creation
‧ The right architecture can make everything else fall into place.
‧ Having the right tool for the job makes a difference.
‧ Simple scripts can be more powerful than complex do-everything scripts.
‧ Automation is programming: good programming practices apply.


Why does everyone only want to do GUI test automation, and how do we change that?

Flipping the Automated Testing Triangle: the Upshot

Cohn's ideal triangle divides test automation into three kinds: brick tests, stick tests, and straw tests.

1. Brick tests: unit testing
- They tend to run really fast (on the order of 10 to 100 per second) because they run entirely in memory.
- They tend to pinpoint bugs really well. They tend to be hard to break. They are hard to learn to write truly well.
- They are the single most important thing you can learn to automate. Without a solid suite of well-written unit tests, it's hard to find a software team that is not basically screwed.
- Tools: xUnit, TestNG, MbUnit...

2. Stick tests: tests that do not go through the GUI, such as end-to-end and integration tests
- They bypass the GUI, so they tend to be less brittle. They are certainly more brittle than really good unit tests, however.
- They tend to be about chunks of behavior as large as a feature, more than about smaller isolated pieces.
- They again tend to use real external resources, real-ish data.
- They again tend to be large, and to run slowly. By this we mean many minutes, as opposed to a few seconds.
- They are about building the right thing, much more than building the thing right.
- Tools: Fit, xUnit, FitNesse, ZiBreve, Concordion...

3. Straw tests: GUI testing
- They tend to use the entire system as a black box, mimicking real-world behavior, talking to all of the real code and external resources and dependencies of the system.
- They tend to focus on large chunks of behavior.
- They tend to be slow
- Tools: Watir, Selenium, Canoo and commercial products...

From this list, we should clearly invest in brick tests: they are the fastest, take the least effort to develop, cover the most tests, and require the least rework. But what actually happens?

According to the author's survey, the vast majority of teams invest in straw tests.


The author's view of why teams prefer GUI test automation:
- They are easier, at first, to learn to write
- Our programmers “don’t do testing,” because we have dedicated manual testers who at first seem like the logical test automators.
- We tend to fall naturally into this pattern: starting with the through-the-GUI tests.

And the reasons for not doing unit testing:
We tend not to start with unit tests because they are hard to learn to do well:
- Programmers are afraid of them, or programmers feel that they do not have the time or permission to write them.
- We sometimes hear software professionals claim that these tests are not worth the effort, that they are not valuable enough.
- Here are some of the things a programmer must learn to do well in order to produce high-value, low-maintenance-cost unit test suites. This is quite a bit to learn.
    a. SRP, small modules, mocking/faking, dependency injection, refactoring, legacy code rescue, TDD, OO, BDD, CI, design patterns
    b. This is a hell of a lot of work. Unit testing is really, really hard for most teams to learn. This is especially true when they are already being slogged about by enormous, nasty, untested legacy codebases.
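As one concrete example from that list, dependency injection is what lets a unit test swap a slow or unpredictable dependency for a deterministic fake. A minimal C++ sketch (the Clock interface and greeting function are invented for illustration):

```cpp
#include <cassert>
#include <string>

// The dependency is an abstraction, so tests can inject a fake.
struct Clock {
    virtual ~Clock() = default;
    virtual int hour() const = 0;
};

// Production code depends on the interface, not on the real system time.
std::string greeting(const Clock& clock) {
    return clock.hour() < 12 ? "Good morning" : "Good afternoon";
}

// Test-only fake: fast and deterministic, so the unit test can run
// entirely in memory with no real external resource.
struct FakeClock : Clock {
    int h;
    explicit FakeClock(int hour_) : h(hour_) {}
    int hour() const override { return h; }
};
```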

1. Follow the pain.
- This is true for most agile transitions.
- Is it really production defects blowing up badly that you want to focus on? And are you constrained in the unit tests you can create just yet?
- Are all or most of your tests currently manual? Then it might, in fact, make sense to start with GUI test suites.

2. Early wins; low-hanging fruit.
- For such teams, GUI tests are often the lowest-hanging fruit, a logical starting place (though a dreadful ending place).

3. The Whole Team owns the transition.
- Let there not be fiat testing initiatives from above.
- No “Thou Shalt Do TDD or Else” directives, nor coverage-rate directives. At least, not without the buy-in and input from the entire team.
- Let the team self-organize -- testers, programmers, BAs, managers -- around where the biggest pain points are, where the low hanging fruit is, and which steps to take when, while still finding a way to meet ridiculous production schedules.

4. Pair programmers with testers/QA persons.
- The parable of the overloaded tester.
- There should be no such thing as a group of programmers blocked by testers who cannot test code fast enough.
- That’s just a version of “that’s not my job.” Here is what you get when any role or responsibility on the team says “that’s not my job.”

5. Pair programmers with testers/QA persons.
- Testing is everybody’s job.

6. Three initiatives: straw, sticks, and bricks.
- The team needs to look at these three initiatives as separate.
- We need separate plans for enough bricks, enough sticks, and just enough straw.
- We need plans for how to increase the bricks as we scale back the straw.

7. Learn to make bricks, no matter what.
- No matter what happens, if you don't end up with an absolutely great suite of unit tests on the typical non-trivial project (this means thousands of tests that run in a few seconds), then you won't ever get your automated test costs down to least-TCO levels.
- And your code maintenance costs, turnover costs, and customer satisfaction costs will all be higher as well.

8. Earn testers the freedom to do more exploratory testing.
- What are the best testers really good at? They are good at sniffing out bugs where others would never think to find them.
- This is instinctual, “blink” skill (from Malcolm Gladwell’s book on deep expertise).
- Use a great automated testing strategy to buy your testers time to do more and more and more exploratory testing (ET) per iteration.

9. There is such a thing as too little courage, skill, trust, respect


Test Automation Video

Here are some test automation videos, all from Google Tech Talks. Google really does take testing seriously, and some of the presenters are quite well known. I think our company could follow their example to raise the level of our testing, rather than only ever hosting developer talks, which makes people think we don't actually value testing.

1. GTAC 2008: Context-Driven Test Automation - How to Build the System You Really Need
- Google Tech Talks October 24, 2008
- Presenter: Peter Schneider

2. Google London Test Automation Conference (LTAC) Opening
- Google Tech Talks September 7th, 2006
- Presenters: Shannon Maher and Allen Hutchison

3. Using Test Oracles in Automation
- Google TechTalks April 25, 2006
- Presenter: Douglas Hoffman

4. Automated Testing Patterns and Smells
- Google Tech Talks March 6, 2008
- Presenter: Gerard Meszaros

5. Advances in Automated Software Testing Technologies
- Google Tech Talks October 23, 2008
- Presenters: Elfriede Dustin and Marcus Borch

6. The Clean Code Talks -- Unit Testing
- Google Tech Talks October 30, 2008
- Presenter: Misko Hevery

7. Using Cloud Computing to Automate Full-Scale System Tests
- Google Tech Talks October 23, 2008
- Presenter: Marc-Elian Begin

8. Testing mobile handsets with Fitnesse
- Google Tech Talks September 8th, 2006
- Presenters: Uffe Koch & Mark Boxall

9. GTAC 2008: Automated Model-Based Testing of Web Applications
- Google Tech Talks October 24, 2008
- Presenters: Atif Memon, Oluwaseun Akinmade


Google C++ Mocking Framework

Five months ago Google announced the C++ Testing Framework, and now Google has announced a C++ Mocking Framework as well. It supports Linux, Windows, and Mac OS X. Google says more than 100 internal projects have already used it, and the feedback has been good. Its benefits:

* Simple, declarative syntax for defining mocks
* Rich set of matchers for validating function arguments
* Intuitive syntax for controlling the behavior of a mock
* Automatic verification of expectations
* Easy extensibility through new user-defined matchers and actions

Google C++ Mocking Framework Home Page

Documentation URL

Download URL

Google Mocking Framework Discussion Group

Here is a class ShoppingCart, which fetches the tax rate from a server.
The scenario to test: it remembers to disconnect from the server even when the server has generated an error.
class TaxServer {
 public:
  // Returns the tax rate of a location
  // (by postal code) or -1 on error.
  virtual double FetchTaxRate(
      const string& postal_code) = 0;
  virtual void CloseConnection() = 0;
};

This demo shows how to verify that with a mock server:
class MockTaxServer : public TaxServer {     // #1
 public:
  MOCK_METHOD1(FetchTaxRate, double(const string&));
  MOCK_METHOD0(CloseConnection, void());
};

using ::testing::_;
using ::testing::Return;

TEST(ShoppingCartTest,
     StillCallsCloseIfServerErrorOccurs) {
  MockTaxServer mock_taxserver;              // #2
  EXPECT_CALL(mock_taxserver, FetchTaxRate(_))
      .WillOnce(Return(-1));                 // #3
  EXPECT_CALL(mock_taxserver, CloseConnection());
  ShoppingCart cart(&mock_taxserver);        // #4
  cart.CalculateTax();  // Calls FetchTaxRate()
                        // and CloseConnection().
}                                            // #5

1. Derive the mock class from the interface. For each virtual method, count how many arguments it has, name the result n, and define it using MOCK_METHODn, whose arguments are the name and type of the method.

2. Create an instance of the mock class. It will be used where you would normally use a real object.
3. Set expectations on the mock object (How will it be used? What will it do?). For example, the first EXPECT_CALL says that FetchTaxRate() will be called and will return an error. The underscore (_) is a matcher that says the argument can be anything. Google Mock has many matchers you can use to precisely specify what the argument should be like. You can also define your own matcher or use an exact value.
4. Exercise code that uses the mock object. You'll get an error immediately if a mock method is called more times than expected or with the wrong arguments.
5. When the mock object is destroyed, it checks that all expectations on it have been satisfied.

1. Announcing Google C++ Mocking Framework
2. Mockers of the (C++) World, Delight!


Top 40 Automated Testing Blogs

Although the title says these are automated-testing blogs, I think they are really software-testing blogs in general, so they are worth a look:

1.    Google Testing Blog, (various)

2.    Performance Tidbits, Rico Mariani

3.    Scott Barber's blog, Scott Barber

4.    Collaborative Software Testing, Jonathan Kohl

5.    Cem Kaner's blog, Cem Kaner

6.    Agile Testing, Grig Gheorghiu

7.    James Bach’s blog, James Bach

8.    Creative Chaos, Matthew Heusser

9.    Advanced QTP, (various)

10. Corey Goldberg's blog, Corey Goldberg

11. The Braidy Tester, Michael J Hunter

12. Tester Tested!, Pradeep Soundararajan

13. WilsonMar.com, Wilson Mar

14. Testing Hotlist Update, Bret Pettichord

15. Test Obsessed, Elisabeth Hendrickson

16. My Load Test, Stuart Moncrieff

17. Theo Moore's blog, Theo Moore

18. Thinking Tester, Shrini Kulkarni

19. Observations on software testing and quality, Michael Bolton

20. Quality through Innovation, Adam Goucher

21. Easy way to automate testing, Dmitry Motevich

22. Software Testing Zone, Debasis Pradhan

23. JW on Test, James Whittaker

24. Mike Kelly's blog, Mike Kelly

25. Questioning Software, Ben Simo

26. London software testing news, (various)

27. Ankur Jain's blog, Ankur Jain

28. Jeff Fry on Testing, Jeff Fry

29. The Software Inquisition, (various)

30. 90kts, Tim Koopmans

31. Test this Blog, Eric Jacobson

32. Stefan Thelenius about Software Testing, Stefan Thelenius

33. LoadRunner Tips and Tricks, Hwee Seong Tan

34. QuickTest Pro, Mohan Kumar Kakarla

35. KnowledgeInbox, Tarun Lalwani

36. Alexander Podelko's blog, Alexander Podelko

37. Software Performance Engineering & Testing, Charlie Weiblen

38. Software Testing Blog, Unknown

39. Automated Chaos, Bas M. Dam

40. Automated Web Test, Meena


Can Test Automation Tools Replace The Human Testers?

Oct 22nd, 2008
Posted by Debasis
Published in Software Testing Zone


