Application performance testing issues: Cloud, virtual labs, scale-up
http://itknowledgeexchange.techtarget.com/software-quality/application-performance-testing-issues-cloud-virtual-labs-scale-up/

Posted by: Jan Stafford
Published in IT Knowledge Exchange

Application performance testing used to be a standalone process. But the combination of complex mission-critical applications, virtualization, and cloud computing has made performance testing much more difficult. Mark Kremer, CEO of Precise Software Solutions of Redwood Shores, spoke with me about this recently. In our conversation, he offered advice on how to meet these challenges and ensure top system performance.

I asked Kremer how much harder moving applications to the cloud makes performance testing and management. He answered that the dynamic nature of the cloud means application performance must be monitored continuously. Here are his key points:

1. In physical environments, application performance management assumes quasi-static resource configurations:
    - the computing power, network bandwidth, memory pools, and system overhead are invariable over time or at least until the next configuration upgrade

2. Once an application is run on a cloud, its configuration may change from one invocation to another, or even within the same run, as processes may be transparently moved around the cloud. 

3. This phenomenon of ever-changing resources makes time measurements inconsistent, as they are taken under different conditions. 

4. Correcting, or normalizing, time measurements to a standard scale is conditional on self-referencing performance monitoring, and is a daunting challenge to model and implement.
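The idea of normalizing time measurements to a standard scale can be sketched as follows. This is an illustrative example only, not Precise Software's actual model; the function name, the 1-vCPU baseline, and the linear scaling assumption are all hypothetical.

```python
# Hypothetical sketch: normalize raw response times taken under different
# cloud resource allocations to a common baseline so they are comparable.
# Assumes (simplistically) that elapsed time scales linearly with CPU units.

BASELINE_CPU_UNITS = 1.0  # reference configuration we normalize to


def normalize(elapsed_seconds: float, cpu_units_at_measurement: float) -> float:
    """Scale a raw timing to the baseline configuration.

    A run on a node twice as powerful as the baseline is scaled up,
    so it can be compared with runs taken on the baseline itself.
    """
    return elapsed_seconds * (cpu_units_at_measurement / BASELINE_CPU_UNITS)


# Two measurements of the same transaction under different allocations:
fast_node = normalize(0.8, cpu_units_at_measurement=2.0)  # ran on 2 vCPUs
slow_node = normalize(1.6, cpu_units_at_measurement=1.0)  # ran on 1 vCPU

# After normalization the two runs look equivalent:
print(fast_node == slow_node)  # True
```

In practice the scaling model would have to account for memory, network bandwidth, and system overhead as well, which is exactly why Kremer calls this daunting.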

(For more on software testing and cloud computing, see my interview with Eugene Ciurana, director of systems infrastructure at LeapFrog Enterprises, a large U.S. educational toy company.)

Kremer noted that the dynamic nature of virtualized environments also requires changes in how application performance is monitored and tested. Development and test teams should keep an internal clock ("app time") that does not vary with the underlying hardware. He explained:

1. For example, a transaction will spend the same time measured by the application clock in a Java method regardless of the power of CPUs used in each invocation

2. As application performance management evolves to include this concept, developers building applications for virtual or more commonly mixed mode — virtual and physical — can get around the semantics of time in virtual environments.
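One way to picture an application clock is a counter of logical work units rather than wall-clock seconds. The sketch below is a hypothetical illustration of the concept; the `AppClock` class and its one-tick-per-record accounting are my own invention, not Kremer's implementation.

```python
# Hypothetical sketch: an "application clock" that counts completed work
# units instead of wall-clock time, so a transaction reports the same
# app time regardless of how fast the underlying (possibly virtual) CPU is.

class AppClock:
    def __init__(self) -> None:
        self.ticks = 0  # one tick per unit of application work

    def tick(self, units: int = 1) -> None:
        self.ticks += units


def process_transaction(clock: AppClock, records: list) -> None:
    for _ in records:
        # Real work would happen here. Its wall-clock cost varies with
        # hardware, but its app-time cost is fixed at one tick.
        clock.tick()


clock = AppClock()
process_transaction(clock, records=list(range(100)))
print(clock.ticks)  # 100, on any hardware, fast or slow
```

This mirrors Kremer's Java example: the transaction "spends" the same app time in a method no matter which CPUs serve each invocation.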



More generally, speaking about application performance, Kremer stressed that testing cannot be done only in the lab, because it is very hard to simulate the real production environment there. Even if you can build a production-like environment in the lab, the measured performance usually still differs from what you see in the real environment. Kremer's thoughts:

1. This dynamic manner of problem resolution analyzes the data that causes performance-loss by tracking spikes in user behavior, patterns in data accumulations, and changes to the systems configurations.

2. Application performance testing relies more on static test models, which makes it tough to replicate real-world production environments.
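The "tracking spikes in user behavior" part of dynamic problem resolution can be sketched as a simple rolling-baseline detector. This is an illustrative toy, not Precise's analysis engine; the window size and the 2x threshold are assumptions.

```python
# Hypothetical sketch: flag spikes in a monitored metric (e.g. requests
# per minute) by comparing each sample to the mean of the samples that
# preceded it. Window size and threshold factor are illustrative.

from statistics import mean


def find_spikes(samples, window=5, factor=2.0):
    """Return indices where a sample exceeds `factor` times the mean
    of the preceding `window` samples."""
    spikes = []
    for i in range(window, len(samples)):
        baseline = mean(samples[i - window:i])
        if samples[i] > factor * baseline:
            spikes.append(i)
    return spikes


load = [100, 102, 98, 101, 99, 100, 350, 97]  # requests per minute
print(find_spikes(load))  # [6] -- the 350 sample stands out
```

A production system would track many such signals at once (user behavior, data accumulation patterns, configuration changes) and correlate them with performance loss.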

I asked Kremer what kind of testing we must do to ensure excellent application performance as systems scale up. He said that for applications to scale up, performance testing must shift away from being input-oriented, that is, focused on test patterns, synthetic transactions and the like, toward being throughput-oriented.

1. As systems scale up, their performance testing paradigm shifts from predefined synthetic tests to monitoring and self-reference.

2. For optimal results, IT needs to identify the top, say 20, transactions of the system, constantly monitor their performance, their components' performance, and the time allocations of the various tiers in the system. Then it must self-reference these measurements hour-to-hour, day-to-day, season-to-season…to detect performance degradation, offending transaction components or performance hot-spots.
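The self-referencing step above — comparing each top transaction against its own history — can be sketched like this. The transaction names, timings, and the 25% degradation tolerance are all hypothetical, chosen only to illustrate the idea.

```python
# Hypothetical sketch: self-referencing monitoring. Compare the current
# per-transaction timings with each transaction's own historical baseline
# to spot degradation. The 25% tolerance is an illustrative choice.

def degraded(history: dict, current: dict, tolerance: float = 0.25) -> list:
    """Return the transactions whose current time exceeds their own
    historical baseline by more than `tolerance`."""
    offenders = []
    for name, baseline in history.items():
        now = current.get(name)
        if now is not None and now > baseline * (1 + tolerance):
            offenders.append(name)
    return offenders


# Hour-over-hour comparison for a few top transactions (seconds):
baseline = {"login": 0.20, "checkout": 0.90, "search": 0.35}
this_hour = {"login": 0.21, "checkout": 1.40, "search": 0.36}
print(degraded(baseline, this_hour))  # ['checkout']
```

The same comparison run day-to-day or season-to-season, with baselines updated over time, turns the list of offenders into a pointer toward performance hot-spots.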
