Solving the ‘Need for Speed’ in the World of Continuous Integration
August 25, 2020

Traditional load testing, optimized for the waterfall software development process, was mostly focused on realistic pre-production tests: the main goal was to make the load and the environment as similar to production as possible. Drastic changes in the industry in recent years, agile development and cloud computing probably most of all, opened new opportunities for performance testing. Instead of a single way of doing performance testing (with all others considered rather exotic), we now have a full spectrum of different tests that can be run at different moments, so deciding what and when to test became a very non-trivial task that depends heavily on the context. But the same industry trends that made performance testing easier (a working system on each iteration thanks to agile development, cloud infrastructure readily available for deployment) introduced new challenges (the need to test on each iteration, the increased sophistication and scale of systems).
Due to the increased sophistication and scale of systems, full-scale realistic performance testing is no longer viable in most situations (or at least not on each iteration). In many cases, we may have only partial performance test results (for a lower level of load, different parts of the system, different functionality, etc.), so results analysis and interpretation become more challenging and may require modeling to draw meaningful conclusions about the performance of the whole system.
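For instance, one way to reason about full-system performance from lower-load results is to fit a scalability model to the partial measurements and extrapolate. The minimal sketch below shows the idea in Python; the measurements and the choice of Gunther's Universal Scalability Law are illustrative assumptions, not something prescribed here.

```python
# Sketch: extrapolating throughput at untested load levels by fitting
# the Universal Scalability Law (USL) to partial load test results.
# The measurements below are hypothetical illustration data.
import numpy as np
from scipy.optimize import curve_fit

def usl(n, lam, sigma, kappa):
    """USL throughput at load n: linear gain degraded by
    contention (sigma) and coherency delay (kappa)."""
    return lam * n / (1 + sigma * (n - 1) + kappa * n * (n - 1))

# Partial results: (concurrent users, throughput in req/s)
users = np.array([10, 25, 50, 100, 200])
tput = np.array([95, 230, 420, 700, 950])

(lam, sigma, kappa), _ = curve_fit(usl, users, tput, p0=[10, 0.01, 0.0001])

for n in (400, 800, 1600):  # load levels we could not test directly
    print(f"predicted throughput at {n} users: "
          f"{usl(n, lam, sigma, kappa):.0f} req/s")
```

Such an extrapolation is only as good as the model and the measured range, which is exactly why larger-scale verification tests remain valuable.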
Performance testing provides response times and resource utilization for specific workloads. Together with knowledge about the architecture and environments, it allows the creation of a model to predict system performance (to be verified by larger-scale performance tests if necessary). This is a proactive approach to mitigating performance risks, but it requires significant skills and investment to implement properly. So for existing systems, it is often complemented (or even completely replaced) by reactive approaches of observing the production system (“shift-right”: monitoring, tracing, canary testing, etc.). However, this does not work for new systems. If you are creating a new system, you need proactive methods such as early performance testing (“shift-left”) and modeling to make sure that the system will perform as expected.
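As a minimal illustration of such a model, the measured utilization from a test can be turned into a per-request service demand and used to predict behavior at higher load. The single-resource M/M/1 approximation and all numbers here are assumptions for the sketch, not a method the presenters prescribe.

```python
# Sketch: predicting resource residence time at higher load from one
# measured test data point, using the utilization law (U = X * D) and
# an M/M/1 approximation. All numbers are hypothetical.

# Measured in a performance test:
arrival_rate = 50.0   # requests per second
cpu_util = 0.35       # 35% CPU busy at that rate

# Utilization law: service demand D = U / X (seconds of CPU per request)
service_demand = cpu_util / arrival_rate

def predicted_residence(x):
    """M/M/1 approximation: R = D / (1 - U), where U = x * D."""
    u = x * service_demand
    if u >= 1.0:
        raise ValueError(f"resource saturates at {1 / service_demand:.0f} req/s")
    return service_demand / (1.0 - u)

for x in (50, 100, 130):
    print(f"{x:>4} req/s -> CPU util {x * service_demand:.0%}, "
          f"predicted CPU residence time {predicted_residence(x) * 1000:.1f} ms")
```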
Modeling becomes even more important at the design stage, when we need to investigate the performance and cost consequences of different design decisions. In this case, production data are not available, and waiting until the system is fully developed and deployed is too risky for any non-trivial system. One of the best examples of mitigating performance risk through a combination of performance testing and modeling is big data systems. The enormous size of such a system makes the creation of a full-scale prototype almost impossible, yet the associated performance risks are very high: implementing a wrong design may not be fixable and may force a complete redesign from scratch. So building a model to predict the system’s cost and performance, based on performance test results from an early or partial prototype together with knowledge about the architecture and environments, becomes the main way to mitigate those risks. A few examples of such models will be discussed briefly.
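A deliberately simple sketch of the idea follows: predicting full-scale runtime and cloud cost of a batch job from small prototype runs. The prototype measurements, the “serial plus parallelizable” scaling model, and the price per node-hour are all hypothetical assumptions for illustration.

```python
# Sketch: predicting full-scale runtime and cost of a batch big-data
# job from small prototype runs, using a simple scaling model:
#   runtime = a + b * (input_TB / nodes)
# Data and prices below are hypothetical.
import numpy as np

# Prototype runs: (nodes, input TB, runtime hours)
runs = np.array([
    (4,  1.0, 2.1),
    (8,  1.0, 1.2),
    (8,  2.0, 2.3),
    (16, 2.0, 1.3),
])

# Least-squares fit of the two model parameters a (serial part)
# and b (parallelizable work per TB per node)
A = np.column_stack([np.ones(len(runs)), runs[:, 1] / runs[:, 0]])
(a, b), *_ = np.linalg.lstsq(A, runs[:, 2], rcond=None)

node_cost_per_hour = 0.50  # assumed cloud price, USD
target_tb = 50.0           # full-scale input size

for nodes in (32, 64, 128):  # candidate full-scale cluster sizes
    hours = a + b * (target_tb / nodes)
    cost = hours * nodes * node_cost_per_hour
    print(f"{nodes:>3} nodes: ~{hours:.1f} h per run, ~${cost:.0f} per run")
```

Even a crude model like this exposes the cost/runtime trade-off across cluster sizes before any full-scale deployment exists.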
Early Load Testing provides valuable information, but it does not answer the question of how new applications will perform in production environments with a large number of concurrent users accessing large volumes of data. It does not answer how the implementation of new applications will affect the performance of existing applications, or how to change the workload management parameters affecting priorities, concurrency, and resource allocation to meet business Service Level Goals. Nor does it answer whether the production environment has enough capacity to support expected workload growth and the increase in the volume of data. Should the new application be part of the Data Warehouse or the Big Data environment? Should it perhaps use a Cloud platform? What is the best Cloud platform for the new application?
In this presentation, we will review the value and limitations of available Load Testing tools and discuss how modeling and optimization technology can expand the results of Load Testing. We will review a use case based on the BEZNext Performance Assurance software, illustrating all phases of the process: data collection and workload characterization in a small test environment and a large production environment; anomaly detection, root cause detection, and seasonality determination; workload forecasting; predicting the impact of implementing the new application; finding an appropriate platform; developing proactive recommendations; setting realistic expectations to reduce the risk of performance surprises; and enabling automatic verification of results after the new application is implemented.
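To make one of these phases concrete, the sketch below illustrates seasonal workload forecasting using Holt-Winters exponential smoothing from statsmodels. This is an illustrative stand-in, not the BEZNext Performance Assurance implementation, and the synthetic series stands in for collected production measurements.

```python
# Sketch of one workflow step above: forecasting a seasonal workload
# with Holt-Winters exponential smoothing. Synthetic data only.
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(0)
weeks = np.arange(104)  # two years of weekly CPU-hours
demand = (1000 + 8 * weeks                        # growth trend
          + 200 * np.sin(2 * np.pi * weeks / 52)  # yearly seasonality
          + rng.normal(0, 40, len(weeks)))        # noise

model = ExponentialSmoothing(demand, trend="add",
                             seasonal="add", seasonal_periods=52).fit()
forecast = model.forecast(26)  # next two quarters

print(f"current weekly demand: ~{demand[-1]:.0f} CPU-hours")
print(f"forecast, 26 weeks out: ~{forecast[-1]:.0f} CPU-hours")
```

A forecast like this feeds directly into the capacity questions above: whether the existing environment can absorb the projected growth plus the new application's workload.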
This session will be presented at the Performance Engineering in the Time of DevOps Virtual Summit
About the Presenters
Boris Zibitsker, CEO, BEZNext
Alexander Podelko, Consulting Member of Technical Staff, Performance Engineering at Oracle
Presentation Date/Time: August 18, 11:50 AM EDT