Application Testing
Posted Saturday, May 07, 2016 by RICHARD HARRIS, Executive Editor
READ MORE: https://www.blazemeter.com/...
BlazeMeter recently announced new functionality for its application testing platform, adding the ability to create load and performance tests as brief fragments of code in any text editor. Developers can run any combination of open source tools, including Gatling, The Grinder, Locust, Selenium, and JMeter, in parallel through a single unified control language, both locally and in the cloud.
Alon Girmonsky, BlazeMeter's CEO, reached out to us to provide more insight into this latest platform enhancement, including the impact of testing on the Agile process.
ADM: Why should developers care about this update?
Girmonsky: Developers should care about this update because it lets them create their own performance tests as code: granular tests directly related to the application code they are creating or changing each day. Developers deserve an easy, fast way to get performance feedback right in their development environment, without waiting in line for a performance testing resource to learn and test their apps.
Some organizations already require developers (or a testing member of an Agile team) to create and run functional tests before their code can be committed to the repo or promoted to a build. This update makes it easy to add performance “unit tests” to the same process. Lastly, developers now have the freedom to run any combination of tests in languages they are used to (not just JMeter), and to run many tests in parallel to reduce cycle times.
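As an illustration of a performance "unit test" that could gate a commit, here is a minimal sketch in plain Python. It is not BlazeMeter's syntax; the function under test (`checkout_total`) and the 5 ms budget are hypothetical, and the point is only that a timing assertion can live alongside ordinary unit tests and fail the build on a regression:

```python
import statistics
import time

def checkout_total(prices):
    """Hypothetical function under test."""
    return sum(prices)

def measure_latency(fn, *args, runs=200):
    """Call fn repeatedly and return per-call latencies in milliseconds."""
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(*args)
        latencies.append((time.perf_counter() - start) * 1000.0)
    return latencies

def test_checkout_total_is_fast():
    """Performance 'unit test': fail if median latency exceeds the budget."""
    latencies = measure_latency(checkout_total, list(range(1000)))
    median_ms = statistics.median(latencies)
    assert median_ms < 5.0, f"median latency {median_ms:.2f} ms over 5 ms budget"

test_checkout_total_is_fast()
```

A test runner such as pytest would pick up `test_checkout_total_is_fast` automatically, so the same pre-commit hook that runs functional tests can run it too.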
ADM: What does it mean to “democratize performance testing?”
Girmonsky: “Democratizing performance testing” is simple: historically, performance testing resources have been specialized and centralized. Performance testing is often considered a stand-alone center of excellence, and even in a good-sized firm it may only have a few people in it. With this centralized bottleneck, there was no way to get feedback on performance in real time. Developers would have to wait until their code got through the queue, and it can take a long time for performance testing to get up to speed in the development process.
The democratization of performance testing makes it self-service and available to all developers working on any team, in any enterprise. Democratization also makes getting started with performance testing simple, so that developers can create performance “unit tests” at the level of code modules themselves, with no knowledge transfer and nothing new to build. It gives developers the opportunity to see the results of performance unit tests before the code goes out to other members of their team for use.
Lastly, the organization has the ability to constantly keep these unit tests up to date, so the problem of queuing work by specialists to create or update tests goes away.
ADM: What do you mean by “performance testing should be like internet access?”
Girmonsky: Performance testing should be similar to connecting to Wi-Fi: straightforward, easy, fast, and right at your fingertips. You can log on to your performance testing platform and, five minutes later, begin running tests, much in the same way you browse the internet immediately after connecting.
ADM: Explain how these new capabilities work. How do you run tests in parallel with each other?
Girmonsky: Developers can run a toolset at no cost on their own desktop. If they want improved reporting and analytics and the ability to easily share results with other team members, they need a paid BlazeMeter account. With BlazeMeter accounts, developers also gain the ability to run multiple tests in parallel, dramatically reducing testing cycle time when a build or release candidate is built.
The math of parallel testing is simple: When you are running multiple tests at the same time, the entire test suite will run only as long as it takes for the longest individual test to be completed. For example, if you are running a dozen tests ranging from two to nine minutes each, the whole testing process will be no more than nine minutes. A developer could run thousands of tests at the same time, but it would only take as long as the most time consuming one.
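The arithmetic above can be sketched in a few lines. The individual durations below are illustrative, chosen to match the example of a dozen tests ranging from two to nine minutes each:

```python
# Durations (minutes) for a dozen tests ranging from two to nine minutes.
durations = [2, 3, 3, 4, 5, 5, 6, 7, 7, 8, 9, 9]

sequential_time = sum(durations)  # run one after another
parallel_time = max(durations)    # run all at once: bounded by the longest test

print(f"sequential: {sequential_time} min, parallel: {parallel_time} min")
# parallel_time stays 9 minutes no matter how many tests run alongside it
```

Adding a thirteenth test only lengthens the suite if that test takes longer than nine minutes.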
ADM: What are the challenges this product solves?
Girmonsky: Creating performance tests as code means there is no queue for the analysis and creation of performance test scripts. Parallel execution reduces test cycle time to the length of the longest single component rather than the combined total time of all the separate tests. The plain-text, integrated development environment (IDE)-friendly test creation and execution capability means developers never have to leave their tool of choice.
ADM: What challenges does testing in parallel address?
Girmonsky: The biggest challenge solved is reducing test cycle time, letting developers easily “cheat” the time it takes for tests to run and avoiding cycle time bloat as new tests are added. BlazeMeter’s implementation supports cloud resources and on-premise systems, and one instance or on-premise system can run multiple small tests simultaneously, saving resources. Whether you want to run 10 or 10,000 parallel tests, additional systems are easily added to make parallel scaling effectively infinite.
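A minimal sketch of running several small tests simultaneously on one machine, using Python's standard thread pool. The test names and timings are made up, and `time.sleep` stands in for real test runs; the point is that wall-clock time tracks the longest test, not the sum of all of them:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_test(name, seconds):
    """Stand-in for a small load test; sleeping simulates its run time."""
    time.sleep(seconds)
    return name

tests = {"login": 0.05, "search": 0.10, "checkout": 0.20}

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(tests)) as pool:
    results = list(pool.map(run_test, tests.keys(), tests.values()))
elapsed = time.perf_counter() - start

# Wall time is close to the longest test (0.20 s), well under the 0.35 s total.
print(f"ran {len(results)} tests in {elapsed:.2f} s")
```

Scaling past one machine works the same way conceptually: add more workers (cloud or on-premise) and the suite's duration stays pinned to its longest member.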
ADM: How will this make developers think about testing differently?
Girmonsky: This will change the thinking about performance testing radically. Many developers are already being asked to write unit tests for functionality; adding non-functional checks such as performance tests gives them valuable “shifted left” performance feedback. Since many teams already run developer-centric functional tests, it makes sense to add performance testing at minimal additional cost to the overall workflow. There is also no context switch required: developers do not have to move to another tool and can keep using their editor of choice.
Performance testing can be as simple or complex as developers would like it to be, and it is also version control friendly.
ADM: What is next for the future of testing?
Girmonsky: Testing still has some distance to cover. It needs to part ways completely with the Waterfall age and become a first-class citizen of the Agile age. In the Waterfall age, tests were governed by scripts. A script is created by reverse-engineering an application and compiling a sequence of steps that simulates the application's behavior. That made sense in the Waterfall age, when a test/QA cycle lasted a few weeks before the software was deployed to production.
In an Agile environment, where code changes frequently, the “script” paradigm cannot last.
If we count every commit as a code change, code changes frequently, sometimes several times per day. It is practically impossible to update the script as frequently as the code changes.
Welcome to the NoScript era. Tests should auto-generate themselves in real time, either according to a blueprint or according to intelligence deduced from live behavior (e.g., production logs or APM data). We should expect tests to create themselves according to the changes in the code and the "author intent" specified in the test blueprint. Tests will come to life as defects reveal themselves in production. The NoScript era is when tests become self-aware. :)
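One way to picture the "NoScript" idea is deriving a test scenario from production traffic instead of a hand-written script. This hypothetical sketch (the log lines, paths, and output shape are all invented for illustration) turns access-log entries into a weighted request mix a generated test could replay:

```python
import re
from collections import Counter

# Hypothetical access-log lines; in practice these would stream from production.
LOG = """\
10.0.0.1 - - [07/May/2016:10:00:01] "GET /home HTTP/1.1" 200
10.0.0.2 - - [07/May/2016:10:00:02] "GET /search?q=apps HTTP/1.1" 200
10.0.0.3 - - [07/May/2016:10:00:03] "POST /checkout HTTP/1.1" 200
10.0.0.4 - - [07/May/2016:10:00:04] "GET /home HTTP/1.1" 200
"""

REQUEST = re.compile(r'"(GET|POST) (\S+) HTTP/[\d.]+"')

def scenario_from_log(log_text):
    """Count (method, path) pairs so the generated test mirrors real traffic."""
    hits = Counter(REQUEST.search(line).groups()
                   for line in log_text.splitlines() if REQUEST.search(line))
    total = sum(hits.values())
    return [{"method": m, "path": p, "weight": n / total}
            for (m, p), n in hits.most_common()]

scenario = scenario_from_log(LOG)
print(scenario)
# /home appears in half the traffic, so it gets the highest weight (0.5)
```

When the application changes and production traffic shifts, regenerating the scenario from fresh logs keeps the test current without anyone editing a script.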