AI software testing explained by Perforce


Thursday, November 30, 2023

Richard Harris

We recently caught up with Stephen Feloney from Perforce and chatted about the future of software testing, how AI is transforming operations for testing teams, how Perforce's testing tool BlazeMeter is helping developers, and much more.

Below, Stephen Feloney from Perforce explains strategies and techniques for providing test data within an agile, shift-left approach, how to identify and address bottlenecks in the testing process, the biggest opportunities for AI to impact software testing, how AI generates data faster and makes testing more thorough, how BlazeMeter helps testing teams provision and manage complex scenarios, what you need to know about cross-device testing, and more.

ADM: How can AI and automation help create diverse, scalable test data more efficiently?

Feloney: Test data has always been a challenge, and even more so when it comes to shifting left. It was already a challenge when just a center of excellence was trying to get data and testers had their specified testing time to do it. But once testing shifts left, you need this data on a regular basis, so spending the time and fighting those challenges is no longer an option. Perforce has created a way of synthetically generating synchronized data.

As stated, we do not just create data. We keep the data synchronized with the data driving a test, the system under test, and data in mock services. We provide those capabilities, but coming up with the data rules has slowed the adoption in shift left testing. The development and testing teams do not even have time to figure out the rules. They want to get their testing done and move forward. BlazeMeter has now integrated AI in many ways to help create better data faster.

First, it can analyze and profile data from a recording or a CSV file. It figures out what type of data it is. Is it an address? Is it in the United States? Is it from a different country? Is it a social security number? It will create the data rules automatically. Not only does it understand and profile the data, but it can now generate any type of data needed from any region and any language. Historically, people had to use seed lists of different data types such as a list of different types of social security numbers, credit cards, first names, last names, and so on. AI takes all that away. Now it can generate an unlimited set of diverse types of data.

That means you are testing more thoroughly, and you are able to get that data generated faster, because the AI can easily profile the data and generate the types of data you need.
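
To make the profiling-and-generation idea concrete, here is a minimal sketch in Python. It is purely illustrative and not BlazeMeter's implementation: it infers a simple data rule from sample values (is this a social security number? a full name?) and then generates fresh values that match the rule, so a test is never limited to a fixed seed list.

```python
# Illustrative sketch only -- not BlazeMeter's implementation. It shows the
# general idea described above: profile sample values, infer a data rule,
# then generate unlimited synthetic values that satisfy the rule.
import random
import re

def profile(values):
    """Guess a data rule from sample values (very simplified)."""
    if all(re.fullmatch(r"\d{3}-\d{2}-\d{4}", v) for v in values):
        return "ssn"
    if all(re.fullmatch(r"[A-Za-z]+ [A-Za-z]+", v) for v in values):
        return "full_name"
    return "free_text"

def generate(rule, count):
    """Generate fresh synthetic values that match the inferred rule."""
    if rule == "ssn":
        return [
            f"{random.randint(100, 999):03d}-{random.randint(10, 99):02d}-{random.randint(1000, 9999):04d}"
            for _ in range(count)
        ]
    if rule == "full_name":
        first = ["Jane", "John", "Aisha", "Wei", "Lucia"]
        last = ["Doe", "Smith", "Khan", "Chen", "Rossi"]
        return [f"{random.choice(first)} {random.choice(last)}" for _ in range(count)]
    return [f"sample-{i}" for i in range(count)]

rule = profile(["123-45-6789", "987-65-4321"])  # inferred rule: "ssn"
print(generate(rule, 3))                        # three new, diverse SSN-shaped values
```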

Effectively providing and maintaining test data within an agile, shift-left approach

ADM: What strategies and techniques do you recommend for effectively providing and maintaining test data within an agile, shift-left approach?

Feloney: When it comes to Agile testing, I would say to not even try to maintain test data.

What you need to understand is what type of data you want to create. Create data rules, and maintain data rules, but do not maintain the data itself.

Once rules are created, the data can be generated on the fly. You can have the data generated once and reuse the same data repeatedly. But that is limited in its effectiveness. It's always "Jane Doe." You know the app/service works for "Jane Doe," but does it work for “John Smith”? You do not really know. That is where the rules come in. Every time you run a test, the test will generate data that matches those rules. That enables better and more effective testing. Now shift-left testing can be more productive and efficient.
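
The same idea can be sketched as a tiny rule set kept alongside the tests. This is an illustration of the "maintain the rules, not the data" pattern rather than a BlazeMeter API; every run draws fresh records from the rules instead of reusing a stored "Jane Doe" row.

```python
# A minimal sketch, assuming the rules live with the test code: every run
# generates new data that satisfies the rules, so nothing is hard-coded to
# one record. Field names and value pools are illustrative only.
import random

DATA_RULES = {
    "first_name": lambda: random.choice(["Jane", "John", "Priya", "Mateo"]),
    "country":    lambda: random.choice(["US", "DE", "IN", "BR"]),
    "balance":    lambda: round(random.uniform(0, 10_000), 2),
}

def generate_row(rules=DATA_RULES):
    """Produce one record that satisfies the rules; call it once per test run."""
    return {field: make() for field, make in rules.items()}

def test_account_lookup():
    customer = generate_row()  # fresh data every run, same rules
    assert customer["country"] in {"US", "DE", "IN", "BR"}
```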

When it comes to generating data, it is practically useless if you just generate data to drive a test. Because if you don't match that data or sync that data with the system under test, or with the mock/virtual services, there's no point. When you're generating data, you need to generate data for the entire test. The data needs to be synchronized across the entire testing environment.
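
A rough sketch of that synchronization point might look like the snippet below: one generated record is pushed to the test inputs, the system under test's fixture, and the mock service's canned responses, so the three can never drift apart. The seeding targets here are hypothetical stand-ins for whatever a real environment uses.

```python
# Sketch of synchronized test data: the same generated record is seeded into
# the data driving the test, the system under test, and the mock service.
def seed_everywhere(record, test_inputs, sut_db, mock_responses):
    test_inputs.append(record)                                     # data that drives the test
    sut_db[record["account_id"]] = record                          # fixture inside the system under test
    mock_responses[f"/accounts/{record['account_id']}"] = record   # canned mock/virtual service reply

test_inputs, sut_db, mock_responses = [], {}, {}
record = {"account_id": "A-1001", "first_name": "Jane", "balance": 250.0}
seed_everywhere(record, test_inputs, sut_db, mock_responses)
# A request for /accounts/A-1001 now returns the same data the test sent in,
# so a data mismatch cannot produce a false positive.
```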

We had a customer who was mandated to create virtual services for all the services they were creating, and they did it. I then discovered no one - no developer, no tester - was using those created virtual services. They had hundreds of these virtual services. When I asked why, they said they had no idea what data was in those virtual services. It became useless. That was an "aha" moment for me. This is why data synchronization is so important, especially when you're shifting left.

When shifting left, you don't need or want to get the data from production: it takes too long, there is too much of it, and generally you will not need data across an entire application, since the test covers a single component or service.

ADM: In your experience, what are the most effective approaches to identifying and addressing bottlenecks in the testing process to streamline development cycles?

Feloney: The most effective approach is to really look at your tests and time what's going on.

If a developer has something ready to test, how long does it take for that developer to get the feedback?

Once you understand that, you must go through and interview the people responsible for testing. What is taking the most time? Generally, what you end up finding is that manual testing had to be done, an environment wasn't available to them, or the test data wasn't available to them. Those are generally the bottlenecks you find. But the interviews should take place to discover the problems.

Opportunities for AI to make an impact on software testing

ADM: Where do you see the biggest opportunities for AI to make an impact on software testing in the next 5 years?

Feloney: That’s hard to say. AI in its current form came about extremely fast. It's going to impact all aspects of testing. It will help analyze test results, pinpoint where problems are, and figure out what tests should be run. If you're running tests and your tests start to have errors, AI will highlight which errors are related to one another.

AI will analyze testing steps, developers' debugging sessions, and/or usage from production to auto-generate tests of all kinds. That is going to be a massive step forward. We will no longer have to maintain regression suites because the AI will generate them on the fly.

Essentially, every aspect of testing will be and is being impacted by AI now.

ADM: Ensuring consistent test environments is critical for accurate testing results. How does BlazeMeter facilitate the provisioning and management of these environments, particularly in complex multi-device and multi-browser scenarios?

Feloney: Consistent test environments are very important.

But one must minimize the testing environment.  Do not try to replicate production for shift-left testing.

You do not always need to have full application environments. You want very small pieces of that testing environment that is needed for the particular component being tested.  You want to isolate what you are testing and not be forced to wait for a full environment that is not needed or end up debugging an issue that is not related to the component/service you are testing.  This is where mock/virtual services come into play.

The mock service mocks out parts of the environment that are not being tested and they will react the same way every time. That means there is now consistency in testing because there is a focus on one or two specific areas to be tested.
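
As a rough illustration of why mocks give that consistency, the sketch below stands up a tiny deterministic mock service with Python's standard library. The endpoint and payload are made up, and a real virtual service would be far richer, but the behavior is the point: the same canned answer on every call.

```python
# Minimal deterministic mock service: it answers the untested dependency's
# endpoint with the same canned payload every time, which is what makes the
# test environment repeatable. Route and payload are illustrative only.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED = {"/credit-score": {"account_id": "A-1001", "score": 720}}

class MockService(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(CANNED.get(self.path, {"error": "not mocked"})).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), MockService).serve_forever()
```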

I cannot tell you how many times I talk with customers and data is the problem, because the data driving the test doesn't match the data in the system under test and/or the data in a virtual service. They get false positives, and they waste a lot of time and energy before realizing it was the test data. That is a big problem.

Being able to generate and synchronize that test data across the environment is yet another way that BlazeMeter helps ensure the accuracy of that testing.

When you run a test, you can have that test set up where it will automatically generate the virtual services that are needed and generate the data that's needed. This means anyone going to run the test doesn't need to know everything that’s needed to get it to run. It's more accurate and easier to run the test through BlazeMeter than through others.

ADM: As the adoption of DevOps practices accelerates, how does BlazeMeter integrate into CI/CD pipelines to enable continuous testing without impeding development speed?

Feloney: BlazeMeter is a SaaS-based testing solution.

Everything that BlazeMeter does is CI-driven; BlazeMeter out of the box supports CI/CD tools like Jenkins.

Whatever tool is used for the CI/CD process, BlazeMeter has well-documented APIs that anybody can use to build it into their CI tool, toolchain, or CI pipeline.

That includes bringing up the mock services, generating the data for the test, running the test, and retrieving the results.

All of it is CI-driven. BlazeMeter was built with a focus on shift left testing.
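
Scripted from a pipeline, that sequence might look something like the sketch below. The base URL, endpoint paths, and payloads are placeholders rather than BlazeMeter's documented API; the point is the four CI-driven steps described above: start the mock services, generate the data, run the test, and pull the results so the build can pass or fail.

```python
# Sketch of a CI stage calling a testing platform's REST API. All endpoints
# below are placeholders -- consult the vendor's API documentation for the
# real calls and authentication scheme.
import os
import requests

API = "https://testing.example.com/api"  # placeholder base URL, not a real endpoint
HEADERS = {"Authorization": f"Bearer {os.environ['API_TOKEN']}"}

def run_pipeline_stage():
    # 1. Bring up the mock/virtual services the test depends on.
    requests.post(f"{API}/mock-services/start", headers=HEADERS, timeout=30).raise_for_status()
    # 2. Generate synchronized test data from the stored data rules.
    requests.post(f"{API}/test-data/generate", headers=HEADERS, timeout=30).raise_for_status()
    # 3. Kick off the test run.
    run = requests.post(f"{API}/tests/checkout-flow/run", headers=HEADERS, timeout=30).json()
    # 4. Retrieve the results so the CI job can pass or fail the build.
    results = requests.get(f"{API}/runs/{run['id']}/results", headers=HEADERS, timeout=30).json()
    return results["status"] == "passed"

if __name__ == "__main__":
    raise SystemExit(0 if run_pipeline_stage() else 1)
```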

Effective collaboration between testers, developers, and other stakeholders

ADM: Effective collaboration between testers, developers, and other stakeholders is key. How does BlazeMeter facilitate transparent communication and data sharing among cross-functional teams to ensure alignment and quality in testing efforts?

Feloney: BlazeMeter does this in a few ways. First, because it is a SaaS-based solution, any tests, any data, and any rules created are seen by everybody.

Anybody who has permission can modify, use, and share all those components.

For example, say a developer is creating a unit test and they need a mock service for that unit test. Developers can build their mock service inside the unit test. They didn't build the mock service inside of BlazeMeter or store it there; they built it inside their unit test. By adding one line of code to that unit test, BlazeMeter will see those transactions, store them, and create a more advanced mock service from them. The developer does what they want to do, and the tester sees what the developer has done. The tester can then create a more holistic mock service to better enhance that testing and share it through BlazeMeter with the developer. The developer can then change one more line of code and use the new mock service the tester modified.

The same can happen with any test type. A developer/tester can create, by hand, a performance test, a functional test, or an API test and upload it to BlazeMeter. Another user can take that, copy it, or edit it. That’s a great collaboration between developers and testers.

ADM: Cross-browser and cross-device compatibility testing is essential for user satisfaction. How does BlazeMeter assist in managing the complexities of testing across different platforms while maintaining testing effectiveness?

Feloney: With BlazeMeter, we support multiple browsers.  A user can record on Google Chrome and then run the test across Edge, Firefox, and Chrome in parallel.

When it comes to cross-device testing, Perfecto has the ability, through its device cloud, to allow parallel testing across iOS and Android devices (both physical and virtual). In both instances, test results can be compared to understand performance, functional, and visual differences.
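
For readers who want to picture the cross-browser case, here is a small local Selenium sketch of the same idea: one scripted check run against Chrome, Firefox, and Edge in parallel, with the results compared afterward. BlazeMeter and Perfecto do this at scale in the cloud; the URL and assertion below are illustrative only.

```python
# Local sketch of parallel cross-browser checking with Selenium: the same
# scripted step runs in three browsers at once and the outcomes are compared.
# Requires the corresponding browser drivers to be installed locally.
from concurrent.futures import ThreadPoolExecutor
from selenium import webdriver

BROWSERS = {
    "chrome":  webdriver.Chrome,
    "firefox": webdriver.Firefox,
    "edge":    webdriver.Edge,
}

def check_title(name, factory, url="https://example.com"):
    driver = factory()
    try:
        driver.get(url)
        return name, "Example Domain" in driver.title
    finally:
        driver.quit()

with ThreadPoolExecutor(max_workers=len(BROWSERS)) as pool:
    futures = [pool.submit(check_title, name, factory) for name, factory in BROWSERS.items()]
    for fut in futures:
        name, passed = fut.result()
        print(f"{name}: {'passed' if passed else 'failed'}")
```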

About Stephen Feloney

Stephen Feloney is the Vice President of Products at Perforce. Prior to this role, for the last 11 years, Stephen has been in Product Management, focused on enterprise software, at various companies spanning from the very large, like HP, to startups. Before product management, Stephen spent 12+ years as a software engineer. Stephen holds a B.S. in Computer Engineering from Santa Clara University.

