Testing solution to make AI more human launches from Applause
Monday, January 13, 2020
Applause, the crowd testing provider, recently launched a new solution aimed at both the training and testing of AI experiences to address some of the common problems companies face when developing and releasing algorithms to real users.
Artificial intelligence (AI) is quickly reaching the mainstream. Once the stuff of science fiction, AI is now used to automate many mundane tasks to make our lives easier. And from a business perspective, AI helps brands personalize digital experiences, save money, and improve customer satisfaction and loyalty.
While the excitement around AI is substantial, one area that hasn’t received much attention is the testing of AI algorithms. Customer expectations for AI are high, which only adds to the importance of testing. To address some of the common problems companies face when developing and releasing algorithms to real users, Applause – the leading crowd testing provider – recently launched a new solution aimed at both the training and testing of AI experiences. We spoke with Kristin Simonini, who leads Applause’s product organization, about AI testing and the new offering.
ADM: How is artificial intelligence impacting our daily lives?
Simonini: We interact with AI on a day-to-day basis, even if we don’t always realize it. A simple task such as unlocking a phone becomes an AI action if it is done by using a fingerprint scanner or through facial recognition. There are also plenty of websites using AI, whether that’s through a chatbot that helps to answer customer questions, or an algorithm that offers up personalized shopping recommendations. And with the growth of smart speakers powered by assistants like Amazon Alexa and Google Assistant, many of us are now using AI to make voice searches.
ADM: What industries do you see leveraging AI the most?
Simonini: There are use cases for AI no matter the industry, but some industries have gotten off to a faster start. Retailers are already leveraging voice experiences to provide additional buying opportunities, and personalized recommendation engines are an easy fit in retail. There’s been a lot of movement toward AI in financial services too. Companies are using AI assistants – like Bank of America’s Erica – to give financial tips and alerts to customers. Finally, the automotive industry is clearly on the cutting edge of AI. Not only are cars integrating voice features, but the continued move toward self-driving cars is all about artificial intelligence.
ADM: Why hasn’t AI caught on to a larger degree?
Simonini: AI has a data problem. Companies are struggling to source the massive amount of data that is required to train an algorithm. Not only do companies need tons of data, but they also need tons of good data – which can be hard to come by. When not enough data is used, or if the data is of poor quality, it can lead to biased results.
ADM: How can an algorithm be biased?
Simonini: Algorithms become biased when they are not trained and subsequently tested with diverse data. Say, for example, a single team based in the northwestern US develops an algorithm and does all their own training and testing. The AI they develop will work great for them and for people who resemble their profile – including gender, race, background, and location. But the AI will deliver the wrong results when used by people who don’t match their exact profile. The old adage, “Garbage in, garbage out,” still summarizes this bias problem best.
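The failure mode Simonini describes can be shown with a deliberately tiny sketch. The “recognizer” below is hypothetical and far simpler than any production system: it only learns the exact phrasings its homogeneous training set contains, so it works for users who sound like the team that built it and fails for everyone else.

```python
# Toy illustration (not Applause's system): a "voice command" model trained
# only on one group's phrasing fails for users who phrase things differently.

def train(utterances):
    """Learn an exact-phrase -> intent lookup from labeled utterances."""
    return {phrase: intent for phrase, intent in utterances}

def predict(model, phrase):
    """Return the learned intent, or UNRECOGNIZED for unseen phrasings."""
    return model.get(phrase, "UNRECOGNIZED")

# Training data collected from a single, homogeneous team:
training_data = [
    ("turn on the lights", "lights_on"),
    ("what's the weather", "weather"),
]
model = train(training_data)

# Works for users who match the training profile...
print(predict(model, "turn on the lights"))   # lights_on

# ...but fails for an equally valid phrasing the team never collected:
print(predict(model, "switch the lights on"))  # UNRECOGNIZED
```

A real model generalizes better than an exact-match lookup, but the principle is the same: phrasings, accents, and demographics absent from the training data are exactly where the model breaks.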
ADM: What issues can bias cause?
Simonini: Because a biased algorithm doesn’t accurately represent all users, it can lead to skewed results and lower consumer trust. It can also cause brands to give up on an AI project altogether – a decision that can impact both a company’s finances and its willingness to try AI projects again in the future.
ADM: How can companies building or using AI algorithms address these concerns?
Simonini: Stopping AI bias starts with sourcing diverse data that is representative of the entire customer base. Then, on the other side, the outputs produced by the AI must be tested by a diverse audience. At Applause, we take a crowdsourced approach to both train and test AI for clients. With access to a global community of highly vetted and diverse testers, we are able to mitigate bias concerns and ensure AI experiences are of high quality.
ADM: How is your company using AI to improve its own products?
Simonini: We are using AI to improve our software platform in three main areas. First, by leveraging AI during tester selection, we pinpoint the best candidates for a given testing project depending on factors like the testing type. This delivers better value to our customers while removing some of the natural human bias that can sneak into the tester selection process. Second, we are using AI to improve payment optimization, ensuring that we pay testers the correct amount for the bugs found – based on the number of bugs and their severity. And, finally, we are working toward providing our clients with improved predictive analytics that will help them make decisions for their business. This process is already underway with our release of the Applause Quality Score in October.
ADM: Do you see AI as a complete replacement for human testers?
Simonini: I don’t, at least not at this point in time. The technology is not ready to completely remove the human element from the equation. There will always be the need to understand an application ‘in the wild’ and with the unpredictability that only a real human can provide.
ADM: Can you give some examples of the AI training and testing work you are already doing with clients?
Simonini: There are five main types of AI engagements that we currently have with clients.
- Voice: Source utterances to train voice-enabled devices, and test those devices to ensure they understand and respond accurately.
- OCR (Optical Character Recognition): Provide documents and corresponding text to train algorithms to recognize text, and compare printed docs and the recognized text for accuracy.
- Image Recognition: Deliver photos taken of predefined objects and locations, and ensure objects are being recognized and identified correctly.
- Biometrics: Source biometric inputs like faces and fingerprints, and test whether those inputs result in an experience that’s easy to use and actually works.
- Chatbots: Give sample questions and varying intents for chatbots to answer, and interact with chatbots to ensure they understand and respond accurately in a human-like way.
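The chatbot engagement above has two halves: crowd testers supply varied question/intent pairs, and the bot is then scored on how often it responds with the right intent. A minimal sketch of that scoring step, with a stand-in keyword-matching “chatbot” (purely illustrative – no real Applause tooling is shown):

```python
# Hypothetical sketch of scoring a chatbot against crowd-sourced
# question/intent pairs. The classify() function is a naive stand-in bot.

def classify(question):
    """Stand-in chatbot: naive keyword matching (illustrative only)."""
    q = question.lower()
    if "refund" in q or "money back" in q:
        return "refund"
    if "hours" in q or "open" in q:
        return "store_hours"
    return "fallback"

# Varying phrasings for the same intents, as a diverse crowd would provide:
test_cases = [
    ("How do I get a refund?", "refund"),
    ("I want my money back", "refund"),
    ("What are your hours?", "store_hours"),
    ("When do you open?", "store_hours"),
    ("Can I change my order?", "change_order"),
]

correct = sum(classify(q) == intent for q, intent in test_cases)
accuracy = correct / len(test_cases)
print(f"intent accuracy: {accuracy:.0%}")  # prints "intent accuracy: 80%"
```

The payoff of a diverse test set is visible in the miss: the bot handles the phrasings its keywords anticipate but falls to a fallback on “Can I change my order?”, a gap a homogeneous test set might never surface.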
About Kristin Simonini
A 20-year veteran of the product management space, Kristin leads Applause’s product organization. Her team is responsible for defining the strategic roadmap for Applause’s industry-leading crowdsourced testing platform. Prior to Applause, Kristin led the product management efforts at EdAssist, a Bright Horizons Solution at Work, where she instituted a product management practice and led the effort to reinvigorate their industry-leading tuition assistance platform, including the release of their first mobile app.