New AWS Amazon Machine Learning Platform Takes on IBM’s Watson
|Richard Harris in Enterprise Tuesday, April 14, 2015|
AWS has introduced Amazon Machine Learning, a new service that helps developers embed machine learning intelligence into their apps. Developers can build and fine-tune predictive models using large amounts of data, and then use Amazon Machine Learning to make predictions, in batch mode or in real time, at scale.
Amazon Machine Learning provides visualization tools and wizards that guide developers through the process of creating machine learning (ML) models without having to learn complex ML algorithms and technology. Once the models are ready, Amazon Machine Learning makes it easy to get predictions for an application using simple APIs, without having to implement custom prediction-generation code or manage any infrastructure.
The service uses custom algorithms to create ML models by finding patterns in existing data. Then, Amazon Machine Learning uses these models to process new data and generate predictions for an application. The platform is highly scalable and can generate billions of predictions daily, and serve those predictions in real-time and at high throughput.
- Easily Create Machine Learning Models: Amazon Machine Learning APIs and wizards make it easy for any developer to create and fine-tune ML models from data stored in Amazon Simple Storage Service (Amazon S3), Amazon Redshift, or MySQL databases in Amazon Relational Database Service (Amazon RDS), and query these models for predictions. The service’s built-in data processors, scalable ML algorithms, interactive data and model visualization tools, and quality alerts help build and refine models quickly.
- From Models to Predictions in Seconds: Amazon Machine Learning is a managed service that provides end-to-end model creation, deployment, and monitoring. Once the model is ready, developers can quickly and reliably generate predictions for applications, eliminating the time and investment needed to build, scale, and maintain machine learning infrastructure.
- Scalable, High Performance Prediction Generation Service: Amazon Machine Learning prediction APIs can be used to generate billions of predictions for applications. Developers can request predictions for large numbers of data records all at once using the batch API, or use the real-time API to obtain predictions for individual data records, and use them within interactive web, mobile, or desktop applications.
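As a rough sketch of how the real-time API is invoked from the AWS SDK for Python (boto3), the snippet below assembles the parameters for a single `Predict` call. The model ID, endpoint URL, and record fields here are hypothetical placeholders, and the network call itself is commented out since it requires live AWS credentials:

```python
# import boto3  # the AWS SDK for Python; needed only for the live call below

# Hypothetical model ID and input record; all attribute values are strings.
ml_model_id = "ml-ExampleModelId"
record = {"zip_code": "98101", "amount": "24.99", "card_type": "visa"}

def build_predict_request(model_id, record, endpoint):
    """Assemble the parameters for one real-time Predict request."""
    return {
        "MLModelId": model_id,
        "Record": record,
        "PredictEndpoint": endpoint,
    }

request = build_predict_request(
    ml_model_id,
    record,
    "https://realtime.machinelearning.us-east-1.amazonaws.com",  # placeholder
)

# With credentials configured, the call itself would be:
# client = boto3.client("machinelearning")
# response = client.predict(**request)
# response["Prediction"] holds the predicted label and score
```

For batch workloads, the service instead reads a whole datasource from Amazon S3 and writes the predictions back out, so no per-record calls are needed.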
- Leverage Proven Technology: Amazon Machine Learning is based on the same proven, highly scalable ML technology used by Amazon to perform critical functions like supply chain management, fraudulent transaction identification, and catalog organization.
Example Use Cases for the Platform
- Fraud Detection: Amazon Machine Learning makes it easy to build predictive models that help identify potentially fraudulent retail transactions, or detect fraudulent or inappropriate item reviews.
- Content Personalization: The Platform can help a website provide a more personalized customer experience by using predictive models to recommend items or optimize website flow based on prior customer actions.
- Propensity Modeling for Marketing Campaigns: Amazon Machine Learning can help deliver targeted marketing campaigns. For example, Amazon Machine Learning could use prior customer activity to choose the most relevant email campaigns for target customers.
- Document Classification: The platform can help developers process unstructured text and take actions based on content. For instance, Amazon Machine Learning could be used to build applications that classify product reviews as positive, negative, or neutral.
- Customer Churn Prediction: Amazon Machine Learning can help find customers who are at high risk of attrition, enabling businesses to proactively engage them with promotions or customer service outreach.
- Automated Solution Recommendation for Customer Support: The platform can process free-form feedback from customers, including email messages, comments, or phone conversation transcripts, and recommend actions that can best address their concerns. For example, you could use Amazon Machine Learning to analyze social media traffic to discover customers who have a product support issue, and connect them with the right customer care specialists.
Pricing is on a pay-as-you-go basis, with data analysis, model training, and model evaluation costing $0.42 per compute hour. Batch predictions cost $0.10 for every 1,000 predictions, rounded up to the next 1,000. Real-time predictions cost $0.10 for every 1,000 predictions, plus an hourly reserved-capacity charge of $0.001 per hour for each 10 MB of memory provisioned for the model. During model creation, developers specify the maximum memory size of each model to manage cost and control predictive performance. Charges for data stored in Amazon S3, Amazon RDS, and Amazon Redshift are billed separately.
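The rounding and capacity rules matter when estimating costs. A minimal sketch of the arithmetic described above (the function names are my own, and the rates are the launch prices quoted in this article):

```python
import math

COMPUTE_RATE = 0.42    # $ per compute hour (analysis, training, evaluation)
BATCH_RATE = 0.10      # $ per 1,000 batch predictions, rounded up
REALTIME_RATE = 0.10   # $ per 1,000 real-time predictions
CAPACITY_RATE = 0.001  # $ per hour per 10 MB of provisioned model memory

def batch_cost(n_predictions):
    """Batch predictions are billed per 1,000, rounded up to the next 1,000."""
    return math.ceil(n_predictions / 1000) * BATCH_RATE

def realtime_cost(n_predictions, model_mb, hours_provisioned):
    """Real-time predictions add a reserved-capacity charge tied to the
    maximum model memory specified at model creation."""
    capacity = CAPACITY_RATE * (model_mb / 10) * hours_provisioned
    return (n_predictions / 1000) * REALTIME_RATE + capacity
```

For instance, 1,500 batch predictions bill as 2,000 ($0.20), and a 100 MB model kept provisioned for a day adds $0.24 of capacity charge on top of the per-prediction fees.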
As the AWS team explained the benefits of machine learning in a recent blog post:
Today, it is relatively straightforward and inexpensive to observe and collect vast amounts of operational data about a system, product, or process. Not surprisingly, there can be tremendous amounts of information buried within gigabytes of customer purchase data, web site navigation trails, or responses to email campaigns. The good news is that all of this data can, when properly analyzed, lead to statistically significant results that can be used to make high-quality decisions. The bad news is that you need to find data scientists with relevant expertise in machine learning, hope that your infrastructure is able to support their chosen tool set, and hope (again) that the tool set is sufficiently reliable and scalable for production use.
The science of Machine Learning (often abbreviated as ML) provides the mathematical underpinnings needed to run the analysis and to make sense of the results. It can help you to turn all of that data into high-quality predictions by finding and codifying patterns and relationships within it. Properly used, Machine Learning can serve as the basis for systems that perform fraud detection (is this transaction legitimate or not?), demand forecasting (how many widgets can we expect to sell?), ad targeting (which ads should be shown to which users?), and so forth.
In order to benefit from machine learning, you need to have some existing data that you can use for training purposes. It is helpful to think of the training data as rows in a database or a spreadsheet. Each row represents a single data element (one purchase, one shipment, or one catalog item). The columns represent the attributes of the element: customer zip code, purchase price, type of credit card, item size, and so forth.
This training data must contain examples of actual outcomes. For example, if you have rows that represent completed transactions that were either legitimate or fraudulent, each row must contain a column that denotes the result, which is also known as the target variable. This data is used to create a Machine Learning Model that, when presented with fresh data on a proposed transaction, will return a prediction regarding its validity. Amazon Machine Learning supports three distinct types of predictions: binary classification, multiclass classification, and regression. Let’s take a look at each one:
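A helpful way to picture this structure: each row carries both the attributes and the observed outcome, and the target variable is just one more column. The field names below are hypothetical, but the shape is the point; every training row must include the target:

```python
# Each row is one completed transaction; "is_fraud" is the target variable
# the model learns to predict (all field names here are illustrative).
training_rows = [
    {"zip_code": "98101", "price": 24.99,  "card_type": "visa",    "is_fraud": 0},
    {"zip_code": "10001", "price": 1899.0, "card_type": "prepaid", "is_fraud": 1},
    {"zip_code": "60614", "price": 54.50,  "card_type": "amex",    "is_fraud": 0},
]

TARGET = "is_fraud"
# Split each row into its attributes (features) and its known outcome (target)
features = [{k: v for k, v in row.items() if k != TARGET} for row in training_rows]
targets = [row[TARGET] for row in training_rows]
```

Fresh data on a proposed transaction has the same columns minus the target; the model's job is to fill that column in.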
Binary classification is used to predict one of two possible outcomes. Is this transaction legitimate, will the customer buy this product, or is the shipping address an apartment complex?
Multiclass classification is used to predict one of three or more possible outcomes and the likelihood of each one. Is this product a book, a movie, or an article of clothing? Is this movie a comedy, a documentary, or a thriller? Which category of products is of most interest to this customer?
Regression is used to predict a number. How many 27″ monitors should we place in inventory? How much should we charge for them? What percent of them are likely to be sold as gifts?
A properly trained and tuned model can be used to answer any one of the questions above. In some cases it is appropriate to use the same training data to build two or more models.
You should plan to spend some time enriching your data in order to ensure that it is a good match for the training process. As a simple example, you might start out with location data that is based on zip or postal codes. After some analysis, you could very well discover that you can improve the quality of the results by using a different location representation that contains greater or lesser resolution. The ML training process is iterative; you should definitely plan to spend some time understanding and evaluating your initial results and then using them to enrich your data.
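The zip-code example can be sketched as a one-line transformation. Truncating to a three-digit prefix (a hypothetical choice of resolution) trades location precision for more training examples per bucket:

```python
from collections import Counter

def coarsen_zip(zip_code, digits=3):
    """Replace a full ZIP code with its leading prefix: coarser resolution,
    so more training rows fall into each location bucket."""
    return zip_code[:digits]

zips = ["98101", "98104", "98109", "10001", "10003"]
buckets = Counter(coarsen_zip(z) for z in zips)
# Five distinct 5-digit ZIPs collapse into two 3-digit buckets
```

Whether the coarser or finer representation helps is exactly the kind of question the iterative evaluate-and-enrich loop is meant to answer.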
You can measure the quality of each of your models using a set of performance metrics that are computed and made available to you. For example, the Area Under Curve (AUC) metric measures the performance of a binary classification model. This is a floating point value in the range 0.0 to 1.0 that expresses how often the model predicts the correct answer on data it was not trained with. Values rise from 0.5 to 1.0 as the quality of the model rises. A score of 0.5 is no better than random guessing, while 0.9 would be a very good model for most cases. But a score of 0.9999 is probably too good to be true, and might indicate a problem with the training data.
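AUC has a simple pairwise interpretation: it is the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one. A minimal sketch of that computation (not Amazon's implementation):

```python
def auc(labels, scores):
    """Area under the ROC curve via the pairwise (Mann-Whitney) formulation:
    the fraction of positive/negative pairs ranked correctly, ties counting half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A model that ranks every positive above every negative scores 1.0;
# a model that scores everything identically is no better than chance (0.5).
```

This is why 0.5 means random guessing: with uninformative scores, a positive example outranks a negative one only half the time.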
As you build your binary prediction model, you will need to spend some time looking at the results and adjusting a value known as the cut-off. This represents the probability that the prediction is true; you can adjust it up or down based on the relative importance of false positives (predictions that should be false but were predicted as true) and false negatives (predictions that should be true but were predicted as false) in your particular situation. If you are building a spam filter for email, a false negative will route a piece of spam to your inbox and a false positive will route legitimate mail to your junk folder. In this case, false positives are undesirable. The tradeoff between false positives and false negatives is going to depend on your business problem and how you plan to make use of the model in a production setting.
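The spam-filter tradeoff can be made concrete by counting the two error types at different cut-offs (the labels and scores below are made up for illustration):

```python
def error_counts(labels, scores, cutoff):
    """Count false positives (legitimate mail flagged as spam) and
    false negatives (spam let through) at a given cut-off."""
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= cutoff)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < cutoff)
    return fp, fn

# 1 = spam, 0 = legitimate; scores are each message's predicted spam probability
labels = [1, 1, 1, 0, 0, 0]
scores = [0.95, 0.70, 0.55, 0.60, 0.30, 0.10]

low = error_counts(labels, scores, 0.50)   # aggressive filter
high = error_counts(labels, scores, 0.65)  # conservative filter
# Raising the cut-off trades false positives for false negatives
```

Here the aggressive cut-off junks one legitimate message but catches all the spam, while the conservative one lets one spam message through but never touches legitimate mail; for the spam-filter case described above, the conservative setting is the safer choice.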
Read more: https://aws.amazon.com/machine-learning/