Scaling Ad Verification with Machine Learning and AWS Inferentia
Amazon Advertising helps companies build their brand and connect with shoppers through ads shown both within and beyond Amazon’s store, including websites, apps, and streaming TV content in more than 15 countries. Businesses and brands of all sizes, including registered sellers, vendors, book vendors, Kindle Direct Publishing (KDP) authors, app developers, and agencies on Amazon marketplaces, can upload their own ad creatives, which can include images, video, audio, and, of course, products sold on Amazon. To promote an accurate, safe, and pleasant shopping experience, these ads must comply with content guidelines.
Here’s a simple example. Can you figure out why two of the following ads would not be compliant?
The ad in the center doesn’t feature the product in context. It also shows the same product multiple times. The ad on the right looks much better, but it contains text, which is not allowed for this ad format.
New ad creatives come in many sizes, shapes, and languages, and at very large scale. Assuming it would even be possible, verifying them manually would be a complex, slow, and error-prone process. Machine learning (ML) to the rescue!
Using Machine Learning to Verify Ad Creatives
Each ad must be evaluated against many rules, which no single model could reasonably learn. In fact, it takes many models to check ad properties, for example:
- Media-specific models that analyze the images, video, audio, and text describing the advertised products.
- Content-specific models that detect headlines, text, backgrounds, and objects.
- Language-specific models that validate syntax and grammar, and flag unapproved language.
Some of these capabilities are readily available in AWS AI services. For example, Amazon Advertising teams use Amazon Rekognition to extract metadata information from images and videos.
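For instance, a single API call can surface both the objects in an ad image and any text overlaid on it, which matters given the text restrictions shown earlier. Here is a minimal sketch with boto3; the bucket and object names are placeholders, not Amazon Advertising’s actual setup.

```python
# A minimal sketch (bucket and image names are placeholders): use Amazon
# Rekognition to extract labels and overlaid text from an ad image.
import boto3

rekognition = boto3.client("rekognition")
image = {"S3Object": {"Bucket": "my-ad-bucket", "Name": "creatives/ad.jpg"}}

# Detect objects and scenes (e.g. "Shoe", "Outdoors") with confidence scores
labels = rekognition.detect_labels(Image=image, MaxLabels=10, MinConfidence=80)
for label in labels["Labels"]:
    print(label["Name"], label["Confidence"])

# Detect text overlaid on the image, which some ad formats disallow
text = rekognition.detect_text(Image=image)
for detection in text["TextDetections"]:
    if detection["Type"] == "LINE":
        print(detection["DetectedText"])
```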
Other capabilities require custom models trained on in-house datasets. For this purpose, Amazon teams labeled large ad datasets with Amazon SageMaker Ground Truth, using a combination of manual labeling and automatic labeling with active learning. Using these datasets, teams then trained models with Amazon SageMaker and deployed them automatically on real-time prediction endpoints with the AWS Cloud Development Kit (AWS CDK) and Amazon SageMaker Pipelines.
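As an illustration, here is what training one such model and deploying it to a real-time endpoint could look like with the SageMaker Python SDK. This is a generic sketch, not the Amazon Advertising pipeline: the training script, S3 location, and instance choices are placeholders.

```python
# A generic sketch of training a model and deploying it to a real-time
# endpoint with the SageMaker Python SDK (not Amazon's actual pipeline).
import sagemaker
from sagemaker.pytorch import PyTorch

role = sagemaker.get_execution_role()

estimator = PyTorch(
    entry_point="train.py",               # hypothetical training script
    role=role,
    framework_version="1.8.1",
    py_version="py36",
    instance_count=1,
    instance_type="ml.p3.2xlarge",
)
estimator.fit({"training": "s3://my-ad-bucket/labeled-dataset/"})  # placeholder

# Deploy the trained model to a real-time prediction endpoint
predictor = estimator.deploy(initial_instance_count=1,
                             instance_type="ml.g4dn.xlarge")
```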
When a business uploads a new ad, relevant models are invoked simultaneously to process specific ad components, extract signals, and output a quality score. All scores are then consolidated, and sent to a final model that predicts whether the ad should be manually reviewed.
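In code, that fan-out and consolidation step could look like the following sketch. The endpoint names and payload format are hypothetical; only the pattern (parallel invocation, then a final decision model) reflects the description above.

```python
# A hedged sketch of the fan-out/consolidate pattern described above.
# Endpoint names and payload format are hypothetical.
import json
from concurrent.futures import ThreadPoolExecutor

import boto3

runtime = boto3.client("sagemaker-runtime")

COMPONENT_ENDPOINTS = ["ad-image-model", "ad-text-model", "ad-language-model"]

def score(endpoint_name, payload):
    response = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/json",
        Body=json.dumps(payload),
    )
    return json.loads(response["Body"].read())["score"]

def verify_ad(ad_payload):
    # Invoke the per-component models in parallel
    with ThreadPoolExecutor(max_workers=len(COMPONENT_ENDPOINTS)) as pool:
        scores = list(pool.map(lambda ep: score(ep, ad_payload),
                               COMPONENT_ENDPOINTS))
    # Consolidate the scores and ask the final model whether a human should review
    return score("ad-review-decision-model", {"scores": scores})
```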
Thanks to this process, most new ads can be verified and published automatically, which means businesses can quickly promote their brand and products, and Amazon can maintain a high-quality shopping experience.
However, faced with a growing number of more complex models, Amazon Advertising teams started to look for a solution that could increase prediction throughput while reducing costs. They found it in AWS Inferentia.
What is AWS Inferentia?
Available in Amazon EC2 Inf1 instances, AWS Inferentia is a custom chip built by AWS to accelerate ML inference workloads and optimize their cost. Each AWS Inferentia chip contains four NeuronCores. Each NeuronCore implements a high-performance systolic array matrix multiply engine, which massively speeds up typical deep learning operations such as convolutions and transformer attention. NeuronCores are also equipped with a large on-chip cache, which helps cut down on external memory accesses, reducing latency and increasing throughput.
Thanks to AWS Neuron, a software development kit for ML inference, AWS Inferentia can be used natively from ML frameworks like TensorFlow, PyTorch, and Apache MXNet. AWS Neuron consists of a compiler, runtime, and profiling tools that enable you to run high-performance, low-latency inference. For many trained models, compilation is a one-liner with the Neuron SDK and requires no additional application code changes. The result is a high-performance inference deployment that can easily scale while keeping costs under control. You’ll find many examples in the Neuron documentation. Alternatively, thanks to Amazon SageMaker Neo, you can also compile models directly in SageMaker.
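For example, compiling a PyTorch model with torch-neuron boils down to a single trace call. Here is the canonical pattern from the Neuron documentation, using a ResNet-50 for illustration:

```python
# Compiling a trained PyTorch model for Inferentia with torch-neuron:
# a single trace call, with no changes to the model code itself.
import torch
import torch.neuron
from torchvision import models

model = models.resnet50(pretrained=True)
model.eval()

# A dummy input with the shape the model expects
image = torch.zeros(1, 3, 224, 224)

# Compile for Inferentia and save the compiled artifact
model_neuron = torch.neuron.trace(model, example_inputs=[image])
model_neuron.save("resnet50_neuron.pt")
```

The saved artifact then loads with a plain torch.jit.load on an Inf1 instance, where the Neuron runtime takes care of scheduling work on the NeuronCores.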
Scaling Ad Verification with AWS Inferentia
Amazon Advertising teams started compiling their models for Inferentia and deploying them on SageMaker endpoints powered by Inf1 instances, then compared these Inf1 endpoints to the GPU endpoints they had been using so far. They found that large deep learning models like BERT run more effectively on Inferentia, decreasing latency by 30% and reducing costs by 71%. A few months ago, ML teams working on Amazon Alexa came to the same conclusions.
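Serving a compiled model this way is a matter of pointing a SageMaker model at the Neuron artifact and choosing an Inf1 instance type. Here is a hedged sketch; the model artifact, IAM role, and inference handler are placeholders.

```python
# A hedged sketch of serving a Neuron-compiled model on a SageMaker endpoint
# backed by Inf1 instances. Artifact, role, and script names are placeholders.
from sagemaker.pytorch import PyTorchModel

model = PyTorchModel(
    model_data="s3://my-ad-bucket/models/resnet50_neuron.tar.gz",  # compiled artifact
    role="arn:aws:iam::123456789012:role/SageMakerRole",           # placeholder role
    entry_point="inference.py",        # hypothetical inference handler
    framework_version="1.5.1",
    py_version="py3",
)

# Inf1 instance types (e.g. ml.inf1.xlarge) put AWS Inferentia behind the endpoint
predictor = model.deploy(initial_instance_count=1,
                         instance_type="ml.inf1.xlarge")
```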
What about prediction quality? GPU models are typically trained with single-precision floating-point data (FP32). Inferentia uses the shorter FP16, BF16, and INT8 data types, which can create slight differences in predicted output. Running both GPU and Inferentia models in parallel, the teams analyzed probability distributions and tweaked prediction thresholds for their Inferentia models, making sure that these models would predict ads just like the GPU models did. You can learn more about these techniques in the Performance Tuning section of the Neuron documentation.
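A toy version of that threshold-tuning step might look like the following, assuming per-ad scores from both models have been saved to disk; the file names and candidate grid are illustrative, not Amazon’s actual procedure.

```python
# A toy sketch: run the same inputs through the FP32 (GPU) model and the
# compiled (Inferentia) model, then pick a decision threshold for the
# compiled model that best reproduces the FP32 model's decisions.
import numpy as np

fp32_scores = np.load("fp32_scores.npy")      # hypothetical saved predictions
neuron_scores = np.load("neuron_scores.npy")  # same inputs, compiled model

fp32_decisions = fp32_scores >= 0.5  # original decision threshold

# Sweep candidate thresholds and keep the one maximizing decision agreement
candidates = np.linspace(0.3, 0.7, 401)
agreement = [np.mean((neuron_scores >= t) == fp32_decisions) for t in candidates]
best_threshold = candidates[int(np.argmax(agreement))]
print(f"threshold={best_threshold:.3f}, agreement={max(agreement):.4f}")
```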
With these final adjustments out of the way, the Amazon Advertising teams started phasing out GPU models. All text data is now predicted on Inferentia, and the migration of computer vision pipelines is in progress.
AWS Customers Are Successful with AWS Inferentia
In addition to Amazon teams, customers also report excellent results when scaling and optimizing their ML workloads with Inferentia.
Binghui Ouyang, Senior Data Scientist at Autodesk: “Autodesk is advancing the cognitive technology of our AI-powered virtual assistant, Autodesk Virtual Agent (AVA), by using Inferentia. AVA answers over 100,000 customer questions per month by applying natural language understanding (NLU) and deep learning techniques to extract the context, intent, and meaning behind inquiries. Piloting Inferentia, we are able to obtain 4.9x higher throughput than G4dn for our NLU models, and look forward to running more workloads on the Inferentia-based Inf1 instances.”
Paul Fryzel, Principal Engineer, AI Infrastructure at Condé Nast: “Condé Nast’s global portfolio encompasses over 20 leading media brands, including Wired, Vogue, and Vanity Fair. Within a few weeks, our team was able to integrate our recommendation engine with AWS Inferentia chips. This union enables multiple runtime optimizations for state-of-the-art natural language models on SageMaker’s Inf1 instances. As a result, we observed a 72% reduction in cost compared to the previously deployed GPU instances.”
Getting Started
You can get started with Inferentia and Inf1 instances today, either on Amazon SageMaker or with the Neuron SDK. This self-paced workshop walks you through both options.
Give it a try, and let us know what you think. As always, we look forward to your feedback. You can send it through your usual AWS Support contacts, post it on the AWS Forum for SageMaker, or open an issue on the Neuron SDK GitHub repository.