
Introducing AWS AI Service Cards: A new resource to enhance transparency and advance responsible AI

Artificial intelligence (AI) and machine learning (ML) are among the most transformative technologies of our generation, helping us tackle business and societal problems, improve customer experiences, and spur innovation. Along with the widespread use and growing scale of AI comes the recognition that we must all build responsibly. At AWS, we think responsible AI encompasses a number of core dimensions, including:

Fairness and bias – How a system impacts different subpopulations of users (e.g., by gender or ethnicity)
Explainability – Mechanisms to understand and evaluate the outputs of an AI system
Privacy and security – Data protected from theft and exposure
Robustness – Mechanisms to ensure an AI system operates reliably
Governance – Processes to define, implement, and enforce responsible AI practices within an organization
Transparency – Communicating information about an AI system so stakeholders can make informed choices about their use of the system

Our commitment to developing AI and ML in a responsible way is integral to how we build our services, engage with customers, and drive innovation. We are also committed to providing customers with tools and resources to develop and use AI/ML responsibly, from enabling ML builders with a fully managed development environment to helping customers embed AI services into common business use cases.

Providing customers with more transparency

Our customers want to know that the technology they are using was developed in a responsible way. They want resources and guidance to implement that technology responsibly at their own organization. And most importantly, they want to ensure that the technology they roll out is for everyone’s benefit, especially their end-users’. At AWS, we want to help them bring this vision to life.

To deliver the transparency that customers are asking for, we are excited to launch AWS AI Service Cards, a new resource to help customers better understand our AWS AI services. AI Service Cards are a form of responsible AI documentation that provides customers with a single place to find information on the intended use cases and limitations, responsible AI design choices, and deployment and performance optimization best practices for our AI services. They are part of a comprehensive development process we undertake to build our services in a responsible way that addresses fairness and bias, explainability, robustness, governance, transparency, privacy, and security. At AWS re:Invent 2022, we're making the first three AI Service Cards available: Amazon Rekognition – Face Matching, Amazon Textract – AnalyzeID, and Amazon Transcribe – Batch (English-US).

Components of the AI Service Cards

Each AI Service Card contains four sections covering:

Basic concepts to help customers better understand the service or service features
Intended use cases and limitations
Responsible AI design considerations
Guidance on deployment and performance optimization

The content of the AI Service Cards addresses a broad audience of customers, technologists, researchers, and other stakeholders who seek to better understand key considerations in the responsible design and use of an AI service.

Our customers use AI in an increasingly diverse set of applications. The intended use cases and limitations section provides information about common uses for a service and helps customers assess whether a service is a good fit for their application. For example, in the Amazon Transcribe – Batch (English-US) Card we describe the service's use case of transcribing general-purpose vocabulary spoken in US English from an audio file. If a company wants a solution that automatically transcribes a domain-specific event, such as an international neuroscience conference, they can add custom vocabularies and custom language models that cover the relevant scientific terminology to increase the accuracy of the transcription.
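As a rough illustration of that workflow, the sketch below shows how such a batch transcription job might be started with the AWS SDK for Python (boto3), attaching a custom vocabulary and a custom language model. The job name, S3 locations, vocabulary name, and language model name are hypothetical placeholders, not values from the Service Card.

```python
import boto3

transcribe = boto3.client("transcribe")

# Start a batch transcription job for US English audio, attaching a custom
# vocabulary and a custom language model trained on domain-specific text.
# All names and S3 URIs below are placeholders.
transcribe.start_transcription_job(
    TranscriptionJobName="neuroscience-keynote-2022",
    LanguageCode="en-US",
    Media={"MediaFileUri": "s3://example-bucket/audio/keynote.wav"},
    OutputBucketName="example-transcripts-bucket",
    Settings={"VocabularyName": "neuroscience-terms"},        # custom vocabulary
    ModelSettings={"LanguageModelName": "neuroscience-clm"},  # custom language model
)
```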

In the design section of each AI Service Card, we explain key responsible AI design considerations across important areas, such as our test-driven methodology, fairness and bias, explainability, and performance expectations. We provide example performance results on an evaluation dataset that is representative of a common use case. This example is just a starting point though, as we encourage customers to test on their own datasets to better understand how the service will perform on their own content and use cases in order to deliver the best experience for their end customers. And this is not a one-time evaluation. To build in a responsible way, we recommend an iterative approach where customers periodically test and evaluate their applications for accuracy or potential bias.
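For speech-to-text workloads, one lightweight way to run such a periodic check is to score service transcripts against human reference transcripts. The word error rate (WER) sketch below is a minimal illustration of that idea, not an AWS-prescribed evaluation method; the example strings are made up.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """(substitutions + insertions + deletions) / number of reference words."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # Dynamic-programming edit-distance table over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Example: compare a service transcript against a human reference.
print(word_error_rate("the hippocampus encodes spatial memory",
                      "the hippo campus encodes spatial memory"))
```

Tracking a metric like this on a representative sample of their own audio, and repeating the measurement as content and usage change, gives customers a concrete basis for the iterative evaluation described above.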

In the best practices for deployment and performance optimization section, we lay out key levers that customers should consider to optimize the performance of their application for real-world deployment. Because an AI service typically acts as one component of a larger application or workflow, we explain how to tune that component to get the maximum benefit. For example, in the Amazon Rekognition – Face Matching Card, which covers adding face recognition capabilities to identity verification applications, we share steps customers can take to increase the quality of the face matching predictions incorporated into their workflow.
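As a sketch of what such tuning can look like in code, the example below calls the Amazon Rekognition CompareFaces API through boto3 with a high similarity threshold and a strict quality filter, two of the levers commonly used in identity verification workflows. The bucket and object names are placeholders, and the specific settings are illustrative choices rather than recommendations from the Card.

```python
import boto3

rekognition = boto3.client("rekognition")

# Compare a selfie against the photo on an ID document, keeping only
# high-quality face detections and matches above a strict similarity threshold.
# The bucket and object names below are placeholders.
response = rekognition.compare_faces(
    SourceImage={"S3Object": {"Bucket": "example-bucket", "Name": "selfie.jpg"}},
    TargetImage={"S3Object": {"Bucket": "example-bucket", "Name": "id-card.jpg"}},
    SimilarityThreshold=99,   # require a high similarity score before accepting a match
    QualityFilter="HIGH",     # drop low-quality face detections before matching
)

for match in response["FaceMatches"]:
    print(f"Matched face with similarity {match['Similarity']:.1f}%")
```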

Delivering responsible AI resources and capabilities

Offering our customers the resources and tools they need to transform responsible AI from theory to practice is an ongoing priority for AWS. Earlier this year we launched our Responsible Use of Machine Learning guide that provides considerations and recommendations for responsibly using ML across all phases of the ML lifecycle. AI Service Cards complement our existing developer guides and blog posts, which provide builders with descriptions of service features and detailed instructions for using our service APIs. And with Amazon SageMaker Clarify and Amazon SageMaker Model Monitor, we offer capabilities to help detect bias in datasets and models and better monitor and review model predictions through automation and human oversight.
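For instance, a pre-training bias check with Amazon SageMaker Clarify can be set up with a few lines of the SageMaker Python SDK. The sketch below is illustrative only; the IAM role ARN, S3 paths, column names, facet, and choice of bias metrics are assumptions for the example rather than guidance from AWS.

```python
from sagemaker import clarify, Session

session = Session()

# Processor that runs the Clarify bias analysis job; instance settings are illustrative.
clarify_processor = clarify.SageMakerClarifyProcessor(
    role="arn:aws:iam::111122223333:role/ExampleSageMakerRole",  # placeholder role ARN
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

# Where the training data lives and which columns describe the label and features.
data_config = clarify.DataConfig(
    s3_data_input_path="s3://example-bucket/train.csv",
    s3_output_path="s3://example-bucket/clarify-output",
    label="approved",
    headers=["approved", "income", "age", "gender"],
    dataset_type="text/csv",
)

# Measure pre-training bias for the "gender" facet relative to the positive label.
bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],
    facet_name="gender",
)

clarify_processor.run_pre_training_bias(
    data_config=data_config,
    data_bias_config=bias_config,
    methods=["CI", "DPL"],  # class imbalance and difference in positive proportions
)
```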

At the same time, we continue to advance responsible AI across other key dimensions, such as governance. At re:Invent today we launched a new set of purpose-built tools to help customers improve governance of their ML projects: Amazon SageMaker Role Manager, Amazon SageMaker Model Cards, and Amazon SageMaker Model Dashboard. Learn more about how these tools help streamline ML governance processes on the AWS News Blog and the AWS website.

Education is another key resource that helps advance responsible AI. At AWS we are committed to building the next generation of developers and data scientists in AI with the AI and ML Scholarship Program and AWS Machine Learning University (MLU). This week at re:Invent we launched a new, public MLU course on fairness considerations and bias mitigation across the ML lifecycle. Taught by the same Amazon data scientists who train AWS employees on ML, this free course features 9 hours of lectures and hands-on exercises, making it easy to get started.

AI Service Cards: A new resource—and an ongoing commitment

We are excited to bring a new transparency resource to our customers and the broader community and provide additional information on the intended uses, limitations, design, and optimization of our AI services, informed by our rigorous approach to building AWS AI services in a responsible way. Our hope is that AI Service Cards will act as a useful transparency resource and an important step in the evolving landscape of responsible AI. AI Service Cards will continue to evolve and expand as we engage with our customers and the broader community to gather feedback and continually iterate on our approach.

Contact our group of responsible AI experts to start a conversation.

About the authors

Vasi Philomin is currently a Vice President in the AWS AI team for services in the language and speech technologies areas, such as Amazon Lex, Amazon Polly, Amazon Translate, Amazon Transcribe/Transcribe Medical, Amazon Comprehend, Amazon Kendra, Amazon CodeWhisperer, Amazon Monitron, Amazon Lookout for Equipment, and Contact Lens/Voice ID for Amazon Connect, as well as the Machine Learning Solutions Lab and Responsible AI.

Peter Hallinan leads initiatives in the science and practice of Responsible AI at AWS AI, alongside a team of responsible AI experts. He has deep expertise in AI (PhD, Harvard) and entrepreneurship (Blindsight, sold to Amazon). His volunteer activities have included serving as a consulting professor at the Stanford University School of Medicine, and as the president of the American Chamber of Commerce in Madagascar. When possible, he's off in the mountains with his children: skiing, climbing, hiking, and rafting.
