Tuesday, May 7, 2024

The Responsible Revolution: How generative AI is transforming the telco and media industry

Amid all the excitement around the potential of generative AI to transform business and unlock trillions of dollars in value across the global economy, it is easy to overlook the significant impact that the technology is already having. Indeed, the era of gen AI does not exist at some vague point in the not-too-distant future: it is here and now. 

The advent of generative AI marks a significant leap in the evolution of computing. For media customers, generative AI introduces the ability to generate real-time, personalized, and unique interactions that weren’t possible before. This technology is not only streamlining the content creation process but also transforming broadcasting operations, such as discovering and searching media archives.

Simultaneously, in telco, generative AI boosts productivity by powering knowledge engines that summarize and extract information from both structured and unstructured data, helping employees resolve customer problems faster and shortening the learning curve. Furthermore, generative AI can be adopted and understood at all levels of the organization without requiring knowledge of the underlying model’s complexity.

How generative AI is transforming the telco and media industry

The telecommunications and media industry is at the forefront of integrating generative AI into its operations, viewing it as a catalyst for growth and innovation. Industry leaders are enthusiastic about its ability not only to enhance current processes but also to spearhead new innovations, create new opportunities, unlock new sources of value, and improve overall business efficiency.

Communication service providers (CSPs) are now using generative AI to significantly reduce the time it takes to perform network-outage root-cause analysis. Traditionally, identifying the root cause of an outage involved engineers sifting through numerous logs, vendor documents, and past trouble tickets and their resolutions. Vertex AI Search enables CSPs to extract relevant information across structured and unstructured data, significantly shortening the time it takes a human engineer to identify probable causes.
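The retrieval-and-ranking idea behind this workflow can be sketched in plain Python. Assume the snippets below have already been fetched from a search backend (such as Vertex AI Search) using the outage symptoms as the query; the documents, candidate causes, and function names are all invented for illustration, not part of any real API:

```python
from collections import Counter

# Illustrative corpus: the kinds of snippets an engineer would otherwise
# read manually. In practice these would be retrieved from a search
# backend queried with the outage symptoms; here they are hard-coded.
DOCUMENTS = [
    "Trouble ticket 4811: BGP session flap after fiber cut; resolved by reroute.",
    "Vendor note: firmware 2.3 known to drop BGP sessions under high load.",
    "Syslog: interface ge-0/0/1 down, BGP neighbor lost, traffic blackholed.",
    "Trouble ticket 5092: DNS resolver overload caused customer-facing outage.",
]

def rank_probable_causes(documents, candidate_causes):
    """Rank candidate root causes by how many retrieved documents mention them."""
    scores = Counter()
    for doc in documents:
        text = doc.lower()
        for cause in candidate_causes:
            if cause.lower() in text:
                scores[cause] += 1
    # Most frequently supported candidate first.
    return scores.most_common()

ranking = rank_probable_causes(DOCUMENTS, ["BGP", "DNS", "power failure"])
print(ranking)  # BGP is supported by three documents, DNS by one.
```

A real deployment would of course let the model summarize the supporting passages rather than just count them, but the core value is the same: collapsing hours of manual log-mining into a ranked shortlist.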

“Generative AI is helping our employees to do their jobs and increase their productivity, allowing them to spend more time strengthening the relationship with our customers,” explains Uli Irnich, CIO of Vodafone Germany.

Media organizations are using generative AI to engage and retain viewers by enabling more powerful search and recommendations. With Vertex AI, customers are building advanced media recommendation applications that let audiences discover personalized content, with Google-quality results customized to their optimization objectives.
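At its simplest, the personalization described here amounts to matching a viewer profile against item features. The toy sketch below scores catalog titles by the dot product of a viewer’s preference vector and each title’s feature vector; the titles, features, and weights are invented, and a production system learns them from watch history at far larger scale:

```python
CATALOG = {
    # title: (drama, comedy, documentary) feature weights (made up)
    "Harbor Lights": (0.9, 0.1, 0.0),
    "Laugh Factory": (0.0, 1.0, 0.0),
    "Deep Ocean":    (0.1, 0.0, 0.9),
}

def recommend(preferences, catalog, k=2):
    """Return the k titles whose features best match the preference vector."""
    def score(features):
        return sum(p * f for p, f in zip(preferences, features))
    return sorted(catalog, key=lambda title: score(catalog[title]),
                  reverse=True)[:k]

# A viewer who mostly watches documentaries, with some interest in drama.
top = recommend((0.3, 0.0, 1.0), CATALOG)
print(top)  # ['Deep Ocean', 'Harbor Lights']
```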

Responding to challenges with a responsible approach to development

While the potential of generative AI is widely recognized, challenges to its widespread adoption still persist. On the one hand, many of these stem from the sheer size of the businesses involved, with legacy architecture, siloed data, and the need for skills training presenting obstacles to more widespread and effective usage of generative AI solutions. On the other hand, many of these risk-averse enterprise-scale organizations want to be sure that the benefits of generative AI outweigh any perceived risks. In particular, businesses seek reassurance around the security of customer data and the need to conform to regulation, as well as around some of the challenges that can arise when building generative AI models, such as hallucinations (more on that below).

As part of our long-standing commitment to the responsible development of AI, Google Cloud puts our AI Principles into practice. Through guidance, documentation, and practical tools, we help customers roll out their solutions in a safe, secure, and responsible way. By tackling challenges and concerns head on, we are working to empower organizations to leverage generative AI safely and effectively.

One such challenge is “hallucinations”: instances in which a generative AI model outputs incorrect or fabricated information in response to a prompt. For enterprises, it is key to build robust safety layers before deploying generative AI-powered applications. Models, and the ways generative AI apps leverage them, will continue to improve, and many methods for reducing hallucinations are already available to organizations.

Last year, we introduced grounding capabilities for Vertex AI, enabling large language models to incorporate specific data sources when generating responses. Grounding tethers a model’s output to designated data sources, which reduces hallucinations and enhances the trustworthiness of generated content. It also lets the model access information beyond its training data: by linking to designated data stores within Vertex AI Search, a grounded model can produce relevant, up-to-date responses.
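The pattern behind grounding can be sketched in a few lines: retrieve passages from a designated store, then constrain generation to that context. Everything below is illustrative (the data store, the retrieval logic, and the prompt wording are invented); on Vertex AI, grounding is a managed capability rather than something hand-rolled like this:

```python
# A toy "designated data store" of passages the model is allowed to use.
DATA_STORE = {
    "plans": "The Fibre 500 plan includes 500 Mbps down and unlimited data.",
    "support": "Support lines are open 08:00-20:00 CET, Monday to Saturday.",
}

def retrieve(query: str) -> list[str]:
    """Return stored passages sharing at least one word with the query."""
    query_words = set(query.lower().split())
    return [text for text in DATA_STORE.values()
            if query_words & set(text.lower().split())]

def grounded_prompt(query: str) -> str:
    """Build a prompt that tethers the model to retrieved passages only."""
    context = "\n".join(retrieve(query)) or "NO SOURCES FOUND"
    return (
        "Answer using ONLY the sources below; otherwise say you don't know.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

prompt = grounded_prompt("What speed is the Fibre 500 plan?")
print(prompt)
```

The key design point is visible even in the sketch: the model never sees free-floating world knowledge as authoritative; every answerable question is routed through passages from the designated store, which is what anchors the response and reduces invented content.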

As AI-generated images become increasingly popular, we offer digital watermarking and verification on Vertex AI, making us the first cloud provider to give enterprises a robust, usable, and scalable way to create AI-generated images responsibly and identify them with confidence. Digital watermarking on Vertex AI provides two capabilities: watermarking, which produces a watermark designed to be invisible to the human eye without damaging or reducing image quality, and verification, which assesses, with a confidence level, whether an image was generated by Imagen. This technology is powered by Google DeepMind’s SynthID, a state-of-the-art technology that embeds the watermark directly into the image pixels, making it imperceptible to the human eye and very difficult to tamper with without damaging the image.

Removing harmful content for more positive user experiences

Given the versatility of large language models, predicting unintended or unexpected output is challenging. To address this, our generative AI APIs include safety attribute scoring, enabling customers to test Google’s safety filters and set confidence thresholds suited to their specific use case and business. These safety attributes include harmful categories and topics that can be considered sensitive, each assigned a confidence score between 0 and 1. This score reflects the likelihood of the input or response belonging to a given category. Implementing this measure is a step toward a positive user experience, ensuring outputs align more closely with the desired safety standards.
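The thresholding logic described above can be sketched as follows: each attribute carries a confidence score in [0, 1], and the application blocks any response whose score exceeds the per-category threshold it has configured. The category names, scores, and threshold values here are made up for the example and do not mirror any specific API:

```python
# Per-category thresholds the application chooses for its use case
# (illustrative values; a news app might tolerate more health-related
# content than a children's service would).
THRESHOLDS = {
    "Violent": 0.5,
    "Insult": 0.6,
    "Health": 0.8,
}

def passes_safety_filter(scores: dict[str, float],
                         thresholds: dict[str, float]) -> bool:
    """Return True only if no scored attribute exceeds its threshold."""
    return all(score <= thresholds.get(category, 1.0)
               for category, score in scores.items())

# Hypothetical scores attached to one model response.
response_scores = {"Violent": 0.1, "Insult": 0.7, "Health": 0.2}
allowed = passes_safety_filter(response_scores, THRESHOLDS)
print(allowed)  # False: the Insult score 0.7 exceeds its 0.6 threshold
```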

Embedding responsible AI governance throughout our processes

As we work to develop generative AI responsibly, we keep a close eye on emerging regulatory frameworks. Google’s AI/ML Privacy Commitment outlines our belief that customers should have a higher level of security and control over their data on the cloud. That commitment extends to Google Cloud generative AI solutions: by default Google Cloud doesn’t use customer data (including prompts, responses and adapter model training data) to train its foundation models. We also offer third-party intellectual property indemnity as standard for all customers.

By integrating responsible AI principles and toolkits into all aspects of AI development, we are seeing growing confidence among organizations in using Google Cloud’s generative AI models and platform. This approach enables them to enhance the customer experience and foster a productive business environment in a secure, safe, and responsible manner. As we progress on a shared generative AI journey, we are committed to empowering customers with the tools and protections they need to use our services safely, securely, and with confidence.

“Google Cloud generative AI is optimizing the flow from ideation to dissemination,” says Daniel Hulme, Chief AI Officer at WPP. “And as we start to scale these technologies, what is really important over the coming years is how we use them in a safe, responsible and ethical way.”
