Monday, May 20, 2024

Protecting LLM applications with Azure AI Content Safety

Both extremely promising and extremely risky, generative AI has distinct failure modes that we need to defend against to protect our users and our code. We've all seen the news stories: chatbots goaded into insulting or racist responses, large language models (LLMs) exploited for malicious purposes, and outputs that are at best fanciful and at worst dangerous.

None of this is particularly surprising. It's possible to craft complex prompts that force undesired outputs, pushing the model's context window past the guidelines and guardrails we've put in place. At the same time, we can see outputs that go beyond the data in the foundation model, generating text that's no longer grounded in reality: plausible, semantically correct nonsense.
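Guardrails of this kind are typically enforced by scoring both prompts and responses against a set of harm categories and rejecting anything that meets or exceeds a severity threshold. Below is a minimal sketch of that pattern, using simulated results shaped like the category analyses a moderation service such as Azure AI Content Safety returns; the helper function, threshold, and data structures here are illustrative assumptions, not the service's actual SDK.

```python
# Hypothetical sketch of a severity-threshold guardrail. The category names
# (Hate, Violence, etc.) mirror those used by Azure AI Content Safety, but the
# helper and data shapes below are assumptions for illustration only.

SEVERITY_THRESHOLD = 2  # block anything at or above this severity (0-7 scale assumed)

def is_blocked(categories_analysis, threshold=SEVERITY_THRESHOLD):
    """Return True if any analyzed category meets or exceeds the threshold."""
    return any(item["severity"] >= threshold for item in categories_analysis)

# Simulated moderation results for two different prompts.
safe_result = [
    {"category": "Hate", "severity": 0},
    {"category": "Violence", "severity": 0},
]
risky_result = [
    {"category": "Hate", "severity": 4},
    {"category": "Violence", "severity": 1},
]

print(is_blocked(safe_result))   # False -> pass the prompt to the LLM
print(is_blocked(risky_result))  # True  -> reject before it reaches the model
```

In a real deployment the same check would run twice, once on the user's prompt before it reaches the model and once on the model's response before it reaches the user, so that both prompt injection and harmful generations are caught.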


InfoWorld Cloud Computing
