Monday, December 2, 2024

Google expands Responsible GenAI Toolkit

Google has enhanced its Responsible Generative AI Toolkit for building and evaluating open generative AI models, expanding it with watermarking for AI-generated content and with prompt refinement and debugging features. The new features are designed to work with any large language model (LLM), Google said.

Announced October 23, the new capabilities support Google’s Gemma and Gemini models or any other LLM. Among the capabilities added is SynthID watermarking for text, which allows AI application developers to watermark and detect text generated by their generative AI product. SynthID Text embeds digital watermarks directly into AI-generated text. It is accessible through Hugging Face and the Responsible Generative AI Toolkit.

Also featured is a Model Alignment library that helps developers refine prompts with support from LLMs. Developers provide feedback on how they would like their model's outputs to change, either "as a holistic critique or a set of guidelines." They can then use Gemini or a preferred LLM to transform the feedback into a prompt that aligns model behavior with the application's needs and content policies. The Model Alignment library can be accessed from PyPI.
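The announcement does not show the library's own API, but the feedback-to-prompt loop it describes can be sketched generically: fold the developer's critiques or guidelines into a meta-prompt, then ask an LLM to rewrite the task prompt accordingly. Both function names below are hypothetical and stand in for whatever interface the Model Alignment library actually exposes.

```python
def build_alignment_prompt(task_prompt: str, feedback: list[str]) -> str:
    # Fold user feedback (critiques or guidelines) into a meta-prompt asking
    # an LLM to rewrite the task prompt so outputs follow the feedback.
    guidelines = "\n".join(f"- {f}" for f in feedback)
    return (
        "Rewrite the prompt below so that a model following it satisfies "
        "every guideline.\n\n"
        f"Guidelines:\n{guidelines}\n\n"
        f"Prompt:\n{task_prompt}\n\n"
        "Return only the rewritten prompt."
    )

def refine_prompt(llm, task_prompt: str, feedback: list[str]) -> str:
    # `llm` is any callable str -> str, e.g. a thin wrapper around the
    # Gemini API or another preferred model.
    return llm(build_alignment_prompt(task_prompt, feedback))
```

The key design point the toolkit automates is the transformation step: the developer states policy in natural language, and an LLM does the prompt engineering.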

For prompt debugging, the Responsible Generative AI Toolkit adds an improved deployment experience for the Learning Interpretability Tool (LIT) on Google Cloud. Developers can use LIT’s new model server container to deploy a Hugging Face or Keras LLM with support for generation, tokenization, and salience scoring on Cloud Run GPUs. Google has also expanded connectivity from the LIT app to self-hosted models or to Gemini via the Vertex API.
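As a rough sketch of what such a deployment looks like, a Cloud Run service with an attached GPU can be created with the `gcloud` CLI. The image path below is a placeholder, and the GPU flags reflect Cloud Run's beta GPU support at the time of the announcement; consult the LIT and Cloud Run documentation for the actual container image and current flags.

```
# Sketch only: image path is a placeholder, not the real LIT model server image.
gcloud beta run deploy lit-model-server \
  --image=us-docker.pkg.dev/PROJECT/REPO/lit-model-server:latest \
  --region=us-central1 \
  --gpu=1 \
  --gpu-type=nvidia-l4 \
  --no-cpu-throttling
```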

Google is soliciting feedback on the new additions at the Google Developer Community Discord website.

