Tuesday, May 7, 2024

Build powerful gen AI applications with Firestore vector similarity search

Creating innovative AI-powered solutions for use cases such as product recommendations and chatbots often requires vector similarity search, or vector search for short. At Google Cloud Next ‘24, we announced Firestore vector search in preview, which uses exact K-nearest neighbor (KNN) search. Developers can now perform vector search on transactional Firestore data without the hassle of copying it to a separate vector search solution, preserving operational simplicity and efficiency.

Developers can also use Firestore vector search with popular orchestration frameworks such as LangChain and LlamaIndex through native integrations. In addition, we’ve launched a new Firestore extension that automatically computes vector embeddings on your data and creates web services that make it easier to perform vector searches from a web or mobile application.

In this blog, we’ll discuss how developers can get started with Firestore’s new vector search capabilities.

How to use KNN vector search in Firestore

The first step in using vector search is to generate vector embeddings. Embeddings are representations of different kinds of data (text, images, video, etc.) in a continuous vector space, and they capture semantic or syntactic similarities between the entities they represent. Embeddings can be calculated using a service such as the Vertex AI text-embeddings API.
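As an illustration, here is a minimal sketch of how a request to the Vertex AI text-embeddings REST endpoint might be constructed; the project ID, region, and model name are placeholder assumptions, and a real application would typically use a Google Cloud client library with proper authentication instead.

```javascript
// Sketch: build a request for the Vertex AI text-embeddings REST endpoint.
// PROJECT_ID, REGION, and MODEL are placeholder assumptions for illustration.
const PROJECT_ID = "my-project";
const REGION = "us-central1";
const MODEL = "textembedding-gecko";

function buildEmbeddingRequest(text) {
  const url =
    `https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}` +
    `/locations/${REGION}/publishers/google/models/${MODEL}:predict`;
  const body = { instances: [{ content: text }] };
  return { url, body };
}

const req = buildEmbeddingRequest("Information about the Kahawa coffee beans.");
// In a real application you would POST req.body to req.url with an OAuth
// bearer token; the response contains the embedding vector for the text.
console.log(req.url);
```

The returned vector (768 dimensions for this model family) is what you store in Firestore in the next step.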

Once the embeddings are generated, you can store them in Firestore using one of the supported SDKs. For example, let’s say you’ve generated an embedding with your favorite embedding model for the data in the field “description” in the collection “beans”. You can now add that generated embedding as a vector value in the field “embedding_field” by running the following command with the Node.js SDK:

```javascript
const db = new Firestore();
let collectionRef = db.collection("beans");
await collectionRef.add({
  name: "Kahawa coffee beans",
  type: "arabica",
  description: "Information about the Kahawa coffee beans.",
  embedding_field: FieldValue.vector([0.1, 0.3, ..., 0.2]), // a vector with 768 dimensions
});
```

Alternatively, rather than calling the embedding generation service from your application for a field, you can also automate the generation of the vector embeddings based on field values in your document and your favorite embedding model by using the Firestore vector search extension.

The next step is to create a Firestore KNN vector index on “embedding_field”, where the vector embeddings are stored. During the preview release, you need to create the index using the gcloud command-line tool.

Continuing with our example, this is how you would create a Firestore KNN vector index:

```shell
gcloud alpha firestore indexes composite create \
  --collection-group=beans \
  --query-scope=COLLECTION \
  --field-config field-path=embedding_field,vector-config='{"dimension":"768", "flat": "{}"}'
```

Once you have added all the vector embeddings and created the vector index, you are ready to run a K-nearest neighbor search. Use the findNearest call to pass the query vector embedding to compare against the stored embeddings, and to specify the distance function you want to use.
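To make the distance-measure choice concrete, here is a small, self-contained sketch of how EUCLIDEAN and COSINE distances between two vectors are computed. This is illustrative only, not Firestore’s internal implementation.

```javascript
// Illustrative distance computations (not Firestore internals).

// Euclidean distance: straight-line distance between two points.
function euclideanDistance(a, b) {
  let sum = 0;
  for (let i = 0; i < a.length; i++) sum += (a[i] - b[i]) ** 2;
  return Math.sqrt(sum);
}

// Cosine distance: 1 minus the cosine of the angle between the vectors,
// so vectors pointing the same direction have distance 0.
function cosineDistance(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] ** 2;
    normB += b[i] ** 2;
  }
  return 1 - dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(euclideanDistance([0, 3], [4, 0])); // 5
console.log(cosineDistance([1, 0], [1, 0]));    // 0 (same direction)
```

Euclidean distance is sensitive to vector magnitude, while cosine distance depends only on direction; which is appropriate depends on how your embedding model was trained.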

In our example, to run a KNN search on the “embedding_field” in the “beans” collection using EUCLIDEAN vector distance, you run the following query:

```typescript
collectionRef = db.collection("beans");
let vectorQuery: VectorQuery = collectionRef.findNearest(
  "embedding_field",
  FieldValue.vector([0.4, 0.1, ..., 0.3]), // a vector with 768 dimensions
  {
    limit: 5,
    distanceMeasure: "EUCLIDEAN",
  }
);
await vectorQuery.get();
```

How to use pre-filtering with KNN vector search

One of the key benefits of Firestore’s KNN vector search is that it can be used in conjunction with other query predicates, such as equality conditions, to pre-filter the data set to only the vectors you want to search. This reduces the search space, yielding faster and more relevant results.
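Conceptually, pre-filtering means the equality condition narrows the candidate set before any nearest-neighbor distances are computed. The following in-memory sketch (plain JavaScript, not the Firestore SDK — Firestore does this server-side using the composite vector index) illustrates the idea:

```javascript
// Conceptual sketch of pre-filtered KNN over an in-memory array.
const docs = [
  { name: "A", type: "arabica", embedding: [0.1, 0.2] },
  { name: "B", type: "robusta", embedding: [0.9, 0.8] },
  { name: "C", type: "arabica", embedding: [0.4, 0.1] },
];

function euclidean(a, b) {
  return Math.sqrt(a.reduce((s, v, i) => s + (v - b[i]) ** 2, 0));
}

function preFilteredKnn(collection, filterFn, queryVec, k) {
  return collection
    .filter(filterFn)                     // pre-filter: shrink the search space
    .map((d) => ({ ...d, dist: euclidean(d.embedding, queryVec) }))
    .sort((x, y) => x.dist - y.dist)      // nearest first
    .slice(0, k);                         // keep the k nearest neighbors
}

const nearest = preFilteredKnn(docs, (d) => d.type === "arabica", [0.4, 0.1], 1);
console.log(nearest[0].name); // "C" — "B" was never considered
```

Because the filter runs first, distances are only computed for the “arabica” documents, which is exactly the benefit of reducing the search space.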

To pre-filter and run the vector search on the filtered data set, first create a composite index using the gcloud command-line tool, including the fields you want to pre-filter on along with the vector field.

For example, to pre-filter on the field “type” in our “beans” collection when doing a KNN vector search, create a Firestore KNN composite vector index using the command below:

```shell
gcloud alpha firestore indexes composite create \
  --collection-group=beans \
  --query-scope=COLLECTION \
  --field-config=order=ASCENDING,field-path="type" \
  --field-config field-path=embedding_field,vector-config='{"dimension":"768", "flat": "{}"}'
```

Once the index is created, you can run a KNN search with the pre-filter, as in the example below:

```javascript
collectionRef = db.collection("beans");
vectorQuery = collectionRef
  .where("type", "==", "arabica")
  .findNearest("embedding_field", FieldValue.vector([0.4, 0.1, ..., 0.3]), {
    limit: 5,
    distanceMeasure: "EUCLIDEAN",
  });
await vectorQuery.get();
```

Ecosystem integrations

To help application developers quickly and efficiently build retrieval-augmented generation (RAG) solutions using vector search and foundation models, Firestore KNN vector search now integrates with LangChain Vector Store and LlamaIndex. These integrations give workflows orchestrated with LangChain or LlamaIndex access to accurate, reliable information stored in Firestore, improving the credibility and trustworthiness of large language model (LLM) responses. They also enable richer contextual understanding: pulling contextual information from Firestore yields highly relevant, personalized responses tailored to customer needs.

For more information about the integrations, see Firestore for LangChain (or Datastore for LangChain) and the LlamaIndex-Firestore integration website.

As mentioned earlier, we also announced a new Firebase extension that lets developers use their favorite embedding model to automatically compute and store embeddings for a given field of a Firestore document. The extension also makes vector similarity searches easier by generating embeddings from a query value for input into vector search. For more information, see the Firestore vector search extension web page.

Pricing

Firestore customers are charged for the number of KNN vector index entries read during the computation, and for document reads only on the documents returned by the query. For detailed pricing, please refer to the pricing page.

Next steps

To learn more about Firestore and its vector search, check out the following resources:

Getting Started with Firestore

Firestore Vector Search documentation

Firestore for LangChain

Datastore for LangChain

LlamaIndex-Firestore integration website

Firestore Vector Search Extension

Thanks to Minh Nguyen, Senior Product Manager Lead for Firestore, for his contributions to this blog post.

