
Long document summarization with Workflows and Gemini models

With generative AI top of mind for both developers and business stakeholders, it’s important to explore how products like Workflows, Google Cloud’s serverless execution engine, can automate and orchestrate large language model (LLM) use cases. We recently covered how to orchestrate Vertex AI’s PaLM and Gemini APIs with Workflows. In this blog, we illustrate how Workflows can perform long-document summarization, a concrete use case with wide applicability.

Open-source LLM orchestration frameworks like LangChain (for Python and TypeScript developers) or LangChain4j (for Java developers) integrate components such as LLMs, document loaders, and vector databases to implement complex tasks like document summarization. However, you can also use Workflows for this task, without investing significant time in learning an LLM orchestration framework.

Summarization techniques

It’s easy enough to summarize a short document by passing its entire content to an LLM as part of the prompt. However, prompts are limited by the size of the model’s context window, measured in tokens. For longer documents, a different approach is required. Two common approaches are:

Map/reduce — A long document is split into smaller sections that fit the context window. For each section, a summary is created, and a summary of all the summaries is created as a final step.

Iterative refinement — Similar to the map/reduce approach, we evaluate the document in a piecemeal fashion. A summary is created for the first section, then the LLM refines that summary with the details of the following section, and so on, iterating through to the end of the document.

Both methods yield good results. However, the map/reduce approach has one advantage over the refinement method: refinement is inherently sequential, since each section can only be summarized once the summary of the previous sections has been refined.

With map/reduce, as illustrated in the diagram below, you can create a summary for each section in parallel (the “map” operation), with a final summarization in the last step (the “reduce” operation). This is faster than the sequential approach.

Long document summarization with Workflows and Gemini models

In a previous article, we showed how to call PaLM and Gemini models via Workflows, and highlighted a key feature of Workflows: parallel step execution. With this feature, we can create summaries of the long document sections in parallel.

Here’s a bird’s-eye view of the workflow definition:

The workflow is triggered when a new text document is added to a Cloud Storage bucket (a sketch of the event payload it receives follows this list).

The text file is split into “chunks” that are summarized in parallel steps.

A final summarization step groups all the smaller summaries and combines them into a single summary.

All the calls to the Gemini 1.0 Pro model are made through a dedicated subworkflow.
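
For reference, here is an abridged sketch of the Cloud Storage event the workflow receives as its input argument when triggered via Eventarc. The values are purely illustrative, but the bucket, name, and size fields are the ones the workflow relies on later; note that size is delivered as a string, which is why the workflow converts it with int().

# Abridged, illustrative shape of the event passed to the workflow as `input`
data:
    bucket: "my-summarization-bucket"    # hypothetical bucket name
    name: "pride_and_prejudice.txt"      # hypothetical object name
    contentType: "text/plain"
    size: "690000"                       # object size in bytes, delivered as a string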

Let’s see that in action.

Retrieving the text file and summarizing sections in parallel (“map” part)

main:
    params: [input]
    steps:
        - assign_file_vars:
            assign:
                - file_size: ${int(input.data.size)}
                - chunk_size: 64000
                - n_chunks: ${int(file_size / chunk_size)}
                - summaries: []
                - all_summaries_concatenated: ""
        - loop_over_chunks:
            parallel:
                shared: [summaries]
                for:
                    value: chunk_idx
                    range: ${[0, n_chunks]}
                    steps:
                        - assign_bounds:
                            assign:
                                - lower_bound: ${chunk_idx * chunk_size}
                                - upper_bound: ${(chunk_idx + 1) * chunk_size}
                                - summaries: ${list.concat(summaries, "")}
                        - dump_file_content:
                            call: http.get
                            args:
                                url: ${"https://storage.googleapis.com/storage/v1/b/" + input.data.bucket + "/o/" + input.data.name + "?alt=media"}
                                auth:
                                    type: OAuth2
                                headers:
                                    Range: ${"bytes=" + lower_bound + "-" + upper_bound}
                            result: file_content
                        - assign_chunk:
                            assign:
                                - chunk: ${file_content.body}
                        - generate_chunk_summary:
                            call: ask_gemini_for_summary
                            args:
                                textToSummarize: ${chunk}
                            result: summary
                        - assign_summary:
                            assign:
                                - summaries[chunk_idx]: ${summary}

The assign_file_vars step prepares a few constants and data structures. Here, we chose 64,000 characters as our chunk size, so that each chunk fits in the LLM’s context window while staying within Workflows’ memory limits. We also initialize the list that will hold the section summaries, and a string variable that will accumulate all of them for the final summarization pass.
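
To make the chunking concrete, take a hypothetical document of 690,000 characters: n_chunks is int(690000 / 64000) = 10, and because the parallel loop’s range of [0, n_chunks] includes both bounds, eleven chunks (indices 0 through 10) are processed. The last chunk requests bytes 640,000 to 704,000, and Cloud Storage simply returns the remaining bytes up to the end of the object.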

The loop_over_chunks step processes each chunk in parallel. The dump_file_content sub-step loads the corresponding byte range of the document from Cloud Storage, then generate_chunk_summary calls the Gemini-powered subworkflow to summarize that section, and the resulting summary is finally stored at the chunk’s index in the summaries array.

A summary of summaries (“reduce” part)

Now that we have all the chunk summaries, we can summarize all the smaller summaries into an aggregate summary, or final summary of summaries if you will:

        - concat_summaries:
            for:
                value: summary
                in: ${summaries}
                steps:
                    - append_summaries:
                        assign:
                            - all_summaries_concatenated: ${all_summaries_concatenated + "\n" + summary}
        - reduce_summary:
            call: ask_gemini_for_summary
            args:
                textToSummarize: ${all_summaries_concatenated}
            result: final_summary
        - return_result:
            return:
                - summaries: ${summaries}
                - final_summary: ${final_summary}

In concat_summaries, we concatenate all the chunk summaries. In the reduce_summary step, we call our Gemini summarization subworkflow one last time to obtain the final summary. And in return_result, we return both the chunk summaries and the final summary.

Asking the Gemini model for summaries

Both our “map” and “reduce” steps call a subworkflow that encapsulates the calls to the Gemini model. Let’s zoom in on this final part of our workflow:

ask_gemini_for_summary:
    params: [textToSummarize]
    steps:
        - init:
            assign:
                - project: ${sys.get_env("GOOGLE_CLOUD_PROJECT_ID")}
                - location: "us-central1"
                - model: "gemini-1.0-pro"
                - summary: ""
        - call_gemini:
            call: http.post
            args:
                url: ${"https://" + location + "-aiplatform.googleapis.com" + "/v1/projects/" + project + "/locations/" + location + "/publishers/google/models/" + model + ":generateContent"}
                auth:
                    type: OAuth2
                body:
                    contents:
                        role: user
                        parts:
                            - text: '${"Make a summary of the following text:\n\n" + textToSummarize}'
                    generation_config:
                        temperature: 0.2
                        maxOutputTokens: 2000
                        topK: 10
                        topP: 0.9
            result: gemini_response
        # Sometimes, there's no text, for example, due to safety settings
        - check_text_exists:
            switch:
                - condition: ${not("parts" in gemini_response.body.candidates[0].content)}
                  next: return_summary
        - extract_text:
            assign:
                - summary: ${gemini_response.body.candidates[0].content.parts[0].text}
        - return_summary:
            return: ${summary}

In init, we prepare a few variables for the configuration of the LLM we want to use (in this case, Gemini 1.0 Pro in the us-central1 region).

In the call_gemini step, we make an HTTP POST call to the model’s REST API. Notice how we can declaratively authenticate to this API simply by specifying the OAuth2 authentication scheme. In the body, we pass the prompt requesting a summary, as well as model parameters such as the temperature and the maximum length of the summary to generate.
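
For context, here is an abridged sketch of what a generateContent response body typically looks like once parsed by Workflows (the values are illustrative). This is why the summary is read from candidates[0].content.parts[0].text, and why the workflow first checks that a parts field is present at all, since it can be missing when generation is blocked, for example by safety settings.

# Abridged, illustrative generateContent response body
candidates:
    - content:
          role: "model"
          parts:
              - text: "Pride and Prejudice follows Elizabeth Bennet..."   # the generated text
      finishReason: "STOP"
      # safetyRatings and other metadata omitted
usageMetadata:
    promptTokenCount: 14923    # illustrative numbers
    candidatesTokenCount: 312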

Finally, the last step of the subworkflow returns the summary to the calling steps.

The resulting summary

Saving the text of ‘Pride and Prejudice’ by Jane Austen into the Cloud Storage bucket triggers the workflow, which produces a summary for each section of the novel as well as the final, aggregate summary.

Going further

For the purpose of this article, we kept the workflow simple, but it could be improved in various ways. For example, we hard-coded the number of characters for each section to summarize, but that could be a parameter of the workflow, or even be computed from the model’s context-window limit.
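
As a minimal sketch of that first idea, assuming the workflow may also be executed directly with a JSON argument such as {"chunk_size": 32000} (Eventarc-triggered executions would simply fall back to the default), the assignment could look like this:

        - assign_file_vars:
            assign:
                # Use a chunk_size passed as a workflow argument if present,
                # otherwise fall back to 64,000 characters
                - chunk_size: ${default(map.get(input, "chunk_size"), 64000)}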

Workflows itself also limits how much data it can hold in variables, so we could handle cases where an extra-long list of section summaries would no longer fit in memory, for instance by writing intermediate summaries to Cloud Storage instead of keeping them in a shared variable. And let’s not forget the newer Gemini 1.5 model, which can accept up to one million tokens of input and summarize a long document in a single pass.
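
As a rough sketch of that approach (the object names are illustrative, and this assumes the workflow’s service account can write to the bucket through the Cloud Storage JSON API’s simple media upload), each parallel branch could store its chunk summary as an object instead of appending it to the shared summaries list:

                        - store_chunk_summary:
                            call: http.post
                            args:
                                # Hypothetical layout: one text object per chunk summary
                                url: ${"https://storage.googleapis.com/upload/storage/v1/b/" + input.data.bucket + "/o?uploadType=media&name=summary-" + string(chunk_idx) + ".txt"}
                                auth:
                                    type: OAuth2
                                headers:
                                    Content-Type: "text/plain"
                                body: ${summary}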

Of course, you can also choose to use an LLM orchestration framework, but as this example demonstrates, Workflows itself is capable of handling interesting LLM orchestration use cases.

Summary

In this article, we explored a new use case for orchestrating LLMs with Workflows and implemented a long-document summarization exercise without using a dedicated LLM framework. We took advantage of Workflows’ parallel steps to create the section summaries concurrently, reducing the latency of producing the complete summary.

Be sure to check out this sample summarization workflow in our Workflows sample repository, and feel free to read more about accessing Vertex AI models in the Workflows documentation. Also don’t hesitate to reach out to @glaforge for feedback or questions.

Related Article

Orchestrate Vertex AI’s PaLM and Gemini APIs with Workflows

Workflows simplifies gen AI orchestration. Learn how to use gen AI models with Workflows for automation and streamlined API calls.
