Thursday, December 5, 2024
No menu items!
HomeCloud ComputingBuild generative AI pipelines without the infrastructure headache

Build generative AI pipelines without the infrastructure headache

While creating a basic ChatGPT prototype might take a weekend, developing production-ready generative AI systems that securely handle enterprise data presents significantly greater engineering challenges. Development teams typically invest weeks addressing critical infrastructure requirements: securing data pipelines across siloed systems (both unstructured and structured), configuring vector databases, making model selection decisions, and implementing comprehensive security controls—all while maintaining strict compliance standards.

Traditional approaches present a difficult choice. We either invest months in building custom infrastructure from scratch or we accept the limitations of vendor-specific ecosystems that restrict our choice of models, databases, and deployment options.

Gencore AI transforms this landscape. It enables the construction of enterprise-grade generative AI pipelines using any data system, vector database, AI model, and prompt endpoint. Through its flexible architecture and embedded security controls, you can deploy production-ready AI systems in days instead of months.

A highly flexible platform for building enterprise-grade AI systems

Gencore AI is a holistic solution that allows you to easily build safe, enterprise-grade generative AI systems, utilizing proprietary enterprise data securely across diverse data systems and applications. It accelerates generative AI adoption in enterprises by simplifying the construction of unstructured and structured data and AI pipelines from hundreds of data systems. The solution automatically learns data controls (such as entitlements) in underlying systems and applies them at the AI usage layer, protects AI systems against malicious use, and provides full provenance of the entire AI system for comprehensive monitoring and control.

Developers can use Gencore AI’s components either as a complete platform or as modular building blocks in existing projects. Gencore AI allows you to:

  1. Build safe enterprise AI copilots: Draw on a rich library of connectors and a unique knowledge graph to build enterprise AI copilots, knowledge systems, and apps that combine data from multiple systems. Enterprise controls, like entitlements in data systems, are automatically learned and applied at the AI usage layer. Gain full provenance of the entire AI system, including data and AI usage—down to the level of each file, every user, and all AI models and usage endpoints.
  2. Safely sync data to vector databases: Quickly and securely ingest and sync unstructured and structured data at scale from any system, including SaaS, IaaS, private clouds, and data lakes and data warehouses. Generate embeddings from data while retaining associated metadata and store them into a chosen vector database, making enterprise data ready for large language models (LLMs) to produce valuable insights.
  3. Prepare unstructured data for AI model training: Build and manage data preparation pipelines for model training and tuning, with built-in sanitization and quality control capabilities.
  4. Protect AI interactions: Configure LLM firewalls to protect user prompts, model responses, and data retrievals in AI systems.

Gencore AI pipeline architecture and components

With its flexible architecture, Gencore AI enables developers to easily configure complex AI pipelines and rapidly create, iterate, and deploy enterprise-grade AI systems. Let’s examine the core components and their capabilities:

  1. The Data Loader forms the foundation of the pipeline, connecting to a wide array of source systems. It implements granular filtering options based on file types, modification dates, and custom criteria. A key feature is its ability to extract and preserve metadata, such as access controls from source systems. The loader also supports incremental loading, efficiently handling large-scale data updates without the need for full reprocessing.
  2. Next, the Data Sanitizer performs in-memory data obfuscation based on classified data elements. It utilizes advanced pattern recognition techniques and natural language processing (NLP) models for sensitive content classification. The sanitizer offers customizable rules that can be fine-tuned to align with specific industry regulations. Importantly, it provides detailed logging and auditing of sanitization actions, which is crucial for maintaining compliance and enabling forensic analysis if needed.
  3. The Embeddings Generator captures the semantic meaning of your data into vector representations using selected embedding models. It supports multiple state-of-the-art APIs and hosted models as well as custom models, allowing organizations to choose the best fit for their data and use case. The generator implements efficient splitting strategies to handle long documents, ensuring that context is preserved while optimizing for vector database storage and retrieval.
  4. The Vector Database stores and indexes these embeddings for efficient retrieval. Gencore AI integrates with popular vector databases and implements optimized indexing strategies for fast similarity search. A standout feature is its support for hybrid search, combining vector similarity, syntactic similarity, and metadata filtering to provide more accurate and contextually relevant results.
  5. The Embeddings Retriever configures similarity search parameters for optimal result relevance. It offers advanced retrieval methods like hybrid search and LLM assisted re-ranking, providing tunable parameters for controlling the number of results and similarity thresholds. To improve performance for frequent queries, the retriever implements intelligent caching mechanisms.
  6. The LLM Selection component integrates with chosen LLM providers, supporting both cloud-based and on-premises deployments. It offers sophisticated model comparison tools to evaluate use cases and performance across different LLMs. The selected LLM is used within a powerful agentic workflow framework to maximize response accuracy.
  7. The Prompt, Retrieval, and Response Firewalls come with pre-configured policy templates for common security and compliance scenarios while also providing for custom rule and policy creation. These firewalls provide real-time policy enforcement with minimal latency impact, ensuring that AI interactions remain secure and compliant.
  8. Finally, the Assistant API exposes configured pipelines through RESTful APIs, enabling seamless integration with enterprise systems. Users can access these pipelines via multiple options: a web widget for easy integration into web pages or applications, a centralized user portal for dedicated access, or through popular messaging apps like Slack and Teams.

Accelerating enterprise generative AI development

As enterprises continue to explore the vast potential of generative AI, solutions like Gencore AI play a crucial role in bridging the gap between innovation and governance. By providing a comprehensive, safe, and flexible platform for building enterprise-grade AI systems, Gencore AI empowers organizations to move beyond proof-of-concept implementations to deploy scalable generative AI applications across their organization.

The future of enterprise AI development isn’t just about accessing advanced models—it’s about deploying these models efficiently and securely within complex organizational environments. Gencore AI provides developers with the tools and infrastructure needed to build generative AI applications that meet enterprise security, compliance, and scalability requirements, without sacrificing development speed or flexibility.

Mike Rinehart is VP of artificial intelligence and Bharat Patel is head of infrastructure engineering at Securiti.

—

Generative AI Insights provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss the challenges and opportunities of generative artificial intelligence. The selection is wide-ranging, from technology deep dives to case studies to expert opinion, but also subjective, based on our judgment of which topics and treatments will best serve InfoWorld’s technically sophisticated audience. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Contact [email protected].

Anthropic introduces the Model Context Protocol | InfoWorldRead More

RELATED ARTICLES

LEAVE A REPLY

Please enter your comment!
Please enter your name here

Most Popular

Recent Comments