
Blog

The goal of the blog is to discuss broader topics around the cognee project, including the motivation behind the project, the technical details, and the future of the project.

Knowledge graphs + RAGs

  1. LLMOps stack + Graphs

Product announcements

This section covers the release notes for the cognee library. It includes the new features, bug fixes, and improvements in each release.

  1. Cognee - library release

  2. New website for cognee

Towards deterministic data pipelines for LLMs step by step

This series mostly deals with product discovery, data engineering, and the development of robust AI memory data pipelines.

  1. From demo to production 1
  2. From demo to production 2
  3. From demo to production 3
  4. From demo to production 4

LLMOps stack + Graphs

The past: Berlin startup scene

Machine learning has had its place in the business models of companies for several years, but due to high labor costs, lack of generalizability, and long development cycles, it often failed to meet early expectations. With the rise of ChatGPT, however, foundation models and LLMs are reemerging as the next step in the evolution of the Machine Learning stack, democratizing it for end users.

As a consultant and operator in the Berlin startup scene over the past ten years, I have seen the vocation of “Data Scientist” reflect this trend as well. Its initial popularity, decline, and resurgence could easily have mirrored the rise and fall of the Roman Empire: humble beginnings, grandiosity, and then the rigidity of thought of early Christianity.

In my early years as a data analyst in tier-II e-commerce businesses, “Data Scientist” was considered a prestigious, cutting-edge title. However, most of these ventures lacked the experience or maturity to properly productionize their business models.

Often, I would see a data scientist building tons of features for their company’s AI models, only to slightly improve on basic KPIs. They were often stuck in the limbo of demoware, and only the businesses in which data was a key operational element would successfully deploy and run data science systems at scale.


Pandemic and fall from grace

Over the years, this low-impact, high-drain dynamic led to data science falling out of favor. The COVID pandemic seemed to deliver a death blow to the Berlin Data Science community, with many data scientists being made redundant.

This played out differently in larger markets and companies, where I saw more mature setups heavily relying on machine learning. However, from the perspective of most software Mittelstand (a German term for medium-sized enterprises), the technology was seen as a nice-to-have, not a must-have.

Suddenly, with the release of ChatGPT, most of the knowledge previously required to operate machine learning became obsolete; the only thing now needed was an API key. This dropped the barrier to entry to the floor and created a need for new tools to be built around these APIs.

Tools like Langchain met this need perfectly, enabling everyone to interact with their data.

ML to LLMOps

A question arises about how we should approach LLM tooling. Ignoring previous knowledge and inventing new paradigms (Agents come to mind) as if in a vacuum can be counterproductive. Re-inventing categories should be done cautiously; history shows that overdoing it can lead to failure.

A recently published article by angel investor and Professor of Neuroscience at U.C. London, Peter Zhegin, has effectively mapped out the elements of the MLOps system ripe for disruption, suggesting which ones might be impacted:

  1. Vector Stores: The authors argue that data storage and vector stores will be acquired by large players, but that differentiation still may be possible in the data space. They state that "A realistic way to differentiate might be to push for real-time vectorization while finding the best way to use traditional databases and feature stores (relational/NoSQL) together with vector DBs."
  2. Feature Storage: The authors note that "Prompt engineering does not involve traditional training but allows one to change the model's behavior during inference by creating appropriate instructions or queries. This ‘training without training’ presents an interesting opportunity outside the MLOps pipeline."

Feature Stores and the Next Steps

The evolution of the MLOps stack signals a need for a new type of feature store that enables in-context learning.

Since fine-tuning LLMs will start happening at inference time, we need a system to interact with and manage data points fed to the LLM at scale. This implies a need for a feature store that provides more determinism to the LLM outputs, enabling the integration of business processes and practices into data that captures the context of an enterprise or organization.

An example of such a use case would be using a set of documents from different departments, enabling the LLM to understand the relationships between these documents and their individual parts.

This effort often requires humans to provide the base rules for how the LLM should interact with the information, leading to the creation of what is commonly referred to as a RAG (Retrieval Augmented Generation) pipeline.


Recently, it has become possible to combine graphs and vector data stores to create a semantic layer on top of naive RAGs. This layer has been a major step towards encoding rules into an in-context learning pipeline.

In his recent blog post, Chia Jeng Yang, co-founder of WhyHow.AI, explained what a typical RAG pipeline looks like. He also introduced Graph Ops and Vector Ops as new elements of the RAG stack, which can lead to more stable retrieval patterns.

The argument Zhegin made a few months ago is now taking shape. We are seeing feature stores evolve into tools that manage vector and graph stores.

We are still in the early stages, though. As Jon Turrow of Madrona Ventures suggests, the next generation of AI agent infrastructure—what Chia refers to as Graph Ops—will be a personalization layer.


I believe that these terms are interchangeable and that a new in-context Feature Store, Vector and Graph Ops, and personalization layers are essentially the same thing. Moreover, it’s my belief that Vector and Graph Ops are not, in and of themselves, differentiating categories.

The challenge is, thus, not connecting Vector and Graph stores or giving a RAG system a 10% performance boost.

The main issues still remain.

The challenge and solution lie in creating a new type of probabilistic data engine—one with an interface as simple as SQL, but which can retrieve and structure information in real-time, optimizing what we feed the LLM based on solid evaluations.


Striving to make sense of the best computing engine we know of—our mind—cognitive sciences may offer us clues on how to move forward.

After all, we process, store, and retrieve data from our mental lexicon with ease, with inherent personalization and dynamic data enrichment.

I believe that understanding the way our mind carries out these processes may allow us to replicate them in machine learning.

With human language as the new SQL and cognitive theories as inspiration, the next generation of tooling is still on the horizon.

Cognee's new website

We are excited to announce the launch of Cognee.ai.

Highlights: you can now book a discussion and schedule a consultation directly through our website. Check it out at www.cognee.ai and book your discussion with our experts.

We'd be happy to hear your feedback and discuss GraphRAGs and more.

Cognee v0.1.4

Cognee - release v0.1.4

New Features

Enhanced Text Processing Capabilities:

Customizable models for classification, summarization, and labeling have been introduced to extend the versatility of our text analytics.

Better Graph Integration and Visualization:

We have revamped our graph logic and introduced comprehensive support for multiple graph database types, including but not limited to Neo4j and NetworkX, enhancing integration and scalability. New functions for graph rendering, color palette generation, and dynamic graph visualization help you better visualize data relationships and structures.

DSPy Module:

You can now train your graph generation query on a dataset and visualize the results with the DSPy module.

Async Support:

Added asyncio support in various modules to improve performance and system efficiency, reducing latency and resource consumption.

Enhancements

Infrastructure Upgrades:

Our infrastructure has been significantly upgraded to support a wider range of models and third-party providers, ensuring compatibility and performance. Improved configuration settings for a more robust and adaptive operational environment.

Graph and Logic Improvements:

  • Improved Neo4j Integration: Enhancements to our Neo4j graph database integration for better performance and stability.
  • Semantic Links and Node Logic: We have improved the semantic linkage between nodes and enhanced the node logic for a more intuitive and powerful user experience.

Refactor

Asynchronous Module Updates:

Various modules have been updated to use asynchronous operations to enhance the responsiveness and scalability of our systems.

Documentation

Enhanced Documentation:

We have extensively added to and reorganized our documentation.

Bug Fixes

Data Handling and Logic Improvements:

Fixed several inconsistencies in data handling and improved the logic flow in text input processing.

Cognee - release v0.1.0

Preface

In a series of posts, we explored issues with RAGs and ways to build a new infrastructure stack for the world of agent networks.

To restate the problem, borrowing the phrase Microsoft used:

  • Baseline RAG performs poorly when asked to understand summarized semantic concepts holistically over large data collections or even singular large documents.

In the previous blog post, we explained how developing a data platform and a memory layer for LLMs was one of our core aims.

To do that more effectively, we turned cognee into a Python library, making it easier to use and to draw inspiration from the OSS community.

Improved memory architecture

With the integration of Keepi.ai, we encountered several challenges that made us reassess our strategy. Among the issues we’ve identified were:

  • The decomposition of user prompts into interconnected elements proved overly granular, leading to data management difficulties on load and retrieval.

  • A recurring problem was the near-identical decomposition pattern for similar messages, which resulted in content duplication and an enlarged user graph. Our takeaway was that words and their interrelations represent only a fragment of the broader picture. We need to be able to guide the set of logical connections and make the system dynamic, so that the data models can be adapted and adjusted to each particular use case. What works for e-commerce transaction handling might not work for an AI vertical creating PowerPoint slides.

  • The data model, encompassing Long-Term, Short-Term, and Buffer memory, proved both limited in scope and rigid, lacking the versatility to accommodate diverse applications and use cases. Just collecting all elements from all memories seemed naive, while getting certain nodes with classifiers did not add enough value.

  • The retrieval of the entire buffer highlighted the need for improved buffer content management and a more adaptable buffer structure. We conceptualized the buffer as the analogue of human working memory and recognized the need to better manage the stored data.

Moving forward, we have adopted several new strategies, features, and design principles:

Propositions:

Defined as atomic expressions within a text, each proposition encapsulates a unique factoid, conveyed in a succinct, standalone natural language format. We employ Large Language Models (LLMs) to break down text into propositions and link them, forming graphs with propositions as nodes and their connections as edges. For example, "Grass is green" and "2 + 5 = 5" are propositions; the first has the truth value "true" and the second "false". The inspiration came from the following paper: https://arxiv.org/pdf/2312.06648.pdf
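As a rough, illustrative sketch of this idea (not cognee's actual implementation), one could stub out the LLM step and link propositions in a small networkx graph; the extract_propositions helper below is hypothetical:

```python
import networkx as nx

def extract_propositions(text: str) -> list[str]:
    # Hypothetical stand-in: in practice an LLM would be prompted to split
    # `text` into atomic, standalone factoids.
    return ["Grass is green.", "Grass grows in soil."]

def build_proposition_graph(text: str) -> nx.DiGraph:
    graph = nx.DiGraph()
    propositions = extract_propositions(text)
    graph.add_nodes_from(propositions)
    # Naively link consecutive propositions; a real pipeline would ask the
    # LLM to name the relationship between each pair.
    for a, b in zip(propositions, propositions[1:]):
        graph.add_edge(a, b, relation="related_to")
    return graph

g = build_proposition_graph("Grass is green. Grass grows in soil.")
print(list(g.nodes), list(g.edges(data=True)))
```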

Multilayer Graph Network:

A cognitive multilayer network is both a quantitative and an interpretive framework for exploring the mental lexicon, the intricate cognitive system that stores information about known words and concepts.

The mental lexicon is the component of the human language faculty that contains information regarding the composition of words.

Utilizing LLMs, we construct layers within the multilayer network to house propositions and their interrelations, enabling the interconnection of different semantic layers and the cross-layer linking of propositions. This facilitates both the segmentation and accurate retrieval of information.

For example, if "John Doe" authored two New York Times cooking articles, we could extract an "ingredients" layer when needed, while also easily accessing all articles by "John Doe".
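A toy illustration of the layer idea, using plain networkx rather than cognee's internal model (all node attributes below are made up for the example):

```python
import networkx as nx

g = nx.DiGraph()
g.add_node("Use 200g of flour", layer="ingredients", author="John Doe", article="Pasta at home")
g.add_node("Simmer the sauce for 10 minutes", layer="steps", author="John Doe", article="Pasta at home")
g.add_node("Use 2 eggs", layer="ingredients", author="John Doe", article="Fresh ravioli")

# Pull out just the "ingredients" layer...
ingredients = [n for n, d in g.nodes(data=True) if d["layer"] == "ingredients"]
# ...or everything written by "John Doe", regardless of layer.
by_john_doe = [n for n, d in g.nodes(data=True) if d["author"] == "John Doe"]
print(ingredients)
print(by_john_doe)
```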

We used concepts from psycholinguistics described here: https://arxiv.org/abs/1507.08539

Data Loader:

It’s vital that we address the technical challenges associated with Retrieval-Augmented Generation (RAG), such as metadata management, context retrieval, knowledge sanitization, and data enrichment.

The solution lies in a dependable data pipeline capable of efficiently and scalably preparing and loading data in various formats from a range of different sources. For this purpose, we can use 'dlt' as our data loader, gaining access to over 28 supported data sources.

To enhance the Pythonic interface, we streamlined the use of cognee into three primary methods. Users can now execute the following steps:

  • cognee.add(data): This method is used to input and normalize the data. It ensures the data is in the correct format and ready for further processing.
  • cognee.cognify(): This function constructs a multilayer network of propositions, organizing the data into an interconnected, semantic structure that facilitates complex analysis and retrieval.
  • cognee.search(query, method='default'): The search method enables the user to locate specific nodes, vectors, or generate summaries within the dataset, based on the provided query and chosen search method. We employ a combination of search approaches, each one relying on the technology implemented by vector datastores and graph stores.
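Putting the three methods together, a minimal usage sketch might look as follows; the method names follow this post, but exact signatures and sync/async behavior may differ between cognee releases:

```python
import cognee

# Ingest and normalize a piece of raw data (a document path or plain text).
cognee.add("Buck was kidnapped from a ranch in the Santa Clara Valley.")

# Build the multilayer network of propositions from everything added so far.
cognee.cognify()

# Query the network; the search method can be varied (labels, summaries,
# similarity search, or a combination of approaches).
results = cognee.search("Where was Buck kidnapped?", method="default")
print(results)
```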

Integration and Workflow

The integration of these three components allows for a cohesive and efficient workflow:

Data Input and Normalization:

Initially, Cognee.add is employed to input the data. During this stage, a dlt loader operates behind the scenes to process and store the data, assigning a unique dataset ID for tracking and retrieval purposes. This ensures the data is properly formatted and normalized, laying a solid foundation for the subsequent steps.

Creation of Multilayer Network:

Following the data normalization, Cognee.cognify takes the stage, constructing a multilayer network from the propositions derived from the input data. The network is created using an LLM-as-a-judge approach, with specific prompts that ask for the creation of a set of relationships. This results in a set of layers and relationships that represent the document.

Data Retrieval and Analysis

The final step involves Cognee.search, where the user can query the constructed network to find specific information, analyze patterns, or extract summaries. The flexibility of the search function allows users to search for content labels, summaries, and nodes, and to retrieve data via similarity search. We also enable a combination of methods, which captures the benefits of the different search approaches.

What’s next

We're diligently developing our upcoming features, with key objectives including:

  1. Adding audio and image support
  2. Improving search
  3. Adding evals
  4. Adding local models
  5. Adding dspy

To keep up with the progress, explore our implementation on GitHub and, if you find it valuable, consider starring it to show your support.

Going beyond Langchain + Weaviate: Level 4 towards production

Preface

This post is part of a series of texts aiming to explore and understand patterns and practices that enable the construction of a production-ready AI data infrastructure. The series mainly focuses on the modeling and retrieval of evolving data, which would empower Large Language Model (LLM) apps and Agents to serve millions of users concurrently.

For a broad overview of the problem and our understanding of the current state of the LLM landscape, check out our initial post here.

In this post, we delve into creating an initial data platform that can represent the core component of the future MLOps stack. Building a data platform is a big challenge in itself, and many solutions are available to help automate data tracking, ingestion, data contracting, monitoring, and warehousing.

In the last decade, data analytics and engineering fields have undergone significant transformations, shifting from storing data in centralized, siloed Oracle and SQL Server warehouses to a more agile, modular approach involving real-time data and cloud solutions like BigQuery and Snowflake.

Data processing evolved from an inessential activity, whose value would be inflated to please investors during the startup valuation phase, to a fundamental component of product development.

As we enter a new paradigm of interacting with systems through natural language, it's important to recognize that, while this method promises efficiency, it also comes with the challenges inherent in the imperfections of human language.

Suppose we want to use natural language as a new programming tool. In that case, we will need to either impose more constraints on it or make our systems more flexible so that they can adapt to the equivocal nature of language and information.

Our main goal should be to offer consistency, reproducibility, and more, ideally using language as a basic building block for things to come.

In order to come up with a set of solutions that could enable us to move forward, in this series of posts we call on theoretical models from cognitive science and try to incorporate them into data engineering practices.

Level 4: Memory architecture and a first integration with keepi.ai

In our initial post, we started out conceptualizing a simple retrieval-augmented generation (RAG) model whose aim was to process and understand PDF documents.

We faced many bottlenecks in scaling these tasks, so in our second post we introduced the concept of memory domains.

In the next step, the focus was mainly on understanding what makes a good RAG considering all possible variables.

In this post, we address the fundamental question of the feasibility of extending LLMs beyond the data on which they were trained.

As a Microsoft research team recently stated:

  • Baseline RAG struggles to connect the dots when answering a question requires providing synthesized insights by traversing disparate pieces of information through their shared attributes.
  • Baseline RAG performs poorly when asked to understand summarized semantic concepts holistically over large data collections or even singular large documents.

To fill these gaps in RAG performance, we built a new framework—cognee.

Cognee combines human-inspired cognitive processes with efficient data management practices, infusing data points with more meaningful relationships to represent the (often messy) natural world in code more accurately.

Our observations indicate that systems, agents, and interactions often falter due to overextension and haste.

However, given the extensive demands and expectations surrounding Large Language Models (LLMs), addressing every aspect—agents, actions, integrations, and schedulers—is beyond the scope of the framework’s mission.

We've chosen to prioritize data, recognizing that the crux of many issues has already been addressed within the realm of data engineering.

We aim to establish a framework that includes file storage, tracing, and the development of robust AI memory data pipelines to help us manage and structure data more efficiently through its transformation processes.

Subsequently, our goal will be to devise methods for navigating diverse information segments and determine the most effective application of graph databases to store this data.

Our initial hypothesis—enhancing data management in vector stores through manipulative techniques and attention modulators for input and retrieval—proved less effective than anticipated.

Deconstructing and reorganizing data via graph databases emerged as a superior strategy, allowing us to adapt and repurpose existing tools for our needs more effectively.

| AI Memory type | State in Level 2 | State in Level 4 | Description |
| --- | --- | --- | --- |
| Sensory Memory | API | API | Can be interpreted in this context as the interface used for the human input |
| STM | Weaviate class with hardcoded contract | Neo4j with a connection to a Weaviate class | The processing layer and a storage of the session/user context |
| LTM | Weaviate class with hardcoded contract | Neo4j with a connection to a Weaviate class | The information storage |

On Level 4, we describe the integration of keepi.ai, a ChatGPT-powered WhatsApp bot that collects and summarizes information, via API endpoints.

Then, once we’ve ensured that we have a robust, scalable infrastructure, we deploy cognee to the cloud.

Workflow Overview

[Diagram: how cognee works]

Steps:

  1. Users submit queries or documents for storage via the keepi.ai WhatsApp bot. This step integrates with the keepi.ai platform, utilizing Cognee endpoints for processing.
  2. The Cognee manager handles the incoming request and collaborates with several components:
    1. Relational database: Manages state and metadata related to operations.
    2. Classifier: Identifies, organizes, and enhances the content.
    3. Loader: Archives data in vector databases.
  3. The Graph Manager and Vector Store Manager collaboratively process and organize the input into structured nodes. A key function of the system involves breaking down user input into propositions—basic statements retaining factual content. These propositions are interconnected through relationships and cataloged in the Neo4j database by the Graph Manager, associated with specific user nodes. Users are represented by memory nodes that capture various memory levels, some of which link back to the raw data in vector databases.
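As a rough sketch of the last step (illustrative only; the node labels, relationship types, and property names below are assumptions, not cognee's actual schema), storing a proposition against a user's memory node in Neo4j could look like this:

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def store_proposition(user_id: str, proposition: str, vector_ref: str) -> None:
    # Ensure the user and one of their memory nodes exist, then attach the
    # proposition and keep a pointer back to the raw data in the vector store.
    query = (
        "MERGE (u:User {id: $user_id}) "
        "MERGE (m:MemoryNode {level: 'semantic'})-[:BELONGS_TO]->(u) "
        "CREATE (p:Proposition {text: $proposition, vector_ref: $vector_ref}) "
        "CREATE (m)-[:CONTAINS]->(p)"
    )
    with driver.session() as session:
        session.run(query, user_id=user_id, proposition=proposition, vector_ref=vector_ref)

store_proposition("user-42", "Buck was kidnapped from a ranch.", "weaviate://Document/123")
```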

What’s next

We're diligently developing our upcoming features, with key objectives including:

  1. Numerically defining and organizing the strengths of relationships within graphs.
  2. Creating an opinionated, structured data model to facilitate document structuring and data extraction.
  3. Converting Cognee into a Python library for easier integration.
  4. Broadening our database compatibility to support a wider range of systems.

Make sure to explore our implementation on GitHub, and, if you find it valuable, consider starring it to show your support.

Conclusion

If you enjoy the content or want to try out cognee, please check out our GitHub repository and give us a star!

Going beyond Langchain + Weaviate and towards a production ready modern data platform

Table of Contents

1. Introduction: The Current Generative AI Landscape

1.1. A brief overview

Browsing the largest AI platform directory available at the moment, we can observe around 7,000 new, mostly semi-finished AI projects — projects whose development is fueled by recent improvements in foundation models and open-source community contributions.

Decades of technological advancements have led to small teams being able to do in 2023 what in 2015 required a team of dozens.

Yet, the AI apps currently being pushed out still mostly feel and perform like demos.

It seems it has never been easier to create a startup, build an AI app, go to market… and fail.

The consensus is, nevertheless, that the AI space is the place to be in 2023.

“The AI Engineer [...] will likely be the highest-demand engineering job of the [coming] decade.”

Swyx

The stellar rise of AI engineering as a profession is, perhaps, signaling the need for a unified solution that is not yet there — a platform that is, in its essence, a Large Language Model (LLM), which could be employed as a powerful general problem solver.

To address this issue, dlthub and prometh.ai will collaborate on productionizing a common use-case, PDF processing, progressing step by step. We will use LLMs, AI frameworks, and services, refining the code until we attain a clearer understanding of what a modern LLM architecture stack might entail.

You can find the code in the PromethAI-Memory repository

1.2. The problem of putting code to production

Despite all the AI startup hype, there’s a glaring issue lurking right under the surface: foundation models do not have production-ready data infrastructure by default.

Everyone seems to be building simple tools, like “Your Sales Agent” or “Your HR helper,” on top of OpenAI — a so-called “Thin Wrapper” — and selling them as services.

Our intention, however, is not to merely capitalize on this nascent industry — it’s to use a new technology to catalyze a true digital paradigm shift. To paraphrase investor Marc Andreessen, the content of the new medium is the content of the previous medium.

What Andreessen meant by this is that each new medium for sharing information must encapsulate the content of the prior medium. For example, the internet encapsulates all books, movies, pictures, and stories from previous mediums.

After a unified AI solution is created, only then will AI agents be able to proactively and competently operate the browsers, apps, and devices we operate by ourselves today.

Intelligent agents in AI are programs capable of perceiving their environment and acting autonomously in order to achieve goals, and they may improve their performance by learning or acquiring knowledge.

The reality is that a set of data platforms and AI agents is now becoming available to the general public, whose content and methods were previously inaccessible to anyone not fluent in the tech-heavy language of data scientists and engineers.

As engineering tools move toward the mainstream, they need to become more intuitive and user-friendly, while hiding their complexity behind a set of background solutions.

Fundamentally, the issue of “Thin wrappers” is not an issue of bad products, but an issue of a lack of robust enough data engineering methods coupled with the general difficulties that come with creating production-ready code that relies on robust data platforms in a new space.

The current lack of production-ready data systems for LLMs and AI Agents opens up a gap we want to fill by introducing robust data engineering practices to solve this issue.

In this series of texts, our aim will thus be to explore what would constitute:

  1. Proper data engineering methods for LLMs
  2. A production-ready generative AI data platform that unlocks AI assistants/Agent Networks

Each of the coming blog posts will be followed by Python code, to demonstrate the progress made toward building a modern AI data platform, raise important questions, and facilitate an open-source collaboration.

Let’s start by setting an attainable goal. As an example, let’s conceptualize a production-ready process that can analyze and process hundreds of PDFs for hundreds of users.

Imagine you're a developer, and you've got a stack of digital invoices in PDF format from various vendors. These PDFs are not just simple text files; they contain logos, varying fonts, perhaps some tables, and even handwritten notes or signatures.

Your goal? To extract relevant information, such as vendor names, invoice dates, total amounts, and line-item details, among others.

This task of analyzing PDFs may help us understand and define what a production-ready AI data platform entails. To perform the task, we’ll be drawing a parallel between Data Engineering concepts and those from Cognitive Sciences which tap into our understanding of how human memory works — this should provide the baseline for the evaluation of the POCs in this post.

We assume that Agent Networks of the future would resemble groups of humans with their own memory and unique contexts, all working and contributing toward a set of common objectives.

In our example of data extraction from PDFs — a modern enterprise may have hundreds of thousands, if not millions of such documents stored in different places, with many people hired to make sense of them.

This data is considered unstructured — you cannot handle it easily with current data engineering practices and database technology. The task of structuring it is difficult and, to this day, has typically been performed manually.

With the advent of Agent Networks, which mimic human cognitive abilities, we could start realistically structuring this kind of information at scale. As this is still data processing — an engineering task — we need to combine those two approaches.

From an engineering standpoint, the next generation Data Platform needs to be built with the following in mind:

  • We need to give Agents access to the data at scale.
  • We need our Agents to operate like human minds, so we need to provide them with tools to execute tasks and various types of memory for reasoning.
  • We need to keep the systems under control, meaning that we apply good engineering practices to the whole system.
  • We need to be able to test, sandbox, and roll back what Agents do, and we need to observe them and log every action.

In order to conceptualize a new model of data structure and relationships that transcends the traditional Data Warehousing approach, we can start perceiving procedural steps in Agent execution flows as thoughts and interpreting them through the prism of human cognitive processes such as the functioning of our memory system and its memory components.

Human memory can be divided into several distinct categories:

  • Sensory Memory (SM) → Very short term (15-30s) memory storage unit receiving information from our senses.
  • Short Term Memory (STM) → Short term memory that processes the information, and coordinates work based on information provided.
  • Long-Term Memory (LTM) → Stores information long term, and retrieves what it needs for daily life.

[Figure: the general structure of human memory. Note that Weng doesn’t expand on the STM here in the way we did above.]

[Figure: a broader, more relevant representation of memory for our context, and the corresponding data processing, based on the Atkinson-Shiffrin memory model.]

2. Level 0: The Current State of Affairs

To understand the current LLM production systems, how they handle data input and processing, and their evolution, we start at Level 0 — the LLMs and their APIs as they are currently — and progress toward Level 7 — AI Agents and complex AI Data Platforms and Agent Networks of the future.

2.1. Developer Intent at Level 0

In order to extract relevant data from PDF documents, as an engineer you would turn to a powerful AI model from a provider like OpenAI, Anthropic, or Cohere (Layer 0 in our XYZ stack). Not all of them support this functionality, so you’d use Bing or a ChatGPT plugin like AskPDF, which do.

In order to "extract nuances," you might provide the model with specific examples or more directive prompts. For instance, "Identify the vendor name positioned near the top of the invoice, usually above the billing details."

Next, you'd "prompt it" with various PDFs to see how it reacts. Based on the outputs, you might notice that it misses handwritten dates or gets confused with certain fonts.

This is where "prompt engineering" comes in. You might adjust your initial prompt to be more specific or provide additional context. Maybe you now say, "Identify the vendor name and, if you find any handwritten text, treat it as the invoice date."

2.2 Toward the production code from the chatbot UX - POC at level 0

Our POC at this stage consists of simply uploading a PDF and asking it questions until we have better and better answers based on prompt engineering. This exercise shows what is available with the current production systems, to help us set a baseline for the solutions to come.

  • If your goal is to understand the content of a PDF, Bing and OpenAI will enable you to upload documents and get explanations of their contents
  • Uses basic natural language processing (NLP) prompts without any schema on output data
  • Typically “forgets” the data after a query — no notion of storage (LTM)
  • In a production environment, data loss can have significant consequences. It can lead to operational disruptions, inaccurate analytics, and loss of valuable insights
  • There is no possibility to test the behavior of the system

Let’s break down the Data Platform component at this stage:

| Memory type | State | Description |
| --- | --- | --- |
| Sensory Memory | Chatbot interface | Can be interpreted in this context as the interface used for the human input |
| STM | The context window of the chatbot/search; in essence stateless | The processing layer and a storage of the session/user context |
| LTM | Not present at this stage | The information storage |

Lacks:

  • Decoupling: Separating components to reduce interdependency.
  • Portability: Ability to run in different environments.
  • Modularity: Breaking down into smaller, independent parts.

  • Extendability: Capability to add features or functionality.

Next Steps:

  1. Implement an LTM component for information retention.
  2. Develop an abstraction layer for Sensory Memory input and processing multiple file types.

Addressing these points will enhance flexibility, reusability, and adaptability.

2.3 Summary - Ask PDF questions

| Description | Use-Case | Summary | Memory Maturity | Production readiness |
| --- | --- | --- | --- | --- |
| The Foundational Model | Extract info from your documents | ChatGPT prompt engineering as the only way to optimise outputs | SM and STM are system defined; LTM is not present | Works 15% of the time; lacks Decoupling, Portability, Modularity, and Extendability |

2.4. Addendum - companies in the space: OpenAI, Anthropic, and Cohere

  • A brief on each provider, relevant model and its role in the modern data space.
  • The list of models and providers in the space

| Model | Provider | Structured data | Speed | Params | Fine Tunability |
| --- | --- | --- | --- | --- | --- |
| gpt-4 | OpenAI | Yes | ★☆☆ | - | No |
| gpt-3.5-turbo | OpenAI | Yes | ★★☆ | 175B | No |
| gpt-3 | OpenAI | No | ★☆☆ | 175B | No |
| ada, babbage, curie | OpenAI | No | ★★★ | 350M - 7B | Yes |
| claude | Anthropic | No | ★★☆ | 52B | No |
| claude-instant | Anthropic | No | ★★★ | 52B | No |
| command-xlarge | Cohere | No | ★★☆ | 50B | Yes |
| command-medium | Cohere | No | ★★★ | 6B | Yes |
| BERT | Google | No | ★★★ | 345M | Yes |
| T5 | Google | No | ★★☆ | 11B | Yes |
| PaLM | Google | No | ★☆☆ | 540B | Yes |
| LLaMA | Meta AI | Yes | ★★☆ | 65B | Yes |
| CTRL | Salesforce | No | ★★★ | 1.6B | Yes |
| Dolly 2.0 | Databricks | No | ★★☆ | 12B | Yes |

3. Level 1: Langchain & Weaviate

3.1. Developer Intent at Level 1: Langchain & Weaviate LLM Wrapper

This step is basically an upgrade to the current state-of-the-art LLM UX/UI, where we add:

  • Permanent LTM memory (data store)

    As a developer, I need to answer questions on large PDFs that I can’t simply pass to the LLM due to technical limitations. The primary issue being addressed is the constraint on prompt length. As of now, GPT-4 has a limit of 4k tokens for both the prompt and the response combined. So, if the prompt comprises 3.5k tokens, the response can only be 0.5k tokens long.

  • LLM Framework like Langchain to adapt any document type to vector store

    Using Langchain provides a neat abstraction for me to get started quickly, connect to VectorDB, and get fast results.

  • Some higher level structured storage (dlthub)

3.2. Translating Theory into Practice: POC at Level 1

  • LLMs can’t process all the data that a large PDF could contain. So, we need a place to store the PDF and a way to retrieve relevant information from it, so it can be passed on to the LLM.
  • When trying to build and process documents or user inputs, it’s important to store them in a Vector Database to be able to retrieve the information when needed, along with the past context.
  • A vector database is the optimal solution because it enables efficient storage, retrieval, and processing of high-dimensional data, making it ideal for applications like document search and user input analysis where context and similarity are important.
  • For the past several months, there has been a surge of projects that personalize LLMs by storing user settings and information in a VectorDB so they can be easily retrieved and used as input for the LLM.

This can be done by storing data in the Weaviate Vector Database; then, we can process our PDF.

  • We start by converting the PDF and translating it

[code screenshot]
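As a stand-in for the screenshot, a minimal sketch of this step could look like the following, assuming pypdf for text extraction (the translation call is left as a stub):

```python
from pypdf import PdfReader

def pdf_to_text(path: str) -> str:
    # Extract raw text from every page of the PDF.
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

text = pdf_to_text("invoice.pdf")  # hypothetical input file
# translated = translate(text, target_language="en")  # e.g. a single LLM call (not shown)
```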

  • In the next step, we store the PDF in Weaviate

[code screenshot]
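Again as an illustrative stand-in, storing the extracted text with the (older) v3 Weaviate Python client might look roughly like this; the "Document" class name is an assumption:

```python
import weaviate

client = weaviate.Client("http://localhost:8080")

# Output of the previous extraction step (shortened here for the example).
text = "Invoice 2023-001, vendor: ACME GmbH, total: 119.00 EUR"

client.data_object.create(
    data_object={"content": text, "source": "invoice.pdf"},
    class_name="Document",
)
```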

  • We load the data into some type of database using dlthub

[code screenshot]
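A hedged sketch of this step with dlt; the duckdb destination and table name are illustrative choices, not a prescribed setup:

```python
import dlt

pipeline = dlt.pipeline(
    pipeline_name="pdf_pipeline",
    destination="duckdb",
    dataset_name="documents",
)

rows = [{"source": "invoice.pdf", "content": "Invoice 2023-001, vendor: ACME GmbH, total: 119.00 EUR"}]
load_info = pipeline.run(rows, table_name="pages")  # loads the rows as a structured table
print(load_info)
```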

The parallel with our memory components becomes clearer at this stage. We have some way to define inputs which correspond to SM, while STM and LTM are starting to become two separate, clearly distinguishable entities. It becomes evident that we need to separate LTM data according to domains it belongs to but, at this point, a clear structure for how that would work has not yet emerged.

In addition, we can treat GPT as limited working memory and its context size as how much our model can remember during one operation.

It’s evident that, if we don’t manage the working memory well, we will overload it and fail to retrieve outputs. So, we will need to take a closer look into how humans do the same and how our working memory manages millions of facts, emotions, and senses swirling around our minds.

Let’s break down the Data Platform components at this stage:

| Memory type | State | Description |
| --- | --- | --- |
| Sensory Memory | Command line interface + arguments | Can be interpreted in this context as the arguments provided to the script |
| STM | Partially Vector store, partially working memory | The processing layer and a storage of the session/user context |
| LTM | Vector store | The raw document storage |

Sensory Memory

Sensory memory can be seen as an input buffer where the information from the environment is stored temporarily. In our case, it’s the arguments we give to the command line script.

STM

STM is often associated with the concept of "working memory," which holds and manipulates information for short periods.

In our case, it is the time during which the process runs.

LTM

LTM can be conceptualized as a database in software systems. Databases store, organize, and retrieve data over extended periods. The information in LTM is organized and indexed, similar to how databases use tables, keys, and indexes to categorize and retrieve data efficiently.

VectorDB: The LTM Storage of Our AI Data Platform

Unlike traditional relational databases, which store data in tables, and newer NoSQL databases like MongoDB, which use JSON documents, vector databases specifically store and fetch vector embeddings.

Vector databases are crucial for Large Language Models and other modern, resource-hungry applications. They're designed for handling vector data, commonly used in fields like computer graphics, Machine Learning, and Geographic Information Systems.

Vector databases hinge on vector embeddings. These embeddings, packed with semantic details, help AI systems to understand data and retain long-term memory. They're condensed snapshots of training data and act as filters when processing new data in the inference stage of machine learning.

Problems:

  • Interoperability
  • Maintainability
  • Fault Tolerance

Next steps:

  1. Create a standardized data model
  2. Dockerize the component
  3. Create a FastAPI endpoint

3.4. Summary - The thing startup bros pitch to VCs

| Description | Use-Case | Summary | Knowledge Maturity | Production readiness |
| --- | --- | --- | --- | --- |
| Interface Endpoint for the Foundational Model | Store data and query it for a particular use-case | Langchain + Weaviate to improve users’ conversations + prompt engineering to get better outputs | SM is somewhat modifiable; STM is not clearly defined; LTM is a VectorDB | Works 25% of the time; lacks Interoperability, Maintainability, Fault Tolerance; has some Reusability, Portability, Extendability |

3.5. Addendum - Frameworks and Vector DBs in the space: Langchain, Weaviate and others

  • A brief on each provider, relevant model and its role in the modern data space.
  • The list of models and providers in the space

| Tool/Service | Tool type | Ease of use | Maturity | Docs | Production readiness |
| --- | --- | --- | --- | --- | --- |
| Langchain | Orchestration framework | ★★☆ | ★☆☆ | ★★☆ | ★☆☆ |
| Weaviate | VectorDB | ★★☆ | ★★☆ | ★★☆ | ★★☆ |
| Pinecone | VectorDB | ★★☆ | ★★☆ | ★★☆ | ★★☆ |
| ChromaDB | VectorDB | ★★☆ | ★☆☆ | ★☆☆ | ★☆☆ |
| Haystack | Orchestration framework | ★★☆ | ★☆☆ | ★★☆ | ★☆☆ |
| Huggingface's New Agent System | Orchestration framework | ★★☆ | ★☆☆ | ★★☆ | ★☆☆ |
| Milvus | VectorDB | ★★☆ | ★☆☆ | ★★☆ | ★☆☆ |
| https://gpt-index.readthedocs.io/ | Orchestration framework | ★★☆ | ★☆☆ | ★★☆ | ★☆☆ |

Resources

Blog Posts:

  1. Large Action Models
  2. Making Data Ingestion Production-Ready: A LangChain-Powered Airbyte Destination
  3. The Problem with LangChain

Reddit Discussions:

  1. Reddit Discussion: The Problem with LangChain

Developer Blog Posts:

  1. Unlocking the Power of Enterprise-Ready LLMs with NeMo

Industry Analysis:

  1. Emerging Architectures for LLM Applications

Prompt Engineering:

  1. Prompting Guide
  2. Tree of Thought Prompting: Walking the Path of Unique Approach to Problem Solving

Conclusion

If you enjoy the content or want to try out cognee, please check out our GitHub repository and give us a star!

Going beyond Langchain + Weaviate: Level 2 towards Production

1.1. The problem of putting code to production

This post is a part of a series of texts aiming to discover and understand patterns and practices that would enable building a production-ready AI data infrastructure. The main focus is on how to evolve data modeling and retrieval in order to enable Large Language Model (LLM) apps and Agents to serve millions of users concurrently.

For a broad overview of the problem and our understanding of the current state of the LLM landscape, check out our previous post.


In this text, we continue our inquiry into what would constitute:

  1. Proper data engineering methods for LLMs
  2. A production-ready generative AI data platform that unlocks AI assistants/Agent Networks

To explore these points, we here at prometh.ai have partnered with dlthub in order to productionize a common use case — complex PDF processing — progressing level by level.

In the previous text, we wrote a simple script that relies on the Weaviate Vector database to turn unstructured data into structured data and help us make sense of it.

In this post, some of the shortcomings from the previous level will be addressed, including:

  1. Containerization
  2. Data model
  3. Data contract
  4. Vector Database retrieval strategies
  5. LLM context and task generation
  6. Dynamic Agent behavior and Agent tooling

3. Level 2: Memory Layer + FastAPI + Langchain + Weaviate

3.1. Developer Intent at Level 2

This phase enhances the basic script by incorporating:

  • Memory Manager

    The memory manager facilitates the execution and processing of VectorDB data by:

    1. Uniformly applying CRUD (Create, Read, Update, Delete) operations across various classes
    2. Representing different business domains or concepts, and
    3. Ensuring they adhere to a common data model, which regulates all data points across the system.
  • Context Manager

    This central component processes and analyzes data from Vector DB, evaluates its significance, and compares the results with user-defined benchmarks.

    The primary objective is to establish a mechanism that encourages in-context learning and empowers the Agent’s adaptive understanding.

    As an example, let’s assume we uploaded the book The Call of the Wild by Jack London to our Vector DB semantic layer, to give our LLM a better understanding of the life of sled dogs in the early 1900s.

    Asking a question about the contents of the book will yield a straightforward answer, provided that the book contains an explicit answer to our question.

    To enable better question answering and access to additional information such as historical context, summaries, and other documents, we need to introduce different memory stores and a set of attention modulators, which are meant to manage the prioritization of data retrieved for the answers.

  • Task Manager

    Utilizing the tools at hand and guided by the user's prompt, the task manager determines a sequence of actions and their execution order.

    For example, let’s assume that the user asks: “When was Buck (one of the dogs from The Call of the Wild) kidnapped?” and wants the answer translated into German.

    This query would be broken down by the task manager into a set of atomic tasks that can then be handed over to the Agent.

    The ordered task list could be:

    1. Retrieve information about the PDF from the database.
    2. Translate the information to German.
  • The Agent

    AI agents can use computers independently. They can browse the web, use apps, read and write files, make credit card payments, and even autonomously execute processes on your personal computer.

    In our case, the Agent has only a few tools at its disposal, such as tools to translate text or structure data. Using these tools, it processes and executes tasks in the sequence they are provided by the Task Manager and the Context Manager.

3.2 Toward the memory layer - POC at level 2

At this stage, our proof of concept (POC) allows uploading a PDF document and requesting specific actions on it such as "load to database", "translate to German", or "convert to JSON." Prior task resolutions and potential operations are assessed by the Context Manager and Task Manager services.

The following set of steps explains the workflow of the POC at level 2:

  • Initially, we specify the parameters for the document we wish to upload and define our objective in the prompt:

[code screenshot]

  • The memory manager retrieves the parameters and the attention modulators and creates context based on Episodic and Semantic memory stores (previous runs of the job + raw data):

    [code screenshot]

  • To do this, it starts by filtering user input, in the same way our brains filter important from redundant information. As an example, if there are children playing and talking loudly in the background during our Zoom meeting, we can still pool our attention together and focus on what the person on the other side is saying.

    The same principle is applied here:

[code screenshot]

  • In the next step, we apply a set of attention modulators to process the data obtained from the Vector Store.

    NOTE: In cognitive science, attention modulators can be thought of as factors or mechanisms that influence the direction and intensity of attention.

    As we have many memory stores, we need to prioritize the data points that we retrieve via semantic search.

    Since semantic search is not enough by itself, scoring data points happens via a set of functions that replicate how attention modulators work in cognitive science.

    Initially, we’ve implemented a few attention modulators that we thought could improve the document retrieval process:

    Frequency: This refers to how often a specific stimulus or event is encountered. Stimuli that are encountered more frequently are more likely to be attended to or remembered.

    Recency: This refers to how recently a stimulus or event was encountered. Items or events that occurred more recently are typically easier to recall than those that occurred a long time ago.

We have implemented many more, and you can find them in our repository. More are still needed, and contributions are more than welcome.

Let’s see the modulators in action:

[code screenshot]

In the code above we fetch the memories from the Semantic Memory bank where our knowledge of the world is stored (the PDFs). We select the relevant documents by using the handle_modulator function.

  • The handle_modulator function is defined below and explains how scoring of memories happens.

[code screenshot]
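Since the screenshot is not reproduced here, the following is an illustrative stand-in (not cognee's actual handle_modulator) showing how recency and frequency modulators could be combined into a score:

```python
import time

def recency_score(last_accessed: float, half_life_s: float = 86_400.0) -> float:
    # Exponentially decay the score with time since the memory was last accessed.
    age = time.time() - last_accessed
    return 0.5 ** (age / half_life_s)

def frequency_score(access_count: int, cap: int = 50) -> float:
    # Scale with how often the memory was retrieved, capped to avoid runaway scores.
    return min(access_count, cap) / cap

def score_memory(memory: dict) -> float:
    # Equal weights here; in practice the weights would be tuned against evaluations.
    return 0.5 * recency_score(memory["last_accessed"]) + 0.5 * frequency_score(memory["access_count"])

memories = [
    {"text": "Buck was kidnapped.", "last_accessed": time.time() - 3_600, "access_count": 12},
    {"text": "Buck pulled the sled.", "last_accessed": time.time() - 7 * 86_400, "access_count": 3},
]
print(max(memories, key=score_memory)["text"])
```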

We process the data retrieved with OpenAI functions and store the results for the Task Manager to be able to determine what actions the Agent should take.

The Task Manager then sorts and converts user input into a set of actionable steps based on the tools available.

[code screenshot]

Finally, the Agent interprets the context and performs the steps using the tools it has available. We see this as the step where the Agents take over the task, executing it in their own way.

Now, let's look back at what constitutes the Data Platform:

| Memory type | State | Description |
| --- | --- | --- |
| Sensory Memory | API | Can be interpreted in this context as the interface used for the human input |
| STM | Weaviate class with hardcoded contract | The processing layer and a storage of the session/user context |
| LTM | Weaviate class with hardcoded contract | The information storage |

Lacks:

  • Extendability: Capability to add features or functionality.
  • Loading flexibility: Ability to apply different chunking strategies
  • Testability: How to test the code and make sure it runs

Next Steps:

  1. Implement different strategies for vector search
  2. Add more tools to process PDFs
  3. Add more attention modulators
  4. Add a solid test framework

Conclusion

If you enjoy the content or want to try out cognee, please check out our GitHub repository and give us a star!

Going beyond Langchain + Weaviate: Level 3 towards production

Preface

This post is part of a series of texts aiming to explore and understand patterns and practices that enable the construction of a production-ready AI data infrastructure. The main focus of the series is on the modeling and retrieval of evolving data, which would empower Large Language Model (LLM) apps and Agents to serve millions of users concurrently.

For a broad overview of the problem and our understanding of the current state of the LLM landscape, check out our initial post here.

In this post, we delve into context enrichment and testing in Retrieval Augmented Generation (RAG) applications.

RAG applications can retrieve relevant information from a knowledge base and generate detailed, context-aware answers to user queries.

As we are trying to improve on the base information LLMs are giving us, we need to be able to retrieve and understand more complex data, which can be stored in various data stores, in many formats, and using different techniques.

All of this leads to a lot of opportunities, but also creates a lot of confusion in generating and using RAG applications and extending the existing context of LLMs with new knowledge.

1. Context Enrichment and Testing in RAG Applications

In navigating the complexities of RAG applications, the first challenge we face is the need for robust testing. Determining whether augmenting an LLM's context with additional information will yield better results is far from straightforward and often relies on subjective assessments.

Imagine, for instance, adding the digital version of the book The Adventures of Tom Sawyer to the LLM's database in order to enrich its context and obtain more detailed answers about the book's content for a paper we're writing. To evaluate this enhancement, we need a way to measure the accuracy of the responses before and after adding the book while considering the variations of every adjustable parameter.

2. Adjustable Parameters in RAG Applications

The end-to-end process of enhancing RAG applications involves various adjustable parameters, which offer multiple paths toward achieving similar goals with varying outcomes. These parameters include:

  1. Number of documents loaded into memory.
  2. Size of each sub-document chunk uploaded.
  3. Overlap between documents uploaded.
  4. Relationship between documents (Parent-Child, etc.)
  5. Type of embedding used for data-to-vector conversion (OpenAI, Cohere, or any other embedding method).
  6. Metadata structure for data navigation.
  7. Indexes and data structures.
  8. Search methods (text, semantic, or fusion search).
  9. Output retrieval and scoring methods.
  10. Integration of outputs with other data for in-context learning.
  11. Structure of the final output.

3. The Role of Memory Manager at Level 3

Memory Layer + FastAPI + Langchain + Weaviate

3.1. Developer Intent at Level 3

The goal we set for our system in our initial post — processing and creating structured data from PDFs — presented an interesting set of problems to solve. OpenAI functions and dlthub allowed us to accomplish this task relatively quickly.

The real issue arises when we try to scale this task — this is what our second post tried to address. In addition, retrieving meaningful data from the Vector Databases turned out to be much more challenging than initially imagined.

In this post, we’ll discuss how we can establish a testing method, improve our ability to retrieve the information we've processed, and make the codebase more robust and production-ready.

We’ll primarily focus on the following:

  1. Memory Manager

    The Memory Manager is a set of functions and tools for creating dynamic memory objects. In our previous blog posts, we explored the application of concepts from cognitive science —  Short-Term Memory, Long-Term Memory, and Cognitive Buffer — on Agent Network development.

    We might need to add more memory domains to the process, as sticking to just these three can pose limitations. Changes in the codebase now enable real-time creation of dynamic memory objects, which have hierarchical relationships and can relate to each other.

  2. RAG test tool

    The RAG test tool allows us to control critical parameters for optimizing and testing RAG applications, including chunk size, chunk overlap, search type, metadata structure, and more.

The Memory Manager is a crucial component of any cognitive architecture platform. In our previous posts, we’ve discussed how to turn unstructured data into structured data, how to relate concepts to each other in the vector store, and which problems can arise when productionizing these systems.

While we’ve addressed many open questions, many still remain. Based on our surveys and interviews with field experts, applications utilizing Memory components face the following challenges:

  1. Inability to reliably link between Memories

    Relying solely on semantic search or its derivatives to recognize the similarities between terms like "pair" and "combine" is a step forward. However, actually defining, capturing, and quantifying the relationships between any two objects would aid future memory access.

    Solution: Graphs/Traditional DB

  2. Failure to structure and organize Memories

    We used OpenAI functions to structure and organize different Memory elements and convert them into understandable JSONs. Nevertheless, our surveys indicate that many people struggle with metadata management and the structure of retrievals. Ideally, these aspects should all be managed and organized in one place.

    Solution: OpenAI functions/Data contracting/Metadata management

  3. Hierarchy, size, and relationships of individual Memory elements

    Although semantic search helps us understand the same concepts, we need to add more abstract concepts and ideas and link them. The ultimate goal is to emulate human understanding of the world, which comprises basic concepts that, when combined, create higher complexity objects.

    Solution: Graphs/Custom solutions

  4. Evaluation possibilities of memory components (can they be distilled to True/False)

    Based on the psycholinguistic theories proposed by Walter Kintsch, any cognitive system should be able to provide True/False evaluations. Kintsch defines a basic memory component, a ‘proposition,’ which can be evaluated as True or False and can interlink with other Memory components.

    A proposition could be, for example, "The sky is blue," and its evaluation to True/False could lead to actions such as "Do not bring an umbrella" or "Wear a t-shirt."

    Potential solution: Particular memory structure

  5. Testability of Memory components

    We should have a reliable method to test Memory components, at scale, for any number of use-cases. We need benchmarks across every level of testing to capture and define predicted behavior.

    Suppose we need to test whether Memory data from six months ago can be retrieved by our system, and measure how much it contributes to a response that spans memories that are years old.

    Solution: RAG testing framework

[Screenshot: dashboard example]

Let’s look at the RAG testing framework:

It allows you to test and combine all variations of:

  1. Number of documents loaded into memory. ✅
  2. Size of each sub-document chunk uploaded. ✅
  3. Overlap between documents uploaded. ✅
  4. Relationship between documents (Parent-Child, etc.) 👷🏻‍♂️
  5. Type of embedding used for data-to-vector conversion (OpenAI, Cohere, or any other embedding method). ✅
  6. Metadata structure for data navigation. ✅
  7. Indexes and data structures. ✅
  8. Search methods (text, semantic, or fusion search). ✅
  9. Output retrieval and scoring methods. 👷🏻‍♂️
  10. Integration of outputs with other data for in-context learning. 👷🏻‍♂️
  11. Structure of the final output. ✅

These parameters and the results of the tests are stored in a Postgres database and can be visualized using Superset.

To try it, navigate to: https://github.com/topoteretes/PromethAI-Memory

  1. Copy the .env.template to .env and fill in the variables, setting the Environment variable in the .env file to "local".
  2. Activate the poetry environment: poetry shell
  3. Launch the Postgres DB: docker compose up postgres
  4. Launch Superset: docker compose up superset
  5. Open Superset in your browser at http://localhost:8088 and add the Postgres datasource to Superset with the following connection string: postgres://bla:bla@postgres:5432/bubu
  6. Initialize the DB tables: python scripts/create_database.py

After that, you can run the RAG test manager from your command line.

    python rag_test_manager.py \
    --file ".data" \
    --test_set "example_data/test_set.json" \
    --user_id "97980cfea0067" \
    --params "chunk_size" "search_type" \
    --metadata "example_data/metadata.json" \
    --retriever_type "single_document_context"

Examples of metadata structure and test set are in the folder "example_data"

Conclusion

If you enjoy the content or want to try out cognee, please check out our GitHub repository and give us a star!