What is the Knowledge Guru?
Knowledge Guru is a prototype developed by Cambridge Semantics that provides a ChatGPT-like interface for interacting with underlying knowledge graphs hosted in Anzo®.
It utilizes LLM technology (GPT-4) to transform user input and questions into knowledge graph queries that match the conceptual model of the underlying available data, also known as an ontology. Notably, like ChatGPT, it maintains a thread of the conversation, allowing any user to explore insights of interest and to work iteratively to transform or reshape what they see visualized.
Moreover, Knowledge Guru informs users when their questions go beyond the limits of the knowledge base and suggests additional data that would be needed to answer them. Knowledge Guru will try to find alternative ways to answer a question with the available data, but can also go further by providing step-by-step instructions that help users achieve their analytics goals.
Frequently Asked Knowledge Guru Questions:
- What is the Knowledge Guru?
- Can’t ChatGPT, Bard, or other LLMs do everything for me already? Why do I need knowledge graph technology?
- What makes Anzo uniquely suited for LLM integration?
- How does Knowledge Guru work?
- Will Knowledge Guru work with other knowledge graphs?
- How do I start using the Knowledge Guru?
- Can the Knowledge Guru be customized for my organization?
- How much does Knowledge Guru cost?
Can’t ChatGPT, Bard, or other LLMs do everything for me already? Why do I need knowledge graph technology?
While LLMs are extremely powerful and may appear all-knowing at first glance, there are many challenges to relying directly on such models. Hallucinations, biases, lack of explainability, data privacy, security and consistency are some of the issues that impede LLM adoption at the enterprise level.
Instead, the Knowledge Guru drives a highly curated interaction with the LLM. Knowledge graphs provide a semantic layer, or conceptual model, that describes the underlying data in terms that both humans and machines can understand: both the properties of entities and the relationships between them. The LLM translates the synonyms and human language(s) used in a customer's question into the terms of the semantic model. Without this integration, LLMs act without bounds, enterprise context, or data permissions, rendering responses unusable in most cases.
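To make the semantic-layer idea concrete, here is a minimal sketch in plain Python: a knowledge graph represented as (subject, predicate, object) triples, with a small schema naming the concepts a question can be grounded in. All terms here (Employee, worksFor, and so on) are invented for illustration and are not part of any Anzo model.

```python
# A toy knowledge graph: a schema (conceptual model) plus instance triples.
# The schema gives an LLM named concepts to translate a question into.
schema = {
    "worksFor": {"domain": "Employee", "range": "Department"},
}

triples = {
    ("alice", "type", "Employee"),
    ("sales", "type", "Department"),
    ("alice", "worksFor", "sales"),
    ("alice", "label", "Alice"),
}

def match(subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [
        (s, p, o) for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# "Who works in sales?" resolves against the schema's terms, not free text:
workers = [s for (s, _, _) in match(predicate="worksFor", obj="sales")]
names = [o for w in workers for (_, _, o) in match(subject=w, predicate="label")]
print(names)  # ['Alice']
```

Because the answer is produced by pattern-matching over governed data rather than by free-form generation, it stays within the bounds of what the graph actually contains.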
What makes CSI’s knowledge graph technology, Anzo, uniquely suited for LLM integration?
- Database Performance - The questions business leaders ask are often complex (logically and computationally expensive). Anzo's MPP OLAP architecture delivers performance on these types of queries without impacting source systems.
- Data Integration - Anzo provides the tooling to rapidly integrate data from virtually any source system and format - structured and unstructured - into a knowledge graph.
- Semantics - A proper semantic model is key to translating questions, as it provides the LLM a basis for understanding terminology and meaning as they pertain to an enterprise or subdivision.
- Accuracy and Provenance - Utilizing Anzo, users can be confident that the results are accurate and specific to their knowledge graph. Anzo can provide any data provenance information in the knowledge graph if prompted. Data privacy and security remain in clients’ control.
How does Knowledge Guru work?
The current KG prototype is a web app implemented as an Anzo dashboard (Anzo Hi-Res), utilizing the infrastructure provided by Anzo, including access control and data management features that improve query performance.
When a user asks a question, it is sent to the Anzo server, which handles communication with the LLM. The LLM's response is returned through a service and interpreted by the KG app's "chatboard". Most user questions result in SPARQL queries, which are extracted along with a designated visualization for user interaction. These queries are then executed on AnzoGraph under the user's identity. The results are verified and displayed, along with feedback within the chat session for reference in follow-up questions.
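The flow above can be sketched as a short pipeline. This is a hypothetical illustration only: llm_translate and run_sparql are stubs standing in for the LLM service and AnzoGraph, and none of the function names are real Anzo APIs.

```python
# Hypothetical sketch of the question-to-query pipeline described above.
def llm_translate(question: str) -> str:
    """Stub for the LLM: turns a natural-language question into SPARQL."""
    return (
        "SELECT ?dept (COUNT(?e) AS ?headcount) "
        "WHERE { ?e <ex:worksFor> ?dept } GROUP BY ?dept"
    )

def run_sparql(query: str, user: str) -> list:
    """Stub for AnzoGraph: execute under the asking user's identity."""
    return [{"dept": "sales", "headcount": 12}]

def verify(rows: list) -> bool:
    """Sanity-check results before display (non-empty, expected shape)."""
    return bool(rows) and all(isinstance(r, dict) for r in rows)

def ask(question: str, user: str, session: list) -> list:
    query = llm_translate(question)        # LLM produces a SPARQL query
    rows = run_sparql(query, user)         # executed with the user's identity
    if verify(rows):
        session.append((question, query))  # kept for follow-up questions
    return rows

session = []
results = ask("How big is each department?", "alice", session)
print(results)  # [{'dept': 'sales', 'headcount': 12}]
```

Keeping the (question, query) pair in the session is what lets follow-up questions refine earlier ones, and re-running the stored queries is what keeps a reopened chatboard current.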
When a "chatboard" session is reopened, the queries are immediately re-executed to ensure up-to-date answers and visualizations against the knowledge graph data.
When the LLM creates a query, the query can be inspected, accompanied by a simple explanation of its actions. A user can then request a more detailed explanation of each query, which the LLM provides. Because each question is backed by a query that precisely describes the request to the database, the ambiguity and hallucinations often experienced in LLM interactions are eliminated.
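The value of an inspectable query is that an explanation can be grounded in the query text itself rather than generated freely. The sketch below is a toy that handles only a single simple SELECT pattern, not a real SPARQL parser, and is not how Knowledge Guru's explanations are actually produced.

```python
import re

def explain(query: str) -> str:
    """Derive a plain-English summary from a simple SPARQL SELECT query."""
    # Variables named in the SELECT clause (before WHERE)
    vars_ = re.findall(r"\?(\w+)", query.split("WHERE")[0])
    # The graph pattern inside the braces
    pattern = re.findall(r"\{(.+)\}", query, re.S)[0].strip()
    return (
        f"This query returns {', '.join(sorted(set(vars_)))} "
        f"by matching the graph pattern: {pattern}"
    )

q = "SELECT ?name WHERE { ?p <ex:worksFor> <ex:sales> . ?p <ex:label> ?name }"
print(explain(q))
```

Because the query fully determines what the database is asked, two users running the same query always get the same grounded answer.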
The KG prototype currently supports LLM endpoints for OpenAI's and MS Azure's GPT-4 models. The LLM endpoint location is configurable, and other models could be supported in the future.
Will Knowledge Guru work with other knowledge graphs?
CSI does not plan to provide the Knowledge Guru as a direct overlay on other knowledge graph platforms. However, Anzo can ingest or virtualize most forms of data, including other forms of graph data, which means data stored in other knowledge graphs can still serve as a source of information.
How do I start using the Knowledge Guru?
The Knowledge Guru, while now a proven capability, is still in development. Timing of its commercial availability will be announced in the coming months. In the meantime, you can save your place in line by joining the Knowledge Guru waitlist: https://info.cambridgesemantics.com/knowledge-guru
Can the Knowledge Guru be customized for my organization?
While the KG prototype is designed to be generic and work against any Anzo graphmart, parts of it will be customizable for a customer's particular application. The semantic model that describes each organization's underlying data as part of a knowledge graph is typically unique and provides the foundation on which answers are formed. In addition, as fine-tuning becomes available for the GPT-4 model, we will take advantage of it to shrink the context and provide tuning examples that make the Knowledge Guru experience even better suited to a customer's particular knowledge graph. For example, certain types of reference data would likely become part of a fine-tuned model, as would progressive user feedback on the success of particular question/query/visualization combinations and integration of a customer's own API.
How much does Knowledge Guru cost?
CSI is still working on building the full product offering for the Knowledge Guru as well as a pricing structure. For more information, join the Knowledge Guru waitlist: https://info.cambridgesemantics.com/knowledge-guru