Athento allows the creation of knowledge bases, an advanced feature that acts as a special type of database oriented to knowledge management. These bases are used to feed intelligent assistants that answer questions based on document content.
What is a knowledge base in Athento?
A knowledge base in Athento is composed of a set of documents (both the binary files and their metadata) coming from selected spaces (series) and forms (doctypes). These documents are indexed and used as context by the intelligent assistant to answer queries.
Steps to Create a Knowledge Base
1. Access the Athento Advanced Administration area.
2. Go to the “Knowledge Base” section.
3. Click on “Add new knowledge base”.
4. Complete the following configuration fields:
| Metadata | Description |
| --- | --- |
| Name | Identifying name of the knowledge base. |
| Description (optional) | Brief description of the purpose or scope of the knowledge base. |
| Prompt | Context and instructions that guide the assistant on how to structure its answers. This field helps define the expected tone, format and scope. A default prompt is available and can be used. |
| Top N | Maximum number of documents the assistant will use to generate the answer. The N most relevant documents are selected according to their similarity to the query. |
| Is Public | If enabled, the documents in the knowledge base are considered public and the assistant will generate public links as part of the references. |
| Series (Spaces) | Spaces from which documents will be included in the knowledge base. |
| Doctypes (Forms) | Specific forms whose documents will be incorporated into the knowledge base. |
| Source Metadata | Metadata fields to include in the context of each document to enrich the assistant's answers. |
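As a purely illustrative example (the field names and values below are hypothetical and do not represent an Athento API), a knowledge base configuration could look like this:

```python
# Illustrative values only; these fields mirror the configuration form above.
knowledge_base = {
    "name": "Invoices KB",
    "description": "Questions about supplier invoices and payment terms.",
    "prompt": "Answer concisely in English and cite the source document.",
    "top_n": 5,                      # maximum number of documents used per answer
    "is_public": False,              # keep references as internal links
    "series": ["Purchasing"],        # spaces whose documents are included
    "doctypes": ["Invoice"],         # forms whose documents are included
    "source_metadata": ["supplier", "total_amount", "due_date"],
}
```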
Access to the Assistant
Once the knowledge base has been created, a direct link to its associated assistant will appear next to it in the knowledge base list. From there you can open the assistant view and start making queries that use the content of the selected documents as their knowledge source.
Maintaining the context of the conversation
To maintain the context of the current conversation, it is necessary to set the ID of an OpenAI assistant in the project:
- OPENAI_ASSISTANT_ID: ID of an OpenAI assistant. More information about the assistant configuration can be found in this link.
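The snippet below is a minimal sketch of how an OpenAI assistant thread keeps conversation context, assuming the OpenAI Python SDK is available and that OPENAI_ASSISTANT_ID points to an already configured assistant (the question text is just an example, not part of Athento):

```python
import os
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
assistant_id = os.environ["OPENAI_ASSISTANT_ID"]  # ID of the OpenAI assistant

# A thread stores the message history, so follow-up questions keep their context.
thread = client.beta.threads.create()

client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Summarize the latest invoices in the knowledge base.",
)

run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id,
    assistant_id=assistant_id,
)

if run.status == "completed":
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    print(messages.data[0].content[0].text.value)  # most recent assistant reply
```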
How does it work and what to keep in mind about the answers?
In order for the assistant to use the documents as response context, they must be transformed into vector representations, also known as embeddings. This process allows queries to be compared with documents semantically, not just textually.
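For illustration, the sketch below shows what semantic (rather than textual) comparison means, assuming the OpenAI embeddings API; the model name and example texts are assumptions, not Athento's internal implementation:

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def embed(text: str) -> np.ndarray:
    """Turn a text into a vector (embedding)."""
    response = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(response.data[0].embedding)

query = embed("What is the notice period for termination?")
document = embed("Either party may terminate the contract with 30 days' written notice.")

# Cosine similarity: values close to 1 mean the texts are semantically related,
# even though they share few literal words.
similarity = np.dot(query, document) / (np.linalg.norm(query) * np.linalg.norm(document))
print(round(float(similarity), 3))
```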
Technical process:
1. Generation of embeddings: document embeddings are generated by executing an internal command that transforms the documents into vectors and stores them in a FAISS-optimized index.
2. User query: when the user makes a query, it is also converted into an embedding.
3. Similarity search: the system compares the query vector with the stored vectors and selects the closest documents using cosine similarity. The Top N parameter determines how many of these relevant documents are selected.
4. Response generation: the selected documents (embeddings + relevant content) are sent together with the query to the LLM model (such as OpenAI) to generate a reasoned and contextualized answer.
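The following sketch walks through this flow end to end under stated assumptions: it uses the OpenAI embeddings and chat APIs and a local FAISS index, with hypothetical document texts; it is not Athento's internal command, only an illustration of the same technique:

```python
import faiss  # faiss-cpu
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical document texts; in Athento these would come from the configured
# series/doctypes plus the selected source metadata.
documents = [
    "Invoice 2024-013: total amount 1,200 EUR, due on 2024-07-31.",
    "Employment contract: 30 days' written notice for termination.",
    "Supplier agreement: payments are made within 60 days of invoicing.",
]

def embed(texts: list[str]) -> np.ndarray:
    """Generate embeddings for a batch of texts (model name is an assumption)."""
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in response.data], dtype="float32")

# 1. Generation of embeddings: vectorize the documents and store them in a FAISS index.
doc_vectors = embed(documents)
faiss.normalize_L2(doc_vectors)              # normalized vectors + inner product = cosine similarity
index = faiss.IndexFlatIP(doc_vectors.shape[1])
index.add(doc_vectors)

# 2. User query: the question is also turned into an embedding.
question = "How long is the termination notice period?"
query_vector = embed([question])
faiss.normalize_L2(query_vector)

# 3. Similarity search: pick the Top N closest documents by cosine similarity.
top_n = 2
scores, ids = index.search(query_vector, top_n)
context = "\n".join(documents[i] for i in ids[0])

# 4. Response generation: send the selected documents plus the query to the LLM.
answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)
```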