
Installation

License: Dual-license
warning

EventCatalog Chat is currently in beta. If you find any issues, please let us know on Discord.


The installation requires a few steps:

  1. Install the @eventcatalog/generator-ai plugin
  2. Configure the eventcatalog.config.js file to enable the AI assistant
  3. Run the catalog

1. Install the plugin

Install the @eventcatalog/generator-ai plugin and the @langchain/community package in your catalog directory.

npm install @eventcatalog/generator-ai && npm install @langchain/community
tip

Don't have an EventCatalog yet? Get started

Next, configure the plugin in your eventcatalog.config.js file.

// rest of file
generators: [
  [
    '@eventcatalog/generator-ai',
    {
      // Optional: split markdown files into smaller chunks.
      // This can help smaller models handle the documents (default: false).
      splitMarkdownFiles: true,

      // Optional: include users and teams in the documents (embeddings).
      // Default is false; search results are usually better without them,
      // but set this to true if you want to ask questions about users and teams.
      includeUsersAndTeams: false,
    },
  ],
],
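
For orientation, here is a minimal sketch of how the generators array might sit inside a full eventcatalog.config.js. The title and organizationName fields and the export default style are assumptions based on a typical catalog config; keep whatever your existing file already contains and only add the generators entry.

// eventcatalog.config.js (sketch only; fields other than `generators` are placeholders)
export default {
  title: 'My EventCatalog',        // placeholder, keep your existing value
  organizationName: 'My Company',  // placeholder, keep your existing value
  generators: [
    [
      '@eventcatalog/generator-ai',
      {
        splitMarkdownFiles: true,
        includeUsersAndTeams: false,
      },
    ],
  ],
};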

Generate the documents and embeddings for your catalog by running the following command.

npm run generate
Keep in mind

This will create documents and embeddings for your catalog. You need to rerun this command whenever you make changes to your catalog.
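
If your catalog was scaffolded with the standard EventCatalog template, npm run generate typically maps to the EventCatalog CLI in your package.json. The script names below are a sketch of that common setup and may differ in your project.

{
  "scripts": {
    "dev": "eventcatalog dev",
    "build": "eventcatalog build",
    "generate": "eventcatalog generate"
  }
}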


2. Configure the AI assistant

To configure the AI assistant, you need to add the following to your eventcatalog.config.js file.

chat: {
  // Enable or disable the chat (default: true for new catalogs)
  enabled: true,

  // Optional: the model to use (default: Hermes-3-Llama-3.2-3B-q4f16_1-MLC)
  // Llama-3.2-3B-Instruct-q4f16_1-MLC is another good option
  model: 'Hermes-3-Llama-3.2-3B-q4f16_1-MLC',

  // Maximum number of tokens to generate in a completion (default: 4096, taken from the model)
  max_tokens: 4096,

  // Number of results to match in the vector search (default: 50)
  similarityResults: 50,
},
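
If the documented defaults suit you, the chat block can be much smaller; a minimal sketch that relies on those defaults looks like this.

chat: {
  // model, max_tokens and similarityResults fall back to their documented defaults
  enabled: true,
},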

Selecting your own model

You can find the list of supported models in the models section.
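
Switching models is just a matter of changing the model field. As a sketch, the Llama alternative mentioned above would look like this; any other identifier you choose should come from the supported models list.

chat: {
  enabled: true,
  // Swap in any supported model identifier from the models section
  model: 'Llama-3.2-3B-Instruct-q4f16_1-MLC',
},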

3. Run the catalog

Once you have generated the documents and embeddings for your catalog with the plugin and configured the eventcatalog.config.js file, you can run the catalog.

npm run dev

Navigate to http://localhost:3000/chat and you should see the AI assistant.

If you want to see a demo of the AI assistant, you can try our demo catalog here.

Browser compatibility

EventCatalog Chat with these local models currently works only in Chrome and Edge.

By default, WebGPU is enabled and supported in both Chrome and Edge. However, it is possible to enable it in Firefox and Firefox Nightly. Check the browser compatibility for more information.
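
If you are unsure whether a given browser exposes WebGPU, a quick check in the developer console will tell you. This is a generic WebGPU feature check, not part of EventCatalog.

// Paste into the browser's developer console
if ('gpu' in navigator) {
  const adapter = await navigator.gpu.requestAdapter();
  console.log(adapter ? 'WebGPU is available' : 'WebGPU is exposed but no adapter was found');
} else {
  console.log('WebGPU is not available in this browser');
}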

Have a question?

If you have any questions, please join us on Discord.