Using OpenAI models

EventCatalog Chat can be configured to use any OpenAI model.

This lets you talk to your architecture using the power of OpenAI models.

Screenshot: EventCatalog Chat. This example uses the o4-mini model.

To use OpenAI models, you need to bring your own OpenAI API key to EventCatalog.

How do OpenAI models work with EventCatalog Chat?

EventCatalog Chat will configure the given OpenAI model to answer questions about your catalog. This gives you access to the latest models from OpenAI and can improve the results you get from EventCatalog Chat.

Configuring EventCatalog to use OpenAI models is done in a few steps (a combined sketch of the finished configuration follows this list):

  1. Configure your .env file with your OpenAI API key and EventCatalog license key.
  2. Install the @eventcatalog/generator-ai plugin (to create documents and embeddings for your catalog).
  3. Configure EventCatalog to use OpenAI models in the eventcatalog.config.js file.
  4. Set the output to server so that EventCatalog can make requests to the OpenAI API.
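As a rough sketch, here is roughly what the finished eventcatalog.config.js looks like once all four steps are done. It simply combines the snippets shown later in this guide; the rest of your existing config stays as it is.

eventcatalog.config.js
// ...rest of your existing config
output: 'server', // required so your API keys stay on the server side
generators: [
  [
    '@eventcatalog/generator-ai',
    {
      // use OpenAI embeddings for the generated documents and embeddings
      embedding: {
        provider: 'openai',
        model: 'text-embedding-3-large',
      },
    },
  ],
],
chat: {
  enabled: true,
  // any OpenAI model from the models section below
  model: 'o4-mini',
},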

Installation

This section walks you through installing EventCatalog Chat, configuring it to use OpenAI models, and running your catalog on a server.

Setup your license key

First, you need to get a license key for EventCatalog.

EventCatalog Chat is a paid feature; you can get a 14-day free trial of the EventCatalog Starter Plan on EventCatalog Cloud.

Once you have a license key, you can put it into your .env file.

.env
EVENTCATALOG_SCALE_LICENSE_KEY=<your-license-key>
OPENAI_API_KEY=<your-openai-api-key>

Install the @eventcatalog/generator-ai plugin

Next, install the @eventcatalog/generator-ai plugin in your catalog directory.

npm install @eventcatalog/generator-ai

Configure the plugin in your eventcatalog.config.js file.

Here we configure the plugin to use an OpenAI embeddings model for our documents and embeddings.

eventcatalog.config.js
// rest of file
generators: [
  [
    "@eventcatalog/generator-ai",
    {
      // optional, if you want to split markdown files into smaller chunks
      // Can help with your models (default false)
      splitMarkdownFiles: true,

      // optional, if you want to include users and teams in the documents (embeddings)
      // default is false; search results are better without them, but set this to true
      // if you want the ability to ask questions about users and teams
      includeUsersAndTeams: false,

      // optional, if you want to include custom documentation in the documents (embeddings)
      // custom documentation is a feature that lets you bring any documentation into EventCatalog
      // default is true, but you can turn this off if you don't want it included in the documents
      includeCustomDocumentation: false,

      // optional, if you want to use a different embedding model (recommended for OpenAI models)
      // Here we use the OpenAI embeddings model for our documents and embeddings.
      // added in version @eventcatalog/generator-ai@1.0.2
      embedding: {
        // Set the provider to openai
        provider: 'openai',
        // Set the model to the OpenAI embeddings model
        // supports: text-embedding-3-large, text-embedding-3-small, text-embedding-ada-002
        model: 'text-embedding-3-large',
      },
    },
  ],
],

Generate the documents and embeddings for your catalog by running the following command.

npm run generate

This will generate your documents and embeddings for your catalog. EventCatalog Chat will use these documents and embeddings to answer your questions.

Keep in mind

This will create documents and embeddings for your catalog. You need to rerun this command whenever you make changes to your catalog.
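One way to avoid forgetting this is to run the generate step as part of your build. A minimal sketch, assuming your catalog uses the default EventCatalog scripts in package.json (your script names may differ):

package.json
{
  "scripts": {
    "dev": "eventcatalog dev",
    "generate": "eventcatalog generate",
    "build": "npm run generate && eventcatalog build"
  }
}

Here the build script regenerates the documents and embeddings before every production build, so they never drift from the catalog content.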


Configure the OpenAI model

Next, you need to configure the OpenAI model. Add the following to your eventcatalog.config.js file.

eventcatalog.config.js
chat: {
  // enable the chat or not (default true, for new catalogs)
  enabled: true,
  // OpenAI model to use (see a list of models in the models section)
  model: 'o4-mini'
}

Configure EventCatalog to run on a server

You need to configure EventCatalog to run on a server; see the EventCatalog documentation for more information on the hosting options.

eventcatalog.config.js
// rest of the config...
// Default output is 'static', but you can change it to 'server'
output: 'server'

"Why do I need to run EventCatalog on a server?"

Running EventCatalog on a server allows you to keep your API keys safe. EventCatalog makes requests from its server-side code to the OpenAI API, so your keys are never exposed to client-side code. You can use our Dockerfile to run EventCatalog on a server; see the EventCatalog documentation for more information.
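As an illustration only (the image tag my-eventcatalog below is hypothetical; use whatever tag you build the provided Dockerfile with), the keys are supplied to the container as environment variables at runtime and are never shipped to the browser:

docker build -t my-eventcatalog .
docker run -p 3000:3000 \
  -e EVENTCATALOG_SCALE_LICENSE_KEY=<your-license-key> \
  -e OPENAI_API_KEY=<your-openai-api-key> \
  my-eventcatalog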

Selecting your own model

You can select from a range of models, see the models section for more information.
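For example, to use gpt-4.1 instead of o4-mini (any model from the models section works the same way), change the model field in your chat configuration:

eventcatalog.config.js
chat: {
  enabled: true,
  model: 'gpt-4.1'
}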

Run EventCatalog

Once you have installed and configured the plugin, and enabled the chat in the eventcatalog.config.js file, you can run the catalog.

npm run generate
npm run dev

Navigate to http://localhost:3000/chat and you should see the AI assistant.

Screenshot: the EventCatalog Chat assistant.

"Did you know you can bring your own prompts to EventCatalog Chat?"

You can bring your own prompts to EventCatalog Chat. This lets you tailor the chat experience to your organization and teams. See the bring your own prompts section for more information.

Configuration

You can configure the model in the eventcatalog.config.js file.

eventcatalog.config.js
chat: {
  model: 'o4-mini'
}

Models

Here is a list of models that you can use with EventCatalog Chat.

  • o1
  • o1-2024-12-17
  • o1-mini
  • o1-mini-2024-09-12
  • o1-preview
  • o1-preview-2024-09-12
  • o3-mini
  • o3-mini-2025-01-31
  • o3
  • o3-2025-04-16
  • o4-mini
  • o4-mini-2025-04-16
  • gpt-4.1
  • gpt-4.1-2025-04-14
  • gpt-4.1-mini
  • gpt-4.1-mini-2025-04-14
  • gpt-4.1-nano
  • gpt-4.1-nano-2025-04-14
  • gpt-4o
  • gpt-4o-2024-05-13
  • gpt-4o-2024-08-06
  • gpt-4o-2024-11-20
  • gpt-4o-audio-preview
  • gpt-4o-audio-preview-2024-10-01
  • gpt-4o-audio-preview-2024-12-17
  • gpt-4o-search-preview
  • gpt-4o-search-preview-2025-03-11
  • gpt-4o-mini-search-preview
  • gpt-4o-mini-search-preview-2025-03-11
  • gpt-4o-mini
  • gpt-4o-mini-2024-07-18
  • gpt-4-turbo
  • gpt-4-turbo-2024-04-09
  • gpt-4-turbo-preview
  • gpt-4-0125-preview
  • gpt-4-1106-preview
  • gpt-4
  • gpt-4-0613
  • gpt-4.5-preview
  • gpt-4.5-preview-2025-02-27
  • gpt-3.5-turbo-0125
  • gpt-3.5-turbo
  • gpt-3.5-turbo-1106
  • chatgpt-4o-latest

You can find more information about the models in the OpenAI documentation.


Got a question? Or want to contribute?

Found a good model for your catalog? Let us know on Discord. And if you need help configuring your model, join us there too.
