Set up with OpenAI and Redis (v3.8+)

Enable AI semantic caching with the OpenAI embeddings API and a Redis vector database. Kong must be configured to use OpenAI as the upstream, or the AI Proxy or AI Proxy Advanced plugin must be configured.

Environment variables

  • OPENAI_API_KEY: Your OpenAI API key

  • REDIS_HOST: The host where your Redis instance runs
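decK resolves `${{ env "..." }}` references in a declarative file from environment variables prefixed with `DECK_`, so the values above need to be exposed under the `DECK_`-prefixed names used in the configuration. A minimal sketch:

```shell
# decK substitutes ${{ env "DECK_..." }} references at sync time,
# so map the values onto DECK_-prefixed variable names.
export DECK_OPENAI_API_KEY="$OPENAI_API_KEY"
export DECK_REDIS_HOST="$REDIS_HOST"
```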

Set up the plugin

Add this section to your declarative configuration file:

_format_version: "3.0"
plugins:
  - name: ai-semantic-cache
    config:
      embeddings:
        auth:
          header_name: Authorization
          header_value: Bearer ${{ env "DECK_OPENAI_API_KEY" }}
        model:
          provider: openai
          name: text-embedding-3-large
          options:
            upstream_url: https://api.openai.com/v1/embeddings
      vectordb:
        dimensions: 3072
        distance_metric: cosine
        strategy: redis
        threshold: 0.1
        redis:
          host: ${{ env "DECK_REDIS_HOST" }}
          port: 6379
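With the declarative file in place, a decK sync applies it to the gateway. The sketch below assumes the file is saved as `kong.yaml` and that a route is already exposed at `/chat`; the `X-Cache-Status` response header check reflects typical semantic-cache behavior and both names are assumptions, not taken from this page:

```shell
# Apply the declarative configuration (kong.yaml is an assumed filename).
deck gateway sync kong.yaml

# Send the same chat request twice; on the second call the plugin should
# answer from the Redis cache. Route path and header name are assumptions.
curl -s -i http://localhost:8000/chat \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"What is Kong Gateway?"}]}' \
  | grep -i "x-cache-status"
```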
