Set up with Mistral and Redis (v3.8+)

Enable AI Semantic Caching with the Mistral embeddings API and a Redis vector database. This setup requires either configuring Kong to use Mistral as an upstream, or configuring the AI Proxy or AI Proxy Advanced plugin.
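To illustrate the idea behind semantic caching, the sketch below shows the cache-hit decision in miniature: each prompt is embedded, and a stored response is reused when the cosine distance to a cached prompt's embedding falls below the configured threshold. This is an illustrative toy, not the plugin's implementation; the vectors are made up, and real `mistral-embed` embeddings have 1024 dimensions.

```python
# Illustrative sketch of a semantic cache-hit decision (not the plugin's code).
import math

def cosine_distance(a, b):
    """1 - cosine similarity; 0.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

THRESHOLD = 0.1  # mirrors vectordb.threshold in the configuration below

# Toy 3-dimensional embeddings standing in for 1024-dimensional ones.
cached_prompt = [0.9, 0.1, 0.0]
similar_prompt = [0.88, 0.12, 0.01]    # semantically close rephrasing
unrelated_prompt = [0.0, 0.2, 0.95]    # different topic

print(cosine_distance(cached_prompt, similar_prompt) < THRESHOLD)    # cache hit
print(cosine_distance(cached_prompt, unrelated_prompt) < THRESHOLD)  # cache miss
```

Lowering the threshold makes the cache stricter (fewer, more exact hits); raising it reuses responses for looser paraphrases.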

Environment variables

  • MISTRAL_API_KEY: Your Mistral API key. When applying the configuration with decK, export the value as DECK_MISTRAL_API_KEY (including the Bearer prefix, e.g. `export DECK_MISTRAL_API_KEY="Bearer $MISTRAL_API_KEY"`) so the `${{ env "DECK_MISTRAL_API_KEY" }}` substitution resolves.

Set up the plugin

Add this section to your declarative configuration file:

_format_version: "3.0"
plugins:
  - name: ai-semantic-cache
    config:
      embeddings:
        auth:
          header_name: Authorization
          header_value: ${{ env "DECK_MISTRAL_API_KEY" }}
        model:
          provider: mistral
          name: mistral-embed
          options:
            upstream_url: https://api.mistral.ai/v1/embeddings
      vectordb:
        dimensions: 1024
        distance_metric: cosine
        strategy: redis
        threshold: 0.1
        redis:
          host: redis-stack.redis.svc.cluster.local
          port: 6379
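Before applying the configuration, it can help to sanity-check the key against the same embeddings endpoint the plugin will call. The following is a hedged sketch using only the Python standard library; the endpoint URL and model name are taken from the configuration above, but the response-parsing shape assumes Mistral's standard embeddings response format. Run it manually with MISTRAL_API_KEY set.

```python
# Sanity-check sketch for the Mistral embeddings endpoint (run manually).
import json
import os
import urllib.request

EMBEDDINGS_URL = "https://api.mistral.ai/v1/embeddings"  # upstream_url above

def build_request(prompt: str) -> urllib.request.Request:
    """Build the embeddings request the plugin would send for one prompt."""
    payload = json.dumps({"model": "mistral-embed", "input": [prompt]})
    return urllib.request.Request(
        EMBEDDINGS_URL,
        data=payload.encode(),
        headers={
            "Authorization": f"Bearer {os.environ.get('MISTRAL_API_KEY', '')}",
            "Content-Type": "application/json",
        },
    )

if __name__ == "__main__" and os.environ.get("MISTRAL_API_KEY"):
    with urllib.request.urlopen(build_request("ping")) as resp:
        body = json.load(resp)
        # mistral-embed returns 1024-dimensional vectors, which is why
        # vectordb.dimensions is set to 1024 above.
        print(len(body["data"][0]["embedding"]))
```

If the printed dimension does not match `vectordb.dimensions`, Redis vector lookups will fail, so keep the two in sync.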
