This guide walks you through setting up the AI Proxy plugin with Hugging Face.
For all providers, the Kong AI Proxy plugin attaches to route entities.
It can be installed into one route per operation, for example:
- OpenAI chat route
- Cohere chat route
- Cohere completions route
Each of these AI-enabled routes must point to a null service. This service doesn't need to map to any real upstream URL; it can point somewhere empty (for example, http://localhost:32000), because the AI Proxy plugin overwrites the upstream URL. This requirement will be removed in a later Kong revision.
Prerequisites
- Hugging Face account and subscription
- You need a service to contain the route for the LLM provider. Create the service first (an optional verification call is shown after this list):

  curl -X POST http://localhost:8001/services \
    --data "name=ai-proxy" \
    --data "url=http://localhost:32000"

  Remember that the upstream URL can point anywhere empty, as it won't be used by the plugin.
- Hugging Face access token with permissions to make calls to the Inference API
- Text-generation model from Hugging Face
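If you want to confirm that the placeholder service from the list above was created, you can look it up through the standard Kong Admin API (the service name matches the one created earlier):

curl http://localhost:8001/services/ai-proxy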
Provider configuration
Set up route and plugin
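The setup below is a minimal sketch using the Kong Admin API. The route name (huggingface-chat), its path, and the placeholders <HUGGINGFACE_ACCESS_TOKEN> and <MODEL_NAME> are illustrative assumptions; adjust them for your deployment, and verify the configuration keys against your Kong version, since huggingface provider support varies by release. First, create a route on the ai-proxy service:

# Create a route on the null service; the path is what clients will call
curl -X POST http://localhost:8001/services/ai-proxy/routes \
  --data "name=huggingface-chat" \
  --data "paths[]=/huggingface-chat"

Then enable the AI Proxy plugin on that route, pointing it at Hugging Face:

# Attach the AI Proxy plugin; the auth header carries your Hugging Face token
curl -X POST http://localhost:8001/routes/huggingface-chat/plugins \
  --data "name=ai-proxy" \
  --data "config.route_type=llm/v1/chat" \
  --data "config.auth.header_name=Authorization" \
  --data "config.auth.header_value=Bearer <HUGGINGFACE_ACCESS_TOKEN>" \
  --data "config.model.provider=huggingface" \
  --data "config.model.name=<MODEL_NAME>"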
Test the configuration
Make an llm/v1/chat type request to test your new endpoint:
curl -X POST http://localhost:8000/huggingface-chat \
-H 'Content-Type: application/json' \
--data-raw '{
  "messages": [
    { "role": "system", "content": "You are a mathematician" },
    { "role": "user", "content": "What is 1+1?" }
  ]
}'
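On success, the gateway should return the model's answer in the OpenAI-style llm/v1/chat response format that AI Proxy normalizes to. The body below is purely illustrative; the actual content and metadata depend on your model and Kong version:

{
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "1 + 1 equals 2."
      },
      "finish_reason": "stop"
    }
  ],
  "object": "chat.completion"
}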