Introduction

This guide walks you through setting up a Meilisearch REST embedder with Hugging Face Inference Endpoints to enable semantic search.

You can use Hugging Face with Meilisearch in two ways: running the model locally by setting the embedder source to huggingface, or running it remotely on Hugging Face’s servers by setting the embedder source to rest. This guide uses the remote rest source.

Requirements

To follow this guide, you’ll need:

  • A Meilisearch Cloud project running version >=1.13
  • A Hugging Face account with a deployed inference endpoint
  • The endpoint URL and API key of the deployed model on your Hugging Face account

Configure the embedder

Set up an embedder using the update settings endpoint:

{
  "hf-inference": {
    "source": "rest",
    "url": "ENDPOINT_URL",
    "apiKey": "API_KEY",
    "dimensions": 384,
    "documentTemplate": "CUSTOM_LIQUID_TEMPLATE",
    "request": {
      "inputs": ["{{text}}", "{{..}}"],
      "model": "baai/bge-small-en-v1.5"
    },
    "response": ["{{embedding}}", "{{..}}"]
  }
}
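
As a minimal sketch, assuming your Meilisearch project is reachable at PROJECT_URL, your index is named INDEX_NAME, and MEILISEARCH_API_KEY is a key allowed to change settings (all placeholders here), the call might look like this, with the configuration above nested under the embedders key:

# Send the embedder configuration to the update settings endpoint
curl \
  -X PATCH 'https://PROJECT_URL/indexes/INDEX_NAME/settings' \
  -H 'Authorization: Bearer MEILISEARCH_API_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "embedders": {
      "hf-inference": {
        "source": "rest",
        "url": "ENDPOINT_URL",
        "apiKey": "API_KEY",
        "dimensions": 384,
        "documentTemplate": "CUSTOM_LIQUID_TEMPLATE",
        "request": {
          "inputs": ["{{text}}", "{{..}}"],
          "model": "baai/bge-small-en-v1.5"
        },
        "response": ["{{embedding}}", "{{..}}"]
      }
    }
  }'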

In this configuration:

  • source: declares Meilisearch should connect to this embedder via its REST API
  • url: replace ENDPOINT_URL with the address of your Hugging Face model endpoint
  • apiKey: replace API_KEY with your Hugging Face API key
  • dimensions: specifies the number of dimensions in the generated embeddings, which is 384 for baai/bge-small-en-v1.5
  • documentTemplate: an optional but recommended Liquid template describing the data you will send to the embedder (see the sketch after this list)
  • request: defines the structure and parameters of the request Meilisearch will send to the embedder
  • response: defines the structure of the embedder’s response
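
For instance, a documentTemplate for a hypothetical movies index with title and overview fields might look like the line below; the field names are assumptions, so adapt them to your own documents:

"documentTemplate": "A movie titled '{{doc.title}}' whose description starts with: {{doc.overview}}"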

Once you’ve configured the embedder, Meilisearch will automatically generate embeddings for your documents. Monitor the task using the Cloud UI or the get task endpoint.
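
For example, the settings update returns a taskUid; assuming the same PROJECT_URL and MEILISEARCH_API_KEY placeholders as above, you can check its status with the get task endpoint:

# Check the status of the task returned by the settings update
curl \
  -X GET 'https://PROJECT_URL/tasks/TASK_UID' \
  -H 'Authorization: Bearer MEILISEARCH_API_KEY'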

This example uses BAAI/bge-small-en-v1.5 as its model, but Hugging Face offers other options that may fit your dataset better. If you choose a different model, update the dimensions value to match the embeddings it produces.

With the embedder set up, you can now perform semantic searches. Make a search request with the hybrid search parameter, setting semanticRatio to 1:

{
  "q": "QUERY_TERMS",
  "hybrid": {
    "semanticRatio": 1,
    "embedder": "hf-inference"
  }
}

In this request:

  • q: the search query
  • hybrid: enables AI-powered search functionality
    • semanticRatio: controls the balance between semantic search and full-text search. Setting it to 1 means you will only receive semantic search results
    • embedder: the name of the embedder used for generating embeddings
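
Putting this together, and assuming the same PROJECT_URL, INDEX_NAME, and MEILISEARCH_API_KEY placeholders as above, a sketch of the full search call could look like this:

# Run a purely semantic search against the index
curl \
  -X POST 'https://PROJECT_URL/indexes/INDEX_NAME/search' \
  -H 'Authorization: Bearer MEILISEARCH_API_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "q": "QUERY_TERMS",
    "hybrid": {
      "semanticRatio": 1,
      "embedder": "hf-inference"
    }
  }'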

Conclusion

You have set up an embedder using Hugging Face Inference Endpoints. This allows you to use pure semantic search capabilities in your application.

Consult the embedder settings documentation for more information on other configuration options.