Ask AI is a beta feature under the Algolia Terms of Service (“Beta Services”). Use of this feature is subject to Algolia’s GenAI Addendum.


The Ask AI API enables developers to build custom chat interfaces powered by Algolia’s AI assistant. Use these endpoints to create tailored conversational experiences that search your Algolia index and generate contextual responses using your own LLM provider.

Key features:

  • Real-time streaming responses for better user experience
  • Advanced facet filtering to control AI context
  • Hash-based Message Authentication Code (HMAC) token authentication for secure API access
  • Full compatibility with popular frameworks like Next.js and Vercel AI SDK

This API documentation is for developers building custom Ask AI integrations. If you just want to add Ask AI to your site, see Ask AI.

Overview

The Algolia Ask AI API provides endpoints for integrating with an Algolia Ask AI assistant. You can use this API to build custom chat interfaces and integrate Algolia with your LLM.

Base URL: https://askai.algolia.com

All endpoints allow cross-origin requests (CORS) from browser-based apps.

Authentication

Ask AI uses HMAC tokens for authentication. Tokens expire after 5 minutes, so you’ll need to request a new one before each chat request.

Get an HMAC token

POST /chat/token

Headers:

  • X-Algolia-Assistant-Id: Your Ask AI assistant configuration ID
  • Origin (optional): Request origin for CORS validation
  • Referer (optional): Full URL of the requesting page

Response:

{
  "success": true,
  "token": "HMAC_TOKEN"
}
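Because tokens expire after five minutes, you can cache one and refresh it shortly before expiry instead of requesting a fresh token for every message. A minimal sketch (the `fetchToken` callback and the four-minute TTL safety margin are assumptions, not part of the API):

```javascript
// Caches an HMAC token and refreshes it before the 5-minute expiry.
// `fetchToken` is any async function returning a fresh token string,
// for example a POST to /chat/token. The 4-minute TTL is a safety margin.
class TokenCache {
  constructor(fetchToken, ttlMs = 4 * 60 * 1000) {
    this.fetchToken = fetchToken;
    this.ttlMs = ttlMs;
    this.token = null;
    this.expiresAt = 0;
  }

  async get(now = Date.now()) {
    if (!this.token || now >= this.expiresAt) {
      this.token = await this.fetchToken();
      this.expiresAt = now + this.ttlMs;
    }
    return this.token;
  }
}
```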

Endpoints

Chat with Ask AI

POST /chat

Start or continue a chat with the AI assistant. The response is streamed in real time using server-sent events, letting you display the AI’s answer as it’s generated.

Headers:

  • X-Algolia-Application-Id: Your Algolia application ID
  • X-Algolia-API-Key: Your Algolia API key
  • X-Algolia-Index-Name: Algolia index to use
  • X-Algolia-Assistant-Id: Ask AI assistant configuration ID
  • Authorization: HMAC token (retrieved from /chat/token)

Request Body:

{
  "id": "your-conversation-id",
  "messages": [
    {
      "role": "user",
      "content": "What is Algolia?",
      "id": "msg-123",
      "createdAt": "2025-01-01T12:00:00.000Z",
      "parts": [
        {
          "type": "text",
          "text": "What is Algolia?"
        }
      ]
    }
  ],
  "searchParameters": {
    "facetFilters": ["language:en", "version:latest"]
  }
}

Request body parameters:

  • id (string, required): Unique conversation identifier
  • messages (array, required): Array of conversation messages
    • role (string): “user” or “assistant”
    • content (string): Message content
    • id (string): Unique message ID
    • createdAt (string, optional): ISO timestamp
    • parts (array, optional): Message parts (used by Vercel AI SDK)
  • searchParameters (object, optional): Search configuration
    • facetFilters (array, optional): Filter the context used by Ask AI

Using search parameters:

Search parameters let you control how Ask AI searches your index:

{
  "id": "conversation-1",
  "messages": [
    {
      "role": "user", 
      "content": "How do I configure the API?",
      "id": "msg-1"
    }
  ],
  "searchParameters": {
    "facetFilters": [
      "language:en",
      "version:latest", 
      "type:content"
    ]
  }
}

Advanced facet filtering with OR logic:

You can use nested arrays for OR logic within facet filters:

{
  "searchParameters": {
    "facetFilters": [
      "language:en",
      [
        "docusaurus_tag:default",
        "docusaurus_tag:docs-default-current"
      ]
    ]
  }
}

This example filters to: language:en AND (docusaurus_tag:default OR docusaurus_tag:docs-default-current)

Common use cases:

  • Multi-language sites: ["language:en"]
  • Versioned documentation: ["version:latest"] or ["version:v2.0"]
  • Content types: ["type:content"] to exclude navigation/metadata
  • Multiple tags: [["tag:api", "tag:tutorial"]] for OR logic
  • Categories with fallbacks: [["category:advanced", "category:intermediate"]]

Response:

  • Content-Type: text/event-stream
  • Format: Server-sent events with incremental AI response chunks
  • Benefits: Real-time response display, better user experience, lower perceived latency

Streaming responses:

const response = await fetch('/chat', { /* ... */ });
if (!response.ok) {
  throw new Error(`HTTP error! status: ${response.status}`);
}

const reader = response.body.getReader();
const decoder = new TextDecoder();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  // `stream: true` avoids splitting multi-byte characters across chunks
  const chunk = decoder.decode(value, { stream: true });
  // Display chunk immediately in your UI
  console.log('Received chunk:', chunk);
}
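Each decoded chunk can contain one or more server-sent events. If you need the event payloads rather than the raw stream, you can extract `data:` lines from each chunk. This sketch assumes plain `data: <payload>` SSE framing; the exact wire format may differ depending on your setup, so adapt it to what your stream actually emits:

```javascript
// Extracts the payloads of `data:` lines from a raw SSE chunk.
// Assumes simple `data: <payload>` framing; a terminal `[DONE]`
// marker (a common SSE convention) is filtered out.
function extractSSEData(chunk) {
  return chunk
    .split('\n')
    .filter((line) => line.startsWith('data:'))
    .map((line) => line.slice('data:'.length).trim())
    .filter((payload) => payload && payload !== '[DONE]');
}
```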

Submit feedback

POST /chat/feedback

Submit thumbs up/down feedback for a chat message.

Headers:

  • X-Algolia-Assistant-Id: Your Ask AI assistant configuration ID
  • Authorization: HMAC token

Request Body:

{
  "appId": "YOUR_APP_ID",
  "messageId": "msg-123",
  "thumbs": 1
}

  • thumbs: 1 for positive feedback, 0 for negative

Response:

{
  "success": true,
  "message": "Feedback was successfully submitted."
}

Health check

GET /chat/health

Check the operational status of the Ask AI service.

Response: OK (text/plain)

Custom integration examples

Basic chat implementation

class AskAIChat {
  constructor({ appId, apiKey, indexName, assistantId }) {
    this.appId = appId;
    this.apiKey = apiKey;
    this.indexName = indexName;
    this.assistantId = assistantId;
    this.baseUrl = 'https://askai.algolia.com';
  }

  async getToken() {
    const response = await fetch(`${this.baseUrl}/chat/token`, {
      method: 'POST',
      headers: {
        'X-Algolia-Assistant-Id': this.assistantId,
      },
    });
    const data = await response.json();
    return data.token;
  }

  async sendMessage(conversationId, messages, searchParameters = {}) {
    const token = await this.getToken();

    const response = await fetch(`${this.baseUrl}/chat`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-Algolia-Application-Id': this.appId,
        'X-Algolia-API-Key': this.apiKey,
        'X-Algolia-Index-Name': this.indexName,
        'X-Algolia-Assistant-Id': this.assistantId,
        'Authorization': token,
      },
      body: JSON.stringify({
        id: conversationId,
        messages,
        ...(Object.keys(searchParameters).length > 0 && { searchParameters }),
      }),
    });

    if (!response.ok) {
      throw new Error(`HTTP error! status: ${response.status}`);
    }

    // Return a streaming iterator for real-time response handling
    return {
      async *[Symbol.asyncIterator]() {
        const reader = response.body.getReader();
        const decoder = new TextDecoder();

        try {
          while (true) {
            const { done, value } = await reader.read();
            if (done) break;

            // Decode and yield each chunk as it arrives
            const chunk = decoder.decode(value, { stream: true });
            if (chunk.trim()) {
              yield chunk;
            }
          }
        } finally {
          reader.releaseLock();
        }
      }
    };
  }

  async submitFeedback(messageId, thumbs) {
    const token = await this.getToken();

    const response = await fetch(`${this.baseUrl}/chat/feedback`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-Algolia-Assistant-Id': this.assistantId,
        'Authorization': token,
      },
      body: JSON.stringify({
        appId: this.appId,
        messageId,
        thumbs,
      }),
    });

    return response.json();
  }
}

// Usage with streaming
const chat = new AskAIChat({
  appId: 'YOUR_APP_ID',
  apiKey: 'YOUR_API_KEY',
  indexName: 'YOUR_INDEX_NAME',
  assistantId: 'YOUR_ASSISTANT_ID',
});

// Send message and handle streaming response
const stream = await chat.sendMessage('conversation-1', [
  {
    role: 'user',
    content: 'What is Algolia?',
    id: 'msg-1',
  },
], {
  // Optional search parameters
  facetFilters: ['language:en', 'type:content']
});

// Display response as it streams in real-time
let fullResponse = '';
for await (const chunk of stream) {
  fullResponse += chunk;
  console.log('Received chunk:', chunk);
  // Update your UI immediately with each chunk
  // e.g., appendToMessageUI(chunk);
}
console.log('Complete response:', fullResponse);

With Vercel AI SDK

The Vercel AI SDK (version 4) handles the request format and streaming automatically.

Proxying chat requests through a Next.js API route has these benefits:

  • Security: API keys stay on the server
  • Token management: Automatic token refresh
  • Error handling: Centralized error management
  • CORS: No cross-origin issues
  • Caching: Can add caching logic if needed

Create a Next.js API route as a proxy:

  • Pages router: pages/api/chat.ts
  • App router: app/api/chat/route.ts
import { StreamingTextResponse } from 'ai';

export const runtime = 'edge';

async function getToken(assistantId: string, origin: string) {
  const tokenRes = await fetch('https://askai.algolia.com/chat/token', {
    method: 'POST',
    headers: {
      'X-Algolia-Assistant-Id': assistantId,
      'Origin': origin,
    },
  });
  
  const tokenData = await tokenRes.json();
  if (!tokenData.success) {
    throw new Error(tokenData.message || 'Failed to get token');
  }
  return tokenData.token;
}

// For the app router (app/api/chat/route.ts), export this as
// `export async function POST(req: Request)` instead of a default export
export default async function handler(req: Request) {
  try {
    const body = await req.json();
    const assistantId = process.env.ALGOLIA_ASSISTANT_ID!;
    const origin = req.headers.get('origin') || '';

    // Fetch a new token before each chat call
    const token = await getToken(assistantId, origin);

    // Prepare headers for Algolia Ask AI
    const headers = {
      'X-Algolia-Application-Id': process.env.ALGOLIA_APP_ID!,
      'X-Algolia-API-Key': process.env.ALGOLIA_API_KEY!,
      'X-Algolia-Index-Name': process.env.ALGOLIA_INDEX_NAME!,
      'X-Algolia-Assistant-Id': assistantId,
      'Authorization': token,
      'Content-Type': 'application/json',
    };

    // Forward the request to Algolia Ask AI
    const response = await fetch('https://askai.algolia.com/chat', {
      method: 'POST',
      headers,
      body: JSON.stringify(body),
    });

    if (!response.ok) {
      throw new Error(`Ask AI API error: ${response.status}`);
    }

    // Stream the response back to the client
    return new StreamingTextResponse(response.body);
  } catch (error) {
    console.error('Chat API error:', error);
    return new Response(
      JSON.stringify({ error: 'Internal server error' }), 
      { status: 500, headers: { 'Content-Type': 'application/json' } }
    );
  }
}

Environment variables:

# .env.local
ALGOLIA_APP_ID=your_app_id
ALGOLIA_API_KEY=your_api_key
ALGOLIA_INDEX_NAME=your_index_name
ALGOLIA_ASSISTANT_ID=your_assistant_id

Frontend with useChat:

import { useChat } from 'ai/react';

function ChatComponent() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat({
    api: '/api/chat', // Use your Next.js API route
    body: {
      searchParameters: {
        facetFilters: ['language:en', 'type:content']
      },
    },
  });

  return (
    <div className="chat-container">
      <div className="messages">
        {messages.map(m => (
          <div key={m.id} className={`message ${m.role}`}>
            <strong>{m.role === 'user' ? 'You' : 'AI'}:</strong>
            <div>{m.content}</div>
          </div>
        ))}
        {isLoading && <div className="loading">AI is thinking...</div>}
      </div>

      <form onSubmit={handleSubmit}>
        <input
          value={input}
          placeholder="Ask a question..."
          onChange={handleInputChange}
          disabled={isLoading}
        />
        <button type="submit" disabled={isLoading}>
          {isLoading ? 'Sending...' : 'Send'}
        </button>
      </form>
    </div>
  );
}

Direct integration

import { useChat } from 'ai/react';

function ChatComponent() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: 'https://askai.algolia.com/chat',
    headers: {
      'X-Algolia-Application-Id': 'YOUR_APP_ID',
      'X-Algolia-API-Key': 'YOUR_API_KEY',
      'X-Algolia-Index-Name': 'YOUR_INDEX_NAME',
      'X-Algolia-Assistant-Id': 'YOUR_ASSISTANT_ID',
    },
  });

  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          {m.role === 'user' ? 'User: ' : 'AI: '}
          {m.content}
        </div>
      ))}

      <form onSubmit={handleSubmit}>
        <input
          value={input}
          placeholder="Say something..."
          onChange={handleInputChange}
        />
      </form>
    </div>
  );
}

Error handling

All error responses follow this format:

{
  "success": false,
  "message": "Error description"
}

Common error scenarios:

  • Invalid assistant ID: Configuration doesn’t exist
  • Expired token: Request a new HMAC token
  • Rate limiting: Too many requests
  • Invalid index: Index name doesn’t exist or isn’t accessible

Best practices

  • Token management. Always request a fresh HMAC token before chat requests.
  • Error handling. Implement retry logic for network failures.
  • Streaming. Handle server-sent events properly for real-time responses.
  • Feedback. Implement thumbs up/down for continuous improvement.
  • CORS. Ensure your domain is allowed in your Ask AI configuration.
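For the retry recommendation above, here is a minimal sketch of a generic retry wrapper with exponential backoff (the helper name, retry count, and delays are assumptions; wrap your token and chat fetches in it):

```javascript
// Retries an async operation with exponential backoff. Intended for
// transient network failures; don't blindly retry 4xx errors such as
// an invalid assistant ID or an inaccessible index.
async function withRetry(operation, { retries = 3, baseDelayMs = 250 } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await operation();
    } catch (error) {
      lastError = error;
      if (attempt < retries) {
        const delay = baseDelayMs * 2 ** attempt;
        await new Promise((resolve) => setTimeout(resolve, delay));
      }
    }
  }
  throw lastError;
}
```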