
Request and response library

Create chat completion

POST
/v1/chat/completions

Request

Creates a model response for the given chat conversation.
  • model (string, required): ID of the model to use.
  • messages (object[], required): A list of messages comprising the conversation so far.
    • role (string, required): The role of the author of this message.
    • content (string, required): The contents of the message.
  • temperature (double, required): What sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic.
  • stream (boolean, required): If set, partial message deltas will be sent. Tokens are sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message (see the streaming sketch after this list).
  • max_tokens (int64, required): The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via the API.
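The following is a minimal Python sketch of consuming the stream described by the stream parameter, assuming the requests library is available. The endpoint, headers, and body mirror the curl example below; the layout of each streamed chunk (choices[0].delta.content) is an assumption based on OpenAI-compatible streaming and may differ for this API.

import json
import requests

resp = requests.post(
    "https://api.asi1.ai/v1/chat/completions",
    headers={
        "Authorization": "bearer <your token here>",
        "Content-Type": "application/json",
    },
    json={
        "model": "asi1-mini",
        "messages": [{"role": "user", "content": "Hello"}],
        "temperature": 0.7,
        "stream": True,
        "max_tokens": 1024,
    },
    stream=True,
)

for raw in resp.iter_lines():
    if not raw:
        continue  # skip blank SSE separator lines
    line = raw.decode("utf-8")
    if not line.startswith("data: "):
        continue
    payload = line[len("data: "):]
    if payload == "[DONE]":
        break  # stream terminator, as documented for the stream parameter
    chunk = json.loads(payload)
    # Assumed delta layout; adjust if the actual chunk schema differs.
    delta = chunk.get("choices", [{}])[0].get("delta", {})
    print(delta.get("content", ""), end="", flush=True)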

Responses

Returns a JSON body with identification fields (model, id, conversation_id), a thought array explaining the model's reasoning, choices containing the generated message, and usage tracking token consumption.
Curl
curl -X POST \
  -H "Authorization: bearer <your token here>" -H "Content-Type: application/json" \
  "https://api.asi1.ai/v1/chat/completions" \
  -d '{
  "model": "asi1-mini",
  "messages": [
    {
      "role": "user",
      "content": "Hello"
    }
  ],
  "temperature": 0.7,
  "stream": false,
  "max_tokens": 1024
}'
HTTP 200
{
  "model": "asi1-mini",
  "id": "id_kKg27rnGyfH4NknTL",
  "executable_data": [],
  "conversation_id": null,
  "thought": [
    "The user has initiated a simple greeting. This is a standard conversational start, and my response should be polite and professional, introducing myself as ASI1-Mini while highlighting my unique capabilities as an agentic, decentralised-focused model."
  ],
  "tool_thought": [],
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "message": {
        "role": "assistant",
        "content": "Hello! I'm ASI1-Mini, an advanced agentic and decentralised-focused assistant powered by fetch.ai Inc. I'm designed to support complex workflows with executable expertise. How can I assist you today?"
      }
    }
  ],
  "usage": {
    "prompt_tokens": 39,
    "completion_tokens": 100,
    "total_tokens": 139
  }
}
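A minimal Python sketch of the same non-streaming call, assuming the requests library is available; it reads the fields shown in the example response above, where the assistant text lives in choices[0].message.content and token counts in usage.

import requests

resp = requests.post(
    "https://api.asi1.ai/v1/chat/completions",
    headers={
        "Authorization": "bearer <your token here>",
        "Content-Type": "application/json",
    },
    json={
        "model": "asi1-mini",
        "messages": [{"role": "user", "content": "Hello"}],
        "temperature": 0.7,
        "stream": False,
        "max_tokens": 1024,
    },
)
resp.raise_for_status()
body = resp.json()

# Generated message and token accounting, per the example response above.
print(body["choices"][0]["message"]["content"])
print("total tokens:", body["usage"]["total_tokens"])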