POST /chat
curl --request POST \
  --url 'https://3428-49-207-209-69.ngrok-free.app/chat?repo_name=github.com/calcom/cal.com' \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
  "messages": [
    {
      "role": "user",
      "content": "<string>"
    }
  ],
  "model": "gpt-4o",
  "stream": false,
  "temperature": 1
}'
{
  "id": "<string>",
  "object": "chat.completion",
  "created": 123,
  "model": "<string>",
  "system_fingerprint": "<string>",
  "service_tier": "<string>",
  "choices": [
    {
      "index": 123,
      "message": {
        "role": "assistant",
        "content": "<string>",
        "function_call": {},
        "tool_calls": [
          {}
        ],
        "refusal": "<string>"
      },
      "finish_reason": "<string>",
      "logprobs": {}
    }
  ],
  "usage": {
    "prompt_tokens": 123,
    "completion_tokens": 123,
    "total_tokens": 123,
    "completion_token_details": {
      "reasoning_tokens": 123,
      "audio_tokens": 123,
      "accepted_prediction_tokens": 123,
      "rejected_prediction_tokens": 123
    },
    "prompt_token_details": {
      "cached_tokens": 123,
      "audio_tokens": 123
    }
  }
}
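The curl request above can be reproduced in Python with only the standard library. This is a minimal sketch, assuming the ngrok host shown in the example and placeholder values for the token and `repo_name`; `build_request` assembles the call (including the required `repo_name` query parameter) and `answer_and_usage` pulls the assistant reply and token count out of a 200 response:

```python
import json
import urllib.parse
import urllib.request

# Example host from the docs; substitute your own deployment URL.
BASE_URL = "https://3428-49-207-209-69.ngrok-free.app"

def build_request(token: str, repo_name: str, content: str) -> urllib.request.Request:
    """Build a POST /chat request with the required query parameter and JSON body."""
    query = urllib.parse.urlencode({"repo_name": repo_name})
    body = {
        "messages": [{"role": "user", "content": content}],
        "model": "gpt-4o",   # the only model supported in the free tier
        "stream": False,
        "temperature": 1,
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat?{query}",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def answer_and_usage(response_json: dict) -> tuple:
    """Extract the assistant reply and total token usage from a 200 response."""
    answer = response_json["choices"][0]["message"]["content"]
    return answer, response_json["usage"]["total_tokens"]
```

To actually send the request: `json.load(urllib.request.urlopen(build_request(token, repo, question)))`.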

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Query Parameters

repo_name
string
required

Name of the repository to query (e.g., github.com/calcom/cal.com).

Body

application/json
messages
object[]
required

A list of messages comprising the conversation so far.

model
enum<string>
required

The model to use for chat completion. Only gpt-4o is supported in the free tier.

Available options:
gpt-4o
stream
boolean
default:false

Whether to stream the response.
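The docs do not specify the wire format when `stream` is `true`. If it follows the common OpenAI-style server-sent-events convention (an assumption, not confirmed by this reference), each line is `data: {...}` carrying a chunk with a `delta`, terminated by `data: [DONE]`. A parsing sketch under that assumption:

```python
import json

def iter_stream_content(lines):
    """Yield content deltas from SSE-style lines of the form 'data: {...}'.

    Assumes an OpenAI-style streaming format; this API's docs do not pin
    the format down, so treat this as a sketch.
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank lines and comments between events
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            return
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        if "content" in delta:
            yield delta["content"]
```

In practice you would feed it the response body line by line and concatenate the yielded fragments into the full reply.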

temperature
number

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

Required range: 0 <= x <= 2
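Since the API rejects temperatures outside the documented range, a client can validate before sending; a trivial guard (the helper name is our own, not part of the API):

```python
def validate_temperature(x: float) -> float:
    """Reject sampling temperatures outside the documented [0, 2] range."""
    if not 0 <= x <= 2:
        raise ValueError(f"temperature must be in [0, 2], got {x}")
    return x
```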

Response

200
application/json
Successful chat response
id
string

A unique identifier for the chat completion.

object
enum<string>

The object type, which is always chat.completion.

Available options:
chat.completion
created
integer

The Unix timestamp (in seconds) of when the chat completion was created.

model
string

The model used for the chat completion.

system_fingerprint
string

This fingerprint represents the backend configuration that the model runs with.

service_tier
string | null

The service tier used for the chat completion.

choices
object[]

A list of chat completion choices.

usage
object

Usage statistics for the completion request.
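The `usage` object nests detail breakdowns under `prompt_token_details` and `completion_token_details` (field names as shown in the example response above). A small helper for logging it, as a sketch:

```python
def summarize_usage(usage: dict) -> str:
    """Render a chat response's usage object as a one-line summary.

    Field names follow the example response in these docs; the detail
    objects are treated as optional.
    """
    prompt = usage["prompt_tokens"]
    completion = usage["completion_tokens"]
    total = usage["total_tokens"]
    cached = usage.get("prompt_token_details", {}).get("cached_tokens", 0)
    return f"{prompt} prompt (+{cached} cached) + {completion} completion = {total} total"
```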