⚡ Zen LM

Chat Completions

Generate text with the Zen model family using the OpenAI-compatible chat completions endpoint


Endpoint

```
POST https://api.hanzo.ai/v1/chat/completions
```

Request Body

| Parameter | Type | Required | Description |
|---|---|---|---|
| `model` | string | Yes | Model ID (e.g., `zen4`, `zen4-coder`, `zen3-omni`) |
| `messages` | array | Yes | Array of message objects with `role` and `content` |
| `temperature` | float | No | Sampling temperature (0-2). Default: `1.0` |
| `top_p` | float | No | Nucleus sampling. Default: `1.0` |
| `max_tokens` | integer | No | Maximum tokens to generate |
| `stream` | boolean | No | Enable streaming responses. Default: `false` |
| `stop` | string/array | No | Stop sequences |
| `frequency_penalty` | float | No | Frequency penalty (-2.0 to 2.0). Default: `0` |
| `presence_penalty` | float | No | Presence penalty (-2.0 to 2.0). Default: `0` |
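
Taken together, a request body combining these parameters might look like the following sketch (the parameter values are illustrative, not recommendations):

```python
# Illustrative request body combining the optional sampling parameters above.
request_body = {
    "model": "zen4",
    "messages": [
        {"role": "user", "content": "List three database indexing strategies."}
    ],
    "temperature": 0.7,        # lower = more deterministic (range 0-2)
    "top_p": 0.9,              # nucleus sampling cutoff
    "max_tokens": 256,         # hard cap on generated tokens
    "stop": ["\n\n"],          # generation halts at the first stop sequence
    "frequency_penalty": 0.5,  # discourage verbatim repetition
    "presence_penalty": 0.0,   # leave topic steering neutral
}
```

This dict is what the SDK serializes as the POST body; omitted optional fields fall back to the defaults in the table above.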

Message Roles

| Role | Description |
|---|---|
| `system` | System prompt / instructions |
| `user` | User message |
| `assistant` | Model response (for multi-turn) |
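
For multi-turn conversations, each assistant reply is appended to `messages` before the next user turn so the model sees the full history; a minimal sketch:

```python
# Build a multi-turn history: the assistant's prior reply is appended
# before the follow-up question so the model retains context.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is a goroutine?"},
]

# After receiving a reply, append it along with the next user message.
messages.append({"role": "assistant",
                 "content": "A goroutine is a lightweight thread managed by the Go runtime."})
messages.append({"role": "user", "content": "How do I stop one?"})

roles = [m["role"] for m in messages]
# roles is now ["system", "user", "assistant", "user"]
```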

Example Request

```bash
curl https://api.hanzo.ai/v1/chat/completions \
  -H "Authorization: Bearer $HANZO_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "zen4",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Write a haiku about programming."}
    ],
    "temperature": 0.7,
    "max_tokens": 100
  }'
```

Example Response

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1708000000,
  "model": "zen4",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Lines of logic flow\nBugs hide in silent syntax\nCompile, debug, grow"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 18,
    "total_tokens": 43
  }
}
```

Streaming

Set `stream: true` to receive the response as server-sent events:

```python
from hanzoai import Hanzo

client = Hanzo(api_key="hk-your-key")

stream = client.chat.completions.create(
    model="zen4",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True,
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```
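
Each streamed chunk carries only a delta, so reconstructing the full reply means concatenating the non-empty `delta.content` fields. A sketch of that accumulation (the mock chunks below stand in for real stream events, which carry the same shape):

```python
from types import SimpleNamespace

def collect_stream(stream):
    """Concatenate the delta fragments from a chat-completions stream."""
    parts = []
    for chunk in stream:
        content = chunk.choices[0].delta.content
        if content:  # the opening/closing chunks may carry no content
            parts.append(content)
    return "".join(parts)

# Mock chunks standing in for server-sent events from the API.
def make_chunk(text):
    return SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=text))])

mock = [make_chunk(None), make_chunk("Once upon "), make_chunk("a time."), make_chunk(None)]
print(collect_stream(mock))  # -> Once upon a time.
```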

Vision (zen3-vl, zen3-omni)

Vision models accept image URLs or base64 images in the content array:

```python
response = client.chat.completions.create(
    model="zen3-vl",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What's in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}}
        ]
    }],
)
```
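
Local images can be sent as base64 data URLs instead of remote URLs. A minimal sketch of building that content part (the helper name and the stand-in bytes are illustrative; in practice you would read the bytes from a file):

```python
import base64

def image_url_entry(raw_bytes, mime="image/jpeg"):
    """Wrap raw image bytes as a base64 data-URL content part."""
    b64 = base64.b64encode(raw_bytes).decode("ascii")
    return {"type": "image_url", "image_url": {"url": f"data:{mime};base64,{b64}"}}

entry = image_url_entry(b"\xff\xd8\xff")  # JPEG magic bytes as a stand-in
print(entry["image_url"]["url"][:23])     # -> data:image/jpeg;base64,
```

The resulting dict drops into the `content` array exactly like the URL form above.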

Reasoning (zen4-thinking, zen4-ultra)

Reasoning models show their chain-of-thought process:

```python
response = client.chat.completions.create(
    model="zen4-thinking",
    messages=[{"role": "user", "content": "Solve: If 2^x = 1024, what is x?"}],
)
```

Code Generation (zen4-coder, zen4-coder-pro, zen4-coder-flash)

Code models support up to 262K context for full-repository understanding:

```python
response = client.chat.completions.create(
    model="zen4-coder",
    messages=[{"role": "user", "content": "Write a Go HTTP server with graceful shutdown."}],
)
```

Available Models

All Zen models work with this endpoint except `zen3-embedding`, which uses `/v1/embeddings` instead:

| Model | Context | Best For |
|---|---|---|
| `zen4` | 202K | General flagship |
| `zen4-ultra` | 202K | Maximum reasoning |
| `zen4-pro` | 131K | High capability |
| `zen4-max` | 131K | Large documents |
| `zen4-mini` | 40K | Fast and cheap |
| `zen4-thinking` | 131K | Chain-of-thought |
| `zen4-coder` | 262K | Code generation |
| `zen4-coder-pro` | 262K | Premium code |
| `zen4-coder-flash` | 262K | Fast code |
| `zen3-omni` | 202K | Multimodal |
| `zen3-vl` | 131K | Vision-language |
| `zen3-nano` | 40K | Edge |
| `zen3-guard` | 40K | Content safety |
