Open Foundation Models for Agentic AI

30+ models from 0.6B to 1T parameters across language, vision, audio, video, and 3D

Zen LM provides production-ready AI models for agentic coding, multimodal understanding, and creative generation. Our flagship Zen Coder models are trained on 8.47 billion tokens of real-world programming sessions, delivering state-of-the-art performance on agentic programming tasks.

Complete AI Model Ecosystem

🧠

Language Models

6 core models from 0.6B to 32B parameters: zen-nano for edge deployment, zen-eco for efficiency, zen-omni for multimodal, zen-next for frontier reasoning.

💻

Zen Coder

5 coding models from 4B to 1T parameters, trained on 8.47B tokens of agentic programming data. State-of-the-art on tool use and multi-step coding.

👁️

Vision & Multimodal

zen-vl for vision-language, zen-designer for visual understanding, zen-artist for image generation, zen-omni for unified multimodal.

🎬

Video & 3D

zen-director for video generation, zen-video for high-quality synthesis, zen-3d for 3D assets, zen-world for world simulation.

🎵

Audio

zen-musician for music generation, zen-foley for sound effects, zen-scribe for transcription, zen-dub for voice dubbing.

🛡️

Specialized

zen-guard for safety, zen-embedding for vectors, zen-reranker for search, zen-translator for translation, zen-agent for tool use.

Zen Agentic Dataset

8.47 Billion Tokens of Real-World Agentic Programming

8.47B tokens: total training tokens across all data sources

3.35M samples: training samples with conversation context

1,452 repositories: open-source and private codebases

15 years of history: development spanning 2010-2025
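A quick back-of-the-envelope check on what these totals imply per sample and per repository (derived arithmetic only, not official dataset statistics):

```python
# Derived averages from the dataset totals above:
# 8.47B tokens, 3.35M samples, 1,452 repositories.
total_tokens = 8.47e9
total_samples = 3.35e6
total_repos = 1_452

avg_tokens_per_sample = total_tokens / total_samples  # ≈ 2,528 tokens
avg_tokens_per_repo = total_tokens / total_repos      # ≈ 5.83M tokens

print(f"{avg_tokens_per_sample:,.0f} tokens per sample on average")
print(f"{avg_tokens_per_repo / 1e6:.2f}M tokens per repository on average")
```

The roughly 2.5K-token average per sample is consistent with multi-turn agentic coding sessions rather than short single-completion examples.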

Available for research and commercial licensing.

Request Access

View on HuggingFace

Get Started

HuggingFace

Access all 30+ models on the HuggingFace Hub

Visit HuggingFace

GitHub

Training code, documentation, and source

View on GitHub

zen-trainer

Fine-tune models on your own data

pip install zen-trainer

Research

Technical papers and whitepapers

Read Papers
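Pulling one of the models from the HuggingFace Hub might look like the sketch below. The org name `zenlm` and the default model id `zen-nano` are illustrative assumptions; check the Hub page for the actual repo ids. The `transformers` import is deferred so the repo-id helper works even without the library installed.

```python
def zen_repo_id(model: str, org: str = "zenlm") -> str:
    """Build a Hub repo id like 'zenlm/zen-nano' (org name is an assumption)."""
    return f"{org}/{model}"

def load_zen_model(model: str = "zen-nano"):
    """Load tokenizer and weights from the Hub (requires `pip install transformers`)."""
    # Deferred import: zen_repo_id above stays usable without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo = zen_repo_id(model)
    tokenizer = AutoTokenizer.from_pretrained(repo)
    lm = AutoModelForCausalLM.from_pretrained(repo)
    return tokenizer, lm
```

`zen_repo_id` is split out so the same naming convention can be reused for the vision, audio, and embedding models listed above.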