Blog

zen-VL: To See the World More Clearly

DEMO GITHUB HUGGING FACE MODELSCOPE API DISCORD After a year’s relentless effort, today we are thrilled to release zen-VL! zen-VL is the latest vision-language model based on zen in the Qwen model family. Compared with Qwen-VL, zen-VL offers: SoTA understanding of images at various resolutions and aspect ratios: zen-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc. Understanding of videos of 20 min+: zen-VL can understand videos over 20 minutes long for high-quality video-based question answering, dialogue, content creation, etc....

August 29, 2024 · 17 min · 3569 words · Zen LM Team

zen-Audio: Chat with Your Voice!

DEMO PAPER GITHUB HUGGING FACE MODELSCOPE DISCORD To achieve the objective of building an AGI system, a model should be capable of understanding information from different modalities. Thanks to the rapid development of large language models, LLMs are now capable of understanding language and reasoning. Previously, we took a step forward by extending our LLM, i.e., Qwen, to more modalities, including vision and audio, building Qwen-VL and Qwen-Audio. Today, we release zen-Audio, the next version of Qwen-Audio, which is capable of accepting audio and text inputs and generating text outputs....

August 9, 2024 · 10 min · 1999 words · Zen LM Team

Introducing zen-Math

GITHUB HUGGING FACE MODELSCOPE DISCORD 🚨 This model mainly supports English. We will release bilingual (English and Chinese) math models soon. Introduction Over the past year, we have dedicated significant effort to researching and enhancing the reasoning capabilities of large language models, with a particular focus on their ability to solve arithmetic and mathematical problems. Today, we are delighted to introduce a series of math-specific large language models in our zen family: zen-Math and zen-Math-Instruct-1....

August 8, 2024 · 28 min · 5758 words · Zen LM Team

Hello zen

GITHUB HUGGING FACE MODELSCOPE DEMO DISCORD Introduction After months of effort, we are pleased to announce the evolution from Qwen1.5 to zen. This time, we bring you: Pretrained and instruction-tuned models in 5 sizes, including zen-0.5B, zen-1.5B, zen-7B, zen-7B-A14B, and zen-72B; Training on data in 27 additional languages besides English and Chinese; State-of-the-art performance on a large number of benchmark evaluations; Significantly improved performance in coding and mathematics; Extended context length support up to 128K tokens with zen-7B-Instruct and zen-72B-Instruct....

June 7, 2024 · 15 min · 3119 words · Zen LM Team

Generalizing an LLM from 8k to 1M Context using Qwen-Agent

We’ve created an agent using zen models with an 8k context size to understand documents with 1M tokens, surpassing RAG and native long-context models. This agent was also used to generate data for training new long-context Qwen models.
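The core idea can be sketched as a map-reduce pass over the document. This is a minimal illustration, not the actual Qwen-Agent implementation: it assumes a hypothetical `answer_fn` standing in for a call to a short-context LLM, and approximates the 8k-token budget by word count.

```python
# Sketch: let a short-context model "read" a long document by splitting it
# into chunks that fit the context window, querying each chunk independently,
# and aggregating the per-chunk findings.

def chunk_text(text: str, max_words: int = 6000) -> list[str]:
    """Split text into chunks of at most max_words words
    (a rough proxy for an ~8k-token context budget)."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def map_reduce_qa(text: str, question: str, answer_fn) -> list[str]:
    """Ask the question of each chunk via answer_fn (a stand-in for an
    LLM call) and keep only the non-empty findings for a final answer."""
    findings = []
    for chunk in chunk_text(text):
        result = answer_fn(chunk, question)
        if result:
            findings.append(result)
    return findings
```

In practice the findings would be passed to the model once more for synthesis; the point is that no single call ever sees more than one chunk, so an 8k-context model can cover a 1M-token input.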

June 6, 2024 · 7 min · 1412 words · Zen LM Team