Moxin LM
Open Source Foundation Models

From reasoning to speech, our models are designed for the next generation of human-computer interaction.

Visit Moxin LM Hugging Face Page

Learn more about our foundation models and research.

huggingface.co/moxin-org

Open Creation

The Moxin-7B series comprises our fully open, state-of-the-art LLM and VLM. We build, fine-tune, and openly release our own models, ensuring complete reproducibility and transparency.

Moxin-7B-LLM

Our flagship general-purpose model. Fine-tuned for instruction following, coding, and reasoning.

7B Params · 32k Context · SOTA Performance

Moxin-7B-VLM

Vision-Language Model capable of understanding images, charts, and diagrams with high precision.
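Because the Moxin-7B models are released openly on Hugging Face, they load like any other Transformers checkpoint. A minimal sketch follows; the repo id is an assumption, so check huggingface.co/moxin-org for the exact model names before running.

```python
# Sketch: loading Moxin-7B-LLM with Hugging Face transformers.
# The repo id below is an assumption; verify it at huggingface.co/moxin-org.

MODEL_ID = "moxin-org/moxin-llm-7b"  # assumed repo id

def load_model(model_id: str = MODEL_ID):
    """Download (on first use) and return (tokenizer, model)."""
    # Imported inside the function so the sketch can be read and
    # inspected without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    return tokenizer, model

if __name__ == "__main__":
    tokenizer, model = load_model()
    prompt = "Explain reproducible AI in one sentence."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The download runs only under the `__main__` guard, so the loader can be imported and reused in a larger application without side effects.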

Efficient Deployment

We specialize in extreme quantization, creating resource-efficient variants of popular models (such as DeepSeek and Kimi) that run anywhere. We unleash the power of reproducible AI 🚀.

Kimi K2 Thinking

Optimized GGUF version of the Kimi K2 Thinking model.

MiniMax M2

Efficient GGUF quantization for MiniMax M2.

Qwen3 Next 80B

GGUF version of Qwen3-Next-80B-A3B-Instruct.

Qwen3 235B

Massive 235B-parameter model quantized for deployment.

DeepSeek V3

The latest DeepSeek V3 model, quantized for efficient deployment.

GLM 4.6

General Language Model 4.6 GGUF quantization.

DeepSeek R1

Reasoning model optimized for efficient deployment.
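Why quantization makes these large models practical comes down to simple arithmetic: weight memory scales with parameter count times bits per weight. A back-of-envelope sketch, using approximate average bits-per-weight figures for common GGUF schemes (the exact values vary by llama.cpp version and tensor mix):

```python
# Back-of-envelope weight memory for GGUF-quantized models.
# Bits-per-weight values are approximate averages, not exact figures.
GGUF_BITS = {"F16": 16.0, "Q8_0": 8.5, "Q4_K_M": 4.8, "Q2_K": 2.6}

def weight_gib(params_billions: float, scheme: str) -> float:
    """Approximate weight size in GiB for a model of the given size."""
    bits = GGUF_BITS[scheme]
    return params_billions * 1e9 * bits / 8 / 2**30

# A 7B model at ~4.8 bits/weight fits in well under 8 GiB:
print(round(weight_gib(7, "Q4_K_M"), 1))    # → 3.9
# A 235B model still needs serious hardware even at 4-bit:
print(round(weight_gib(235, "Q4_K_M"), 1))  # → 131.3
```

Note this counts weights only; KV cache and activations add to the total, growing with context length.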

Voice

NEW

Moxin Voice

Human-like Text-to-Speech and Automatic Speech Recognition running entirely locally. No cloud APIs, no network latency.

Build with Moxin LM

Robotics & Automation

Fine-tune for specific robotics commands and industrial applications.

Edge AI Solutions

Run AI directly on devices for privacy-first, low-latency applications.

Research Platform

Ideal for academic research with full reproducibility and transparency.