Moxin LM
Open Source Foundation Models

From reasoning to speech, our models are designed for the next generation of human-computer interaction.

Visit Moxin LM Hugging Face Page

Learn more about our foundation models and research.

huggingface.co/moxin-org

Open Creation

The Moxin-7B series comprises our truly open, SOTA-performing LLM and VLM. We build, fine-tune, and openly release our own models, ensuring complete reproducibility and transparency.

Moxin-7B-LLM

Our flagship general-purpose model. Fine-tuned for instruction following, coding, and reasoning.

7B
Params
32k
Context
SOTA
Perf
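The flagship model can be pulled straight from the Hugging Face page above. A minimal quickstart sketch, assuming the weights are published as `moxin-org/Moxin-7B-LLM` on the Hub and accept a plain-text instruction prompt (the exact repo id and chat template may differ; check the model card):

```python
# Quickstart sketch for Moxin-7B-LLM via Hugging Face transformers.
# Assumptions: repo id "moxin-org/Moxin-7B-LLM" and the instruction
# layout below are illustrative, not official.

def build_prompt(instruction: str) -> str:
    # Assumed instruction layout; consult the model card for the
    # official chat template.
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

def generate(instruction: str, model_id: str = "moxin-org/Moxin-7B-LLM") -> str:
    """Download the model (cached after the first call) and generate a reply."""
    # Heavy imports kept local so the helper above stays importable on its own.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    inputs = tok(build_prompt(instruction), return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=128)
    return tok.decode(out[0], skip_special_tokens=True)
```

With the 32k context window, long documents can be passed in the prompt directly rather than chunked.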

Moxin-7B-VLM

Vision-Language Model capable of understanding images, charts, and diagrams with high precision.

Efficient Deployment

We specialize in extreme quantization, creating resource-efficient variants of popular models (such as DeepSeek and Kimi) that run anywhere. We unleash the power of reproducible AI 🚀.
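The memory savings behind these variants are easy to estimate. A back-of-the-envelope sketch, using a pure 4-bit layout as the illustration (real formats such as Q4_K_M and Marlin store per-group scales and metadata, so actual files run slightly larger):

```python
# Rough weight-storage math for quantized models.
# Illustrative only: real GGUF/Marlin files add scales and metadata,
# so on-disk sizes are somewhat larger than these lower bounds.

def weight_gib(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight storage in GiB for a model with n_params parameters."""
    return n_params * bits_per_weight / 8 / 2**30

fp16_gib = weight_gib(7e9, 16)  # full-precision 7B baseline
q4_gib = weight_gib(7e9, 4)     # pure 4-bit lower bound

print(f"FP16: {fp16_gib:.1f} GiB, 4-bit: ~{q4_gib:.1f} GiB")
```

Cutting a 7B model from roughly 13 GiB of weights to under 4 GiB is what makes laptop- and edge-class deployment practical.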

NEW

Kimi K2

Highly efficient, quantized version. Optimized for edge deployment.

4-bit Marlin

DeepSeek-R1

Reasoning model optimized for efficient deployment.

Q4_K_M Q8_0

DeepSeek-V3

Optimized for Ominix Edge.

Voice

NEW

Moxin Voice

Human-like Text-to-Speech and Automatic Speech Recognition running entirely locally. No cloud APIs, no network latency.

Build with Moxin LM

Robotics & Automation

Fine-tune for specific robotics commands and industrial applications.

Edge AI Solutions

Run AI directly on devices for privacy-first, low-latency applications.

Research Platform

Ideal for academic research with full reproducibility and transparency.