Moxin LM
Open Source Foundation Models
From reasoning to speech, our models are designed for the next generation of human-computer interaction.
Visit Moxin LM Hugging Face Page
Learn more about our foundation models and research.
huggingface.co/moxin-org
Open Creation
The Moxin-7B series comprises our truly open, SOTA-performing LLM and VLM. We build, fine-tune, and openly release our own models, ensuring complete reproducibility and transparency.
Moxin-7B-LLM
Our flagship general-purpose model. Fine-tuned for instruction following, coding, and reasoning.
Moxin-7B-VLM
Vision-Language Model capable of understanding images, charts, and diagrams with high precision.
Efficient Deployment
We specialize in extreme quantization, creating resource-efficient variants of popular models (like DeepSeek and Kimi) to run anywhere. We unleash the power of reproducible AI 🚀.
Kimi K2
Highly efficient, quantized version. Optimized for edge deployment.
DeepSeek-V3
Optimized for Ominix Edge.
Voice
Moxin Voice
Human-like Text-to-Speech and Automatic Speech Recognition running entirely locally. No cloud APIs, no network latency.
Build with Moxin LM
Robotics & Automation
Fine-tune for specific robotics commands and industrial applications.
Edge AI Solutions
Run AI directly on devices for privacy-first, low-latency applications.
Research Platform
Ideal for academic research with full reproducibility and transparency.