Model Hub

Discover our family of open-source models, each tailored for a specific purpose, from general-purpose use to advanced reasoning and multimodal understanding.

Moxin-7B-Base

The foundational pre-trained model, ideal for researchers and developers looking to create highly customized, fine-tuned models. Based on an enhanced Mistral architecture, it was trained on over 2 trillion tokens from curated open datasets.
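As a minimal sketch, the Base model can be loaded with the Hugging Face `transformers` library as a starting point for custom fine-tuning. The repository id below is an assumption for illustration; check the Moxin AI organization on Hugging Face for the exact name.

```python
# Minimal sketch: loading a Moxin base checkpoint as a starting point for fine-tuning.
# The repo id "moxin-org/Moxin-7B-Base" is assumed for illustration; verify the exact
# name on the Hugging Face Hub before use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "moxin-org/Moxin-7B-Base"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # load in the checkpoint's native precision
    device_map="auto",    # place layers on available GPUs/CPU automatically
)

# Quick sanity check: continue a prompt with the raw (non-instruct) model.
inputs = tokenizer("Open-source language models are", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```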

Moxin-7B-Instruct

A helpful and harmless AI assistant fine-tuned for instruction following and dialogue. This model was created by applying Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO) to the Base model using the open Tülu 3 framework.
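A hedged sketch of dialogue with the Instruct model using the standard `transformers` chat-template API; the repository id is again an assumption.

```python
# Minimal sketch: chatting with the Instruct model via the tokenizer's chat template.
# "moxin-org/Moxin-7B-Instruct" is an assumed repo id; check the Hub for the exact name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "moxin-org/Moxin-7B-Instruct"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "user", "content": "Explain Direct Preference Optimization in two sentences."},
]

# apply_chat_template formats the conversation the way the model was fine-tuned to expect.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```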

Moxin-7B-Reasoning

A specialist model with advanced reasoning capabilities for complex tasks like math, logic, and coding. It is enhanced with Group Relative Policy Optimization (GRPO), demonstrating that reinforcement learning is highly effective even for 7B-scale models.
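To give a sense of the training signal, here is a small, self-contained sketch of the group-relative advantage at the core of GRPO: several responses are sampled per prompt, and each response's reward is normalized against the group's mean and standard deviation, removing the need for a separate value network. This is an illustrative simplification, not the Moxin training code.

```python
# Illustrative sketch of GRPO's group-relative advantage (not the Moxin training code).
# For each prompt, a group of responses is sampled and scored; each reward is then
# normalized against the group's own statistics, replacing a learned value baseline.
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """rewards: shape (num_prompts, group_size) -> advantages of the same shape."""
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Example: 2 prompts, 4 sampled responses each, scored by a (hypothetical) verifier.
rewards = torch.tensor([[1.0, 0.0, 0.0, 1.0],
                        [0.0, 0.0, 0.0, 1.0]])
print(group_relative_advantages(rewards))
# Responses scoring above their group's mean get positive advantages and are reinforced.
```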

Moxin-7B-VLM

A powerful vision-language model (VLM) for sophisticated multimodal understanding. It pairs Moxin-7B-Base as the LLM backbone with DINOv2 and SigLIP vision encoders, and it outperforms other models built on similar backbones on key benchmarks.
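The following is a conceptual sketch of the layout described above, not the Moxin VLM code: patch features from the two vision encoders are concatenated and projected into the language model's embedding space before being consumed by the LLM backbone. The feature dimensions are illustrative defaults.

```python
# Conceptual sketch of the described dual-encoder VLM layout (illustrative only).
# Patch features from two vision encoders (e.g. DINOv2 and SigLIP) are concatenated
# along the feature dimension, projected into the LLM's embedding space, and prepended
# to the text embeddings consumed by the Moxin-7B-Base backbone.
import torch
import torch.nn as nn

class DualEncoderProjector(nn.Module):
    def __init__(self, dino_dim=1024, siglip_dim=1152, llm_dim=4096):  # assumed dims
        super().__init__()
        # Two-layer MLP projector, a common choice for mapping vision features to LLM space.
        self.proj = nn.Sequential(
            nn.Linear(dino_dim + siglip_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, dino_feats: torch.Tensor, siglip_feats: torch.Tensor) -> torch.Tensor:
        # dino_feats: (batch, patches, dino_dim); siglip_feats: (batch, patches, siglip_dim)
        fused = torch.cat([dino_feats, siglip_feats], dim=-1)
        return self.proj(fused)  # (batch, patches, llm_dim) visual "tokens" for the LLM

# Example with dummy features for a 16x16 patch grid.
projector = DualEncoderProjector()
visual_tokens = projector(torch.randn(1, 256, 1024), torch.randn(1, 256, 1152))
print(visual_tokens.shape)  # torch.Size([1, 256, 4096])
```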

Ready to Build?

All Moxin AI models, code, and data are available on Hugging Face and GitHub. Start experimenting and innovating today.