Moxin LLM/VLM
Truly Open, Fully Reproducible

Achieving SOTA performance on zero-shot tasks, Moxin is designed for efficiency, especially on edge devices.

Moxin AI Core Philosophy: Truly Open, Fully Reproducible, High-Performance.

Key Features

Why Choose Moxin AI?

A transparent and powerful language model for developers and researchers.

Truly Open & Reproducible

Complete access to model weights, training data, and scripts for full transparency and customization.

SOTA Performance

State-of-the-art results on zero-shot benchmarks, comparable to leading models such as DeepSeek.

Edge AI Optimized

Designed for efficient performance on edge devices with OminiX integration.

Advanced Architecture

GRPO-enhanced learning and tokenizer MoE for superior performance and efficiency.

Applications

Build with Moxin AI

Flexible AI for innovative applications.

Robotics & Automation

Fine-tune for specific robotics commands and industrial applications.

Edge AI Solutions

Run AI directly on devices for privacy-first, low-latency applications.

Research Platform

Ideal for academic research with full reproducibility and transparency.

Moxin Ecosystem

The Moxin Personal AI Stack

A Comprehensive AI Ecosystem

Moxin AI, together with MoFa, Moly, and OminiX, forms the Moxin Personal AI Stack, which aims to build a strong contributor community.

Moxin AI: The Core Model

A high-performance, truly open, and fully reproducible language model at the heart of the ecosystem.

MoFa: Intelligent Agent Framework

Leverages the capabilities of Moxin AI to build intelligent agents.

Moly: Rust LLM Client

A Rust-based LLM client and related developer tools for interacting with the Moxin ecosystem.

OminiX: Edge Inference & Fine-tuning Engine

Enables Moxin AI to run efficiently on edge devices with optimized inference performance.

Get Started with Moxin AI

Step 1: Download Models

Get Moxin AI models from Hugging Face: Base, Chat, Instruct, and Reasoning variants.
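
To fetch a checkpoint programmatically, here is a minimal sketch using the huggingface_hub library; the exact repo ID is an assumption, so check the moxin-org page on Hugging Face for the current names of the Base, Chat, Instruct, and Reasoning variants.

    # Minimal sketch: download a Moxin checkpoint from the Hugging Face Hub.
    # The repo ID is a hypothetical example; see https://huggingface.co/moxin-org.
    from huggingface_hub import snapshot_download

    local_dir = snapshot_download(
        repo_id="moxin-org/moxin-chat-7b",  # hypothetical variant name
        local_dir="./moxin-chat-7b",
    )
    print(f"Model files downloaded to {local_dir}")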

Step 2: Deploy with OminiX

Use the OminiX engine for optimal performance on edge devices.
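
The OminiX API itself is not covered on this page; as a quick functional check before wiring a model into OminiX, the hedged sketch below runs a downloaded checkpoint with the standard Hugging Face transformers library (assuming the checkpoint is transformers-compatible; the repo ID is hypothetical).

    # Hedged sketch: quick local inference with transformers, not the OminiX engine.
    # Assumes a transformers-compatible checkpoint; the repo ID is hypothetical.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "moxin-org/moxin-chat-7b"  # hypothetical variant name
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    prompt = "List three benefits of running an LLM on-device."
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))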

Ready to Build!

Start creating with truly open and reproducible AI.


FAQs

Frequently Asked Questions

Common questions about Moxin AI's capabilities and usage.

What makes Moxin AI "truly open"?

Complete access to model weights, training data composition, and scripts for full reproducibility.

How do I run Moxin AI on edge devices?

Use the OminiX inference engine, optimized for various hardware including NPUs.

What applications can I build?

Robotics, translation, on-device assistants, and local knowledge base applications.

Where can I find models and resources?

Models are available on Hugging Face (moxin-org); code and docs are on GitHub (moxin-org/Moxin-LLM).
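
To see which checkpoints are currently published under the moxin-org namespace, you can query the Hub directly; a minimal sketch using huggingface_hub:

    # Minimal sketch: list all models published by moxin-org on the Hugging Face Hub.
    from huggingface_hub import list_models

    for model in list_models(author="moxin-org"):
        print(model.id)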

2K+ Hugging Face Downloads
124+ GitHub Stars
4+ Active Models
Growing Community Members

Join the Moxin Community

Become part of a movement towards truly open, reproducible, and high-performance AI. Start building and contributing today!