Moxin LLM/VLM
Truly Open, Fully Reproducible

Moxin achieves SOTA performance on zero-shot tasks and is designed for efficiency, especially on edge devices.

Moxin AI Core Philosophy: Truly Open, Fully Reproducible, High-Performance.

Key Features

Why Choose Moxin AI?

Discover the advantages of Moxin AI, designed for developers and researchers seeking a transparent and powerful language model.

Thorough Openness & Reproducibility

We provide model weights, training data details, and scripts, ensuring transparency and enabling deep research and customization.

SOTA-level Performance

Achieves state-of-the-art results on multiple zero-shot benchmarks, comparable to leading models such as DeepSeek.

Powerful Engine for Edge AI

Moxin AI aims to be a leader in open-source edge language models, comparable to Phi and Gemma, with superior reproducibility.

Advanced Technology Stack

Utilizes GRPO (Group Relative Policy Optimization) reinforcement learning and a mixture-of-experts (MoE) architecture for improved performance and efficiency.

Seamless OminiX Integration

Works with the self-developed OminiX inference and fine-tuning engine for optimal performance on various edge hardware, including domestic NPUs.

Data Transparency & Customizability

Access to model weights, training data composition, and scripts allows for efficient fine-tuning for specific applications like robotics or translation.

Application Potential

Unlocking New Possibilities with Moxin AI

Moxin AI's flexibility opens doors to a wide range of innovative uses.

Robotics Command Fine-tuning

Leverage data transparency to efficiently fine-tune Moxin AI for specific robotics instructions and applications.

Professional Translation

Customize the model with specialized terminology for high-quality, domain-specific translation tasks.
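In practice, this kind of domain adaptation can be a lightweight LoRA pass over paired terminology data. The sketch below is illustrative only: the repo id `moxin-org/moxin-llm-7b`, the `[INST]`-style prompt template, and the use of the `peft`/`transformers`/`datasets` stack are all assumptions to verify against the project's own fine-tuning scripts.

```python
# Sketch: LoRA fine-tuning for domain-specific translation.
# ASSUMPTIONS (not confirmed by the Moxin docs): repo id
# "moxin-org/moxin-llm-7b", an [INST]-style template, and the
# peft + transformers + datasets stack.

def to_training_example(source: str, target: str) -> dict:
    """Format one translation pair as a single training string."""
    return {"text": f"[INST] Translate to English: {source} [/INST] {target}"}

def finetune(pairs, model_id="moxin-org/moxin-llm-7b", out_dir="moxin-translation-lora"):
    # Heavy imports kept local so the helper above stays importable anywhere.
    from datasets import Dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    tok = AutoTokenizer.from_pretrained(model_id)
    tok.pad_token = tok.eos_token
    # Wrap the base model with low-rank adapters on the attention projections.
    model = get_peft_model(
        AutoModelForCausalLM.from_pretrained(model_id, device_map="auto"),
        LoraConfig(r=16, lora_alpha=32,
                   target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"),
    )
    ds = Dataset.from_list([to_training_example(s, t) for s, t in pairs])
    ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=512),
                remove_columns=["text"])
    Trainer(
        model=model,
        args=TrainingArguments(output_dir=out_dir, num_train_epochs=1,
                               per_device_train_batch_size=2),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
    ).train()
    model.save_pretrained(out_dir)  # saves only the small adapter weights
```

Only the adapter weights are trained and saved, so a glossary-sized dataset and a single GPU are often enough to specialize the model.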

Edge AI Innovations

Power AI applications directly on devices like mobile phones and personal computers, ensuring privacy and low latency.

Research and Development

The open and reproducible nature of Moxin AI makes it an ideal platform for academic research and exploring new frontiers in AI.

Moxin Ecosystem

The Moxin Personal AI Stack

A Comprehensive AI Ecosystem

Moxin AI, together with MoFa, Moly, and OminiX, forms the Moxin Personal AI Stack, aiming to build a strong contributor community.

Moxin AI: The Core Model

A high-performance, truly open, and fully reproducible language model at the heart of the ecosystem.

MoFa: Intelligent Agent Framework

Leverages the capabilities of Moxin AI to build intelligent agents.

Moly: Rust LLM Client

Developer tools, including a Rust LLM client, for interacting with the Moxin ecosystem.

OminiX: Edge Inference & Fine-tuning Engine

Ensures Moxin AI runs efficiently on edge devices with optimized performance.

Get Started with Moxin AI.

Step 1: Explore Models

Visit our Hugging Face page to discover Moxin AI models like Moxin-7B-Base, Chat, Instruct, and Reasoning.
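As a quick sanity check, a chat variant can be loaded with Hugging Face `transformers`. This is a hedged sketch: the repo id `moxin-org/moxin-chat-7b` and the Mistral-style `[INST]` prompt template are assumptions, so confirm both on the moxin-org Hugging Face page before use.

```python
# Sketch: run a Moxin chat model with Hugging Face transformers.
# ASSUMPTIONS: repo id "moxin-org/moxin-chat-7b" and a Mistral-style
# [INST] prompt template -- verify both on the moxin-org page.

def format_prompt(question: str) -> str:
    """Wrap a user question in the assumed instruction template."""
    return f"[INST] {question} [/INST]"

def chat(question: str, model_id: str = "moxin-org/moxin-chat-7b") -> str:
    # Heavy imports kept local so format_prompt stays importable anywhere.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    inputs = tok(format_prompt(question), return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=128, do_sample=False)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tok.decode(out[0][inputs["input_ids"].shape[1]:],
                      skip_special_tokens=True)

# Usage (downloads ~7B weights on first call):
#   print(chat("What is Moxin AI?"))
```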

Step 2: Access Resources

Check out our GitHub repository for technical reports, scripts, and guides on how to use and fine-tune Moxin AI.

Step 3: Deploy with OminiX

Utilize the OminiX engine for optimal performance when deploying Moxin AI on your edge devices.

Ready to Innovate!

Start building your AI applications with a truly open and reproducible LLM.


Moxin AI FAQs

Frequently Asked Questions about Moxin AI

Find answers to common questions about Moxin AI's capabilities, openness, and how to get involved.

What makes Moxin AI "truly open"?

Moxin AI provides not only model weights but also detailed training data composition and scripts, allowing for complete reproducibility and deep customization.

What are the key technical advantages of Moxin AI?

Moxin AI utilizes GRPO (Group Relative Policy Optimization) reinforcement learning and a mixture-of-experts (MoE) architecture, contributing to its SOTA-level performance and efficiency, especially on edge devices.
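The core idea behind GRPO is to score each sampled response relative to its own group of samples, removing the need for a separate value (critic) model. A few illustrative lines (not Moxin's actual training code) capture the advantage computation:

```python
# Illustrative sketch of GRPO-style group-relative advantages.
# Not Moxin's training code; whitening by group mean/std follows the
# general GRPO formulation.
import statistics

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """Normalize each reward by the group's mean and standard deviation,
    so above-average answers get positive advantages and below-average
    answers get negative ones."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against a zero-spread group
    return [(r - mean) / std for r in rewards]

# Example: four sampled answers to one prompt, scored by a reward model.
advs = group_relative_advantages([1.0, 0.5, 0.0, 0.5])
```

Advantages computed this way sum to zero across the group, which is what lets GRPO dispense with a learned baseline.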

How can I run Moxin AI on edge devices?

Moxin AI is designed for edge AI and can be efficiently run using the OminiX inference and fine-tuning engine, which is optimized for various hardware including NPUs.

What kind of applications can I build with Moxin AI?

Its customizability makes it suitable for a range of applications, including robotics, professional translation, on-device intelligent assistants, and local knowledge base applications.

How can I contribute to the Moxin AI project?

We welcome contributions! You can contribute to model optimization, develop new use cases, improve documentation, or help build the OminiX engine. Join our GitHub and Discord communities to learn more.

Where can I find the Moxin AI models and resources?

Models are available on Hugging Face (moxin-org), and you can find code, technical reports, and more on our GitHub repository (moxin-org/Moxin-LLM).

2K+ Hugging Face Downloads
124+ GitHub Stars
4+ Active Models
Growing Community Members

Join the Moxin Community

Become part of a movement towards truly open, reproducible, and high-performance AI. Start building and contributing today!