Moxin Studio
Native Desktop AI App in Pure Rust

Run local LLMs, generate images, clone voices, and transcribe speech — all on your own hardware, without a Python runtime. Built with pure Rust and Makepad.

Explore on GitHub

Browse the source, contribute, and build with Moxin Studio.

github.com/moxin-org/Moxin-Studio

6 Modalities (LLM, VLM, ASR, TTS, Image, Video)

20+ Supported Local Models

0 Python Dependencies

Runs on macOS 14.0+ (Sonoma) • Apple Silicon (M1–M5)

See It in Action


Image Generation

Generate images locally with FLUX.2-klein


Vision Language Model

Describe and understand images with Moxin-7B VLM


Model Hub

Discover, download, and manage on-device models


Session History

Persistent, searchable conversation history

Features

Local AI Inference

Run LLMs, vision models, image generation, speech recognition, and TTS directly on your Mac via OminiX-API.

Model Hub

Discover, download, and run models directly from the app. One-click download with automatic backend setup.

Voice I/O

Speech-to-text and text-to-speech with voice cloning. Powered by Qwen3-ASR, GPT-SoVITS, and more.

Image & Video Generation

Generate images with FLUX.2-klein and Z-Image. Edit images with Qwen-Image-Edit. Create video with Wan2.2.

MCP Support

Model Context Protocol for tool use. Connect your AI to external tools and data sources.

Chat History

Persistent, searchable conversation history. Pick up where you left off across sessions.
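MCP tool servers are usually registered with a small JSON config. A hypothetical sketch of such a registration, using the filesystem server from the MCP reference implementations; the schema shown is the common convention among MCP clients, and whether Moxin Studio reads this exact format (or this file name) is an assumption:

```shell
# Write a hypothetical MCP server registration. The file name and
# schema are assumptions; check Moxin Studio's MCP settings for the
# format it actually expects.
cat > mcp-config.json <<'EOF'
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
    }
  }
}
EOF
cat mcp-config.json
```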

Supported Local Models

Every model has a dedicated, optimized implementation — not a generic wrapper. Pure Rust models run directly via OminiX-MLX with Metal GPU acceleration.

LLM

Qwen3 (0.6B–8B) • Qwen3.5-27B • GLM-4 / 4.5 / 4.7 • Mistral / Nemo • Mixtral • MiniCPM-SALA

Vision

Qwen3-VL • Moxin-7B • DeepSeek-OCR-2

Speech Recognition

Qwen3-ASR (30+ langs) • Paraformer • FunASR-Nano • SenseVoice + Qwen3

Text to Speech

Qwen3-TTS • GPT-SoVITS • Step-Audio 2

Image Generation

FLUX.2-klein • Z-Image-Turbo • Qwen-Image • Cosmos Predict2 14B

Video Generation

Wan2.2 5B

Platform Architecture

Moxin Studio (Desktop UI: Rust + Makepad)
    Chat • Model Hub • Voice • MCP
        ↓ OpenAI-compatible REST/WS
OminiX-API (Local inference server: pure Rust)
    LLM • ASR • TTS • Image
        ↓ Rust crate interface
OminiX-MLX (On-device inference backend: Metal-accelerated)
        ↓ Apple Silicon GPU
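Because OminiX-API exposes an OpenAI-compatible REST interface, any OpenAI-style client can drive it. A minimal sketch of a chat request; the port (8080) and model id ("qwen3-8b") are assumptions, so substitute whatever your OminiX-API install reports:

```shell
# Build an OpenAI-style chat request body. The model id is an
# assumption; use a model the Model Hub has actually downloaded.
BODY='{"model": "qwen3-8b", "messages": [{"role": "user", "content": "Hello, Moxin!"}]}'
echo "$BODY"

# Against a running OminiX-API server (endpoint and port are assumptions):
# curl -s http://localhost:8080/v1/chat/completions \
#      -H 'Content-Type: application/json' \
#      -d "$BODY"
```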

Quick Start

Requires macOS 14.0+ (Sonoma) • Apple Silicon (M1–M5) • Rust 1.82+ • Xcode Command Line Tools

# 1. Install OminiX-API (local inference server)
curl -fsSL https://raw.githubusercontent.com/OminiX-ai/OminiX-API/main/install.sh | sh

# 2. Clone and build Moxin Studio
git clone https://github.com/moxin-org/Moxin-Studio.git
cd Moxin-Studio
cargo run -p moly-shell --bin moxin-studio

# 3. Open Model Hub, download a model, and start chatting!