SharpAI/SwiftLM: ⚡ Native MLX Swift LLM inference server for Apple Silicon. OpenAI-compatible API, SSD streaming for 100B+ MoE models, TurboQuant KV cache compression, plus an iOS companion app.

A blazingly fast, native Swift inference server that serves MLX models with a strict OpenAI-compatible API.

No Python runtime, no Global Interpreter Lock (GIL), no unnecessary memory copies. Just bare-metal Apple Silicon performance compiled to a single binary.

SwiftLM Chat iOS demo

  • 🍎 100% Native Apple Silicon: Powered natively by Metal and Swift.
  • 🔌 OpenAI-compatible: Drop-in replacement for OpenAI SDKs (/v1/chat/completions, streaming, etc).
  • 🧠 Smart Model Routing: Loads HuggingFace format models directly, with native Safetensors parsing.
  • ⚡️ TurboQuantization Integrated: Custom low-level MLX Metal primitives that apply extremely fast quantization for KV caching out-of-the-box.
  • 💾 SSD Expert Streaming: Experimental zero-copy streaming that swaps Mixture of Experts (MoE) layers directly from the NVMe SSD into the GPU command buffer without thrashing macOS Unified Memory, preventing watchdog kernel panics on 122B+ models (see the zero-copy sketch after this list).
  • 🎛️ Granular Memory Control: Integrated Layer Partitioning (--gpu-layers) and Wisdom Auto-Calibration for squeezing massive models into RAM.
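
For context on the SSD expert streaming bullet above: on unified memory, the zero-copy idea boils down to memory-mapping a weight shard and wrapping the mapping in a no-copy Metal buffer, so the GPU reads pages faulted in from the SSD instead of a duplicate allocation. The sketch below shows that general technique only; it is not SwiftLM's actual streaming code, and the function name, shard path, and error handling are placeholder assumptions.

import Foundation
import Metal

// Sketch: mmap an expert shard from SSD and hand the mapping to Metal
// without an intermediate copy (unified memory only). Hypothetical helper.
func mapExpertShard(at path: String, device: MTLDevice) throws -> MTLBuffer {
    let fd = open(path, O_RDONLY)
    guard fd >= 0 else { throw POSIXError(.ENOENT) }
    defer { close(fd) }                       // the mapping survives the close

    var info = stat()
    guard fstat(fd, &info) == 0 else { throw POSIXError(.EIO) }

    // makeBuffer(bytesNoCopy:) requires a page-aligned base and length.
    let pageSize = Int(getpagesize())
    let length = (Int(info.st_size) + pageSize - 1) / pageSize * pageSize

    // Pages are faulted in from NVMe on demand; nothing is read up front.
    guard let base = mmap(nil, length, PROT_READ, MAP_PRIVATE, fd, 0),
          base != MAP_FAILED else { throw POSIXError(.ENOMEM) }

    // Wrap the mapping directly as a shared-storage Metal buffer.
    guard let buffer = device.makeBuffer(bytesNoCopy: base,
                                         length: length,
                                         options: .storageModeShared,
                                         deallocator: { ptr, len in _ = munmap(ptr, len) }) else {
        _ = munmap(base, length)
        throw POSIXError(.ENOMEM)
    }
    return buffer
}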

⚡️ TurboQuantization: KV Cache Compression

SwiftLM implements a hybrid V2+V3 TurboQuant architecture for on-the-fly KV cache compression. At about 3.6 bits per coordinate overall, the KV cache is compressed roughly 3.5× vs FP16 with near-zero accuracy loss.

Combining V2 speed with V3 quality

Recent reproductions of the TurboQuant algorithm (e.g., turboquant-mlx) revealed two distinct paths:

  1. V2 (Hardware-Accelerated): Fast, but uses linear affine quantization which degrades quality at 3-bit.
  2. V3 (Paper-Correct): Excellent quality using non-linear Lloyd-Max codebooks, but painfully slow due to software dequantization.

We built the “Holy Grail” hybrid: we ported the V3 non-linear Lloyd-Max codebooks directly into the native C++ encoding path and run dequantization in fused Metal (bggml-metal) shaders. This achieves V3 quality at V2 speed, with no Python in the loop.
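
To make the V2/V3 split concrete, here is a minimal sketch of both 3-bit paths for a single coordinate, assuming placeholder centroid values rather than the actual tables ported from llama-cpp-turboquant: affine quantization snaps to 8 evenly spaced levels, while the Lloyd-Max path snaps to 8 non-uniform centroids fitted to the data distribution.

import Foundation

// V2-style: linear affine quantization, 8 evenly spaced levels.
func affineQuantize(_ x: Float, minVal: Float, maxVal: Float) -> (code: UInt8, dequant: Float) {
    let step = (maxVal - minVal) / 7                       // 8 levels => 7 intervals
    let q = ((x - minVal) / step).rounded()
    let code = UInt8(max(0, min(7, q)))
    return (code, minVal + Float(code) * step)
}

// V3-style: non-linear Lloyd-Max codebook, nearest-centroid lookup.
// Placeholder values; the real tables come from Lloyd's algorithm over N(0, 1/d).
let placeholderCentroids: [Float] = [-1.75, -1.05, -0.60, -0.20, 0.20, 0.60, 1.05, 1.75]

func lloydMaxQuantize(_ x: Float) -> (code: UInt8, dequant: Float) {
    var best = 0
    for i in placeholderCentroids.indices
        where abs(x - placeholderCentroids[i]) < abs(x - placeholderCentroids[best]) {
        best = i
    }
    return (UInt8(best), placeholderCentroids[best])
}

In the hybrid, the stored codes come from the non-linear lookup while the codebook dequantization runs inside the fused Metal shader, which is what recovers the V2-level throughput.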

K-Cache (3-bit PolarQuant + 1-bit QJL) = 4.25 bits/dim

  1. Extract L2 norm and normalize: x̂ = x / ‖x‖
  2. Apply Fast Walsh-Hadamard Transform (WHT) rotation to distribute outliers evenly.
  3. Quantize each coordinate using 3-bit non-linear Lloyd-Max centroids.
  4. Compute the residual error between the original vector and the quantized approximation.
  5. Project the residual via a random Johnson-Lindenstrauss (QJL) matrix and store the 1-bit signs.
    (Why QJL? QJL acts as an additional regularizer that prevents centroid resolution loss from degrading the attention dot-product.)

V-Cache (3-bit PolarQuant) = 3.125 bits/dim
Because the V-cache matrix is not used for inner-product attention scoring, the QJL error correction provides no benefit. We cleanly disable QJL for the V-cache, extracting an additional 25% memory savings without sacrificing quality.
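
The following plain-Swift sketch walks through the five K-cache steps listed above (the shipped path runs in C++ and fused Metal); the centroid table and QJL sign matrix are placeholders, and the types are simplified for readability.

import Foundation

struct TurboQuantSketch {
    let centroids: [Float]      // 8-entry Lloyd-Max table for 3-bit codes (placeholder)
    let qjlSigns: [[Float]]     // rows of a ±1 Johnson-Lindenstrauss projection (placeholder)

    // In-place fast Walsh-Hadamard transform; v.count must be a power of two.
    func wht(_ v: inout [Float]) {
        var h = 1
        while h < v.count {
            for i in stride(from: 0, to: v.count, by: h * 2) {
                for j in i..<(i + h) {
                    let (a, b) = (v[j], v[j + h])
                    v[j] = a + b
                    v[j + h] = a - b
                }
            }
            h *= 2
        }
        let scale = 1 / Float(v.count).squareRoot()        // orthonormal scaling
        for i in v.indices { v[i] *= scale }
    }

    // Steps 1-5 for a K-cache vector. The V-cache stops after step 3 (no QJL).
    func encodeKey(_ x: [Float]) -> (norm: Float, codes: [UInt8], qjlBits: [Bool]) {
        let norm = x.reduce(0) { $0 + $1 * $1 }.squareRoot()   // 1. extract L2 norm
        var v = x.map { $0 / max(norm, 1e-12) }                //    and normalize
        wht(&v)                                                 // 2. Hadamard rotation

        var codes: [UInt8] = []
        var residual = [Float](repeating: 0, count: v.count)
        for (i, xi) in v.enumerated() {
            var best = 0                                        // 3. nearest Lloyd-Max centroid
            for c in centroids.indices where abs(xi - centroids[c]) < abs(xi - centroids[best]) {
                best = c
            }
            codes.append(UInt8(best))
            residual[i] = xi - centroids[best]                  // 4. residual error
        }

        let qjlBits = qjlSigns.map { row -> Bool in             // 5. 1-bit signs of the
            var dot: Float = 0                                  //    JL-projected residual
            for (r, e) in zip(row, residual) { dot += r * e }
            return dot >= 0
        }
        return (norm, codes, qjlBits)
    }
}

For the V-cache, the same routine simply stops after step 3, which is why it lands at the lower 3.125 bits/dim.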

Reference implementations: turboquant-mlx | turboquant_plus | Paper: TurboQuant (arXiv:2504.19874)


💻 Tested Hardware & Benchmarks

To reliably run massive 122B-parameter MoE models over SSD streaming, SwiftLM was designed and benchmarked on the following hardware:

  • Machine: MacBook Pro, Apple M5 Pro
  • Memory: 64 GB Unified Memory
  • Model: Qwen3.5-122B-A10B-4bit
  • SSD: Internal Apple NVMe (Zero-Copy Streaming)

⚠️ Quantization Disclaimer: While heavier quantization shrinks the required memory footprint, 4-bit quantization remains the strict production standard for MoE models. Our metrics indicated that aggressive 2-bit quantization heavily destabilizes JSON grammars—routinely producing broken keys like \name\ instead of "name"—which systematically breaks OpenAI-compatible tool calling.



📱 SwiftLM Chat — iOS App

A native iPhone & iPad companion app that downloads MLX models directly from HuggingFace and runs inference on-device via MLX Swift.

  • Tab UI: Chat · Models · Settings
  • Live download progress with speed indicator and circular progress ring
  • Model catalog: Qwen3, Phi-3.5, Mistral, Llama — with on-device RAM fit indicators
  • HuggingFace search — find any mlx-community model by name
  • Context-aware empty states — downloading ring, loading spinner, idle prompt
  • iOS lifecycle hardened — model unload only fires on true background (not notification banners); 30-second grace period on app-switch
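
The lifecycle hardening in the last bullet can be expressed with standard UIKit notifications plus a cancellable delayed unload. The sketch below shows that pattern only (not the app's actual code); ModelRuntime is a hypothetical protocol standing in for whatever owns the loaded MLX model.

import UIKit

// Unload the model only after the app has been in the true background for a
// grace period, so notification banners and quick app switches never evict it.
protocol ModelRuntime: AnyObject { func unloadModel() }

final class ModelLifecycleGuard {
    private let runtime: ModelRuntime
    private var pendingUnload: DispatchWorkItem?
    private var tokens: [NSObjectProtocol] = []

    init(runtime: ModelRuntime) {
        self.runtime = runtime
        let center = NotificationCenter.default

        // Fires only on a real backgrounding; banners merely resign activity.
        tokens.append(center.addObserver(forName: UIApplication.didEnterBackgroundNotification,
                                         object: nil, queue: .main) { [weak self] _ in
            guard let self else { return }
            let work = DispatchWorkItem { [weak self] in self?.runtime.unloadModel() }
            self.pendingUnload = work
            // 30-second grace period before the unload actually happens.
            DispatchQueue.main.asyncAfter(deadline: .now() + 30, execute: work)
        })

        // Coming back within the grace period cancels the pending unload.
        tokens.append(center.addObserver(forName: UIApplication.willEnterForegroundNotification,
                                         object: nil, queue: .main) { [weak self] _ in
            self?.pendingUnload?.cancel()
            self?.pendingUnload = nil
        })
    }

    deinit { tokens.forEach { NotificationCenter.default.removeObserver($0) } }
}
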
To build from source, generate the Xcode project and open it:

cd SwiftLMChat
python3 generate_xcodeproj.py       # Generates SwiftLMChat.xcodeproj
open SwiftLMChat.xcodeproj

Then in Xcode:

  1. Select the SwiftLMChat target → Signing & Capabilities
  2. Set your Team (your Apple Developer account)
  3. Select your iPhone as the run destination
  4. ⌘R to build and run

Note for contributors: The .xcodeproj is git-ignored (it contains your personal Team ID). Run generate_xcodeproj.py after cloning to regenerate it locally. Your Team ID is never committed.


🛠️ Quick Start (macOS Server)

Fastest: Download Pre-built Binary

The absolute fastest way to get started is to download the latest pre-compiled macOS binary directly from the Releases page. Just extract it and run!

Run (downloads the model automatically on first launch)

.build/release/SwiftLM \
  --model Qwen3.5-122B-A10B-4bit \
  --stream-experts true \
  --port 5413

(Note: --stream-experts true is only needed for oversized MoE models like Qwen3.5 122B; it bypasses macOS virtual memory swapping.)


| Endpoint | Method | Description |
| --- | --- | --- |
| /health | GET | Server health + loaded model capabilities |
| /v1/models | GET | List available models |
| /v1/chat/completions | POST | Chat completions (LLM and VLM support, multi-turn, system prompts) |

Chat Completion (Streaming)

Drop-in compatible with standard OpenAI HTTP consumers:

curl http://localhost:5413/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Qwen3.5-122B-A10B-4bit",
    "stream": true,
    "messages": [
      {"role": "user", "content": "Clip 1: Delivery person drops package at 14:02. Clip 2: Delivery person walks away down driveway at 14:03. Do these clips represent the same security event? Output a JSON object with a `duplicate` boolean and a `reason` string."}
    ]
  }'
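
Since the wire format is OpenAI-compatible, a native Swift client needs nothing beyond URLSession. Below is a minimal streaming sketch that reuses the model name and port from the curl example and simplifies the SSE handling to raw prints; a real client would decode each JSON chunk and accumulate choices[0].delta.content.

import Foundation

func streamChat(prompt: String) async throws {
    var request = URLRequest(url: URL(string: "http://localhost:5413/v1/chat/completions")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    let body: [String: Any] = [
        "model": "Qwen3.5-122B-A10B-4bit",
        "stream": true,
        "messages": [["role": "user", "content": prompt]]
    ]
    request.httpBody = try JSONSerialization.data(withJSONObject: body)

    let (bytes, response) = try await URLSession.shared.bytes(for: request)
    guard (response as? HTTPURLResponse)?.statusCode == 200 else {
        throw URLError(.badServerResponse)
    }

    // Each chunk arrives as a "data: {...}" line; "data: [DONE]" ends the stream.
    for try await line in bytes.lines {
        guard line.hasPrefix("data: ") else { continue }
        let payload = String(line.dropFirst("data: ".count))
        if payload == "[DONE]" { break }
        print(payload)
    }
}

Call it from any async context, e.g. Task { try await streamChat(prompt: "Hello") }.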

| Option | Default | Description |
| --- | --- | --- |
| --model | (required) | HuggingFace model ID or local path |
| --port | 5413 | Port to listen on |
| --host | 127.0.0.1 | Host to bind |
| --max-tokens | 2048 | Maximum tokens per generation |
| --gpu-layers | model default | Restrict the number of layers allocated to the GPU |
| --stream-experts | false | Enable experimental SSD streaming for MoE model expert matrices |

Requirements:

  • macOS 14.0+
  • Apple Silicon (M1/M2/M3/M4/M5)
  • Xcode Command Line Tools
  • Metal Toolchain (xcodebuild -downloadComponent MetalToolchain)

📄 Dependencies & License

Built entirely on the hard work of the Apple MLX community.

The TurboQuant KV cache compression implemented in SwiftLM is directly based on the following open-source work and research:

  • TheTom/llama-cpp-turboquant — The primary reference for the C and Metal GPU implementation. The turbo-wht.h Fast Walsh-Hadamard kernel, WHT sign arrays (seed=42), Lloyd-Max centroid tables, and the ggml-turbo-quant.c quantize/dequantize logic were ported directly from this repository into our MLX C++ and Metal backend.

  • TheTom/turboquant_plus — Python reference implementation used to validate the algorithm math, codebook construction (Lloyd’s algorithm for N(0, 1/d)), and KV cache integration design.

  • TurboQuant paper: “TurboQuant: Online Vector Quantization with Near-optimal Distortion Rate”, Zandieh et al., 2025 (arXiv:2504.19874). The two-stage PolarQuant + QJL algorithm described in Section 3 and Appendix A is the mathematical foundation of this implementation.

  • amirzandieh/QJL — Original Quantized Johnson-Lindenstrauss (QJL) 1-bit residual correction implementation by the paper authors.

MIT License
