chiennv2000/orthrus: Fast, lossless LLM inference via dual-view diffusion decoding. · GitHub


Orthrus logo

Official implementation and model checkpoints for Orthrus, a dual-architecture framework that unifies the exact generation fidelity of autoregressive Large Language Models (LLMs) with the high-speed parallel token generation of diffusion models.

Orthrus Architecture


demo_orthrus.mp4


All models use a Qwen3 backbone and guarantee strictly lossless generation.


```shell
uv pip install -e .
uv pip install ninja packaging
uv pip install flash-attn --no-build-isolation  # or: pip install "flash-attn-4[cu13]" if your device supports it
```

We recommend uv for fast dependency resolution.


```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model = AutoModelForCausalLM.from_pretrained(
    "chiennv/Orthrus-Qwen3-8B",
    dtype=torch.bfloat16,
    device_map="cuda",
    attn_implementation="flash_attention_2",  # use flash_attention_4 if your system supports it
    trust_remote_code=True,
).eval()
tokenizer = AutoTokenizer.from_pretrained("chiennv/Orthrus-Qwen3-8B")

prompt = "Write a program to count the frequency of each word in a paragraph."
messages = [{"role": "user", "content": prompt}]
input_ids = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True, enable_thinking=False
)

output_ids = model.generate(
    input_ids=input_ids.to(model.device),
    max_new_tokens=2048,
    use_diffusion_mode=True,
    streamer=TextStreamer(tokenizer, skip_prompt=True),  # stream tokens as they are generated
)
```
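As a quick sanity check for the prompt above, a minimal word-frequency counter (one plausible answer the model is asked to produce; this reference version is ours, not model output) might look like:

```python
import re
from collections import Counter

def word_frequencies(paragraph: str) -> Counter:
    """Count case-insensitive word frequencies in a paragraph."""
    words = re.findall(r"[a-z']+", paragraph.lower())
    return Counter(words)

print(word_frequencies("The cat sat on the mat."))  # Counter({'the': 2, 'cat': 1, 'sat': 1, 'on': 1, 'mat': 1})
```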

Coming soon: Native integration with vLLM and SGLang. Stay tuned!

  • Significant Inference Acceleration: Breaks the sequential bottleneck of standard autoregressive decoding, delivering up to a $7.8\times$ speedup on generation tasks.
  • Strictly Lossless Generation: Employs an exact intra-model consensus mechanism to guarantee that the output matches the original base model’s exact predictive distribution.
  • Zero Redundant Memory Overhead: Both the autoregressive and diffusion views attend to the exact same high-fidelity Key-Value (KV) cache natively, resulting in only an $O(1)$ memory cache overhead.
  • Parameter Efficient: Parallel generation capabilities are injected by fine-tuning only 16% of the total model parameters while keeping the base LLM strictly frozen.
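The lossless guarantee follows the usual draft-and-verify pattern: the diffusion view proposes a block of tokens in parallel, and the autoregressive view accepts the longest prefix that matches its own greedy continuation, so the final sequence is token-for-token identical to pure autoregressive decoding. A toy sketch of this acceptance rule, using a hypothetical `next_token` oracle in place of a real model (not the actual Orthrus verification code):

```python
def verify_block(prefix, draft, next_token):
    """Accept the longest prefix of `draft` matching the base model's
    greedy continuation; on a mismatch, emit the verified token and
    discard the rest of the draft. Output equals pure greedy decoding."""
    accepted = []
    ctx = list(prefix)
    for proposed in draft:
        target = next_token(ctx)  # base model's greedy choice
        accepted.append(target)   # always emit the verified token
        ctx.append(target)
        if proposed != target:    # mismatch: stop accepting this block
            break
    return accepted

# Toy "base model": next token is always previous token + 1.
next_token = lambda ctx: ctx[-1] + 1
print(verify_block([0], [1, 2, 9, 4], next_token))  # → [1, 2, 3]
```

Here three tokens are verified in one forward pass instead of one, and the mismatching draft token (9) is replaced by the base model's own prediction (3), which is why the output distribution is unchanged.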

Performance Comparison: Orthrus vs. Speculative Decoding

Orthrus outperforms speculative decoding methods such as EAGLE-3 and DFlash. By natively sharing the exact same KV cache across dual views, Orthrus avoids the redundant memory overhead of draft models, resulting in significantly higher token acceptance rates and faster inference, especially as context length scales.

Average Acceptance Length Comparison
Long Context Generation Time Benchmark

Left: Average verified tokens per forward pass compared to EAGLE-3 and DFlash. Right: Simulated generation time across scaling context lengths compared to DFlash.


Comparison with State-of-the-Art Diffusion Models

While recent diffusion language models (dLLMs) offer parallel decoding, they often suffer from significant conditional drift and severe accuracy degradation on complex reasoning tasks. Orthrus resolves this by decoupling parallel generation from sequential constraints, establishing a new state-of-the-art for parallel generation fidelity.

Throughput vs. Accuracy on MATH-500

Throughput vs. Accuracy on MATH-500. Orthrus delivers a ~6x speedup over the Qwen3-8B baseline with strictly lossless performance, whereas adaptations like Fast-dLLM-v2 suffer significant accuracy drops.


If you find this model or architecture useful in your work, please cite our paper:

@misc{vannguyen2026orthrusmemoryefficientparalleltoken,
      title={Orthrus: Memory-Efficient Parallel Token Generation via Dual-View Diffusion}, 
      author={Chien Van Nguyen and Chaitra Hegde and Van Cuong Pham and Ryan A. Rossi and Franck Dernoncourt and Thien Huu Nguyen},
      year={2026},
      eprint={2605.12825},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2605.12825}, 
}
