# resilient-workflow-sentinel

A local, offline 7B LLM task orchestrator that analyzes urgency, debates assignment, and balances load. Runs on an RTX 3080 or 4090. Chaos mode included.


A local demo of an LLM-powered orchestrator for intelligent task routing.
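To make "analyzes urgency, balances load" concrete, here is a minimal, illustrative sketch of least-loaded task assignment with a toy urgency heuristic. The names (`Worker`, `urgency_score`, `assign_task`) and the scoring rule are assumptions for illustration, not the project's actual routing logic:

```python
from dataclasses import dataclass, field

@dataclass
class Worker:
    name: str
    tasks: list = field(default_factory=list)

def urgency_score(task: dict) -> int:
    """Toy urgency heuristic: explicit priority plus a tight-deadline bump."""
    score = task.get("priority", 0)
    if task.get("deadline_hours", 24) < 4:
        score += 10
    return score

def assign_task(task: dict, workers: list) -> Worker:
    """Balance load by routing the task to the least-loaded worker."""
    target = min(workers, key=lambda w: len(w.tasks))
    target.tasks.append(task)
    return target

workers = [Worker("alpha"), Worker("beta")]
t1 = {"id": 1, "priority": 3, "deadline_hours": 2}
first = assign_task(t1, workers)           # both idle, so "alpha" wins the tie
second = assign_task({"id": 2}, workers)   # "alpha" now busy, so "beta" gets it
print(first.name, second.name, urgency_score(t1))  # alpha beta 13
```

In the real project this decision is made by the 7B LLM (the "debates assignment" step); the sketch only shows the deterministic load-balancing baseline such a debate would refine.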

## Quick Start (Manual)

```shell
# create and activate a virtual environment (Windows)
python -m venv .venv
.venv\Scripts\activate

# install requirements
pip install -r requirements.txt

# download the local LLM model
python models/download_model.py

# start the LLM service (port 8000)
uvicorn app.local_llm_service.llm_app:app --host 127.0.0.1 --port 8000 --reload

# start the orchestrator (port 8100)
uvicorn app.main:app --host 127.0.0.1 --port 8100 --reload

# start the UI (NiceGUI)
python ui/nicegui_app.py
```
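With the services up, the orchestrator listens on port 8100. The sketch below builds a task-submission request against it; note that the `/tasks` endpoint and the payload fields are assumptions for illustration, not the project's documented API:

```python
import json
import urllib.request

ORCHESTRATOR = "http://127.0.0.1:8100"

# Hypothetical payload; field names are illustrative, not the real schema.
payload = {
    "title": "Rebuild nightly index",
    "description": "Index rebuild failed twice; results are stale.",
    "deadline_hours": 2,
}

def submit_task(task: dict, base_url: str = ORCHESTRATOR) -> urllib.request.Request:
    """Build a POST request for the (assumed) /tasks endpoint."""
    data = json.dumps(task).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/tasks",
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = submit_task(payload)
print(req.full_url, req.get_method())  # http://127.0.0.1:8100/tasks POST
# Send it with urllib.request.urlopen(req) once the orchestrator is running.
```

The actual send is left to `urlopen` so the snippet runs without the services; check the orchestrator's own route definitions in `app/main.py` for the real endpoints.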

---

## Windows Batch Script Options (Alternative)

One-time setup:

```bat
download_model.bat
install_and_run.bat
```

Start the services individually, each in its own terminal:

```bat
run_llm.bat
run_api.bat
run_ui.bat
```

- `run_llm.bat`: starts the LLM service
- `run_api.bat`: starts the orchestrator API
- `run_ui.bat`: starts the NiceGUI interface
