Your local & remote AI agents — private, powerful, unified.
Hybro Hub is a lightweight daemon that connects your local AI agents to hybro.ai — so you can use local and cloud agents side by side in one portal, with full control over where your data goes.
AI agents today force a choice:
- Cloud platforms (ChatGPT, Devin, Cursor Cloud) are powerful but require sending your data to third-party servers.
- Local runtimes (Ollama, LM Studio) keep data private but are isolated — no access to specialized cloud agents, no shared UI.
You shouldn’t have to choose between privacy and power.
Hybro Hub bridges local and cloud. Open hybro.ai, see your local Ollama model next to cloud agents like a legal reviewer or code analyst. Chat with any of them. Your local agents process on your machine — your data never leaves. Cloud agents are there when you need more capability.
One portal. Your choice, per conversation.
Go to hybro.ai/d/discovery-api-keys → API Keys → Generate New Key. Copy the key (starts with hybro_).
```shell
hybro-hub start --api-key hybro_your_key_here
```
The hub starts as a background daemon and returns you to the prompt immediately. Logs are written to ~/.hybro/hub.log. The API key is saved to ~/.hybro/config.yaml — subsequent starts don’t need it.
```shell
hybro-hub status   # check local daemon state and cloud connection
hybro-hub stop     # stop the daemon gracefully
```
Start a local LLM as an A2A agent (requires Ollama installed):
```shell
hybro-hub agent start ollama --model llama3.2
```
You’ll see:
```
🔗 Connected to hybro.ai
📡 Found 1 local agent:
   • My Ollama Chat (llama3.2) — localhost:10010

Agents synced to hybro.ai. Open hybro.ai to start chatting.
```
Refresh hybro.ai. Your local agent appears alongside cloud agents:
```
☁️ Legal Contract Reviewer (cloud)
☁️ Code Review Pro (cloud)
🏠 My Ollama Chat (llama3.2) (local · online)
```
Add it to a room, send a message. The response streams back with a 🏠 Local badge — your data never left your machine.
```
┌─────────────────────────────────────────────────┐
│ Your Machine                                    │
│                                                 │
│ Hybro Hub (background daemon)                   │
│ ├── Your local agents (Ollama, custom, etc.)    │
│ ├── Privacy router                              │
│ └── Relay client ──outbound HTTPS only──┐       │
│                                         │       │
└─────────────────────────────────────────┼───────┘
                                          │
                                          ▼
┌─────────────────────────────────────────────────┐
│ hybro.ai Cloud                                  │
│                                                 │
│ ├── Web portal (your browser)                   │
│ ├── Cloud agents (marketplace)                  │
│ ├── Relay service (routes to your hub)          │
│ └── Message history & rooms                     │
└─────────────────────────────────────────────────┘
```
Key properties:
- Outbound-only — the hub initiates all connections. No inbound ports, no firewall changes, works behind NAT.
- Portal-first — you always use hybro.ai. No localhost URLs, no mode switching. Local agents just appear as more agents in the same portal.
- A2A protocol — local and cloud agents speak the same Agent-to-Agent protocol. Any A2A-compatible agent works.
- Graceful degradation — if the hub is offline, cloud agents still work. Local agents show as “offline” and messages queue until the hub reconnects.
Hybro Hub doesn’t just promise privacy — the architecture enforces it.
Your data stays local when you use local agents. Messages to local agents route through the relay to your hub, get processed entirely on your machine, and only the response travels back. The cloud relay sees message metadata (routing info), not your content.
Privacy indicators in the UI
Every message in hybro.ai shows where it was processed:
- 🏠 Local (green) — processed on your machine, data did not leave
- ☁️ Cloud (blue) — processed by a cloud agent via hybro.ai
The hub scans outbound messages for sensitive content before they reach cloud agents:
- PII detection — emails, phone numbers, SSNs, credit cards, API keys
- Custom keywords — configure terms like “medical”, “financial”, “confidential”
- Custom patterns — add regex rules for project-specific data (e.g., PROJ-\d{4})
Currently logs detections only. Active blocking and anonymization are planned for a future release.
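The scanner's exact rule set is internal to the hub, but the classification step it describes can be sketched roughly as follows. The detector regexes, names, and sample keywords below are illustrative only, not the hub's actual implementation:

```python
import re

# Illustrative detectors only -- the hub's real rule set is internal.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}
SENSITIVE_KEYWORDS = {"medical", "financial", "confidential"}
SENSITIVE_PATTERNS = [re.compile(r"PROJ-\d{4}")]  # custom project rules

def classify(message: str) -> list[str]:
    """Label everything sensitive found in an outbound message."""
    hits = [name for name, pat in PII_PATTERNS.items() if pat.search(message)]
    hits += [kw for kw in SENSITIVE_KEYWORDS if kw in message.lower()]
    hits += ["custom_pattern" for pat in SENSITIVE_PATTERNS if pat.search(message)]
    return hits

print(classify("Mail alice@example.com the medical summary for PROJ-1234"))
```

Since the current release only logs detections, a classifier like this would feed the log line rather than block the message.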
Start the hub daemon. Connects to hybro.ai, discovers local agents, and syncs them to the cloud. The process detaches immediately and runs in the background.
```shell
hybro-hub start --api-key hybro_...
```
The API key is saved to ~/.hybro/config.yaml after first use — subsequent starts don’t need it. Only one instance can run per machine; a second start will exit with an error if the daemon is already running.
Options:
| Option | Description |
|---|---|
| `--api-key` | Hybro API key (also saved to `~/.hybro/config.yaml`) |
| `--foreground`, `-f` | Run in the foreground instead of daemonizing (useful for debugging) |
Daemon logs are written to ~/.hybro/hub.log (rotating, max 10 MB × 3 files).
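If you want the same rotation behavior in your own tooling, the documented policy maps directly onto Python's standard library. The logger name below is arbitrary, and whether the hub itself uses `RotatingFileHandler` is an assumption:

```python
import logging
from logging.handlers import RotatingFileHandler
from pathlib import Path

# Mirrors the documented policy: ~/.hybro/hub.log, 10 MB per file, 3 files total
# (hub.log plus two rotated backups).
log_path = Path.home() / ".hybro" / "hub.log"
log_path.parent.mkdir(parents=True, exist_ok=True)

handler = RotatingFileHandler(log_path, maxBytes=10 * 1024 * 1024, backupCount=2)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

logger = logging.getLogger("hybro-hub-example")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("daemon started")
```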
Gracefully stop the background daemon. Sends SIGTERM and waits up to 10 seconds before sending SIGKILL. Removes the PID lock file on success so that hybro-hub status correctly shows “Stopped”.
Show the state of the local daemon and its connection to the cloud relay.
Example output when running:
```
Local daemon:  Running (PID 12345)
Log file:      /Users/you/.hybro/hub.log
Cloud relay:   Online (hub abc123...)
Agents:        3 total (3 active, 0 inactive)
```
Example output when stopped:
```
Local daemon:  Stopped
Cloud relay:   Online (hub abc123...)
Agents:        4 total (3 active, 1 inactive)
```
The cloud relay section queries hybro.ai directly, so it reflects the last known state even when the local daemon is not running.
List all discovered local agents and their health status.
Launch a local A2A agent from a bundled adapter. Supported adapters: ollama, openclaw, n8n.
Ollama — local LLM (requires Ollama):
```shell
hybro-hub agent start ollama
hybro-hub agent start ollama --model mistral:7b --port 10020 --system-prompt "You are a helpful assistant"
```
OpenClaw — AI coding agent (requires OpenClaw):
```shell
hybro-hub agent start openclaw
hybro-hub agent start openclaw --thinking medium --agent-id main
```
n8n — workflow automation (requires a running n8n instance):
```shell
hybro-hub agent start n8n --webhook-url http://localhost:5678/webhook/my-agent
```
Common options:
| Option | Default | Description |
|---|---|---|
| `--port` | 10010 | Port for the A2A agent server |
| `--name` | auto | Agent display name |
| `--timeout` | varies | Request timeout in seconds |
Adapter-specific options:
| Option | Adapter | Description |
|---|---|---|
| `--model` | ollama | Ollama model (default: llama3.2) |
| `--system-prompt` | ollama | Custom system prompt |
| `--thinking` | openclaw | Thinking level: off/minimal/low/medium/high/xhigh |
| `--agent-id` | openclaw | OpenClaw agent ID |
| `--openclaw-path` | openclaw | Path to the openclaw binary |
| `--webhook-url` | n8n | Webhook URL (required) |
Requires the `a2a-adapter` package: `pip install a2a-adapter`
The hub reads from ~/.hybro/config.yaml. A minimal working example:
```yaml
cloud:
  api_key: ${HYBRO_API_KEY}  # or paste directly: "hybro_your_key_here"
```
A fully-annotated template is available in config.yaml.example. Below is a representative example covering all sections:
```yaml
# Cloud connection
cloud:
  api_key: ${HYBRO_API_KEY}   # or ${HYBRO_API_KEY:-} to allow unset
  gateway_url: "https://api.hybro.ai"

# Agent discovery
agents:
  auto_discover: true              # probe localhost ports for A2A agents
  auto_discover_exclude_ports:     # skip non-agent ports (built-in defaults shown)
    - 22     # SSH
    - 53     # DNS
    - 80     # HTTP
    - 443    # HTTPS
    - 3306   # MySQL
    - 5432   # PostgreSQL
    - 6379   # Redis
    - 27017  # MongoDB
  # auto_discover_scan_range: [10000, 11000]  # restrict scan to a port range
  local:                           # always-registered agents
    - name: "My Custom Agent"
      url: "http://localhost:9001"

# Privacy (classification is logging-only; messages are not blocked or rerouted)
privacy:
  sensitive_keywords: ["medical", "financial", "confidential"]
  sensitive_patterns: ["PROJ-\\d{4}"]

# Offline resilience — events that fail delivery are queued to disk and retried
publish_queue:
  enabled: true
  max_size_mb: 50
  ttl_hours: 24
  drain_interval: 30        # seconds between retry cycles
  drain_batch_size: 20
  max_retries_critical: 20  # agent_response, agent_error, processing_status
  max_retries_normal: 5     # task_submitted, artifact_update, task_status

# Heartbeat interval (seconds)
heartbeat_interval: 30
```
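The `publish_queue` retry policy can be illustrated with a small in-memory sketch. The real queue is disk-backed, and the function and class names below are hypothetical; only the retry caps, TTL, and batch size come from the config above:

```python
import time
from dataclasses import dataclass, field

# Mirrors the config above: per-priority retry caps and a 24-hour TTL.
MAX_RETRIES = {"critical": 20, "normal": 5}
TTL_SECONDS = 24 * 3600

@dataclass
class QueuedEvent:
    payload: dict
    priority: str                              # "critical" or "normal"
    queued_at: float = field(default_factory=time.time)
    attempts: int = 0

def drain(queue, publish, batch_size=20):
    """Attempt one batch; return the events that must stay queued."""
    keep, now = [], time.time()
    for event in queue[:batch_size]:
        if now - event.queued_at > TTL_SECONDS:
            continue                           # expired: drop
        event.attempts += 1
        if publish(event.payload):
            continue                           # delivered: done
        if event.attempts < MAX_RETRIES[event.priority]:
            keep.append(event)                 # retry on the next cycle
    return keep + queue[batch_size:]
```

Running `drain` every `drain_interval` seconds reproduces the described behavior: normal events give up after 5 failed attempts, while critical events (agent responses and errors) keep retrying up to 20 times.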
The config file supports ${VAR}, ${VAR:-default}, and $${VAR} (literal) environment variable references (expanded before parsing, matching the OTel Collector convention). To set the API key via a shell environment variable without editing the file, add to your shell profile:
```shell
export HYBRO_API_KEY="hybro_..."
```
Or set it once via the CLI (saves the literal key to the config file):
```shell
hybro-hub start --api-key hybro_...
```
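For reference, the expansion semantics described above can be sketched in a few lines. This is an illustration of the convention, not the hub's actual parser, which may differ in edge cases:

```python
import os
import re

# ${VAR}, ${VAR:-default}, and $$ as an escape for a literal dollar sign,
# so $${VAR} survives expansion as the literal text ${VAR}.
_VAR = re.compile(r"\$\$|\$\{([A-Za-z_][A-Za-z0-9_]*)(?::-([^}]*))?\}")

def expand_env(text: str) -> str:
    def repl(m: re.Match) -> str:
        if m.group(0) == "$$":
            return "$"
        name, default = m.group(1), m.group(2)
        value = os.environ.get(name)
        if value is not None:
            return value
        return default if default is not None else ""
    return _VAR.sub(repl, text)
```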
Any agent that speaks the A2A protocol works with Hybro Hub.
With auto_discover: true (the default), the hub automatically finds A2A agents running on localhost by probing listening TCP ports for agent cards at /.well-known/agent.json or /.well-known/agent-card.json. Just start your agent — the hub will find it.
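Conceptually, discovery amounts to probing listening ports for an agent card. Here is a minimal sketch; the card paths match the ones named above, but the timeout and function shape are illustrative, not the hub's implementation:

```python
import json
import socket
import urllib.request

CARD_PATHS = ("/.well-known/agent.json", "/.well-known/agent-card.json")

def probe_port(port: int, timeout: float = 0.5):
    """Return the agent card dict if an A2A agent answers on localhost:port."""
    with socket.socket() as s:                 # cheap TCP check first
        s.settimeout(timeout)
        if s.connect_ex(("127.0.0.1", port)) != 0:
            return None                        # nothing listening
    for path in CARD_PATHS:
        try:
            url = f"http://127.0.0.1:{port}{path}"
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return json.load(resp)
        except Exception:
            continue                           # listening, but not an agent card
    return None
```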
Add agents to ~/.hybro/config.yaml:
```yaml
agents:
  local:
    - name: "My Research Agent"
      url: "http://localhost:8001"
    - name: "Team Agent"
      url: "http://192.168.1.50:8080"  # LAN agents work too
```
Use the a2a-python SDK to build a compatible agent:
```python
from a2a.server.apps import A2AStarletteApplication
from a2a.server.request_handlers import DefaultRequestHandler

app = A2AStarletteApplication(
    agent_card=my_card,
    http_handler=DefaultRequestHandler(agent_executor=my_executor),
)
```
The hub discovers it automatically and syncs it to hybro.ai.
Hybro SDK (Python Client)
The repo also ships hybro_hub — a Python client for calling cloud agents programmatically via the Hybro Gateway API. Use this when you want to integrate cloud agents into your own code, outside of the hub.
```python
import asyncio
from hybro_hub import HybroGateway

async def main():
    async with HybroGateway(api_key="hybro_...") as gw:
        agents = await gw.discover("legal contract review")
        async for event in gw.stream(agents[0].agent_id, "Review this NDA"):
            print(event.data)

asyncio.run(main())
```
| Method | Description |
|---|---|
| `discover(query, *, limit=None)` | Search for agents by natural language. Returns `list[AgentInfo]`. |
| `send(agent_id, text, *, context_id=None)` | Send a message, get the full response. Returns `dict`. |
| `stream(agent_id, text, *, context_id=None)` | Stream a response via SSE. Yields `StreamEvent`. |
| `get_card(agent_id)` | Fetch an agent's A2A card. Returns `dict`. |
```python
from hybro_hub import AuthError, RateLimitError, AgentNotFoundError

try:
    result = await gw.send(agent_id, "Hello")
except AuthError:
    print("Invalid API key")
except AgentNotFoundError:
    print("Agent not found")
except RateLimitError as e:
    print(f"Rate limited — retry after {e.retry_after}s")
```
| Exception | Status | Cause |
|---|---|---|
| `AuthError` | 401 | Invalid API key |
| `AccessDeniedError` | 403 | No access to agent |
| `AgentNotFoundError` | 404 | Agent not found / inactive |
| `RateLimitError` | 429 | Rate limit exceeded |
| `AgentCommunicationError` | 502 | Upstream agent error |
| `GatewayError` | any | Base class |
- Python 3.11+
- Ollama (optional, for the built-in Ollama adapter)
- A hybro.ai account with an API key
```shell
git clone https://github.com/hybro-ai/hybro-hub.git
cd hybro-hub
pip install -e ".[dev]"
pytest
```
Apache License 2.0 — see LICENSE for details.