Expose your local Ollama, vLLM, or any AI service to the internet with API key auth. No port forwarding, no static IP, no nginx configs. One command.
npx easytyga
$ npx easytyga

easytyga v1.0.0
GPU: NVIDIA GeForce RTX 4090
Target: http://localhost:11434
Models: llama3.2, mistral, codellama

Tunnel active
Public URL: https://abc123.easytyga.com
API Key: et_a1b2c3d4e5f6...

Press Ctrl+C to disconnect.
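Once the tunnel is up, any HTTP client can reach your model. A minimal sketch of what a client call might look like, assuming the tunnel forwards to Ollama's standard `/api/generate` endpoint and expects the key as a Bearer token (the URL and key below are the illustrative ones from the demo output, truncated key filled with a placeholder):

```python
import json
import urllib.request

# Illustrative values from the demo above; yours will differ.
PUBLIC_URL = "https://abc123.easytyga.com"
API_KEY = "et_a1b2c3d4e5f6"  # placeholder, not a real key

# Build an authenticated request against Ollama's generate endpoint.
payload = json.dumps({"model": "llama3.2", "prompt": "Hello"}).encode()
req = urllib.request.Request(
    f"{PUBLIC_URL}/api/generate",
    data=payload,
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req) would send it; skipped here since
# the URL above is only illustrative.
```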
You have a GPU. You're running Ollama. But you can't access it from anywhere else.
Ollama has zero built-in auth. Expose it publicly and anyone can use your GPU.
Home router, CGNAT, firewall -- your GPU isn't reachable from the internet.
nginx + Cloudflare + Let's Encrypt + basic auth. For a local LLM? Really?
Port forwarding leaks your home IP to every client that connects.
npx easytyga. That's it. No install, no config files, no Docker. Works in 10 seconds.
Every tunnel gets a unique API key. Every request is authenticated. No anonymous access.
Clients connect through our relay. Your home IP is never exposed. Works behind any NAT.
Detects your NVIDIA, AMD, or Apple Silicon GPU automatically. Knows what you're running.
Ollama, vLLM, LocalAI, ComfyUI -- any local HTTP service. Not locked to one framework.
Connection drops? easytyga reconnects automatically with exponential backoff. Set and forget.
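The tool's exact retry schedule isn't documented here, but a typical exponential-backoff reconnect doubles the wait after each failed attempt and caps it, so a flaky network gets quick retries and a long outage doesn't hammer the relay:

```python
def backoff_delays(attempts: int, base: float = 1.0, cap: float = 60.0) -> list[float]:
    """Wait times between reconnect attempts: base seconds, doubling
    each failure, capped so waits never grow unbounded. (Illustrative
    parameters; production code usually adds random jitter too.)"""
    return [min(cap, base * (2 ** n)) for n in range(attempts)]

# First seven waits: 1, 2, 4, 8, 16, 32, then capped at 60 seconds.
print(backoff_delays(7))
```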
Your GPU sits idle most of the day. Add one flag and it earns money on gpusmarket.com -- the peer-to-peer GPU rental marketplace.
npx easytyga --list
Your AI forgets everything between sessions. Add one flag and conversations persist across sessions, devices, and channels.
npx easytyga --memory
"What model did we discuss?"
"I don't have context about previous conversations."
"What model did we discuss?"
"We discussed Llama 3 70B for your RAG pipeline. You preferred it over Mistral for accuracy."