Open Source · MIT Licensed

Tunnel your local AI
to the internet

Expose your local Ollama, vLLM, or any AI service to the internet with API key auth. No port forwarding, no static IP, no nginx configs. One command.

$ npx easytyga

  easytyga v1.0.0

  GPU:     NVIDIA GeForce RTX 4090
  Target:  http://localhost:11434
  Models:  llama3.2, mistral, codellama

  Tunnel active

  Public URL:  https://abc123.easytyga.com
  API Key:     et_a1b2c3d4e5f6...

  Press Ctrl+C to disconnect.

The problem

You have a GPU. You're running Ollama. But you can't access it from anywhere else.

No authentication

Ollama has zero built-in auth. Expose it publicly and anyone can use your GPU.

Behind NAT

Home router, CGNAT, firewall -- your GPU isn't reachable from the internet.

Config nightmare

nginx + Cloudflare + Let's Encrypt + basic auth. For a local LLM? Really?

IP exposed

Port forwarding leaks your home IP to every client that connects.

easytyga fixes all of this

One command

npx easytyga. That's it. No install, no config files, no Docker. Up and running in 10 seconds.

Auto API key auth

Every tunnel gets a unique API key. Every request is authenticated. No anonymous access.
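Calling the tunnel then looks like any authenticated HTTP request. A hypothetical Python sketch, using the placeholder URL and key from the output above and assuming the key is sent as a Bearer token (the exact header name is an assumption, not documented easytyga API):

```python
import json
import urllib.request

# Hypothetical sketch: call a tunneled Ollama instance through easytyga.
# URL and key are the placeholders printed above; the "Authorization: Bearer"
# header is an assumption about how easytyga checks the key.
url = "https://abc123.easytyga.com/api/generate"  # Ollama's generate endpoint
api_key = "et_a1b2c3d4e5f6"                       # placeholder key

payload = json.dumps({"model": "llama3.2", "prompt": "Hello"}).encode()
req = urllib.request.Request(
    url,
    data=payload,
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req) would send it; here we only build the request.
print(req.get_header("Authorization"))
```

A request without the key would be rejected at the relay, never reaching your machine.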

IP stays private

Clients connect through our relay. Your home IP is never exposed. Works behind any NAT.

Auto GPU detection

Detects your NVIDIA, AMD, or Apple Silicon GPU automatically. Knows what you're running.
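Detection like this is typically best-effort probing of vendor tooling. A rough sketch of the idea (not easytyga's actual code; the probes for nvidia-smi, rocm-smi, and Apple Silicon are illustrative):

```python
import platform
import shutil
import subprocess

def detect_gpu():
    """Best-effort GPU name detection: a sketch of the idea, not easytyga's code."""
    # NVIDIA: nvidia-smi ships with the driver.
    if shutil.which("nvidia-smi"):
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
            capture_output=True, text=True,
        )
        if out.returncode == 0 and out.stdout.strip():
            return out.stdout.strip().splitlines()[0]  # e.g. "NVIDIA GeForce RTX 4090"
    # AMD: rocm-smi is installed with the ROCm stack.
    if shutil.which("rocm-smi"):
        return "AMD GPU (ROCm)"
    # Apple Silicon: unified memory, the SoC is the GPU.
    if platform.system() == "Darwin" and platform.machine() == "arm64":
        return "Apple Silicon"
    return None  # CPU-only, or a vendor we don't probe

print(detect_gpu())
```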

Works with anything

Ollama, vLLM, LocalAI, ComfyUI -- any local HTTP service. Not locked to one framework.

Auto-reconnect

Connection drops? easytyga reconnects automatically with exponential backoff. Set and forget.
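Exponential backoff means doubling the wait between attempts up to a cap, usually with jitter so many clients don't all reconnect at once. A minimal sketch of the pattern (not easytyga's actual loop):

```python
import random
import time

def reconnect_with_backoff(connect, max_delay=60.0, base=1.0):
    """Retry `connect` until it succeeds, doubling the wait after each failure.
    A sketch of the backoff pattern, not easytyga's implementation."""
    delay = base
    while True:
        try:
            return connect()
        except OSError:
            # Full jitter: sleep a random fraction of the current delay,
            # then double the delay, capped at max_delay.
            time.sleep(random.uniform(0, delay))
            delay = min(delay * 2, max_delay)
```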

Powered by gpusmarket.com

Earn money from your GPU

Your GPU sits idle most of the day. Add one flag and it earns money on gpusmarket.com -- the peer-to-peer GPU rental marketplace.

npx easytyga --list
  • Set your own price per hour
  • Get paid via Stripe
  • Keep 100% of rental income
  • Stop any time with Ctrl+C

  • RTX 4090: $0.40/hr · ~$9.60/day
  • RTX 3090: $0.20/hr · ~$4.80/day
  • M4 Max: $0.55/hr · ~$13.20/day
  • A100 80GB: $1.45/hr · ~$34.80/day
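The per-day figures are simply the hourly rate times 24 hours of continuous rental:

```python
# Daily estimates assume 24 hours of continuous rental at the listed rate.
hourly_rates = {
    "RTX 4090": 0.40,
    "RTX 3090": 0.20,
    "M4 Max": 0.55,
    "A100 80GB": 1.45,
}
daily = {gpu: round(rate * 24, 2) for gpu, rate in hourly_rates.items()}
for gpu, usd in daily.items():
    print(f"{gpu}: ~${usd}/day")
```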

Powered by agenticmemory.ai

Add persistent memory

Your AI forgets everything between sessions. Add one flag and conversations persist across sessions, devices, and channels.

npx easytyga --memory
  • Conversations stored automatically
  • Context recalled on next session
  • Works across devices
  • Sub-millisecond reads
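Conceptually, session memory is just a persisted conversation log that survives restarts. A toy sketch of the concept (file name and schema are made up; agenticmemory.ai's real storage layer is not shown here):

```python
import json
import time
from pathlib import Path

# Toy illustration of session memory: an append-only conversation log on disk.
# File name and schema are invented for this sketch.
MEMORY_FILE = Path("session_memory.json")

def remember(role, text):
    """Append one message to the persistent log."""
    log = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
    log.append({"ts": time.time(), "role": role, "text": text})
    MEMORY_FILE.write_text(json.dumps(log))

def recall():
    """Return the full log from previous sessions (empty if none)."""
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
```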

Without memory

"What model did we discuss?"

"I don't have context about previous conversations."

With --memory

"What model did we discuss?"

"We discussed Llama 3 70B for your RAG pipeline. You preferred it over Mistral for accuracy."

Get started in 10 seconds

npx easytyga