proxer
Reverse-tunnel CLI for exposing local HTTP, SSE, and WebSocket services through a persistent client tunnel.
On Windows, install with winget:
winget install --id=tinyrack.proxer -e
Proxer is an ngrok/Pinggy-style reverse tunnel for self-hosted development and private infrastructure.
Run one public Proxer server, then connect tunnel clients from private networks. Public HTTP requests, Server-Sent Events, and WebSocket upgrades are forwarded over a persistent client-initiated WebSocket control connection, so the local service does not need to accept inbound internet traffic.
Use npm on any platform, or Homebrew as a cross-platform option:
npm install -g @tinyrack/proxer
brew install tinyrack-net/tap/proxer
Prebuilt standalone executables are published for Linux, macOS, and Windows from the GitHub Releases page.
The OCI image is published to Docker Hub (tinyrack/proxer) and GHCR (ghcr.io/tinyrack-net/proxer); the Docker Hub name is the shorter of the two:
docker run --rm tinyrack/proxer --version
Run a public Proxer server in Docker and publish port 8080:
docker run --rm -p 8080:8080 tinyrack/proxer server --listen 0.0.0.0:8080 --domain your-server.example.com --token dev-token
For internet-facing deployments, terminate TLS with Caddy, Traefik, NGINX, or a load balancer and forward HTTP/WebSocket traffic to Proxer over loopback or a private network.
The dev-token value in examples is only for demos and local testing. For real deployments, use a long random token supplied through PROXER_TOKEN, Docker/Kubernetes secrets, or your platform's secret mechanism because CLI args can appear in shell history and process lists.
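The warning above can be put into practice by generating the token and exporting it, so it never appears as a command-line argument. A sketch, assuming openssl is available on the machine:

```shell
# Generate a 256-bit random token (64 hex characters) and export it so
# proxer reads it from PROXER_TOKEN instead of a --token flag.
export PROXER_TOKEN="$(openssl rand -hex 32)"
echo "${#PROXER_TOKEN}"   # → 64
```

With PROXER_TOKEN exported, the --token flag can be dropped from the proxer server and proxer http commands.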
On Linux, run the tunnel client with host networking when it needs to reach a service on the Docker host:
docker run --rm --network host tinyrack/proxer http 3000 --server ws://127.0.0.1:8080 --subdomain demo --token dev-token
On macOS and Windows Docker Desktop, use host.docker.internal instead of 127.0.0.1 when the container needs to reach a local service on the host.
Proxer can install a small markdown skill file for AI agents:
proxer skill install ~/.hermes/skills/proxer
proxer skill install ~/.hermes/skills/proxer --dry-run
proxer skill install ~/.hermes/skills/proxer --force
The command writes proxer.md into the target directory and has no network side effects.
Kubernetes liveness and readiness probes can use the built-in single-port health endpoints. These endpoints do not require a tunnel or token:
livenessProbe:
  httpGet:
    path: /__proxer__/health/live
    port: 8080
readinessProbe:
  httpGet:
    path: /__proxer__/health/ready
    port: 8080
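Outside Kubernetes, the same endpoints can back a Docker Compose healthcheck. A sketch only: the service name, probe intervals, and the availability of wget inside the image are assumptions, so adjust the probe command to whatever HTTP client the image actually ships.

```yaml
services:
  proxer:
    image: tinyrack/proxer
    command: server --listen 0.0.0.0:8080 --domain your-server.example.com --token dev-token
    ports:
      - "8080:8080"
    healthcheck:
      # Liveness endpoint requires no tunnel or token.
      test: ["CMD-SHELL", "wget -qO- http://127.0.0.1:8080/__proxer__/health/live || exit 1"]
      interval: 10s
      timeout: 3s
      retries: 3
```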
The public wss:// and https:// examples below assume TLS is terminated by a reverse proxy or load balancer in front of Proxer, forwarding to Proxer over loopback or a private network.
Start the public Proxer server:
proxer server --listen 0.0.0.0:8080 --domain your-server.example.com --token dev-token
Start a local app on the client machine:
python3 -m http.server 3000 --bind 127.0.0.1
Connect a tunnel client for the root domain route:
proxer http 3000 \
--server wss://your-server.example.com \
--token dev-token
Then open the public listener with the configured root domain Host:
curl https://your-server.example.com/
Alternatively, connect a tunnel client for a specific subdomain:
proxer http 3000 \
--server wss://your-server.example.com \
--subdomain demo \
--token dev-token
Then route by the matching subdomain Host:
curl https://demo.your-server.example.com/
Requests for unregistered subdomains return 404. Proxer does not route direct localhost/IP requests to a single connected client automatically.
CLI flags have highest precedence, then PROXER_ environment variables, then built-in defaults.
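The precedence order can be sketched as follows. This is illustrative only, not Proxer's actual configuration code; the function name is an assumption:

```javascript
// Illustrative sketch of flag > environment > default resolution.
function resolveOption(flagValue, envValue, defaultValue) {
  if (flagValue !== undefined) return flagValue; // 1. CLI flag wins
  if (envValue !== undefined) return envValue;   // 2. then PROXER_* variable
  return defaultValue;                           // 3. then built-in default
}

console.log(resolveOption("0.0.0.0:9000", "0.0.0.0:8080", "127.0.0.1:8080")); // → 0.0.0.0:9000
console.log(resolveOption(undefined, "0.0.0.0:8080", "127.0.0.1:8080"));      // → 0.0.0.0:8080
console.log(resolveOption(undefined, undefined, "127.0.0.1:8080"));           // → 127.0.0.1:8080
```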
proxer server supports:
| CLI flag | Environment variable | Default |
|---|---|---|
| --listen | PROXER_LISTEN | 127.0.0.1:8080 |
| --domain | PROXER_DOMAIN | unset |
| --token | PROXER_TOKEN | unset |
| --trusted-proxy | PROXER_TRUSTED_PROXIES | unset |
proxer http supports:
| CLI flag | Environment variable | Default |
|---|---|---|
| --server | PROXER_SERVER | ws://127.0.0.1:8080 |
| --subdomain | PROXER_SUBDOMAIN | unset |
| --token | PROXER_TOKEN | unset |
The local port for proxer http remains positional and does not have an environment variable.
The --trusted-proxy flag uses the singular name and can be repeated:
proxer server \
--listen 0.0.0.0:8080 \
--domain proxy.example.com \
--trusted-proxy loopback \
--trusted-proxy private \
--token "$PROXER_TOKEN"
PROXER_TRUSTED_PROXIES is plural and comma-separated:
PROXER_LISTEN=0.0.0.0:8080 \
PROXER_DOMAIN=proxy.example.com \
PROXER_TRUSTED_PROXIES=loopback,private,10.42.0.0/16 \
PROXER_TOKEN=secret \
proxer server
Supported trusted proxy values are loopback, private, IP literals, and CIDR ranges. Only trust reverse proxies you control. When --trusted-proxy or PROXER_TRUSTED_PROXIES is configured, that reverse proxy must overwrite or strip inbound X-Forwarded-* and X-Real-IP headers from external clients before forwarding to Proxer, because Proxer trusts those headers from configured TCP peers.
When Proxer runs behind Caddy, Traefik, NGINX, a load balancer, or another reverse proxy, terminate TLS there, forward to Proxer over loopback or a private network, and configure the TCP peer addresses that may supply X-Forwarded-* headers. Without trusted proxies configured, Proxer ignores forwarded host/protocol/client IP headers for routing.
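As one concrete sketch, an NGINX location block meeting these requirements might look like the following. TLS and server-level directives are omitted, and the upstream address 127.0.0.1:8080 is an assumption:

```nginx
location / {
    proxy_pass http://127.0.0.1:8080;
    proxy_http_version 1.1;
    # Overwrite forwarded headers so external clients cannot spoof them.
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header Host $host;
    # Allow WebSocket upgrades for the tunnel control path and public WS traffic.
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    # Keep long-lived tunnel and SSE connections open.
    proxy_read_timeout 1h;
}
```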
PROXER_LISTEN=0.0.0.0:8080 \
PROXER_DOMAIN=proxy.intranet.example.com \
PROXER_TRUSTED_PROXIES=private,loopback \
PROXER_TOKEN=... \
proxer server
Example client configuration through the proxy:
PROXER_SERVER=wss://proxy.intranet.example.com \
PROXER_SUBDOMAIN=demo \
PROXER_TOKEN=... \
proxer http 3000
Proxer uses one HTTP/WebSocket listener. Public traffic enters through that listener, while tunnel clients connect to a reserved control WebSocket path on the same port.
Fixed control path:
/__proxer__/control
You do not need to type this path on the client. Pass only the server base URL and Proxer appends the fixed control path internally. All /__proxer__/* paths are reserved for Proxer internal endpoints and are not proxied to applications.
HTTP requests outside /__proxer__/* -> public HTTP proxy
WebSocket upgrade /__proxer__/control -> tunnel control connection
WebSocket upgrade outside /__proxer__/* -> public WebSocket proxy
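The routing table above can be sketched as a small classifier. This is illustrative only, not Proxer's actual implementation:

```javascript
// Illustrative sketch: how a single listener can split traffic by path
// and upgrade type. Not Proxer's actual code.
const CONTROL_PATH = "/__proxer__/control";

function classify(path, isUpgrade) {
  if (path.startsWith("/__proxer__/")) {
    // Reserved paths are never proxied to applications.
    return isUpgrade && path === CONTROL_PATH ? "tunnel-control" : "internal";
  }
  return isUpgrade ? "public-websocket" : "public-http";
}

console.log(classify("/__proxer__/control", true)); // → tunnel-control
console.log(classify("/api/items", false));         // → public-http
console.log(classify("/echo", true));               // → public-websocket
```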
Single-port mode works with or without TLS. Use TLS for public deployments, typically terminated before Proxer:
http://host:8080 + ws://host:8080 -> no TLS
https://host + wss://host -> TLS
Start a local HTTP service:
python3 -m http.server 3000 --bind 127.0.0.1
Start the Proxer server in another terminal:
proxer server --listen 127.0.0.1:8080 --domain proxy.localhost --token dev-token
Start the tunnel client in a third terminal:
proxer http 3000 --server ws://127.0.0.1:8080 --subdomain demo --token dev-token
Call the public listener:
curl -H 'Host: demo.proxy.localhost' http://127.0.0.1:8080/
Start a local SSE service on port 3000:
node --input-type=module <<'EOF'
import http from "node:http";
http
.createServer((request, response) => {
if (request.url !== "/events") {
response.writeHead(404);
response.end("Not found\n");
return;
}
response.writeHead(200, {
"cache-control": "no-cache",
"content-type": "text/event-stream",
});
response.write("data: one\n\n");
setTimeout(() => {
response.write("data: two\n\n");
response.end();
}, 1000);
})
.listen(3000, "127.0.0.1", () => {
console.log("SSE server listening on http://127.0.0.1:3000/events");
});
EOF
Run the same proxer server and proxer http commands from the HTTP example, then stream events through the tunnel:
curl -N -H 'Host: demo.proxy.localhost' http://127.0.0.1:8080/events
Start a local WebSocket echo service on port 3000:
node --input-type=module <<'EOF'
import http from "node:http";
import { WebSocketServer } from "ws";
const server = http.createServer();
const wss = new WebSocketServer({ server });
wss.on("connection", (socket) => {
socket.on("message", (data, isBinary) => {
socket.send(data, { binary: isBinary });
});
});
server.listen(3000, "127.0.0.1", () => {
console.log("WebSocket echo listening on ws://127.0.0.1:3000");
});
EOF
Run the same proxer server and proxer http commands from the HTTP example, then connect through the public listener:
node --input-type=module <<'EOF'
import { WebSocket } from "ws";
const socket = new WebSocket("ws://127.0.0.1:8080/echo", {
headers: { host: "demo.proxy.localhost" },
});
socket.on("open", () => socket.send("hello"));
socket.on("message", (data) => {
console.log(data.toString());
socket.close();
});
EOF
Set up a development checkout and run the checks:
mise exec -- pnpm install
mise exec -- pnpm run build
mise exec -- pnpm run typecheck
mise exec -- pnpm run test
mise exec -- pnpm run format:check
Run the CLI from this repository:
mise exec -- pnpm --filter @tinyrack/proxer start --help
mise exec -- pnpm --filter @tinyrack/proxer start server --listen 127.0.0.1:8080 --token dev-token
mise exec -- pnpm --filter @tinyrack/proxer start http 3000 --server ws://127.0.0.1:8080 --subdomain demo --token dev-token
Build and smoke-test the standalone executable:
mise exec -- pnpm run pkg:build
mise exec -- pnpm run pkg:smoke -- --skip-build
The default build writes packages/cli/dist/pkg/proxer. Release builds produce Linux, macOS, and Windows artifacts.