$3 bonus: new accounts get $3 in credits to test real AI workload routing.

GPU orchestration platform for AI workloads.

Stop picking GPUs. Ship models.

Jungle Grid routes inference, training, and batch workloads across fragmented GPU infrastructure with explicit fit checks, health-aware placement, and automatic recovery when nodes go bad, so you are not left juggling providers, GPU families, and fallback paths by hand.

$ npx jungle-grid@latest login
MCP: npx @jungle-grid/mcp
Inference Service → Jungle Grid
Intent accepted: "Run this 7B AI workload with balanced routing and strict VRAM-fit checks before dispatch."

Jungle Grid → Best-Fit Node (A10G)
  • Intent classified: inference
  • VRAM fit confirmed before dispatch
  • Signals scored: price + latency + node health
  • Dispatching to healthy best-fit node

Who uses Jungle Grid

AI engineers

Run inference, training, and batch workloads

  • CLI / SDK
  • Submit jobs, get estimates, logs + results
  • One execution surface across providers
Get the CLI →
Apps and agents

Route workloads programmatically

  • API / MCP
  • Trigger from apps, agents, or pipelines
  • Keep provider logic out of product code
Explore the API →

What Jungle Grid actually does

Reliable execution across fragmented GPU capacity

Describe the workload, not the hardware

Describe the workload, model size, and optimization goal from the CLI, API, or MCP. Jungle Grid turns inference, training, and batch requests into placement decisions without making you guess GPU, storage, region, or provider combinations up front.

  • Pass workload type and model size instead of provider-specific hardware names
  • Choose cost, speed, or balanced routing only when you need to steer placement
  • Track job state from the CLI or portal instead of hopping across provider consoles
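As a sketch, the contract above can be thought of as a small, hardware-free intent object that the orchestrator turns into a placement decision. The field names below are illustrative assumptions, not the actual Jungle Grid schema:

```python
# Illustrative only: a workload description with no hardware choices in it.
# Field names are assumptions, not the real Jungle Grid request schema.
from dataclasses import dataclass, asdict


@dataclass
class WorkloadIntent:
    workload: str                    # "inference", "training", or "batch"
    model_size_b: float              # model size in billions of parameters
    optimize_for: str = "balanced"   # "cost", "speed", or "balanced"


# The caller states what to run, not where; the orchestrator picks
# GPU, region, and provider from this payload.
payload = asdict(WorkloadIntent(workload="inference", model_size_b=7))
```

Note that nothing in the payload names a GPU, a region, or a provider; steering only happens through `optimize_for`, and only when you need it.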

Score live capacity and recover cleanly

Placement decisions account for price, reliability, latency, queue depth, VRAM fit, and thermal state before dispatch, so bad placements fail clearly and degraded nodes do not silently ruin a run.

  • Reject requests that cannot fit current VRAM instead of sitting pending forever
  • Requeue affected jobs automatically when nodes go stale, unreachable, or unhealthy
  • Route across mixed GPU pools without hand-tuning every provider path
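The scoring and admission behavior described above can be sketched as a two-step placement: reject nodes that cannot fit the model's VRAM, then score the survivors on blended signals. Everything here (the VRAM rule of thumb, the weights, the node fields) is a hypothetical model of the behavior, not Jungle Grid's actual scheduler:

```python
# Hypothetical sketch of health-aware placement: an explicit VRAM-fit
# admission guard, then signal-blended scoring. Not the real scheduler.

def estimate_vram_gb(model_size_b):
    # Rough fp16 rule of thumb: ~2 GB per billion parameters, plus overhead.
    return model_size_b * 2 + 2

def score(node, weights):
    # Lower is better: cheap, low-latency, shallow queue, reliable, cool.
    return (weights["price"] * node["price"]
            + weights["latency"] * node["latency_ms"]
            + weights["queue"] * node["queue_depth"]
            - weights["reliability"] * node["reliability"]
            + weights["thermal"] * node["thermal_load"])

def place(model_size_b, nodes, weights):
    need = estimate_vram_gb(model_size_b)
    fits = [n for n in nodes if n["healthy"] and n["free_vram_gb"] >= need]
    if not fits:
        # Explicit rejection instead of a job that sits pending forever.
        raise RuntimeError(f"rejected: no healthy node fits {need:.0f} GB VRAM")
    return min(fits, key=lambda n: score(n, weights))

# Example pool: the 7B job needs ~16 GB, so only the A10G qualifies.
NODES = [
    {"name": "a10g", "free_vram_gb": 24, "healthy": True, "price": 1.1,
     "latency_ms": 40, "queue_depth": 2, "reliability": 0.99, "thermal_load": 0.3},
    {"name": "t4", "free_vram_gb": 12, "healthy": True, "price": 0.5,
     "latency_ms": 35, "queue_depth": 0, "reliability": 0.97, "thermal_load": 0.2},
]
WEIGHTS = {"price": 1.0, "latency": 0.01, "queue": 0.1,
           "reliability": 1.0, "thermal": 1.0}
chosen = place(7, NODES, WEIGHTS)
```

A 70B job against the same pool would raise the explicit rejection rather than queue indefinitely, which is the "fail clearly" behavior the section describes.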

Routing behavior

How Jungle Grid avoids bad placements and stalled jobs

VRAM Fit
Admission guard
Jobs that cannot fit current capacity are rejected explicitly instead of sitting pending forever.
Price + Latency
Placement signals
Scheduler scoring blends cost, queue depth, reliability, latency, and thermal state so you do not guess hardware manually or land on obviously degraded nodes.
Auto Requeue
Failure recovery
When a node drops or goes stale, affected jobs are requeued onto healthy capacity automatically instead of dying with the first bad placement.
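The recovery behavior above can be sketched as a reconcile pass: any running job whose node has stopped reporting healthy goes back to the queue instead of failing. The structures are illustrative, not Jungle Grid internals:

```python
# Illustrative sketch of auto-requeue: jobs on stale or unreachable nodes
# return to the queue for re-dispatch. Not Jungle Grid's actual internals.
from collections import deque


def reconcile(jobs, node_health):
    """Requeue running jobs whose node is no longer reporting healthy."""
    requeued = deque()
    for job in jobs:
        if job["state"] == "running" and not node_health.get(job["node"], False):
            job["state"] = "queued"   # back to the queue, not failed
            job["node"] = None        # placement is re-decided on dispatch
            requeued.append(job)
    return requeued
```

A node that simply vanishes from the health map is treated the same as one reporting unhealthy, which is what keeps a dropped node from taking its jobs down with it.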

Compute network

Absorb fragmented capacity, not just one cloud.

Jungle Grid dispatches across managed providers and independently operated nodes, absorbing fragmented capacity into one execution surface so failed provider paths do not turn into manual fallback work.

Managed providers

Consumer + data center

Largest GPU spot marketplace. Broad fallback capacity across regions and hardware classes.

Spot marketplace

Community-driven GPU rental. Useful spillover capacity when tighter clouds cannot place the workload.

ML-optimised cloud

Purpose-built ML cloud. A100 and H100 pools for heavier jobs that need predictable storage and networking.

Enterprise HPC

Kubernetes-native HPC cloud. Adds more controlled capacity when noisier pools are not a fit.

Sustainable compute

Low-carbon GPU cloud that broadens regional coverage and supply diversity.

+ Independent nodes
Decentralised · 247 nodes live

Independent nodes join the same execution pool.

Independent operators register nodes directly. The orchestrator validates hardware signals, measures latency, and folds those nodes into the same dispatch pool automatically. New capacity shows up without giving users another provider workflow to manage.
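The admission step described above can be sketched as a simple gate: a node must report complete hardware signals and acceptable dispatch latency before it joins the shared pool. Thresholds and field names here are assumptions for illustration:

```python
# Hypothetical admission check for an independent node. Required fields
# and the latency threshold are assumptions, not Jungle Grid's rules.
def admit_node(node, pool, max_latency_ms=150):
    required = {"gpu_model", "vram_gb", "latency_ms"}
    if not required <= node.keys():
        return False              # incomplete hardware signals: reject
    if node["latency_ms"] > max_latency_ms:
        return False              # too far from the dispatch path
    pool.append(node)             # joins the same pool as managed providers
    return True
```

An admitted node lands in the same dispatch pool as managed-provider capacity, so users never see it as a separate provider workflow.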

247 Nodes online
18 Countries
34 GPU models
112ms Avg dispatch
Register your node
jungle · node setup
$ jungle node register --dispatch-url http://0.0.0.0:8090 --location eu-west
→ Measuring latency… 42ms
→ Validating GPU signals… ok
→ Payout account linked… ok
✓ Node registered: rtx-4090
$ jungle node start --daemon
→ Installing node-agent… ok
✓ Daemon running · pid 14822

Changelog

Recent updates

View all changes →
Apr 12, 2026
CLI v1.2 — Device auth stability
Improved token refresh handling and error recovery for long-running CLI sessions.
Apr 8, 2026
Thermal-aware rerouting now GA
Jobs are automatically rerouted when a node hits thermal thresholds, with zero manual intervention.
Apr 5, 2026
Jungle Grid v1.0 — Public launch
Inference orchestration platform is generally available. Submit your first routed request in under a minute.
Mar 28, 2026
Email OTP auth beta
Passwordless sign-up via one-time email codes. No passwords, no OAuth complexity.

Community

What developers are running without provider chaos

Pointed our 7B chat endpoint at Jungle Grid and stopped guessing providers. If a node cannot take the job, we know immediately instead of finding out twenty minutes later.

P
Priya M.
ML Engineer · Series A startup
$ jungle submit --workload inference --model-size 7 --name chat-api
→ VRAM fit confirmed · healthy node selected · running

We stopped jumping between provider dashboards to understand failed runs. One job view, one set of logs, one status model.

D
Daniel K.
Platform Engineer · DevTools
$ jungle submit --workload embeddings --model-size 1.5 --optimize-for cost
→ Healthy consumer node matched · $0.003 / run · running

Thermal rerouting caught a degraded node mid-run before the batch was ruined.

M
Marcus T.
Senior SWE · AI infra team
$ jungle jobs
→ 3 running · 1 requeued · 12 completed

We stopped manually trying GPU, storage, and region combinations. We describe the job and let Jungle Grid find live capacity.

S
Sara K.
Infra Lead · B2B SaaS
$ jungle submit --workload inference --model-size 70 --optimize-for speed
→ No 48GB fit · queued 8.2s · running on H100


Large image generation queue, no GPU config, no region picking, no manual fallback when one market dried up.

L
Lena R.
Creative Dev · Generative AI studio
$ jungle node register --dispatch-url http://0.0.0.0:8090 --location eu-west
→ Node registered · latency 42ms · payout linked

Node health signals kept us off overloaded machines during inference spikes. That cut down the weird intermittent failures.

Y
Yusuf A.
Research Engineer · NLP lab
$ jungle submit --workload transcription --model-size 0.3 --name whisper-batch
→ Dispatched · 67ms · fit confirmed

10k image encode job finished in under 8 minutes, and we never had to babysit the cross-node placement.

J
James O.
Backend Engineer · Media platform
$ jungle node status
→ daemon: running · 2 active jobs · uptime 14h 32m

When one provider path dried up, the workload just moved. We do not keep a manual fallback playbook anymore.

A
Aisha B.
AI Platform Lead · Fortune 500
$ jungle status job-7f3a2b
→ running · 04:21 elapsed · node healthy


FAQ

Frequently asked

Describe the workload. Let Jungle Grid route the execution.

New accounts get $3 in credits to test live routing on real capacity. Submit an inference, training, or batch workload, see whether it fits, and let Jungle Grid handle placement and recovery without you juggling providers manually.

Create account and claim $3
Free $3 credits for new accounts · Explicit fit checks before dispatch · CLI, API, MCP, and portal entrypoints