Answer hub
Answers to the questions people ask before they trust the route
Use this page when you need concise answers about Jungle Grid, AI inference, beginner compute planning, and the shift from experimentation to workload execution.
It is most useful when the question is broad and the next page is not obvious yet: start with the short answer here, then use it to choose the deeper page that matches your actual workload.
How to use this page
Start with the shortest clear answer, then branch into the right guide
This FAQ is designed for the questions people ask before they know which guide, model page, or pricing surface they actually need. It is an orientation layer for both human readers and answer engines.
If the question turns into a workload-planning problem, move into the beginner guides. If it turns into a concrete route or spend problem, move into model pages or pricing.
FAQ
Frequently asked questions
What is Jungle Grid in plain English?
Jungle Grid is a workload execution platform for AI jobs. Instead of making you pick raw GPU infrastructure by hand for every run, it routes each workload across distributed capacity based on fit, cost, latency, and reliability.
Do I need to pick a GPU manually to use Jungle Grid?
That is not the main workflow. Jungle Grid is built around workload intent first, with routed execution underneath, so developers do not have to manage fragmented GPU choices directly for every job. The sketch below shows the general shape of that kind of routing decision.
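Jungle Grid's actual routing logic is not documented on this page, so the following is only a generic illustration of what "routing on fit, cost, latency, and reliability" can mean in code. Every field name and weight here is invented for the example; nothing below is the product's API.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    fits_workload: bool    # enough memory / right accelerator class
    cost_per_hour: float   # USD
    p95_latency_ms: float
    reliability: float     # observed success rate, 0.0 to 1.0

def pick_route(candidates: list[Candidate]) -> Candidate:
    # Filter on hard fit first, then trade off cost, latency, and
    # reliability with illustrative (arbitrary) weights.
    viable = [c for c in candidates if c.fits_workload]
    if not viable:
        raise ValueError("no candidate fits this workload")
    return min(
        viable,
        key=lambda c: (
            c.cost_per_hour
            + 0.01 * c.p95_latency_ms
            + 100.0 * (1.0 - c.reliability)
        ),
    )
```

The point of the toy scoring function is that the developer's interface stays at the level of workload requirements, while the choice of specific capacity happens underneath.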
Do beginner developers need their own GPU to start building with AI?
Usually no. Most beginners can start with cloud or routed capacity while they learn the workload and validate whether the project needs dedicated hardware later.
What is AI inference?
AI inference is when a trained model takes new input and produces an output. If your app sends prompts, documents, or requests to a model and gets results back, that is usually inference.
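From the application's point of view, inference is usually just a request and a response. Here is a minimal sketch assuming a generic HTTP model endpoint; the URL and payload shape are placeholders, not any real provider's API.

```python
import requests

# Placeholder endpoint and payload -- substitute your provider's real API.
# The pattern is the point: new input goes in, a model output comes back,
# and no model weights are updated.
response = requests.post(
    "https://api.example.com/v1/generate",
    json={"prompt": "Summarize this support ticket: ..."},
    timeout=30,
)
print(response.json())
```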
What is the difference between inference and training?
Training creates or updates the model. Inference uses the trained model to answer new requests. Most AI product features start as inference workloads, not training workloads.
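One way to see the difference in code, using a deliberately tiny PyTorch model as a stand-in: the first half updates weights from labeled examples (training), the second uses frozen weights to answer a new request (inference).

```python
import torch
import torch.nn as nn

# Training: show the model labeled examples and update its weights.
model = nn.Linear(4, 1)  # tiny stand-in for a real model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

x, y = torch.randn(32, 4), torch.randn(32, 1)  # toy labeled batch
optimizer.zero_grad()
loss = loss_fn(model(x), y)  # forward pass on training data
loss.backward()              # compute gradients
optimizer.step()             # update weights: this step is training

# Inference: weights stay frozen; new input produces an output.
model.eval()
with torch.no_grad():  # no gradients needed to serve requests
    prediction = model(torch.randn(1, 4))  # this call is inference
```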
What GPU do I need for my AI app?
The answer depends on the workload. Model size, precision, concurrency, and latency targets matter more than shopping for hardware by brand. The right route is the smallest healthy one that fits the app's needs; the back-of-envelope sketch below shows why size and precision dominate.
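A rough weights-only estimate is parameter count times bytes per parameter. Real serving needs more memory once KV cache, activations, and concurrency are added, but the arithmetic already explains most of the GPU question.

```python
def weights_vram_gb(params_billion: float, bytes_per_param: float) -> float:
    # Weights only: ignores KV cache, activations, and runtime overhead,
    # all of which add to the real requirement.
    return params_billion * 1e9 * bytes_per_param / (1024 ** 3)

for precision, nbytes in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"7B model @ {precision}: ~{weights_vram_gb(7, nbytes):.1f} GB")
# fp16 ~13.0 GB, int8 ~6.5 GB, int4 ~3.3 GB -- then concurrency and
# context length push the serving requirement higher.
```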
When should a team stop experimenting and use a proper execution layer?
Usually when the workload becomes repeatable and operational quality starts to matter: stable latency, cost control, failure recovery, and less manual GPU decision-making.
Why would someone use Jungle Grid instead of piecing together provider workflows?
Because manual provider workflows do not scale cleanly once workloads, models, or failure cases multiply. Jungle Grid is designed to keep the workload interface stable while routing decisions change underneath it.
Related pages
Best next pages after the FAQ
Use the guide that matches the question you are really trying to answer.