InsForge integration
Pair InsForge backend state with Jungle Grid execution
InsForge handles backend concerns such as state, storage, auth, and app data. Jungle Grid handles remote AI workload execution when your product needs compute-heavy jobs without turning the backend into a GPU scheduler.
Unofficial demo and reference integration. This page describes a practical split between InsForge as a backend layer and Jungle Grid as a remote execution layer.
InsForge
Backend, state, storage, auth, and application data services.
Remote execution
Remote workload executor behind the product backend.
AI products that need a backend state layer plus tracked remote execution for heavier workloads.
Open InsForge docs
InsForge covers app data, auth, storage, and backend patterns designed for AI-assisted development.
Jungle Grid runs the job and returns lifecycle updates the app can store or display.
Good when an app needs both backend state and a clean remote execution layer.
How it works
How InsForge and Jungle Grid fit together
InsForge is useful for the product layer around an AI workload: user identity, application state, storage, file upload handling, and result metadata. Those concerns are different from scheduling and executing remote AI jobs.
A practical split is to store the workload record in InsForge, submit the actual job to Jungle Grid, then persist status transitions and result metadata back into InsForge as the job progresses. That keeps the backend product model clean while Jungle Grid handles the workload runtime.
- Use InsForge for backend state, auth, storage, and app records.
- Use Jungle Grid for actual workload placement and remote execution.
- Store job IDs, status changes, logs metadata, and result pointers back in your backend layer.
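The third point above can be sketched as a small state-merge helper. This is a minimal sketch under assumptions: the record shape and field names (jobId, status, resultUrl, updatedAt) are illustrative stand-ins for whatever your InsForge-backed table actually stores, not an official schema from either product.

```typescript
// Hypothetical shape of the job record the backend state layer would hold.
// Field names are illustrative, not an official InsForge or Jungle Grid schema.
type JobStatus = "queued" | "running" | "succeeded" | "failed";

interface JobRecord {
  jobId: string;        // ID returned by Jungle Grid on submission
  status: JobStatus;
  resultUrl?: string;   // pointer to result artifacts, not the payload itself
  updatedAt: number;    // epoch millis of the last recorded transition
}

// Merge a status update from the execution layer into the stored record.
// Updates older than the stored timestamp are ignored, so out-of-order
// polls or webhook retries cannot roll the record backwards.
function applyJobUpdate(
  record: JobRecord,
  update: { status: JobStatus; resultUrl?: string; at: number }
): JobRecord {
  if (update.at < record.updatedAt) return record;
  return {
    ...record,
    status: update.status,
    resultUrl: update.resultUrl ?? record.resultUrl,
    updatedAt: update.at,
  };
}
```

Keeping this merge pure makes it easy to test and to call from either a polling loop or a webhook handler before writing the record back to the backend.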
InsForge stays in charge of orchestration
InsForge handles backend and app logic. Jungle Grid handles remote AI workload execution.
Keep workflow state, app behavior, and orchestration inside InsForge. Use Jungle Grid when the job itself should run remotely with tracked status, logs, and results.
Architecture flow
Where Jungle Grid fits in the stack
The app backend owns state and user context. Jungle Grid owns the execution lifecycle for the AI job itself.
User / Agent / Workflow
A user action or system event starts the process that needs remote AI execution.
Integration layer (InsForge)
InsForge validates input, manages state, and decides when a remote job should run.
Jungle Grid API
The workflow layer submits a workload request and receives a tracked job identifier back.
Remote AI workload execution
Jungle Grid handles placement, execution, provider capacity, and lifecycle control for the job.
Status + logs + result
The integration layer polls state, reads logs, and collects result payloads or failure reasons.
App / Agent / Workflow
The original system turns job state and outputs into the next user-facing or automated step.
Example workflow
Keep core state in InsForge, send heavy execution to Jungle Grid
A backend-driven product can keep its core state in InsForge while sending heavier execution to Jungle Grid and syncing status back into the app.
A user submits a workload request inside an application backed by InsForge.
InsForge stores the initial workload record, input references, and user context.
The app or an edge function submits the actual job to Jungle Grid.
Jungle Grid executes the job and exposes status, logs, and terminal state.
InsForge stores status transitions and result metadata so the app can render the final outcome.
Code example
Submit a workload and track the job
This example represents an app route or backend function that receives a product-level request, submits a Jungle Grid job, and then stores the returned ID in your backend state layer.
The important split stays the same throughout: InsForge decides when work should run, and Jungle Grid executes the workload remotely.
In this demo, /api/junglegrid/jobs is an app-side route or helper endpoint that forwards work to Jungle Grid. It is shown as integration glue code, not as a claim about a fixed official URL path.
The kind of job Jungle Grid should execute, such as inference, training, fine-tuning, or a batch run.
A rough sizing hint so Jungle Grid can match the workload to healthy capacity without manual GPU selection.
The container image Jungle Grid should launch for the workload.
The command executed inside the container when the job starts.
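The four fields described above can be sketched as a request builder plus a submit call. This is demo glue under assumptions: the field names (jobType, size, image, command), the example image, and the /api/junglegrid/jobs route are illustrative, not a fixed official Jungle Grid schema or URL.

```typescript
// Request body for the app-side /api/junglegrid/jobs route. Field names
// mirror the fields described above but are assumptions for this demo,
// not an official Jungle Grid API schema.
interface JobRequest {
  jobType: "inference" | "training" | "fine-tuning" | "batch";
  size: "small" | "medium" | "large"; // rough sizing hint, no manual GPU pick
  image: string;                      // container image to launch
  command: string[];                  // command run inside the container
}

function buildJobRequest(
  jobType: JobRequest["jobType"],
  size: JobRequest["size"],
  image: string,
  command: string[]
): JobRequest {
  if (command.length === 0) {
    throw new Error("a job needs a command to run at startup");
  }
  return { jobType, size, image, command };
}

// App-side submission: forward the job and return the tracked ID so the
// caller can store it in the InsForge-backed job record.
async function submitJob(req: JobRequest): Promise<string> {
  const res = await fetch("/api/junglegrid/jobs", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  if (!res.ok) throw new Error(`submission failed: ${res.status}`);
  const { jobId } = await res.json();
  return jobId;
}
```

A typical call would look like `submitJob(buildJobRequest("inference", "small", "ghcr.io/example/model:latest", ["python", "run.py"]))`, with the returned ID persisted before any polling starts.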
Use cases
Good fits for this pattern
This pattern is strongest when developers want to keep workflow logic inside InsForge and avoid manually choosing GPUs, providers, or regions for each job.
AI SaaS products that need auth, state, uploads, and remote model execution in one product flow.
Document or media workflows where the backend stores asset metadata and Jungle Grid runs the compute-heavy transform job.
Applications that need reliable job tracking in product state without turning the backend into a scheduler.
Copy for LLM
Prompt an LLM with the right layer split
This prompt keeps InsForge in the orchestration role and Jungle Grid in the execution role so generated demos follow the intended architecture.
Prompt template for InsForge demos
Use this prompt when you want an LLM to scaffold a reference integration without collapsing orchestration into the execution layer.
Build a demo that uses InsForge for backend state and application logic and Jungle Grid for remote AI workload execution. The demo should submit a workload to Jungle Grid, poll job status, fetch logs, display results, and show failure/retry states. Use Jungle Grid as the execution layer, not as a replacement for InsForge.
FAQ
Frequently asked
Is this an official InsForge integration?
No. This page is an unofficial reference pattern that shows how a backend/state layer such as InsForge can work alongside Jungle Grid for remote AI workload execution.
What should stay in InsForge versus Jungle Grid?
Keep auth, data records, storage, user state, and result metadata in InsForge. Keep placement, remote execution, logs, and workload lifecycle handling in Jungle Grid.
Why split backend state from execution infrastructure?
Because product data and execution scheduling solve different problems. The split keeps your backend simpler and lets Jungle Grid focus on workload fit, capacity, and recovery.
Next step
Keep backend state and workload execution separate
Use InsForge for the app-facing backend layer and Jungle Grid for remote AI job execution so each layer stays focused on the job it is best at.