Flowise integration

Add remote execution to Flowise workflows

Flowise handles visual AI workflows, agents, and chatflows. Jungle Grid handles remote AI workload execution when a flow needs compute-heavy jobs that should run outside the visual builder runtime.

Tool role
Flowise: visual builder for AI agents, chatflows, and workflow orchestration.

Jungle Grid role
Remote execution backend for compute-heavy workload steps.

Best use case
Visual AI workflows that need tracked remote execution for bigger jobs.

Visual orchestration (Flowise role)
Flowise gives you a visual layer for assistants, chatflows, and agent workflows.

Remote execution (Jungle Grid role)
Jungle Grid runs the heavy job and returns job lifecycle data to the flow.

API node handoff (best fit)
Useful when a Flowise workflow should submit a tracked remote job through HTTP or a custom tool.

How it works

How Flowise and Jungle Grid work together

Flowise gives teams a visual way to build chatflows, assistants, and more complex agent workflows. That visual layer is useful for orchestration, but it is usually not the right place to own remote GPU scheduling and workload recovery.

A cleaner pattern is to let Flowise call Jungle Grid through a custom tool or API node. Flowise stays responsible for the user-facing workflow while Jungle Grid executes the heavier job remotely and returns status, logs, and results.

  • Use Flowise for the visual workflow and agent-building experience.
  • Submit remote jobs to Jungle Grid through an HTTP or custom tool node.
  • Surface job progress and outputs back inside the Flowise workflow or UI.
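A Flowise custom tool can wrap that handoff in a few lines. The sketch below is demo glue only: the `/api/junglegrid/jobs` route and the response shape are assumptions carried over from the example later on this page, not an official Jungle Grid API.

```javascript
// Minimal helper a Flowise custom tool could call to hand a job to Jungle Grid.
// The route is an assumed app-side glue endpoint, not a fixed official path.
async function submitRemoteJob(payload, fetchImpl = fetch) {
  const response = await fetchImpl("/api/junglegrid/jobs", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload)
  });
  if (!response.ok) {
    throw new Error("Job submission failed: " + response.status);
  }
  // The flow keeps only the tracked job record; Jungle Grid owns the runtime.
  return response.json();
}
```

Passing `fetchImpl` in keeps the helper testable and lets the same code run in a custom tool node or a small backend service.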
Quick answer

Flowise stays in charge of orchestration

Flowise handles visual workflow logic. Jungle Grid handles remote AI workload execution.

Keep workflow state, app behavior, and orchestration inside the integration tool. Use Jungle Grid when the job itself should run remotely with tracked status, logs, and results.

Architecture flow

Where Jungle Grid fits in the stack

Flowise owns the visual workflow. Jungle Grid owns the job runtime behind heavier execution steps.

1. User / Agent / Workflow: a user action or system event starts the process that needs remote AI execution.
2. Integration tool: the orchestration layer validates input, manages state, and decides when a remote job should run.
3. Jungle Grid API: the workflow layer submits a workload request and receives a tracked job identifier back.
4. Remote AI workload execution: Jungle Grid handles placement, execution, provider capacity, and lifecycle control for the job.
5. Status + logs + result: the integration layer polls state, reads logs, and collects result payloads or failure reasons.
6. App / Agent / Workflow: the original system turns job state and outputs into the next user-facing or automated step.
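Steps 2 through 6 can be sketched as one function the orchestration layer calls. The routes, status names ("running", "succeeded", "failed"), and polling interval below are demo assumptions, not a documented Jungle Grid contract.

```javascript
// Sketch of the architecture flow: submit a workload, then poll until
// Jungle Grid reports a terminal state and hand the result back to the flow.
async function runRemoteStep(payload, { fetchImpl = fetch, pollMs = 2000 } = {}) {
  // Step 3: submit the workload and receive a tracked job identifier.
  const submit = await fetchImpl("/api/junglegrid/jobs", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload)
  });
  const job = await submit.json();

  // Step 5: poll state until the job succeeds or fails.
  for (;;) {
    const res = await fetchImpl("/api/junglegrid/jobs/" + job.id);
    const state = await res.json();
    if (state.status === "succeeded") return state; // Step 6: result back to the flow
    if (state.status === "failed") throw new Error(state.reason || "job failed");
    await new Promise((resolve) => setTimeout(resolve, pollMs));
  }
}
```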

Example workflow

A Flowise chatflow can treat Jungle Grid like an external execution backend whenever a step becomes too heavy for inline processing.

1. A Flowise chatflow or agentflow receives a request from the user.
2. A custom tool or API node decides a remote AI workload should be launched.
3. The node submits the job to Jungle Grid.
4. Flowise displays status, log summaries, or progress while the workload runs remotely.
5. The workflow reads the final result and continues the visual flow once Jungle Grid completes the job.

Code example

Submit a workload and track the job

This example fits an API node, custom tool, or lightweight backend helper that a Flowise workflow can call when it needs remote execution.

The important split is the same on every page: the integration tool decides when work should run, and Jungle Grid executes the workload remotely.

Small code example: demo glue code that forwards work to Jungle Grid and then checks the returned job state.

// Submit a workload through the app-side helper route
const response = await fetch("/api/junglegrid/jobs", {
  method: "POST",
  headers: {
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    workload: "inference",
    model_size: 7,
    image: "your-image:latest",
    command: "python run.py"
  })
});

const job = await response.json();

// Poll the tracked job by the ID returned from submission
const statusResponse = await fetch("/api/junglegrid/jobs/" + job.id);
const status = await statusResponse.json();

In this demo, /api/junglegrid/jobs is an app-side route or helper endpoint that forwards work to Jungle Grid. It is shown as integration glue code, not as a claim about a fixed official URL path.
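One way to implement that glue route is a small forwarder that proxies the request body to wherever your Jungle Grid deployment lives. The `JUNGLEGRID_URL` and `JUNGLEGRID_TOKEN` environment variable names and the upstream `/jobs` path below are assumptions for this sketch.

```javascript
// Hypothetical app-side handler behind /api/junglegrid/jobs: forwards the
// submission to an assumed Jungle Grid base URL with a bearer token attached,
// so credentials never reach the Flowise workflow itself.
async function forwardJobSubmission(body, fetchImpl = fetch) {
  const base = process.env.JUNGLEGRID_URL;
  const upstream = await fetchImpl(base + "/jobs", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "Bearer " + process.env.JUNGLEGRID_TOKEN
    },
    body: JSON.stringify(body)
  });
  // Pass the upstream status and job record straight back to the caller.
  return { status: upstream.status, body: await upstream.json() };
}
```

Keeping the token in this server-side layer, rather than in a Flowise node configuration, is the main reason to add the extra hop.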

workload

The kind of job Jungle Grid should execute, such as inference, training, fine-tuning, or a batch run.

model_size

A rough sizing hint so Jungle Grid can match the workload to healthy capacity without manual GPU selection.

image

The container image Jungle Grid should launch for the workload.

command

The command executed inside the container when the job starts.
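A few lines of validation on those fields let a misconfigured Flowise node fail fast before anything is submitted. The accepted `workload` values below are taken from the examples above and are illustrative, not an official enumeration.

```javascript
// Light validation for the demo payload fields, returning a list of problems
// (empty when the payload looks submittable). Field rules are assumptions.
function validateJobPayload(payload) {
  const errors = [];
  const kinds = ["inference", "training", "fine-tuning", "batch"];
  if (!kinds.includes(payload.workload)) {
    errors.push("workload must be one of: " + kinds.join(", "));
  }
  if (typeof payload.model_size !== "number" || payload.model_size <= 0) {
    errors.push("model_size must be a positive number");
  }
  if (!payload.image) errors.push("image is required");
  if (!payload.command) errors.push("command is required");
  return errors;
}
```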

Use cases

Good fits for this pattern

This pattern is strongest when developers want to keep workflow logic inside the integration tool and avoid manually choosing GPUs, providers, or regions for each job.

Use case

Visual AI workflows that need to trigger tracked remote jobs instead of keeping all work inside the builder runtime.

Use case

Assistant or chatflow experiences that should expose logs, status, and result state from a remote execution layer.

Use case

Prototype and production flows where developers want a clear separation between orchestration UX and compute execution.

Copy for LLM

Prompt an LLM with the right layer split

This prompt keeps Flowise in the orchestration role and Jungle Grid in the execution role so generated demos follow the intended architecture.

Prompt template for Flowise demos

Use this prompt when you want an LLM to scaffold a reference integration without collapsing orchestration into the execution layer.

Build a demo that uses Flowise for visual AI workflow orchestration and Jungle Grid for remote AI workload execution. The demo should submit a workload to Jungle Grid, poll job status, fetch logs, display results, and show failure/retry states. Use Jungle Grid as the execution layer, not as a replacement for Flowise.

FAQ

Frequently asked

Do I need a custom Flowise node to use Jungle Grid?

Not necessarily. An HTTP or API node can be enough for many cases, though a custom tool node may give a better developer experience if Jungle Grid becomes a common execution target in your flows.

Should Flowise run the heavy AI job directly?

Usually no. Flowise is a stronger fit for visual orchestration and tool logic. Jungle Grid should handle heavier remote execution when the job needs tracked runtime behavior.

Can a Flowise UI show status and logs from Jungle Grid?

Yes. The flow can store the returned job ID, poll Jungle Grid, and surface status, log summaries, and final results back into the visual workflow or UI.
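That FAQ answer can be sketched as one polling helper the UI calls on an interval. The `/logs` route and the `lines`/`result` response fields are assumed demo shapes, not a documented Jungle Grid response format.

```javascript
// Turn raw job state into a small view object a Flowise UI can render:
// current status, a short log tail, and the result once the job succeeds.
async function pollJobForUi(jobId, fetchImpl = fetch) {
  const status = await (await fetchImpl("/api/junglegrid/jobs/" + jobId)).json();
  const logs = await (await fetchImpl("/api/junglegrid/jobs/" + jobId + "/logs")).json();
  return {
    status: status.status,
    // Keep only the last few log lines so the chat UI stays readable.
    logTail: (logs.lines || []).slice(-5).join("\n"),
    result: status.status === "succeeded" ? status.result : null
  };
}
```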