AI Completion

Run LLM completions from multiple providers within a flow. Supports tool calling, streaming, and sandboxed execution.

Configuration

- ai_completion:
    name: analyze
    provider: google
    model: gemini-2.5-flash-lite
    credentials_path: /path/to/credentials.json
    prompt: "Analyze this data: {{event.data}}"

Fields

| Field | Type | Default | Description |
|---|---|---|---|
| name | string | required | Task name. |
| provider | string | required | LLM provider (google, anthropic, openai). |
| model | string | required | Model identifier. |
| credentials_path | string | required | Path to provider credentials. |
| prompt | string/resource | required | Prompt template. Supports templating and resource files. |
| system_prompt | string/resource | — | System prompt. |
| max_tokens | int | — | Maximum tokens in the response. |
| temperature | float | — | Sampling temperature. |
| tools | list | — | Tool definitions for function calling. |
| sandbox | object | — | Sandbox configuration for tool execution. |
| depends_on | list | — | Upstream task names. |
| retry | object | — | Retry configuration. |
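Putting the optional fields together, a fuller task definition might look like this. Values are illustrative only; it uses the fields documented above, and the `retry` sub-keys are omitted because their schema is not documented here:

```yaml
- ai_completion:
    name: summarize
    provider: google
    model: gemini-2.5-flash-lite
    credentials_path: /path/to/credentials.json
    system_prompt: "You are a concise technical summarizer."
    prompt: "Summarize this data: {{event.data}}"
    max_tokens: 1024        # cap the response length
    temperature: 0.2        # low temperature for deterministic output
    depends_on:
      - fetch_data          # runs after the upstream task named fetch_data
```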

Sandbox

Tool execution can optionally be sandboxed via nsjail. Rhai scripts do not need sandboxing; the Rhai engine is safe by design.

- ai_completion:
    name: agent
    provider: google
    model: gemini-2.5-flash-lite
    prompt: "{{event.data}}"
    sandbox:
      memory_limit_mb: 512
      time_limit_seconds: 30
      max_pids: 10
      allow_network: false
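The `tools` field is documented above but not shown in any example. A sketch combining tools with the sandbox, assuming tool definitions follow a common function-calling shape with `name`, `description`, and JSON-Schema `parameters` (these sub-field names are assumptions, not confirmed by this page):

```yaml
- ai_completion:
    name: agent
    provider: google
    model: gemini-2.5-flash-lite
    prompt: "{{event.data}}"
    tools:
      # Assumed tool schema: name/description/parameters.
      - name: lookup_order
        description: Fetch an order record by id.
        parameters:
          type: object
          properties:
            order_id:
              type: string
          required: [order_id]
    sandbox:
      allow_network: false   # tool runs with networking disabled
```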