AI Agents Integration in FlowMattic

FlowMattic’s AI Agent is a powerful automation component that integrates multiple AI providers with your workflows. It allows AI models to make decisions, execute actions through tool calling, maintain conversation memory, and return structured responses — all within your automated workflows.


How AI Agents Work

The AI Agent operates in an iterative loop:

  1. You provide a system prompt (instructions) and a user message (the task or question).
  2. The AI processes the input along with any available tools (FlowMattic actions the AI can execute).
  3. If the AI decides to use a tool, it calls it with the required parameters.
  4. The tool executes and the result is sent back to the AI.
  5. The AI can call more tools or provide a final response.
  6. This loop repeats for up to 10 iterations until the AI produces a final answer.
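The steps above can be sketched in Python; `call_model` and `run_tool` are hypothetical stand-ins for the provider API call and FlowMattic's action execution:

```python
MAX_ITERATIONS = 10  # FlowMattic's per-execution cap

def run_agent(system_prompt, user_message, tools, call_model, run_tool):
    """Minimal sketch of the AI Agent's iterative tool-calling loop."""
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]
    for _ in range(MAX_ITERATIONS):
        reply = call_model(messages, tools)  # provider decides: tool call or final answer
        if reply.get("tool_call"):
            call = reply["tool_call"]
            result = run_tool(call["name"], call["arguments"])  # execute the action
            messages.append({"role": "assistant", "content": None, "tool_call": call})
            messages.append({"role": "tool", "name": call["name"], "content": result})
        else:
            return reply["content"]  # final text response ends the loop
    return None  # max iterations reached without a final answer
```

If the cap of 10 iterations is hit before the model produces a final answer, the agent stops without one.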

[Screenshot: AI Agents workflow builder screen]


Supported AI Providers

FlowMattic supports 10 AI providers out of the box:

| Provider | Models | Tool Calling | Notes |
| --- | --- | --- | --- |
| OpenAI | GPT-4o, GPT-4 Turbo, o3-pro, GPT-3.5 Turbo | Native | Supports Responses API for reasoning models |
| Anthropic | Claude Opus 4, Claude Sonnet 4, Claude Haiku | Native | Full tool calling support |
| Google Gemini | Gemini 2.5 Pro, Gemini 2.5 Flash, Gemini 2.0 Flash | Native | Uses Google AI Studio API keys |
| Groq | Llama 4 Maverick, Llama 4 Scout, Qwen, DeepSeek | Native | Ultra-fast inference |
| Mistral AI | Mistral Large, Medium, Small | Native | European AI provider |
| xAI (Grok) | Grok 3, Grok 3 Fast, Grok 3 Mini | Native | Grok models by xAI |
| DeepSeek | DeepSeek-V3, DeepSeek-R1 | Native | Reasoning model support |
| OpenRouter | 100+ models from multiple providers | Native | Unified API gateway |
| Straico | Multi-provider aggregator | Native | Access OpenAI, Anthropic, and Google models |
| Perplexity | Sonar, Sonar Pro, Sonar Reasoning Pro | Prompt-based | Web-grounded responses with citations |

Setting Up a Connection

Each AI provider requires an API key configured in FlowMattic’s connection system.

Getting API Keys

| Provider | Where to Get API Key |
| --- | --- |
| OpenAI | platform.openai.com/api-keys |
| Anthropic | console.anthropic.com |
| Google Gemini | aistudio.google.com/app/apikey |
| Groq | console.groq.com |
| Mistral AI | console.mistral.ai |
| xAI (Grok) | x.ai |
| DeepSeek | platform.deepseek.com |
| OpenRouter | openrouter.ai |
| Straico | straico.com |
| Perplexity | perplexity.ai |

Creating a Connection

  1. Go to FlowMattic > Connects in your WordPress dashboard.
  2. Click Add New Connection.
  3. Search for the AI provider name (e.g., “OpenAI”).
  4. Enter your API Key (or Bearer Token, depending on the provider).
  5. Click Save to store the connection.

Tip: You can create multiple connections for the same provider — useful if you have separate API keys for different projects or billing accounts.


Configuring an AI Agent

The AI Agent component has three main configuration sections: Chat Model, Memory, and Tools.

Chat Model Configuration

This is the core section where you define the AI model and its behavior.

[Screenshot: AI chat model provider selection]

Required Settings

| Setting | Description |
| --- | --- |
| AI Provider | Select from the 10 available providers. |
| Connection | Choose the API key connection for the selected provider. |
| AI Model | Select the specific model to use (models are fetched from the provider’s API). |
| System Prompt | Instructions that define the agent’s behavior, role, and constraints. |
| User Input | The actual message or task for the AI. Supports dynamic variables from triggers and previous steps. |

Model Parameters (Optional)

Fine-tune the AI’s response behavior with these parameters:

| Parameter | Default | Range | Description |
| --- | --- | --- | --- |
| Max Tokens | 4096 | 1–unlimited | Maximum length of the AI’s response. |
| Temperature | 0.7 | 0–2 | Controls randomness. Lower values (0.1) = focused and consistent. Higher values (1.5) = creative and varied. |
| Top P | 1 | 0–1 | Nucleus sampling. Controls the diversity of vocabulary selection. |
| Top K | 0 (disabled) | 0–100 | Limits token selection to the top K candidates. Only supported by some providers. |
| Frequency Penalty | 0 | -2 to 2 | Penalizes tokens that appear frequently, reducing repetition. |
| Presence Penalty | 0 | -2 to 2 | Encourages the model to explore new topics. |

Tip: For most use cases, the default parameters work well. Adjust Temperature first — use 0.1–0.3 for factual tasks and 0.7–1.0 for creative tasks.

Output Format

Choose how the AI should format its response:

  • Text (default) — The AI returns a plain text response.
  • JSON — The AI returns structured JSON matching a schema you define.

When using JSON output format, provide a JSON Schema that describes the expected structure:

{
  "type": "object",
  "properties": {
    "title": { "type": "string" },
    "summary": { "type": "string" },
    "sentiment": { "enum": ["positive", "negative", "neutral"] },
    "tags": {
      "type": "array",
      "items": { "type": "string" }
    }
  },
  "required": ["title", "summary", "sentiment"]
}
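With that schema, a conforming response could look like the example below (field values are illustrative). A minimal sketch of checking the schema’s required keys and the sentiment enum in Python, without a full JSON Schema validator:

```python
import json

# A response shaped like the example schema above (values are illustrative)
raw = '''{
  "title": "Q3 Earnings Recap",
  "summary": "Revenue grew 12% year over year.",
  "sentiment": "positive",
  "tags": ["finance", "earnings"]
}'''

data = json.loads(raw)

# Minimal check against the schema's "required" list and the sentiment enum
required = ["title", "summary", "sentiment"]
missing = [key for key in required if key not in data]
assert not missing, f"missing keys: {missing}"
assert data["sentiment"] in ["positive", "negative", "neutral"]
```

A full validator would also check types and the array items; this sketch only covers presence and the enum.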

OpenAI Responses API

For OpenAI reasoning models like o3-pro, codex-mini, or gpt-5.1-codex-mini, FlowMattic supports the Responses API:

| Setting | Options | Description |
| --- | --- | --- |
| Use Responses API | No (Auto-detect) | FlowMattic automatically detects whether the model requires the Responses API. |
| | Yes | Force the Responses API endpoint. |

Note: In most cases, leave this set to “No (Auto-detect)”. FlowMattic will automatically switch to the Responses API when needed.


Memory Configuration

Memory allows the AI Agent to remember previous conversations and maintain context across multiple workflow executions.

[Screenshot: AI Agent memory configuration]

Memory Types

| Type | Storage | Persistence | Best For |
| --- | --- | --- | --- |
| No Memory | None | Stateless | One-off queries where no context is needed. |
| Window Buffer Memory | WordPress transients | 24 hours | Short-term conversations, testing, and demos. |
| FlowMattic Tables | Custom database table | Permanent | Production use, multi-user support, audit trails. |
| WordPress Options | wp_options table | Permanent | Simple single-site setups. |

Memory Settings

Window Buffer Memory:

| Setting | Default | Description |
| --- | --- | --- |
| Window Size | 10 | Number of message pairs (user + assistant) to keep in memory. |

FlowMattic Tables:

| Setting | Default | Description |
| --- | --- | --- |
| Database | Local | Select the local WordPress database or an external database connection. |
| Table Name | | Name of the table that stores conversation history. |
| Session ID Column | session_id | Column name used for session tracking. |
| Max History Length | 20 | Maximum number of messages to retrieve per execution. |

WordPress Options:

| Setting | Default | Description |
| --- | --- | --- |
| Option Prefix | fm_agent_memory_ | Prefix for option names in the wp_options table. |
| Max History Length | 20 | Maximum number of messages to keep. |

Session Identifier

The Session ID determines which conversation thread the AI continues. It works the same across all memory types:

  • Leave empty — Auto-generates a unique session ID per execution.
  • Pass a specific ID — Continue an existing conversation. Use dynamic variables like {Webhook1.session_id} or {Trigger1.user_id} to group conversations by user or thread.

Example: A chatbot workflow using a webhook trigger can pass {Webhook1.session_id} to maintain separate conversations for each user.
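As a rough illustration, memory keyed by session ID behaves like a dictionary of per-session message lists (a sketch of the concept, not FlowMattic’s actual storage code):

```python
# Sketch: conversation memory keyed by session ID
memory = {}

def remember(session_id, role, content, max_history=20):
    """Append a message to the session's thread, trimming to max_history."""
    thread = memory.setdefault(session_id, [])
    thread.append({"role": role, "content": content})
    del thread[:-max_history]  # keep only the most recent messages

# Two webhook users get independent conversation threads
remember("user-42", "user", "Where is my order?")
remember("user-99", "user", "Cancel my subscription")
remember("user-42", "assistant", "Let me check your order status.")
```

Passing the same session ID on every execution is what keeps a user’s thread together; a new ID starts a fresh conversation.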


Tools Configuration

Tools allow the AI to execute FlowMattic actions during its reasoning process. This is what makes the AI Agent truly powerful — it can take actions, not just generate text.

Tool Source: FlowMattic Actions

Use any action from any installed FlowMattic app as a tool:

  1. Click Add Tool.
  2. Select the Application (e.g., HTTP, Email, Slack, WooCommerce).
  3. Select the Action (e.g., “Send Email”, “Make HTTP Request”, “Create Post”).
  4. Write a clear Tool Description — this tells the AI when and how to use the tool.
  5. Configure the action’s parameters (some can be left for the AI to fill dynamically).

[Screenshot: Tool configuration in the Workflow Builder]

Important: Tool descriptions are critical. A well-written description helps the AI understand when to use each tool. For example: “Use this tool to send an email notification to the customer. Requires: recipient email address, subject line, and message body.”

Tool Source: Custom Tools (JSON Schema)

Define custom tools with JSON schemas for integrations not available as FlowMattic apps:

| Setting | Description |
| --- | --- |
| Tool Name | Unique identifier (use snake_case, e.g., check_inventory). |
| Tool Description | Explains to the AI when to use this tool. |
| Parameters Schema | JSON schema defining the tool’s input parameters. |
| Webhook URL | Optional external endpoint to call when the tool is invoked. |
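For example, a hypothetical check_inventory tool (all names and fields below are illustrative) could use a parameters schema like:

```json
{
  "name": "check_inventory",
  "description": "Look up current stock for a product SKU. Use before confirming availability to a customer.",
  "parameters": {
    "type": "object",
    "properties": {
      "sku": { "type": "string", "description": "Product SKU to look up" },
      "warehouse": { "type": "string", "description": "Optional warehouse code" }
    },
    "required": ["sku"]
  }
}
```

The description and per-parameter descriptions are what the AI reads when deciding whether and how to call the tool, so keep them concrete.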

Apps Excluded from Tool Use

The following apps cannot be used as tools (to prevent recursion and control flow issues):

  • Control flow: Branch, Router, Filter, Iterator
  • Workflow control: Delay, Sub-Workflow
  • Triggers: Schedule, Webhook, Webhook Response, MCP Trigger
  • Special: Human-in-the-Loop, AI Agent

How Tool Calling Works

When the AI Agent has tools available, it follows this process:

Iteration 1:
  AI receives → System prompt + User message + Available tools
  AI decides  → Call a tool OR respond directly
  If tool called → Execute tool, send result back to AI

Iteration 2-10:
  AI receives → Previous context + Tool results
  AI decides  → Call another tool OR provide final response
  Loop continues until done or max iterations (10) reached

Example: Multi-Step Automation

Scenario: “Research a topic and send a summary email”

  1. Iteration 1: AI calls the “HTTP Request” tool to fetch data from an API.
  2. Iteration 2: AI receives the API response, processes it, and calls the “Send Email” tool with a formatted summary.
  3. Iteration 3: AI receives confirmation that the email was sent and provides a final response: “I’ve researched the topic and sent a summary to [email protected].”

Perplexity — Prompt-Based Tool Calling

Perplexity’s Sonar models do not support native function calling. Instead, FlowMattic uses prompt-based tool calling:

  • Tool definitions are embedded in the system prompt.
  • When the AI wants to call a tool, it outputs: TOOL_CALL: {"name": "tool_name", "arguments": {...}}.
  • FlowMattic parses this output, executes the tool, and sends the result back to the AI.

This happens transparently — you configure tools the same way as with other providers.
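The parsing step can be sketched as follows; the TOOL_CALL format comes from the bullets above, while the parsing code itself is illustrative rather than FlowMattic’s implementation:

```python
import json
import re

def parse_tool_call(output):
    """Extract a prompt-based tool call of the form TOOL_CALL: {...}, or None."""
    match = re.search(r'TOOL_CALL:\s*(\{.*\})', output, re.DOTALL)
    if not match:
        return None  # plain final answer, no tool requested
    call = json.loads(match.group(1))
    return call["name"], call.get("arguments", {})

# Example model output containing a prompt-based tool call (values illustrative)
name, args = parse_tool_call(
    'TOOL_CALL: {"name": "send_email", "arguments": {"to": "[email protected]"}}'
)
```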


Using AI Agent in a Workflow

Step 1: Add the AI Agent Step

In your workflow editor, add a new step and select AI Agent as the application.

[Screenshot: Adding the AI Agent step]

Step 2: Configure the Agent

Set up the Chat Model, Memory, and Tools sections as described above.

Step 3: Use Dynamic Values

Reference data from triggers and previous steps using FlowMattic’s variable syntax:

System Prompt:

You are a customer support agent for {{company_name}}.
The customer's name is {Trigger1.customer_name} and their email is {Trigger1.customer_email}.
Help them resolve their issue professionally.

User Input:

{Trigger1.support_question}

Step 4: Access Results in Subsequent Steps

After the AI Agent step executes, you can use its output in later steps:

| Variable | Description |
| --- | --- |
| {AIAgent1.message} | The AI’s final text response. |
| {AIAgent1.status} | success or error. |
| {AIAgent1.provider} | The AI provider used (e.g., “openai”). |
| {AIAgent1.model_id} | The specific model used (e.g., “gpt-4o”). |
| {AIAgent1.session_id} | The memory session ID. |
| {AIAgent1.memory.message_count} | Number of messages in conversation history. |
| {AIAgent1.memory.memory_type} | The memory type configured. |

Real-World Examples

Customer Support Chatbot

  • Provider: OpenAI (gpt-4o)
  • Memory: FlowMattic Tables (persistent, multi-user)
  • Tools: Send Email, Create Support Ticket, Look Up Order
  • System Prompt: “You are a helpful customer support agent. Always be polite and try to resolve issues. If you can’t resolve it, create a support ticket.”
  • Flow: Webhook Trigger → AI Agent → Response sent back via webhook

Content Generation with Structured Output

  • Provider: Anthropic (Claude Sonnet)
  • Memory: None (one-shot generation)
  • Tools: Save to Database, Send Slack Notification
  • Output Format: JSON with schema for title, body, tags, and SEO metadata
  • Flow: Manual Trigger → AI Agent generates content as JSON → Save to WordPress → Notify team on Slack

Data Processing Pipeline

  • Provider: Groq (Llama 4 Maverick) — for speed
  • Memory: Window Buffer (short-term)
  • Tools: HTTP Request, FlowMattic Tables Insert
  • System Prompt: “You are a data analyst. Fetch data from the provided API, analyze it, and store the results.”
  • Flow: Schedule Trigger → AI Agent fetches and processes data → Results stored in database

Troubleshooting

Common Issues

“Failed to get API key from connection”

  • Verify the connection is active in FlowMattic > Connects.
  • Check that the API key is valid and has sufficient credits/quota.

AI not calling tools

  • Ensure tool descriptions are clear and specific.
  • Check that the system prompt instructs the AI to use tools when appropriate.
  • Some models are better at tool calling than others — GPT-4o and Claude Sonnet are recommended.

JSON output is invalid or wrapped in markdown

  • FlowMattic automatically strips markdown fences (```json ... ```) from responses.
  • If you still get invalid JSON, try a more capable model or simplify your JSON schema.
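As a rough sketch of that stripping step, assuming the fence wraps the whole response:

```python
import json

def strip_markdown_fences(text):
    """Remove a wrapping ```json ... ``` fence so the payload parses as JSON."""
    text = text.strip()
    if text.startswith("```"):
        lines = text.splitlines()
        # drop the opening fence line (``` or ```json) and the closing fence
        if lines[-1].strip() == "```":
            lines = lines[1:-1]
        else:
            lines = lines[1:]
        text = "\n".join(lines)
    return text.strip()

wrapped = "```json\n{\"title\": \"Hello\"}\n```"
parsed = json.loads(strip_markdown_fences(wrapped))
```

Unfenced responses pass through unchanged, so the function is safe to apply to every response.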

Tool execution fails

  • The AI receives the error and can attempt a different approach.
  • Check that the tool’s action is configured correctly (correct connection, required fields).
  • Review the workflow execution log for details.

Memory not working

  • Verify the Session ID is consistent across executions. If auto-generated, each execution creates a new session.
  • For FlowMattic Tables memory, ensure the table exists with the required columns.

Model not found or “not a chat model” error

  • For OpenAI reasoning models (o3-pro, codex-mini), enable the Responses API toggle.
  • Click the refresh button to re-fetch the latest models from the provider.

Debug Options

Enable these options for troubleshooting:

  • Include Execution Log — Logs each iteration step, showing what the AI decided and which tools were called.
  • Include Request Body — Records the full API requests sent to the provider.

These are saved in the workflow execution history and can be reviewed from the FlowMattic > Task History page.


Summary

| Feature | Details |
| --- | --- |
| Supported Providers | 10 (OpenAI, Anthropic, Google, Groq, Mistral, xAI, DeepSeek, OpenRouter, Straico, Perplexity) |
| Tool Calling | Native (9 providers) + prompt-based (Perplexity) |
| Memory Options | Window Buffer, FlowMattic Tables, WordPress Options |
| Output Formats | Text, JSON (with schema validation) |
| Max Iterations | 10 per execution |
| Responses API | Supported for OpenAI reasoning models |
| Dynamic Variables | Full support for trigger and step data |