In the Weeds: Event-Driven AI with n8n Webhooks
A technical deep-dive into building event-driven AI systems with n8n — from catching webhooks to processing them with LLMs to triggering downstream actions that make your infrastructure intelligent.
The Event-Driven Paradigm
Most AI integrations are request-response. You ask a question, you get an answer. That is useful, but it is not how real systems work. Real systems are event-driven — things happen, and other things respond.
Your customer submits a support ticket. A payment fails. A deploy completes. A sensor reads an anomaly. A competitor changes their pricing. An employee mentions burnout in a Slack message.
Each of these is an event. And each of these can trigger an intelligent response — not a pre-programmed one, but one that understands context, nuance, and appropriate action. That is what n8n enables when you combine its webhook infrastructure with LLM processing.
Architecture Overview
The pattern is simple:
Event Source → Webhook → n8n → LLM Processing → Action
But the devil is in the details. Let me show you a production-grade implementation.
```
┌─────────────┐     ┌──────────┐     ┌─────────────┐     ┌──────────┐
│   GitHub    │────▶│   n8n    │────▶│     LLM     │────▶│  Slack   │
│   Webhook   │     │  Webhook │     │  Analysis   │     │  Alert   │
└─────────────┘     │   Node   │     └─────────────┘     └──────────┘
                    │          │                         ┌──────────┐
┌─────────────┐     │          │     ┌─────────────┐     │   Jira   │
│   Stripe    │────▶│          │────▶│  Sentiment  │────▶│  Ticket  │
│   Webhook   │     └──────────┘     │  + Routing  │     └──────────┘
└─────────────┘                      └─────────────┘
```
Setting Up the Webhook Receiver
In n8n, webhooks are first-class citizens. Create a Webhook node and you get a URL instantly:
```json
{
  "node": "Webhook",
  "parameters": {
    "httpMethod": "POST",
    "path": "ai-events",
    "responseMode": "lastNode",
    "options": {
      "rawBody": true
    }
  }
}
```
The rawBody option is important — it preserves the original payload for signature verification. Never skip signature verification on production webhooks. GitHub sends X-Hub-Signature-256, Stripe sends Stripe-Signature, and your custom services should sign their payloads too.
```javascript
// Function node: Verify webhook signature
const crypto = require('crypto');

const secret = $env.WEBHOOK_SECRET;
const signature = $input.first().headers['x-hub-signature-256'];
const body = $input.first().rawBody;

const expected = 'sha256=' + crypto
  .createHmac('sha256', secret)
  .update(body)
  .digest('hex');

// timingSafeEqual throws on length mismatch, so guard for a missing
// or truncated header before comparing
if (
  !signature ||
  signature.length !== expected.length ||
  !crypto.timingSafeEqual(Buffer.from(signature), Buffer.from(expected))
) {
  throw new Error('Invalid webhook signature');
}

return $input.first();
```
Event Classification with LLMs
Once you have verified events flowing in, the first AI step is classification. Not every event needs the same response — or any response at all.
```javascript
// Function node: Build classification prompt
const event = $input.first().json;

const classificationPrompt = `
Classify this event and determine urgency.

EVENT TYPE: ${event.type}
PAYLOAD: ${JSON.stringify(event.payload, null, 2)}

Respond in JSON:
{
  "category": "one of: critical, actionable, informational, noise",
  "summary": "one sentence description",
  "suggested_action": "what should happen next",
  "urgency_minutes": number (how soon action is needed)
}
`;

return { prompt: classificationPrompt };
```
Feed this to an LLM node (n8n supports OpenAI, Anthropic, and local models). The response determines routing — critical events get immediate Slack alerts, actionable events create tickets, informational events log to a dashboard, and noise gets dropped.
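That routing step can be sketched as a small function that maps the classifier's output to a destination. This is a minimal sketch assuming the JSON shape above; the destination names (`slack-alert`, `create-ticket`, and so on) are illustrative labels, not n8n built-ins:

```javascript
// Hypothetical routing sketch: map the LLM's category to a destination.
// Destination names are illustrative placeholders.
function routeEvent(classification) {
  const routes = {
    critical: 'slack-alert',        // immediate Slack alert
    actionable: 'create-ticket',    // open a ticket
    informational: 'dashboard-log', // log to a dashboard
    noise: null                     // drop silently
  };
  // Guard against malformed LLM output: unknown categories go to a human
  return Object.prototype.hasOwnProperty.call(routes, classification.category)
    ? routes[classification.category]
    : 'human-review';
}
```

The explicit `human-review` fallback matters: LLMs occasionally return a category outside the list you asked for, and silently dropping those events is worse than surfacing them.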
The Intelligence Layer: Context-Aware Processing
Classification is useful but basic. The real power comes from context-aware processing — where the AI understands not just this event, but its relationship to recent events.
I use txtai as a memory layer. Each event gets embedded and stored. When a new event arrives, I retrieve recent related events and include them in the LLM context:
```javascript
// HTTP Request node: Query txtai for related events
const response = await $http.request({
  method: 'POST',
  url: 'http://localhost:8108/search',
  body: {
    query: eventSummary,
    limit: 5,
    filter: { timestamp: { $gte: Date.now() - 86400000 } } // Last 24h
  }
});

// Now the LLM prompt includes history
const analysisPrompt = `
Analyze this event in context of recent activity.

CURRENT EVENT: ${currentEvent}

RECENT RELATED EVENTS:
${response.results.map(r => `- ${r.text} (${r.timestamp})`).join('\n')}

Questions to answer:
1. Is this part of a pattern?
2. Is this escalating?
3. What is the root cause likely to be?
4. What action would prevent recurrence?
`;
```
This is where event-driven AI becomes genuinely intelligent. A single failed payment is noise. Three failed payments from the same customer in a week is a pattern. A spike in failed payments across multiple customers is a system issue. The LLM, with context, can distinguish these cases and route accordingly.
Real-World Workflow: Support Ticket Triage
Here is a complete production workflow I run:
- Webhook receives support ticket from Intercom
- Signature verification (Function node)
- Sentiment analysis — how frustrated is this customer? (LLM)
- Category classification — billing, technical, feature request, other (LLM)
- Priority scoring — based on sentiment, category, customer tier, and ticket history (LLM + database lookup)
- Historical context — has this customer had similar issues? (txtai vector search)
- Routing — assign to appropriate team with context summary
- Response draft — generate a first-draft response for the agent (LLM)
- Notification — alert appropriate Slack channel with priority-colored message
Average processing time: 3.2 seconds from webhook receipt to agent notification. The agent gets a pre-triaged, pre-contextualized, pre-drafted ticket instead of a raw complaint.
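The priority-scoring step in that pipeline can be sketched as a weighted combination of the earlier LLM outputs and the database lookup. The weights, tier names, and thresholds below are assumptions for illustration, not the exact formula from the production workflow:

```javascript
// Illustrative priority score. Inputs come from earlier pipeline steps:
// sentiment (0 = calm .. 1 = very frustrated) from the LLM, category from
// the classifier, customerTier and priorIssues from a database lookup.
// All weights and thresholds here are assumed values.
function priorityScore({ sentiment, category, customerTier, priorIssues }) {
  const tierWeight = { enterprise: 3, business: 2, starter: 1 }[customerTier] || 1;
  const categoryWeight = { billing: 2, technical: 2, feature_request: 1, other: 1 }[category] || 1;
  // Cap the history signal so one chatty customer doesn't dominate
  const score = sentiment * 4 + tierWeight + categoryWeight + Math.min(priorIssues, 3);
  return score >= 8 ? 'P1' : score >= 5 ? 'P2' : 'P3';
}
```

Keeping this step as plain arithmetic (rather than asking the LLM to pick a priority directly) makes the scoring auditable and cheap to tune.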
Webhook Chaining: Events That Trigger Events
The most powerful pattern is chaining — where one n8n workflow's output becomes another's input:
```javascript
// End of Workflow A: Trigger Workflow B via internal webhook
await $http.request({
  method: 'POST',
  url: 'http://localhost:5678/webhook/workflow-b',
  body: {
    source: 'workflow-a',
    processed_data: analysisResult,
    original_event: event
  }
});
```
This creates composable intelligence. Workflow A handles classification. Workflow B handles response generation. Workflow C handles escalation. Each can be modified independently. Each can be tested independently. Each can fail independently without bringing down the whole system.
Error Handling and Graceful Degradation
Production event-driven systems must handle failure gracefully. LLMs time out. APIs fail. Rate limits hit.
```javascript
// Wrapper with fallback
try {
  const llmResponse = await callLLM(prompt);
  return { response: llmResponse, source: 'ai' };
} catch (error) {
  // Fallback: rule-based classification
  const category = ruleBasedClassify(event);
  // Alert that the AI layer is degraded
  await notifySlack('AI classification degraded, using rules', 'warning');
  return { response: category, source: 'rules' };
}
```
Always have a rule-based fallback. Your event-driven system should be better with AI, not dependent on AI.
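The `ruleBasedClassify` referenced above can be as crude as keyword matching against the same categories the LLM uses. A minimal sketch, with keyword lists that are assumptions to be tuned per event source:

```javascript
// Minimal rule-based fallback returning the same categories as the LLM.
// The keyword patterns are illustrative; tune them for your event sources.
function ruleBasedClassify(event) {
  const text = JSON.stringify(event).toLowerCase();
  if (/outage|payment.failed|security/.test(text)) return 'critical';
  if (/ticket|bug|error/.test(text)) return 'actionable';
  if (/deploy|release|merged/.test(text)) return 'informational';
  return 'noise';
}
```

It will misclassify edge cases the LLM would catch, but it keeps events flowing and routable while the AI layer is down, which is the whole point of the fallback.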
Monitoring the AI Layer
You need observability into your AI processing. I track:
- Classification accuracy (sample and human-review periodically)
- Processing latency per event type
- LLM token consumption per workflow
- Fallback activation rate
- Action success rate (did the downstream action work?)
Store these metrics and visualize them. When classification accuracy drops below 90%, your prompt needs updating. When latency spikes, your LLM provider is struggling. When fallback rate increases, something systemic is wrong.
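A sketch of what recording those metrics can look like. The in-memory array is a stand-in for whatever store you persist metrics to; the field names are assumptions:

```javascript
// Metric recording sketch: one record per processed event.
// In production, push these to a database instead of an in-memory array.
const metrics = [];

function recordEvent({ eventType, latencyMs, tokens, usedFallback, actionOk }) {
  metrics.push({ eventType, latencyMs, tokens, usedFallback, actionOk, ts: Date.now() });
}

// Fallback activation rate: the share of events handled by rules instead of the LLM
function fallbackRate() {
  if (metrics.length === 0) return 0;
  return metrics.filter(m => m.usedFallback).length / metrics.length;
}
```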
Scaling Considerations
For low-volume systems (under 100 events per minute), a single n8n instance handles everything. For higher volumes:
- Use n8n queue mode with Redis for job distribution
- Deploy multiple worker instances
- Use Neon MCP serverless Postgres for event storage (scales to zero when idle)
- Cache LLM responses for identical event types (TTL: 5 minutes)
- Batch similar events for bulk LLM processing
The combination of n8n for orchestration, Apify MCP for web event scraping, and Supabase MCP for real-time event storage gives you a full event-driven AI platform that is self-hosted, observable, and surprisingly affordable to operate.
When to Use This
Event-driven AI is overkill for simple automations. If your rule is "when X happens, do Y," just write the rule. No AI needed.
But when the appropriate response depends on context, history, nuance, or judgment — that is when LLMs in the event loop earn their keep. Support triage. Anomaly detection. Content moderation. Dynamic pricing. Incident response.
The events are already happening. The question is whether you are listening intelligently.