
AI support agent (n8n + PDF RAG)
Waybox - 2024-2025
Overview
AI agent connected to an internal PDF knowledge base to speed up support and improve answer quality.
Design and delivery of an n8n-orchestrated RAG pipeline: PDF preprocessing, cleaning, semantic chunking, embedding, and vector indexing. The agent generates contextual answers with source citations, applies a confidence threshold, and automatically escalates to a human when uncertainty is high. Goals: reduce response time, standardize answer quality, and capture institutional knowledge.
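The chunking step of the pipeline can be sketched as follows. This is a minimal illustration, not the production implementation: the function name, character budget, and paragraph-overlap strategy are all assumptions chosen for clarity.

```python
def chunk_text(text: str, max_chars: int = 800, overlap: int = 1) -> list[str]:
    """Split cleaned PDF text into chunks along paragraph boundaries,
    carrying `overlap` trailing paragraphs into the next chunk so that
    context is not lost at chunk borders (illustrative parameters)."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks: list[str] = []
    current: list[str] = []
    size = 0
    for para in paragraphs:
        if current and size + len(para) > max_chars:
            chunks.append("\n\n".join(current))
            # Keep the tail paragraph(s) as overlap for the next chunk.
            current = current[-overlap:] if overlap else []
            size = sum(len(p) for p in current)
        current.append(para)
        size += len(para)
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

Each resulting chunk would then be embedded and upserted into the vector store (Qdrant or FAISS) with its source document and page recorded as payload, which is what makes cited answers possible downstream.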
Tech
- Node.js
- n8n
- Qdrant/FAISS
- Python
- OpenAI/Ollama
- Docker
Challenges
- Heterogeneous PDF quality (OCR, layouts, noise)
- Need for sourced answers and strict uncertainty handling
- Reducing resolution time without degrading quality
Solutions
- Cleaning pipeline with OCR when needed, normalization, and detection of useful sections
- Semantic chunking, citations, and a confidence threshold
- Automated human fallback and an improvement loop driven by support feedback
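The confidence threshold and human fallback described above can be sketched like this. The threshold value, field names, and `Retrieval` structure are hypothetical, chosen only to illustrate the routing logic; the real decision would run inside the n8n workflow.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.75  # assumed value for illustration


@dataclass
class Retrieval:
    text: str     # retrieved chunk
    source: str   # e.g. "manual_v3.pdf, p. 12" — enables citations
    score: float  # similarity score from the vector store


def route_answer(hits: list[Retrieval]) -> dict:
    """Return a sourced-answer payload when retrieval is confident,
    otherwise an escalation payload for a human agent."""
    if not hits or max(h.score for h in hits) < CONFIDENCE_THRESHOLD:
        return {"action": "escalate_to_human",
                "reason": "low retrieval confidence"}
    best = [h for h in hits if h.score >= CONFIDENCE_THRESHOLD]
    return {
        "action": "answer",
        "context": [h.text for h in best],
        "citations": [h.source for h in best],
    }
```

Routing on retrieval confidence keeps the agent from answering when the knowledge base has no good match, which is what makes the automatic escalation strict rather than best-effort.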
Impact
- More consistent answers thanks to internalized documentation
- Estimate: response time for simple requests roughly halved
- Estimate: level-2 escalations reduced by ~20-40%
Tags
- n8n
- RAG
- Applied AI