
AI-Driven Workflow Engine

RAG + LLM orchestration layer that turns repetitive ops work into automated, reviewable decisions.

Backend / AI Engineer
NEWPRODATA
2024 — Present
Shipped

Teams were losing hours to repetitive triage, data enrichment, and document-writing tasks. Off-the-shelf LLM tooling either hallucinated or couldn't ground its answers in internal data.

Built a Django-native workflow engine around a RAG pipeline: chunk and embed internal documents into a vector store, retrieve the top-k chunks for each prompt, and hand the LLM grounded context with explicit JSON schemas. Routed requests between Claude, ChatGPT, and Bedrock based on per-task cost and latency. Added an audit log so every AI decision is reviewable, and structured-output validation so nothing writes back to the DB unchecked.
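The retrieval step can be sketched in a few lines. This is a minimal illustration, not the production code: the function names are hypothetical, and plain NumPy cosine similarity stands in for the FAISS index the real pipeline uses.

```python
import numpy as np

def top_k(query_vec, chunk_vecs, k=3):
    """Return indices of the k document chunks most similar to the query.

    Cosine similarity over an in-memory matrix; production swaps this
    for a FAISS index, but the contract is the same.
    """
    q = query_vec / np.linalg.norm(query_vec)
    c = chunk_vecs / np.linalg.norm(chunk_vecs, axis=1, keepdims=True)
    scores = c @ q
    return np.argsort(scores)[::-1][:k]

def build_prompt(question, chunks, idx):
    """Ground the LLM: only retrieved chunks go in as context."""
    context = "\n".join(f"- {chunks[i]}" for i in idx)
    return (
        "Answer ONLY from the context below and respond as JSON "
        "matching the given schema.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )
```

Keeping retrieval behind a function with this shape is what makes the FAISS swap (and latency tuning) invisible to the rest of the engine.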

Replaced multiple manual workflows while keeping humans in the loop where it matters, cutting average task turnaround materially while preserving traceability.
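The validation gate mentioned above, which must pass before any AI output writes back to the DB, can be sketched as follows. The field names and the schema shape are hypothetical; real schemas are defined per workflow.

```python
import json

# Hypothetical schema for one triage decision; each workflow declares its own.
REQUIRED = {"action": str, "confidence": float, "rationale": str}

def validate_llm_output(raw: str) -> dict:
    """Parse and validate model output; raise before anything hits the DB."""
    data = json.loads(raw)  # fails fast on non-JSON output
    for field, typ in REQUIRED.items():
        if not isinstance(data.get(field), typ):
            raise ValueError(f"bad or missing field: {field}")
    if not 0.0 <= data["confidence"] <= 1.0:
        raise ValueError("confidence out of range")
    return data
```

Rejected outputs are logged to the audit trail rather than retried silently, so reviewers see every failure mode.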

  • Grounded answers — no hallucinated facts reach production

  • Full audit trail for every AI-generated artifact

  • Multi-model routing (Claude / ChatGPT / Bedrock)

  • Retrieval latency under 200ms p95
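The multi-model routing bullet boils down to picking the cheapest model that fits a task's latency budget. A minimal sketch, with made-up cost and latency figures (the real table also weighs context-window size):

```python
# Hypothetical per-model profiles; values are illustrative only.
MODELS = {
    "claude":  {"cost": 3.0, "latency_ms": 900},
    "chatgpt": {"cost": 2.0, "latency_ms": 700},
    "bedrock": {"cost": 1.0, "latency_ms": 1200},
}

def route(max_latency_ms: float) -> str:
    """Pick the cheapest model whose latency fits the task's budget."""
    ok = {m: p for m, p in MODELS.items() if p["latency_ms"] <= max_latency_ms}
    if not ok:
        return "claude"  # assumed fallback when no model meets the budget
    return min(ok, key=lambda m: ok[m]["cost"])
```

Latency-sensitive triage lands on the fast model, bulk enrichment on the cheap one, and everything stays behind a single call site.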

Python Django AWS Bedrock Claude ChatGPT Vector DB RAG FAISS PostgreSQL Celery