Case study · 2024 — Present
AI-Driven Workflow Engine
RAG + LLM orchestration layer that turns repetitive ops work into automated, reviewable decisions.
- Role — Backend / AI Engineer
- Company — NEWPRODATA
- Year — 2024 — Present
- Status — Shipped
01 — Problem
Teams were losing hours to repetitive triage, data enrichment, and document-writing tasks. Off-the-shelf LLM tooling either hallucinated or couldn't ground on internal data.
02 — Approach
Built a Django-native workflow engine around a RAG pipeline: chunk and embed internal documents into a vector store, retrieve the top-k chunks for each prompt, and hand the LLM grounded context with explicit JSON schemas. Requests route between Claude, ChatGPT, and Bedrock based on each task's cost and latency profile. An audit log makes every AI decision reviewable, and structured-output validation runs before anything writes back to the DB.
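The retrieval and validation steps above can be sketched roughly as follows. This is a minimal illustration, not the production code: the toy cosine similarity stands in for the real vector store, and the function names and JSON schema are assumptions.

```python
import json
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors (toy stand-in for the vector store)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_top_k(query_vec, chunks, k=3):
    """Rank stored chunks by similarity to the query and keep the top k."""
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c["vec"]), reverse=True)
    return ranked[:k]

def build_prompt(question, context_chunks):
    """Hand the LLM grounded context plus an explicit JSON schema to answer in."""
    context = "\n---\n".join(c["text"] for c in context_chunks)
    return (
        f"Answer using ONLY the context below.\n\nContext:\n{context}\n\n"
        f"Question: {question}\n"
        'Respond as JSON: {"answer": str, "source_chunks": [int]}'
    )

# Illustrative schema check; the real system's schema is richer.
REQUIRED_KEYS = {"answer": str, "source_chunks": list}

def validate_output(raw):
    """Reject anything non-conformant before it can write back to the DB."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if any(not isinstance(data.get(k), t) for k, t in REQUIRED_KEYS.items()):
        return None
    return data
```

Keeping validation as a hard gate in front of the DB write is what lets a malformed or hallucinated response fail loudly instead of corrupting state.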
03 — Outcome
Replaced multiple manual workflows while keeping humans in the loop where it matters. Cut average task turnaround materially while preserving traceability.
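The traceability claim rests on the audit log: one record per AI decision. A minimal sketch of such a record, using a plain dataclass in place of the actual Django model, with illustrative field names:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditEntry:
    """One reviewable record per AI decision (field names are illustrative)."""
    task: str
    model_used: str
    retrieved_chunk_ids: list
    raw_output: str
    validated: bool
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# In-memory stand-in for the real persistent audit table.
AUDIT_LOG: list[AuditEntry] = []

def record_decision(task, model_used, chunk_ids, raw_output, validated):
    """Append an entry so a human reviewer can trace what the model saw and said."""
    entry = AuditEntry(task, model_used, chunk_ids, raw_output, validated)
    AUDIT_LOG.append(entry)
    return entry
```

Logging the retrieved chunk IDs alongside the raw output is the key design choice: a reviewer can reconstruct exactly what context produced each answer.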
◆ Grounded answers — no ungrounded hallucinations in production
◆ Full audit trail for every AI-generated artifact
◆ Multi-model routing (Claude / ChatGPT / Bedrock)
◆ Retrieval latency under 200 ms p95
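The multi-model routing above can be sketched as a cost/latency lookup. The model names match the case study, but the cost and latency figures here are illustrative assumptions, not production numbers.

```python
# Hypothetical per-model routing table; figures are illustrative only.
ROUTES = {
    "claude":  {"cost_per_1k": 0.015, "p95_ms": 900},
    "chatgpt": {"cost_per_1k": 0.010, "p95_ms": 700},
    "bedrock": {"cost_per_1k": 0.004, "p95_ms": 1200},
}

def pick_model(max_latency_ms: int) -> str:
    """Cheapest model whose observed p95 latency fits the task's budget."""
    in_budget = [name for name, spec in ROUTES.items()
                 if spec["p95_ms"] <= max_latency_ms]
    if not in_budget:
        # Nothing fits the budget: degrade gracefully to the fastest model.
        return min(ROUTES, key=lambda n: ROUTES[n]["p95_ms"])
    return min(in_budget, key=lambda n: ROUTES[n]["cost_per_1k"])
```

A table-driven router like this keeps per-task trade-offs in one place, so adding a model or retuning a threshold is a data change rather than a code change.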
04 — Tech stack
More work
2024 — Present
Datasprings — Data Automation & API Platform
Django + Neo4j employment platform with scraping pipelines, RAG-based search, and cloud-native deployment.
Read the case
2024 — 2025
Timepaq — Attendance & Employee Management
Web-based attendance and HR system with biometric-device integration. Led design, delivery, and team coordination.
Read the case