The lab · work

Things I've shipped.

Studio engagements, lab projects, and applied research. Some under NDA, some open source, some written up. All real work.

Track · I

Studio engagements

Client work · anonymized

FMCG enterprise · contract · 2024 — present

Enterprise agentic-RAG chatbot

Multilingual MS Teams-embedded assistant serving thousands of users today, rolling out to tens of thousands. LangGraph parent-router subgraphs with scoped tool calls, hybrid retrieval over Qdrant with payload-based ACLs at query time, FastAPI backend, Next.js admin UI. Azure infra in Terraform from day one. Coordinated global rollout with cross-functional partners.

LangGraph · Qdrant · FastAPI · Next.js · Azure · Terraform · Langfuse
Deep-dive tour ↗ · Code · under NDA
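The payload-based ACL check at query time can be sketched in a few lines. This is a pure-Python illustration under assumed names (`allowed_groups` is a hypothetical payload key, not the production schema); in the real system Qdrant evaluates an equivalent payload filter inside the search itself, so unauthorized chunks never leave the vector store.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Hit:
    text: str
    score: float
    allowed_groups: frozenset  # groups permitted to read this chunk


def acl_search(hits, user_groups, top_k=3):
    """Drop chunks the user's groups may not see, then return top-k by score."""
    visible = [h for h in hits if h.allowed_groups & set(user_groups)]
    return sorted(visible, key=lambda h: h.score, reverse=True)[:top_k]


hits = [
    Hit("salary bands", 0.91, frozenset({"hr"})),
    Hit("deploy runbook", 0.85, frozenset({"eng"})),
]
# An "eng" user only ever sees the runbook, however well the other chunk matched.
print([h.text for h in acl_search(hits, {"eng"})])
```

Filtering at query time, rather than after retrieval, matters: a post-filter can silently empty the top-k for restricted users, while an in-search payload filter still returns their best *visible* matches.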

AI consultancy · 2024 · Budapest

AI assistant on AWS

Tech-lead on a four-person team. AWS-hosted conversational assistant with external API and database integrations, RAG-enriched context, and image generation. Owned the data science layer and the Python backend.

AWS · Bedrock · RAG · LoRA · image gen · Langfuse
Code · under NDA

Pharma enterprise · contract via integrator · 2021 — 2023

Enterprise data pipelines

Two years of production ETL — Talend Cloud, Informatica, Python (Pandas / NumPy) — feeding analytics from Oracle SQL. Later, migrated a critical SQL pipeline to Pandas on AWS with MSSQL. Direct client consulting on data quality and project direction.

Talend · Informatica · Pandas · Oracle · AWS · MSSQL
Code · under NDA

Track · II

Lab projects

Own work · side projects

Lab · ongoing · GCP

Fridge chatbot · 10-agent pipeline · LiveKit voice

A household assistant built by an autonomous Claude Code pipeline I wrote — ten agents that plan, write, review, and ship code under supervision. The fridge chatbot is what the pipeline produces; LiveKit handles real-time voice on a tablet so the family talks to it instead of typing. Hosted on GCP Cloud Run. The pipeline is the real artifact; the chatbot is what proves it ships.

Claude Code · multi-agent · MCP · LiveKit · GCP · Cloud Run
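A heavily simplified sketch of the plan, write, review, ship loop; the real ten-agent pipeline, its prompts, and its Claude Code integration are private, so every name below is a hypothetical stand-in.

```python
def run_pipeline(task, agents, max_rounds=3):
    """Plan once, then loop write -> review until the reviewer approves."""
    plan = agents["planner"](task)
    for _ in range(max_rounds):
        code = agents["writer"](plan)
        verdict = agents["reviewer"](code)
        if verdict == "approve":
            return agents["shipper"](code)
        # Fold the reviewer's objection back into the plan and retry.
        plan = f"{plan}\nreviewer: {verdict}"
    raise RuntimeError("review did not converge")


# Stub agents stand in for the LLM calls so the control flow is visible.
agents = {
    "planner":  lambda task: f"plan({task})",
    "writer":   lambda plan: f"code({plan})",
    "reviewer": lambda code: "approve",
    "shipper":  lambda code: f"shipped {code}",
}
print(run_pipeline("add fridge inventory tool", agents))
```

The supervision lives in the bounded loop: an agent that never satisfies review stops the pipeline instead of shipping.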
Track · III

Applied research

Long-form · ongoing

Completed · 2024

LoRA fine-tuning · code helpers

End-to-end fine-tuning of Gemma and Llama 3.1 for a code-completion assistant. The interesting result wasn't the metric — it was the conclusion: fine-tuning is the right move when you need to change how a model behaves, not when you need it to know new things. External knowledge belongs in retrieval. Behaviour shifts — tone, format, refusal patterns, narrow task style — belong in an adapter. Confusing the two wastes weeks.

LoRA · Gemma · Llama 3.1 · behaviour vs knowledge
Results & notebook · under NDA
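To make the adapter point concrete, here is a minimal NumPy sketch of what LoRA actually trains: a low-rank update added to a frozen weight. Shapes, rank, and scaling are illustrative, not taken from the study above.

```python
import numpy as np

# Illustrative shapes and hyperparameters, not those of the study.
d, k, r = 64, 64, 8      # weight is d x k; r is the low rank, r << min(d, k)
alpha = 16               # LoRA scaling factor

rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))          # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, zero-init


def forward(x):
    # Frozen path plus the low-rank update: W x + (alpha / r) * B A x
    return W @ x + (alpha / r) * (B @ (A @ x))


x = rng.standard_normal(k)
# With B zero-initialized the adapter starts as an exact no-op;
# training only ever moves the small A and B, never W.
assert np.allclose(forward(x), W @ x)
```

This is why the behaviour-vs-knowledge split holds: the adapter steers how the frozen weights are used, while the facts stay wherever W (or the retrieval layer) put them.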

On GitHub · public cadence

778 contributions

In the last twelve months. Public repos only — most lab and studio work is private until it ships.

View on GitHub ↗

Want the full story?

Some of the most interesting work isn't on this page because it's under NDA. Email me — I'll talk you through it.

Bio & channels · Talk to the lab