Nitta — a 12-service commerce platform with polyglot persistence and event-driven flows
Polyglot persistence, JWT-proxy gateway, Kafka event flows, AWS-deployable. Private repository — case study below.
Personal · 2026
title: 'Nitta — a 12-service commerce platform with polyglot persistence and event-driven flows'
summary: 'Polyglot persistence, JWT-proxy gateway, Kafka event flows, AWS-deployable. Private repository — case study below.'
role: 'Sole engineer'
organization: 'Personal'
period: '2026'
stack:
  - 'TypeScript'
  - 'pnpm + Turborepo'
  - 'Next.js'
  - 'Node.js'
  - 'PostgreSQL'
  - 'MongoDB'
  - 'Redis'
  - 'Kafka'
  - 'MinIO/S3'
  - 'Docker'
  - 'AWS ECS Fargate'
  - 'RDS'
  - 'DocumentDB'
  - 'ElastiCache'
  - 'Secrets Manager'
repoStatus: 'private'
Why I built it
I wanted a sandbox where the constraints were realistic — multi-store persistence, real auth, real event flows, a path to production on AWS — and where every decision had to defend itself against an alternative. Nitta is that sandbox.
Architecture
┌──────────────────────┐
│ Web client (Next) │
└──────────┬───────────┘
│
┌──────────▼───────────┐
│ Gateway · 4000 │
│ JWT validation │
│ x-user-* headers │
└──┬─────┬─────┬───────┘
┌─────────────┘ │ └────────────────┐
▼ ▼ ▼
┌──────────┐ ┌──────────┐ ┌──────────┐
│ Auth │ │ User │ … │ Article │
│ Pg+Rd │ │ Pg │ │ Mo+S3 │
└──────────┘ └──────────┘ └──────────┘
Kafka topic flow
inventory ──▶ order ──▶ payment ──▶ notification
Twelve services, six datastores, one gateway. Local via Docker Compose; production on AWS ECS Fargate.
Service map
- Gateway (4000): JWT validation, header injection, prefix-based proxy
- Auth: Postgres + Redis
- User: Postgres
- Product: Mongo + Redis
- Category: Mongo
- Inventory: Postgres + Kafka
- Order: Postgres + Kafka
- Payment: Postgres + Kafka (Stripe)
- Cart: Redis
- Notification: Kafka consumer
- Search: Mongo
- Article: Mongo + S3 / MinIO
Decisions and trade-offs
JWT proxy at the gateway vs per-service auth
The gateway validates the access token once and forwards identity headers (x-user-id, x-user-role, x-user-email) to every downstream service. That keeps token-decoding logic in one place across a polyglot fleet, lets each service trust its inputs without re-implementing JWT handling, and makes auth failures observable at a single hop. The cost is that the gateway is now a trust boundary that has to be hardened — but that is a problem worth having once, not once per service.
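The validate-once pattern can be sketched as a single function: verify the token, then map its claims to the forwarded headers. This is a minimal sketch assuming HS256-signed tokens; the claim names (`sub`, `role`, `email`) and the secret-handling are assumptions, not the project's actual scheme.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// base64url without padding, as used in JWT segments.
const b64url = (buf: Buffer) =>
  buf.toString("base64").replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");

// Assumed claim shape; the real token may carry different fields.
interface Claims { sub: string; role: string; email: string; exp: number }

// Verify an HS256 JWT once at the gateway and return the identity
// headers injected into every downstream request. Throws on any
// malformed, tampered, or expired token.
function identityHeaders(token: string, secret: string): Record<string, string> {
  const [header, payload, sig] = token.split(".");
  if (!header || !payload || !sig) throw new Error("malformed token");
  const expected = b64url(
    createHmac("sha256", secret).update(`${header}.${payload}`).digest(),
  );
  if (
    expected.length !== sig.length ||
    !timingSafeEqual(Buffer.from(expected), Buffer.from(sig))
  ) {
    throw new Error("bad signature");
  }
  const claims: Claims = JSON.parse(Buffer.from(payload, "base64url").toString());
  if (claims.exp * 1000 < Date.now()) throw new Error("token expired");
  return {
    "x-user-id": claims.sub,
    "x-user-role": claims.role,
    "x-user-email": claims.email,
  };
}
```

Downstream services then read the x-user-* headers as trusted input, which only holds if the network layer guarantees requests can reach them exclusively through the gateway.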
Kafka for inventory events vs synchronous calls
The inventory → order → payment → notification chain is event-driven on Kafka. Doing it synchronously across six datastores would couple every step to every other step: a slow notification path would back up payment, a payment failure would corrupt order state, and inventory rebalancing would block the request. Kafka turns that chain into independent consumers with their own retry and ordering semantics, which is the whole point.
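The decoupling argument can be demonstrated with an in-memory stand-in for the topic chain: each stage subscribes independently, and one consumer failing does not block or corrupt the others. The topic wiring and event names below are illustrative only; the real system uses Kafka consumers, not this toy bus.

```typescript
// Tiny in-process pub/sub modeling the inventory → order → payment →
// notification chain. Not Kafka — just enough to show consumer isolation.
type Handler = (event: Record<string, unknown>) => void;
const consumers = new Map<string, Handler[]>();

function subscribe(topic: string, handler: Handler): void {
  consumers.set(topic, [...(consumers.get(topic) ?? []), handler]);
}

function publish(topic: string, event: Record<string, unknown>): void {
  for (const handler of consumers.get(topic) ?? []) {
    try {
      handler(event); // a real broker would retry or dead-letter here
    } catch {
      // one consumer's failure never propagates back to the producer
    }
  }
}

// Wire the chain. The notification consumer is deliberately broken to
// show that payment settlement still completes.
const log: string[] = [];
subscribe("inventory", (e) => { log.push("order"); publish("order", e); });
subscribe("order", (e) => { log.push("payment"); publish("payment", e); });
subscribe("payment", () => { throw new Error("notification service down"); });
subscribe("payment", () => { log.push("payment-settled"); });

publish("inventory", { sku: "abc", qty: 1 });
```

In the synchronous-call version of this flow, the thrown error above would have unwound the whole request; here it is contained to one consumer while the rest of the chain proceeds.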
MinIO in dev, S3 in prod
Object storage hides behind one S3-compatible client. MinIO ships in docker-compose for local dev so a fresh clone runs end-to-end without an AWS account; the same code talks to S3 in production. A small abstraction that keeps docker-compose up cheap without forking object-storage code paths.
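The switch can live entirely in client configuration: if a local endpoint is set, point the S3-compatible client at MinIO with path-style addressing; otherwise use plain S3 defaults. The `OBJECT_STORE_ENDPOINT` variable name is an assumption for illustration (the `MINIO_ROOT_*` names are MinIO's standard ones), and the config shape mirrors what an S3 client constructor typically accepts.

```typescript
// Env-driven config for one S3-compatible client: MinIO locally, S3 in prod.
interface ObjectStoreConfig {
  region: string;
  endpoint?: string;
  forcePathStyle?: boolean;
  credentials?: { accessKeyId: string; secretAccessKey: string };
}

function objectStoreConfig(env: Record<string, string | undefined>): ObjectStoreConfig {
  const region = env.AWS_REGION ?? "us-east-1";
  if (env.OBJECT_STORE_ENDPOINT) {
    // Local dev: MinIO from docker-compose, which needs path-style
    // bucket addressing instead of virtual-hosted-style URLs.
    return {
      region,
      endpoint: env.OBJECT_STORE_ENDPOINT,
      forcePathStyle: true,
      credentials: {
        accessKeyId: env.MINIO_ROOT_USER ?? "minioadmin",
        secretAccessKey: env.MINIO_ROOT_PASSWORD ?? "minioadmin",
      },
    };
  }
  // Production: real S3; the SDK's default credential chain applies.
  return { region };
}
```

Every call site constructs the client from this one function, so no application code branches on which backend it is talking to.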
Stack
TypeScript · pnpm + Turborepo · Next.js · Node.js · PostgreSQL · MongoDB · Redis · Kafka · MinIO/S3 · Docker · AWS ECS Fargate · RDS · DocumentDB · ElastiCache · Secrets Manager
Repository is private. Happy to walk through the code in an interview — see /contact.