
Backend Engineer
Production Stability,
System Architecture
& Data Infrastructure
I own backend systems end-to-end — from diagnosing production incidents under pressure to designing data pipelines that connect six databases into a single reporting layer.
Measured Outcomes
Production Impact
320 → 16
Database Connections Reduced
Architecture-level pool optimization across clustered processes
16 → 8
PM2 Clusters Optimized
Right-sized based on real CPU load and throughput analysis
7+
Days Zero Production Incidents
Consecutive days of stability after architectural fix
6
Production Databases Pipelined
Orchestrated into a unified BigQuery reporting layer
Featured Case Study
Production MySQL Connection Exhaustion
Logistics system — Node.js, Sequelize, PM2 cluster mode, MySQL
Connection Architecture — Before vs After
Production logistics system intermittently refused new connections. MySQL returned Too Many Connections under normal traffic load. System required manual PM2 restart to recover.
- Each PM2 worker instantiated a separate Sequelize instance — no shared connection singleton
- Default pool size of 20 applied per worker — 16 × 20 = 320 connections
- No graceful shutdown — connections leaked on process restart
- Singleton Sequelize instance across all workers
- Pool size reduced from 20 → 2 based on throughput profiling
- PM2 clusters right-sized from 16 → 8 per CPU load analysis
- SIGINT/SIGTERM handlers with kill_timeout for clean shutdown
- Connection monitoring guard for early anomaly detection
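The first two fixes can be sketched together as a module-level singleton holding one small pool. Here createDbStub is a hypothetical stand-in for `new Sequelize(...)` so the sketch is self-contained; the pool ceiling of 2 mirrors the profiled configuration, not a general recommendation.

```javascript
// Minimal sketch of the shared-connection singleton. In the real system the
// factory wraps `new Sequelize(...)`; it is stubbed here so the pattern runs
// on its own. Pool limits mirror the post-profiling configuration.
function createDbStub(config) {
  // Stand-in for `new Sequelize(database, user, password, config)`.
  return { pool: { max: config.pool.max, min: config.pool.min }, id: Symbol('conn') };
}

let instance = null;

function getDb() {
  // Every require() site inside a worker receives the same instance, so each
  // PM2 worker holds exactly one pool instead of one pool per import.
  if (!instance) {
    instance = createDbStub({ pool: { max: 2, min: 0, acquire: 30000, idle: 10000 } });
  }
  return instance;
}

module.exports = { getDb };
```

With 8 right-sized workers each capped at 2 connections, the worst case is 8 × 2 = 16 connections, the figure reported in the results below.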
- Active connections: 320 → 16 (95% reduction)
- Too Many Connections error eliminated
- 7+ consecutive days zero incidents post-deploy
- Predictable resource usage — simplified capacity planning
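The shutdown lifecycle from the fix list can be sketched as below. The server and db objects are hypothetical stand-ins for the HTTP server and the shared Sequelize instance; in production, db.close() drains the pool so no MySQL connections leak when a worker is recycled.

```javascript
// Sketch of the SIGINT/SIGTERM shutdown lifecycle: stop accepting work,
// release pooled connections, then exit before PM2's kill_timeout fires.
async function shutdown(server, db) {
  await server.close(); // stop accepting new requests first
  await db.close();     // then release pooled MySQL connections
}

function registerShutdownHandlers(server, db, exit = (code) => process.exit(code)) {
  for (const signal of ['SIGINT', 'SIGTERM']) {
    process.on(signal, async () => {
      await shutdown(server, db);
      exit(0); // exit cleanly before PM2 force-kills the worker
    });
  }
}
```

PM2's kill_timeout then acts as a backstop: if draining ever hangs, the worker is still force-killed instead of holding connections open indefinitely.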
System Ownership
Systems I've Engineered
Production systems where I owned backend architecture, performance, and operational stability.
Logistics Production System
High-availability order processing and fleet coordination
- Resolved critical connection exhaustion — reduced 320 DB connections to 16
- Redesigned PM2 cluster topology based on CPU and throughput profiling
- Implemented Singleton connection pattern and graceful shutdown lifecycle
- Deployed connection monitoring guard for anomaly detection
Fulfillment System
Order fulfillment pipeline and warehouse integration
- Built core fulfillment service handling order state transitions
- Designed idempotent API contracts for reliable warehouse integration
- Implemented structured error handling and retry mechanisms
- Optimized query patterns for high-throughput order processing
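The idempotent contract in the second bullet can be sketched as a key-based guard. The in-memory Map is a simplification (a real implementation would persist keys, for example in a database table), and the names are illustrative.

```javascript
// Sketch of an idempotency guard for warehouse calls: the first request with
// a given Idempotency-Key runs the operation; later requests with the same
// key replay the recorded outcome instead of repeating the side effect.
const results = new Map();

async function idempotent(key, operation) {
  if (!results.has(key)) {
    // Store the promise immediately so concurrent duplicates share one run.
    results.set(key, operation());
  }
  return results.get(key);
}
```

Because the promise itself is cached, two retries arriving at the same time still trigger only a single execution of the underlying operation.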
Learning Management System
Course delivery, user progress tracking, content management
- Architected modular service layer separating business logic from transport
- Designed document schema for flexible course and assessment structures
- Built role-based access control for multi-tenant content delivery
- Implemented pagination and query optimization for large dataset retrieval
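One way the pagination bullet can be realized is keyset (cursor) pagination; the exact scheme is not specified above, so this is an illustrative sketch. The array stands in for an indexed query such as WHERE id > :cursor ORDER BY id LIMIT :size.

```javascript
// Sketch of keyset pagination over a sorted collection: instead of OFFSET,
// each page resumes from the last seen id, so deep pages stay cheap on
// large datasets. The rows array is a stand-in for an indexed query.
function pageAfter(rows, cursor, size) {
  const start = cursor == null ? 0 : rows.findIndex((r) => r.id === cursor) + 1;
  const items = rows.slice(start, start + size);
  return {
    items,
    // A short final page signals there is nothing left to fetch.
    nextCursor: items.length === size ? items[items.length - 1].id : null,
  };
}
```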
Corporate Management System
Internal operations, reporting, and workflow automation
- Designed service architecture supporting multiple internal business units
- Built configurable workflow engine for operational process automation
- Implemented audit logging and activity tracking across modules
- Structured API layer for integration with third-party corporate tools
Data Pipeline Architecture
Cross-system data orchestration and business intelligence
- Designed DAG-based extraction pipeline across 6 production databases
- Automated staging to Google Cloud Storage with schema validation
- Built transformation layer in BigQuery for unified reporting datasets
- Connected final datasets to Looker Studio dashboards for stakeholders
Data Engineering
Data Pipeline & Analytics Architecture
Designed and implemented an end-to-end data pipeline connecting 6 production databases into a unified BigQuery reporting layer — orchestrated with Apache Airflow.
Pipeline Architecture — End to End
Source Databases
6 production databases across logistics, fulfillment, LMS, and corporate systems
Apache Airflow
DAG-based orchestration scheduling automated extraction jobs per source
Google Cloud Storage
Raw data staged with schema validation and partitioned by extraction date
BigQuery
Transformation and modeling layer — cleaning, joining, and aggregating cross-system datasets
Looker Studio
Final reporting dashboards consumed by operations, finance, and leadership teams
Fully automated — zero manual extraction or file transfers
Cross-system integration connecting MySQL and MongoDB sources
Fault-tolerant DAGs with retry logic and failure alerting
Stakeholder-facing dashboards updated on schedule without engineering intervention
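The schema validation applied at the staging step can be sketched as a per-row check. The orderSchema here is hypothetical (the real schemas are not shown), mapping field names to expected typeof results; rows that fail are rejected rather than written to the staging bucket.

```javascript
// Sketch of a per-row schema check run before rows are staged to Cloud
// Storage, assuming a simple declarative schema: field name -> expected type.
const orderSchema = { id: 'number', status: 'string', created_at: 'string' };

function validateRow(row, schema) {
  const errors = [];
  for (const [field, type] of Object.entries(schema)) {
    if (typeof row[field] !== type) {
      errors.push(`${field}: expected ${type}, got ${typeof row[field]}`);
    }
  }
  return { valid: errors.length === 0, errors };
}
```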
Interactive
Ask My AI
Ask about experience, projects, tech stack, skills, education, certifications, and more