Backend Developer – Django / PostgreSQL
Posted 2026-05-05

The system ingests operational data, computes industrial KPIs, generates structured AI insights, and exposes deterministic APIs for a mobile application.
This role is strictly backend-focused. No frontend work is included.
Backend Architecture
The platform is built on:
- Django + Django REST Framework
- PostgreSQL with an ELT structure: raw → staging → analytics
- Celery + Redis for task orchestration
- Stripe for the billing boundary (already scoped separately)
- Docker-based deployment
Core Architectural Principles
- Multi-tenant isolation at organisation and site level
- Deterministic KPI recomputation
- Append-only raw data layer
- Strict schema validation for ingestion
- Versioned KPI logic
- AI outputs must be grounded in stored data
- No autonomous AI actions; AI output is advisory only
Backend Responsibilities (High-Level)
1. Data Ingestion Layer
- Build a robust CSV ingestion pipeline
- Implement header validation and schema enforcement
- Ensure idempotent file handling with no duplicate ingestion
- Transform raw data into the canonical ProductionFact model
- Maintain ingestion logs and validation reports
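The idempotency requirement above can be sketched as a content-hash check against an ingestion log before any rows are parsed. This is a minimal sketch: the header list, function name, and in-memory set standing in for a database-backed log are all illustrative, not the real ingestion contract.

```python
import csv
import hashlib
import io

# Hypothetical canonical header set for ProductionFact ingestion;
# the real schema comes from the ingestion contract.
EXPECTED_HEADERS = ["site_id", "workcenter_id", "sku_id", "good_units", "scrap_units"]

_seen_digests = set()  # stand-in for a DB-backed ingestion log table


def ingest_csv(raw: bytes) -> list:
    """Validate headers, reject duplicate files by content hash, return parsed rows."""
    digest = hashlib.sha256(raw).hexdigest()
    if digest in _seen_digests:
        # Idempotent: re-submitting the same file produces no new rows.
        return []
    reader = csv.DictReader(io.StringIO(raw.decode("utf-8")))
    if reader.fieldnames != EXPECTED_HEADERS:
        raise ValueError(f"header mismatch: {reader.fieldnames}")
    rows = [dict(row) for row in reader]
    _seen_digests.add(digest)  # record only after successful validation
    return rows
```

Hashing the raw bytes (rather than the filename) means a renamed duplicate upload is still caught, while a genuinely corrected re-upload with different content passes through.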
2. Manufacturing Data Model Refinement
Refactor the ProductionFact schema to support:
- Workcenter context
- SKU and job granularity
- Structured downtime categorisation
- Cost attribution fields
Additionally:
- Implement canonical master data tables
- Enforce referential integrity
3. KPI Engine (Industrial-Grade)
- Correct OEE computation including availability, performance, and quality
- Implement structured downtime loss logic
- Build reliability metrics foundation using event-based design
- Ensure deterministic recompute capability
- Support time-series aggregation
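The OEE requirement above decomposes into the standard availability × performance × quality product. A deterministic sketch using `Decimal` (consistent with the monetary-precision requirement elsewhere in this spec); parameter names are illustrative, and all time inputs must share one unit:

```python
from decimal import Decimal


def compute_oee(planned_time, runtime, ideal_cycle_time, total_units, good_units):
    """OEE = availability x performance x quality.

    planned_time, runtime, ideal_cycle_time: Decimals in the same time unit.
    total_units, good_units: integer unit counts.
    """
    availability = runtime / planned_time
    performance = (ideal_cycle_time * total_units) / runtime
    quality = Decimal(good_units) / Decimal(total_units)
    return availability * performance * quality
```

Because every input is stored data and the arithmetic is exact-decimal, recomputing OEE for any historical window yields byte-identical results, which is what the deterministic-recompute requirement demands.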
4. Dashboard APIs
- Expose pre-computed KPI endpoints
- Implement cached read APIs
- Support filtering by site, shift, and workcenter
- Enforce entitlement gating
5. AI Insight Layer (Backend Only)
Generate and store:
- AI Suggestions
- AI Improvements
- AI Insights
Additionally:
- Ensure traceability to source data
- Cache AI outputs
- No frontend integration required
6. Task Orchestration
Implement Celery task chains:
validate → transform → ingest → compute KPIs → generate AI insights
Also include:
- Scheduled ingestion support
- Idempotent task handling
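In production each step above would be a Celery task composed with `celery.chain`; the plain-Python sketch below illustrates only the two properties this section requires, fixed step ordering and exactly-once handling per file. Step names and the `processed` set are illustrative stand-ins for real task modules and a persistent dedup store.

```python
def run_pipeline(file_id, processed, steps):
    """Run the steps in order exactly once per file_id.

    processed: set of already-handled file ids (stand-in for a DB table).
    steps: list of (name, callable) pairs executed in sequence.
    Returns the list of executed step names, or [] on a duplicate delivery.
    """
    if file_id in processed:
        return []  # idempotent: re-delivery of the same file is a no-op
    payload = file_id
    executed = []
    for name, step in steps:
        payload = step(payload)  # each step feeds the next, like a chain
        executed.append(name)
    processed.add(file_id)  # mark done only after the full chain succeeds
    return executed
```

Marking the file as processed only after the final step succeeds means a mid-chain failure leaves the file eligible for a clean retry, while a broker re-delivery after success is ignored.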
Phase 3 – Manufacturing Intelligence Expansion
1. Job-Level Margin Foundation (Complete Implementation)
Data Model Expansion
Extend the schema with a dedicated JobPerformance model. Do not overload ProductionFact.
The model must include:
- job_id indexed and tenant-scoped
- site_id
- workcenter_id
- sku_id
- quoted_revenue
- quoted_material_cost
- quoted_labour_cost
- quoted_overhead_cost
- actual_material_cost
- actual_labour_cost
- allocated_overhead_cost
- downtime_cost
- scrap_cost
- revenue_recognised
- job_status
- job_start_date
- job_end_date
All monetary fields must use Decimal with currency support.
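The field list above can be sketched as a plain dataclass to make the types concrete; in the actual Django model these would be `DecimalField` columns with a currency column alongside, and `job_id` indexed and tenant-scoped. The `currency` default here is an assumption for illustration only.

```python
import datetime
from dataclasses import dataclass
from decimal import Decimal
from typing import Optional


@dataclass
class JobPerformance:
    """Type-level sketch of the dedicated JobPerformance model (not ProductionFact)."""
    job_id: str            # indexed, tenant-scoped in the real model
    site_id: str
    workcenter_id: str
    sku_id: str
    quoted_revenue: Decimal
    quoted_material_cost: Decimal
    quoted_labour_cost: Decimal
    quoted_overhead_cost: Decimal
    actual_material_cost: Decimal
    actual_labour_cost: Decimal
    allocated_overhead_cost: Decimal
    downtime_cost: Decimal
    scrap_cost: Decimal
    revenue_recognised: Decimal
    job_status: str
    job_start_date: datetime.date
    job_end_date: Optional[datetime.date]  # open jobs have no end date
    currency: str = "USD"  # assumption: one ISO currency code per row
```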
Margin Calculations (Deterministic)
Implement:
Actual Margin = revenue_recognised - (actual_material_cost + actual_labour_cost + allocated_overhead_cost + downtime_cost + scrap_cost)
Quoted Margin = quoted_revenue - (quoted_material_cost + quoted_labour_cost + quoted_overhead_cost)
Margin Variance % = (Actual Margin - Quoted Margin) / Quoted Margin × 100
Margin Erosion Attribution must break down percentage erosion into:
- Scrap contribution
- Downtime contribution
- Labour overrun
- Material price variance
All formulas must be versioned and logged.
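The three margin formulas above translate directly into exact-decimal functions. A minimal sketch, assuming the JobPerformance field names; the version tag is illustrative of the versioned-formula requirement, not a real configuration value:

```python
from decimal import Decimal

FORMULA_VERSION = "margin-v1"  # illustrative; real versions live in config/logs


def actual_margin(revenue_recognised, actual_material_cost, actual_labour_cost,
                  allocated_overhead_cost, downtime_cost, scrap_cost):
    """Revenue recognised minus the sum of all actual cost components."""
    return revenue_recognised - (actual_material_cost + actual_labour_cost
                                 + allocated_overhead_cost + downtime_cost + scrap_cost)


def quoted_margin(quoted_revenue, quoted_material_cost, quoted_labour_cost,
                  quoted_overhead_cost):
    """Quoted revenue minus the sum of all quoted cost components."""
    return quoted_revenue - (quoted_material_cost + quoted_labour_cost
                             + quoted_overhead_cost)


def margin_variance_pct(actual, quoted):
    """(Actual - Quoted) / Quoted, expressed as a percentage."""
    return (actual - quoted) / quoted * Decimal(100)
```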
Margin APIs
Build:
- /api/margin/job/{job_id}
- /api/margin/site/{site_id}
- /api/margin/summary
Responses must include:
- Margin values
- Variance percentage
- Erosion breakdown
- Financial impact
- Data lineage metadata
All results must be cacheable and recomputable.
2. Cost Attribution Logic (Production-Grade)
Deterministic Cost Model
Implement a cost engine with:
Material cost per good unit = actual_material_cost / good_units
Labour cost per runtime hour = actual_labour_cost / runtime_hours
Overhead allocation must support configurable methods:
- Per shift
- Per runtime hour
- Per job
A configuration table must define the allocation rule per tenant.
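The cost model above can be sketched as two unit-cost formulas plus a dispatch on the tenant's configured allocation method. The `ALLOCATION_RULES` dict stands in for the per-tenant configuration table; tenant ids and method names are illustrative.

```python
from decimal import Decimal

# Stand-in for the per-tenant configuration table of allocation rules.
ALLOCATION_RULES = {"tenant-a": "per_runtime_hour", "tenant-b": "per_job"}


def material_cost_per_good_unit(actual_material_cost, good_units):
    return actual_material_cost / Decimal(good_units)


def labour_cost_per_runtime_hour(actual_labour_cost, runtime_hours):
    return actual_labour_cost / runtime_hours


def allocate_overhead(tenant_id, total_overhead, *, shifts=None,
                      runtime_hours=None, jobs=None):
    """Allocate overhead by the tenant's configured method: per shift,
    per runtime hour, or per job."""
    method = ALLOCATION_RULES[tenant_id]
    if method == "per_shift":
        return total_overhead / Decimal(shifts)
    if method == "per_runtime_hour":
        return total_overhead / runtime_hours
    if method == "per_job":
        return total_overhead / Decimal(jobs)
    raise ValueError(f"unknown allocation method: {method}")
```

Keeping the allocation rule in configuration rather than code means a tenant can switch methods without a deploy, while the formula version recorded on each response preserves recomputability.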
KPI Endpoints
Build:
- /api/kpi/cost-per-unit
- /api/kpi/cost-variance
- /api/kpi/unit-economics
All endpoints must support filtering by:
- site
- workcenter
- sku
- job
- time range
All responses must include formula version and input data range.
3. Cross-Site Normalised Benchmarking (Internal)
Normalisation Rules
Standardise:
- OEE (time-weighted)
- Scrap percentage
- Cost per unit
Ensure:
- Comparable time ranges
- Comparable shift hours
- Currency normalisation
Percentile Logic
For each KPI:
- Compute distribution across sites
- Assign percentile rank
- Flag top performer
- Flag bottom performer
- Flag above or below median
Store benchmarking snapshots for reproducibility.
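The percentile steps above can be sketched over a snapshot of one KPI's values across sites. A minimal sketch; the percentile convention (fraction of sites at or below a site's value) and the median rule are assumptions, since the spec does not fix them:

```python
def benchmark(values_by_site):
    """Percentile rank, top/bottom flags, and above-median flag per site,
    computed over one KPI snapshot ({site_id: kpi_value})."""
    sites = sorted(values_by_site, key=values_by_site.get)
    n = len(sites)
    ordered = [values_by_site[s] for s in sites]
    # Median: middle value, or mean of the two middle values for even n.
    median = ordered[n // 2] if n % 2 else (ordered[n // 2 - 1] + ordered[n // 2]) / 2
    out = {}
    for site, value in values_by_site.items():
        at_or_below = sum(1 for v in values_by_site.values() if v <= value)
        out[site] = {
            "percentile": round(100 * at_or_below / n),
            "top_performer": value == ordered[-1],
            "bottom_performer": value == ordered[0],
            "above_median": value > median,
        }
    return out
```

Persisting the input snapshot alongside the output (as the spec requires) makes every rank reproducible even after site data is re-ingested.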
Benchmark APIs
Build:
- /api/benchmark/kpi/{kpi_name}
- /api/benchmark/site/{site_id}
Responses must return:
- Rank
- Percentile
- Group average
- Variance from average
- Financial impact