Development Notice: Senpai is a product in active development by InGen Dynamics. Features and capabilities described reflect our current design intent and development roadmap. Specifications are subject to change.
Senpai is an AI-powered educational companion robot being developed by InGen Dynamics — designed to adapt to how each child learns, support students with special educational needs, and bring personalised teaching to classrooms and homes at scale.
Product concepts, design intent, and the InGen Dynamics vision for the future of educational robotics.
Education has two extremes. Software that scales but cannot teach. Specialists who teach brilliantly but cannot scale. Every child between those two points is underserved.
Senpai is being developed to occupy the space between — a physical AI companion designed to bring the adaptive intelligence of a great teacher to every child, regardless of where they learn, what their needs are, or what language they speak at home.
Senpai's architecture is being designed to serve multiple education verticals through curriculum and configuration adaptation — without requiring a separate product for each segment. All market sizing data below is sourced from independent third-party research.
InGen Dynamics is a robotics and artificial intelligence company developing intelligent systems across education, inspection, and security verticals. Senpai is one of the company's primary active development programmes.
All InGen Dynamics products are built on the Origami AI platform — a proprietary multi-model AI architecture developed in-house for physical AI companions and autonomous robotic systems. The company is currently in an active development phase across its full product portfolio.
We welcome enquiries from educational institutions, research partners, and those interested in following InGen Dynamics' development journey.
A physical AI educational companion designed for learners aged 3 to 18 and beyond. Every hardware and software decision has been made with one question in mind: what does this child need right now, and how do we deliver it in a way that feels natural, warm, and genuinely helpful?
Children engage more deeply with physical companions than with screens. Every hardware decision in Senpai's design starts from this premise — not as a marketing claim, but as a design constraint that propagates through the entire system.
The result is a robot designed to express warmth and attentiveness through its face, its posture, and the way it moves. Its expressions draw on Disney's 12 Principles of Animation — anticipation, squash and stretch, follow-through — because we believe the qualities of a great teacher should not be lost in translation to technology.
Senpai's pedagogical framework draws on the most robustly evidenced methods in educational psychology: retrieval practice, spaced repetition, elaborative interrogation, and multimodal encoding. These are not buzzwords — they are the approaches with the strongest and most replicated evidence base for long-term learning.
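As a rough sketch of the spacing mechanics, a spaced-repetition update might look like the following. The interval multipliers and the 0-5 recall scale are illustrative placeholders, not Senpai's actual tuning.

```python
# Illustrative sketch only: a simplified SM-2-style spaced-repetition scheduler.
# The interval multipliers and the 0-5 recall-quality scale are assumptions for
# illustration, not Senpai's actual parameters.
from dataclasses import dataclass

@dataclass
class CardState:
    interval_days: float = 1.0   # days until the next review
    ease: float = 2.5            # growth factor applied after successful recall
    repetitions: int = 0         # consecutive successful recalls

def schedule_next_review(state: CardState, recall_quality: int) -> CardState:
    """Update the review schedule from a 0-5 recall score."""
    if recall_quality < 3:
        # Failed recall: restart the spacing sequence, slightly lower the ease.
        return CardState(interval_days=1.0, ease=max(1.3, state.ease - 0.2))
    repetitions = state.repetitions + 1
    if repetitions == 1:
        interval = 1.0
    elif repetitions == 2:
        interval = 6.0
    else:
        interval = state.interval_days * state.ease
    # Ease drifts up or down with how confidently the item was recalled.
    ease = max(1.3, state.ease + 0.1 - (5 - recall_quality) * 0.08)
    return CardState(interval_days=interval, ease=ease, repetitions=repetitions)
```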
Senpai's 720p projector (with an engineering upgrade path to 1080p) is designed to cast interactive content onto any nearby surface — a table, a floor, a wall — at up to 100 inches from 1.5 metres. Auto-keystone correction and autofocus are designed to make surface calibration automatic.
The projector system uses a depth camera to map the projection surface and track finger position at up to 60 frames per second — making the projected surface interactive. The system is designed so that children can tap, drag, and draw directly on the projected content.
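As a sketch of how depth-based touch sensing of this kind can work, a fingertip can be classified by its distance from a plane fitted to the projection surface. The plane fit, thresholds, and frame format below are illustrative assumptions, not Senpai's implementation.

```python
# Illustrative sketch: detecting a fingertip "touch" on a projected surface by
# comparing depth-camera readings against a fitted model of the surface plane.
# The thresholds and plane-fitting approach are assumptions for illustration.
import numpy as np

TOUCH_MM = 12.0   # fingertip within ~12 mm of the surface counts as a touch
HOVER_MM = 40.0   # between touch and hover it is treated as "approaching"

def fit_surface_plane(depth_mm: np.ndarray) -> np.ndarray:
    """Least-squares plane z = ax + by + c fitted to a background depth frame."""
    h, w = depth_mm.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, depth_mm.ravel(), rcond=None)
    return coeffs  # (a, b, c)

def classify_fingertip(fingertip_px: tuple[int, int],
                       fingertip_depth_mm: float,
                       plane: np.ndarray) -> str:
    """Classify a tracked fingertip as touch / hover / away from the surface."""
    x, y = fingertip_px
    surface_depth = plane[0] * x + plane[1] * y + plane[2]
    gap = surface_depth - fingertip_depth_mm   # finger's height above the plane
    if gap <= TOUCH_MM:
        return "touch"
    if gap <= HOVER_MM:
        return "hover"
    return "away"
```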
This capability unlocks learning experiences that are genuinely physical — handwriting practice on a real surface, collaborative mapping exercises, finger-drawn maths on a table — rather than simulated physical interaction through a touchscreen.
Senpai's data is designed to surface differently for each stakeholder — the teacher needs teaching intelligence, the student needs a learning adventure, and the parent needs transparency in plain language.
Senpai's content library is being developed to cover the full breadth of school-age education — with curriculum alignment for major national frameworks built in from the start rather than retrofitted. Every subject is designed to support both classroom deployment and independent home learning.
Senpai's approach to language learning is being built around one principle: that conversation produces fluency faster than vocabulary memorisation. The system is designed to prioritise getting children speaking as early as possible, then building grammar and vocabulary knowledge to support and extend that conversation.
Pronunciation coaching is designed to use spectral audio analysis to compare a student's pronunciation against age-appropriate reference models for each language — giving feedback that is specific, actionable, and kind.
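One common way to implement this kind of comparison is MFCC features aligned with dynamic time warping. The sketch below compares against a single reference utterance and maps distance onto an illustrative 0-100 score; both are simplifying assumptions rather than Senpai's scoring model.

```python
# Illustrative sketch: comparing a learner's pronunciation to a reference
# recording using MFCC features aligned with dynamic time warping (DTW).
# Feature settings and the 0-100 mapping are assumptions for illustration.
import librosa
import numpy as np

def pronunciation_distance(student_wav: str, reference_wav: str) -> float:
    """Return an average per-frame spectral distance between two utterances."""
    y_s, sr = librosa.load(student_wav, sr=16000)
    y_r, _ = librosa.load(reference_wav, sr=16000)
    mfcc_s = librosa.feature.mfcc(y=y_s, sr=sr, n_mfcc=13)
    mfcc_r = librosa.feature.mfcc(y=y_r, sr=sr, n_mfcc=13)
    # DTW aligns the two utterances so tempo differences are not penalised.
    cost_matrix, warp_path = librosa.sequence.dtw(X=mfcc_s, Y=mfcc_r, metric="euclidean")
    return float(cost_matrix[-1, -1] / len(warp_path))

def to_feedback_score(distance: float, scale: float = 50.0) -> int:
    """Map distance to an illustrative 0-100 closeness score (higher is closer)."""
    return int(round(100.0 * np.exp(-distance / scale)))
```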
The EAL (English as an Additional Language) module is designed as a separate configuration — delivering curriculum content in a student's home language while gradually increasing the proportion of English as their confidence grows.
SEND is not a feature we added to Senpai. It is a primary design constraint that has shaped every hardware and software decision from day one. This page describes our inclusion framework, our six adaptation modes, and the child safety architecture we are building.
The SEND education market is one of the most structurally underserved in education technology. Existing products either exclude SEND learners entirely or treat accessibility as a visual design exercise — changing font sizes and contrast ratios without fundamentally changing how learning is delivered.
We believe that genuine inclusion means designing the entire learning interaction around the needs of each child — not appending accessibility features to a product designed for a neurotypical majority. This is not just the right thing to do. It is a commercial opportunity that the market has largely failed to address.
Each adaptation mode represents a comprehensive set of interaction design decisions — not a checklist of visual adjustments. Every mode is designed to be activatable individually or in combination, recognising that many students have overlapping needs.
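As a sketch of how composable modes might be represented, the flags below echo the modes named in the sensor table rather than enumerating the actual six, and the example adjustments are purely illustrative.

```python
# Illustrative sketch: representing adaptation modes as composable flags so one
# profile can activate several at once. The mode names echo those mentioned in
# the sensor table; the full set of six and the example adjustments are
# assumptions for illustration.
from enum import Flag, auto

class AdaptationMode(Flag):
    NONE = 0
    ASC = auto()        # autism spectrum condition: sensory-aware pacing
    ADHD = auto()       # attention support: shorter activity blocks
    DYSLEXIA = auto()   # reading support: adjusted text presentation
    PD = auto()         # physical disability: eye-gaze and switch input

def session_settings(modes: AdaptationMode) -> dict:
    """Derive a handful of example session parameters from the active modes."""
    settings = {"activity_minutes": 15, "input": "touch", "audio_prompts": True}
    if AdaptationMode.ADHD in modes:
        settings["activity_minutes"] = 8      # shorter blocks, more breaks
    if AdaptationMode.ASC in modes:
        settings["audio_prompts"] = False     # reduce unexpected sound
    if AdaptationMode.PD in modes:
        settings["input"] = "eye_gaze"        # hardware-enabled alternative input
    return settings

# Modes combine naturally because many students have overlapping needs:
profile = AdaptationMode.ADHD | AdaptationMode.ASC
```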
SEOM — the Safety and Ethics Operations Model — is a set of 12 rules being designed to govern every Senpai interaction with a child. Rather than relying on post-generation content filtering, these rules are intended to be embedded at the AI training objective level: the model is designed not to learn unsafe interaction patterns in the first place.
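One way a rule set can be embedded in the training objective rather than applied as a post-hoc filter is to add a rule-violation penalty to the loss. The sketch below is illustrative only; the violation scorer, penalty weight, and tensor shapes are placeholder assumptions, not SEOM's formulation.

```python
# Illustrative sketch: folding a safety penalty into the training objective
# rather than filtering outputs afterwards. The rule-violation scorer and the
# penalty weight are placeholders, not SEOM's actual design.
import torch

SAFETY_WEIGHT = 5.0  # illustrative: how strongly violations dominate the loss

def combined_loss(task_loss: torch.Tensor,
                  violation_scores: torch.Tensor) -> torch.Tensor:
    """
    task_loss:        the ordinary learning objective for the batch
    violation_scores: per-sample scores in [0, 1] from a rule checker,
                      shape (batch, num_rules)
    """
    # Penalise any predicted interaction that a rule checker flags, pushing the
    # model away from unsafe interaction patterns during training itself.
    safety_penalty = violation_scores.sum(dim=1).mean()
    return task_loss + SAFETY_WEIGHT * safety_penalty
```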
Rather than building for one regulatory framework and retrofitting for others, Senpai's design is intended to satisfy the intersection of the most demanding requirements across all target markets simultaneously. Note: compliance validation will be conducted through appropriate certification bodies during the development process — the frameworks below describe design intent.
We welcome enquiries from educational institutions, research partners, government bodies, and those interested in following InGen Dynamics' development journey. Please get in touch via the InGen Dynamics website.
All product enquiries are handled through the InGen Dynamics corporate website.
This page details the planned systems architecture, hardware specifications, sensor suite, compute platform, AI model implementation, security design, and V-Model development process for Senpai. All specifications represent current design intent and are subject to change during development and engineering validation.
Senpai's architecture implements a three-tier design: edge compute (on the robot) for real-time student interaction, a local classroom server for coordination and MIS integration, and cloud infrastructure for fleet management and model training. Core learning functions are designed to run on-device, with cloud connectivity optional rather than required.
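As an illustration of that division of labour, routing logic of the following shape keeps core learning independent of the cloud. Function names and tier assignments are placeholder assumptions, not the actual service layout.

```python
# Illustrative sketch: routing work across the three planned tiers, with cloud
# connectivity treated as optional. Tier assignments and function names are
# assumptions for illustration.
from enum import Enum

class Tier(Enum):
    EDGE = "robot"               # real-time student interaction
    LOCAL = "classroom_server"   # coordination and MIS integration
    CLOUD = "cloud"              # fleet management and model training

PREFERRED_TIER = {
    "tutor_dialogue": Tier.EDGE,
    "safety_checks": Tier.EDGE,
    "class_coordination": Tier.LOCAL,
    "mis_sync": Tier.LOCAL,
    "fleet_analytics": Tier.CLOUD,
    "model_training": Tier.CLOUD,
}

def route(function: str, cloud_available: bool, local_available: bool) -> Tier | None:
    """Pick where a function runs; core learning never depends on the cloud."""
    tier = PREFERRED_TIER[function]
    if tier is Tier.CLOUD and not cloud_available:
        return None        # defer: queue for later sync, nothing blocks on it
    if tier is Tier.LOCAL and not local_available:
        return Tier.EDGE   # the robot keeps teaching on its own
    return tier
```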
Senpai's sensor suite is designed to enable AMDC engagement detection, SEOM safety monitoring, and hardware-enabled SEND adaptations (eye-gaze for Physical Disability mode, ambient noise for ASC, CO₂ for ADHD ventilation alerts). All sensor data is processed on-device and designed not to persist beyond the current session unless parents have given explicit consent.
| Sensor | Planned Model / Spec | Primary Purpose | SEND Application |
|---|---|---|---|
| Front Camera | 8MP RGB · 1080p@60fps · 90° FOV | Student engagement detection (AMDC) · QR code scanning · expression analysis (consent-gated) | All modes — primary engagement signal |
| Eye-Gaze Camera | IR 120 fps · 1280×720 · near-IR LEDs | Gaze input for PD students · attention tracking (ADHD) · reading pattern analysis (dyslexia) | PD Mode: full curriculum eye-gaze navigation |
| Ambient Light | TCS34725 RGB+Clear · 0–60,000 lux | Auto-brightness · display colour temperature · environment monitoring | ASC: sensory environment monitoring (threshold-based) |
| Proximity (ToF) | VL53L1X · 4 m range · ±3% accuracy | Student presence · sleep mode trigger · projector autofocus distance | All modes: engagement/proximity proxy for AMDC |
| Noise Sensor | MEMS mic · 30–130 dB SPL | Classroom noise monitoring · audio quality gate · teacher alert on excessive noise | ASC: sensory protocol when >65 dB · ADHD: ventilation-linked attention |
| Temperature | BME680 · ±0.5°C accuracy | Thermal management · environment quality (target 18–24°C learning range) | All: environment quality alerting |
| Humidity | BME680 · ±3% RH accuracy | Environment quality monitoring (40–60% RH optimal) | General classroom wellbeing |
| CO₂ Sensor | SCD41 · 400–5000 ppm · ±50 ppm | Classroom ventilation alert (>1000 ppm = attention decline signal) · teacher notification | ADHD: linked to attention capacity monitoring |
| Accelerometer | MPU6050 · 3-axis · ±2g | Orientation detection · drop detection (shock protection) · transport mode activation | PD: tilt sensing for alternative input modes |
| Gyroscope | MPU6050 · 3-axis · ±250°/s | Projector stabilisation · tilt detection · anti-theft motion alert | WBT1: navigation stabilisation for mobile mode |
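The thresholds in the table translate directly into classroom alerting. The sketch below uses the numeric values listed above; the alert wording and routing are illustrative assumptions.

```python
# Illustrative sketch: turning the threshold values from the sensor table into
# classroom alerts. The alert wording and routing are assumptions; the numeric
# thresholds are the ones listed above.
from dataclasses import dataclass

@dataclass
class EnvironmentReading:
    noise_db: float        # MEMS mic, dB SPL
    co2_ppm: float         # SCD41
    temperature_c: float   # BME680
    humidity_rh: float     # BME680

def environment_alerts(r: EnvironmentReading, asc_mode_active: bool) -> list[str]:
    alerts = []
    if r.co2_ppm > 1000:
        alerts.append("Ventilation: CO2 above 1000 ppm, attention may decline")
    if r.noise_db > 65 and asc_mode_active:
        alerts.append("Sensory: noise above 65 dB, ASC sensory protocol triggered")
    if not 18 <= r.temperature_c <= 24:
        alerts.append("Comfort: temperature outside the 18-24 C learning range")
    if not 40 <= r.humidity_rh <= 60:
        alerts.append("Comfort: humidity outside the 40-60% RH optimal band")
    return alerts
```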
Below are the implementation-level design specifications for each of the six Origami AI models. All code blocks, thresholds, and parameter values represent current design intent and are subject to revision during development and empirical validation.
| Model | Name | Deployment | Target Inference | Parameters (planned) |
|---|---|---|---|---|
| GRPO | Goal-Reward Policy Optimisation | Edge (INT8 quantised) | <15 ms (TensorRT) | 30 M (edge) / 120 M (full) |
| SEOM | Safety & Ethics Operations Model | Edge (critical path) | <5 ms | 5 M (8-bit quantised) |
| STUM | Spatio-Temporal Uncertainty Model | Edge (lightweight) | <10 ms | TBD during development |
| AMDC | Adaptive Multi-Modal Data Calibration | Edge (real-time) | ~8 ms | Sensor fusion model |
| HTD-IRL | Hierarchical Task Decomposition via Inverse RL | Cloud (planning) | Non-latency-critical | TBD during development |
| CRL-MRS | Cooperative RL Multi-Robot System | Local server (classroom) | 1 Hz fleet sync / 10 Hz events | TBD during development |
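The edge latency targets above imply 8-bit quantisation plus routine latency verification. The sketch below is illustrative only, using a toy network and PyTorch dynamic quantisation as stand-ins for the planned TensorRT INT8 path.

```python
# Illustrative sketch: quantising a toy model to 8-bit and checking it against a
# latency budget like those in the table. The toy network and PyTorch dynamic
# quantisation are stand-ins; the planned edge path uses TensorRT INT8 engines.
import time
import torch
import torch.nn as nn

BUDGET_MS = 15.0   # e.g. the GRPO edge inference target

toy_policy = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 8))
quantised = torch.quantization.quantize_dynamic(
    toy_policy, {nn.Linear}, dtype=torch.qint8
)

def worst_case_latency_ms(model: nn.Module, runs: int = 200) -> float:
    """Measure the 99th-percentile single-call latency in milliseconds."""
    x = torch.randn(1, 64)
    with torch.no_grad():
        model(x)                      # warm-up
        samples = []
        for _ in range(runs):
            start = time.perf_counter()
            model(x)
            samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return samples[int(0.99 * len(samples))]

latency = worst_case_latency_ms(quantised)
print(f"p99 latency {latency:.2f} ms: {'within' if latency <= BUDGET_MS else 'over'} budget")
```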
Security architecture is designed to meet or exceed COPPA, GDPR Article 32, FERPA, and UK DPA 2018 requirements. All student-facing data flows are designed with privacy-by-default: raw audio and video are not retained, no PII appears in logs, and biometric data is processed on-device only.
| Layer | Planned Implementation | Compliance Target |
|---|---|---|
| Transport Security | TLS 1.3 all connections · mutual TLS robot↔server · certificate pinning | COPPA · GDPR Article 32 |
| Authentication | SAML 2.0 SSO (Okta/Azure AD) · OAuth 2.0 parent app · X.509 device certificates | FERPA · UK DPA 2018 |
| Data at Rest | AES-256 encryption · hardware TPM 2.0 · encrypted SQLite for local cache | COPPA · GDPR Article 32 |
| Student Privacy | No PII in logs · zero raw audio/video retention · anonymised analytics only · parent app gated by parental consent | COPPA §312.8 · GDPR Article 6(1)(a) |
| Access Control | Role-based: Student / Teacher / SENCO / Admin / IT · per-student data isolation · full audit logging | FERPA · UK DPA 2018 |
| Biometric Design | Gaze and expression data processed on-device only · no biometric profiles designed to persist beyond session end | GDPR Article 9 · COPPA |
| Data Deletion | Parent-initiated deletion via app within 24 hours · automated 90-day retention policy for session logs | GDPR Article 17 · COPPA |
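As one illustration of the data-at-rest row, a cached record could be sealed with AES-256-GCM before it touches disk. Key handling (the planned TPM 2.0 path) and the actual SQLite layer are out of scope here, and all names are placeholders.

```python
# Illustrative sketch: AES-256-GCM encryption of a locally cached record before
# it is written to disk. Key management (the planned TPM 2.0 path) and the
# actual SQLite layer are out of scope; names are placeholders.
import os
import json
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_record(record: dict, key: bytes) -> bytes:
    """Serialise and encrypt a cache record; returns nonce + ciphertext."""
    aesgcm = AESGCM(key)                  # key is 32 bytes for AES-256
    nonce = os.urandom(12)                # unique per record
    plaintext = json.dumps(record).encode()
    return nonce + aesgcm.encrypt(nonce, plaintext, None)

def decrypt_record(blob: bytes, key: bytes) -> dict:
    aesgcm = AESGCM(key)
    nonce, ciphertext = blob[:12], blob[12:]
    return json.loads(aesgcm.decrypt(nonce, ciphertext, None))

# In this sketch the key would come from hardware-backed storage, never from code:
key = AESGCM.generate_key(bit_length=256)
blob = encrypt_record({"session_id": "demo", "summary": "fractions practice"}, key)
assert decrypt_record(blob, key)["session_id"] == "demo"
```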
Senpai development follows the V-Model systems engineering lifecycle — requirements flow down the left side (decomposition into 9 subsystems), implementation sits at the bottom, and verification and validation flow up the right side. Seven parallel development tracks run concurrently: HW, FW, AI, PIC 2.0, BE, APP, and INT.
This page details Senpai's UX/UI design system — the design principles, token library, typography, motion language, LED ring states, screen layouts for all 13 planned screens, and the dashboard ecosystem. All designs are in development and subject to change.
The design system serves four stakeholders: the student (who needs joy and challenge), the teacher (who needs insight and time saved), the parent (who needs transparency), and the SENCO (who needs evidence). Six design principles define how every interface decision serves them.
The Senpai robot display uses a warm dark indigo palette — "Knowledge Studio" — entirely distinct from clinical, corporate, or consumer-tech aesthetics. Functional signal colours are reserved for specific semantic meanings and never mixed. The Senpai public website and dashboards use InGen Orange as the primary brand accent on a warm cream foundation.
Senpai's motion language is warm but purposeful. Achievement animations are designed to activate the reward system. LED ring states communicate emotion and activity to the entire classroom without words. All animations are capped at ≤3 Hz to prevent photosensitive responses — SEOM E01 enforces this at the animation system level.
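Enforcing the 3 Hz cap at the animation layer can be as simple as clamping every requested flash rate before it is scheduled. The API below is a placeholder sketch, not the real animation system.

```python
# Illustrative sketch: clamping any requested flash/blink frequency at the
# animation layer so nothing on the display or LED ring exceeds 3 Hz. The
# function names here are placeholders for illustration.
MAX_FLASH_HZ = 3.0   # photosensitivity cap enforced for every animation

def safe_flash_rate(requested_hz: float) -> float:
    """Return a flash rate that can never exceed the photosensitivity cap."""
    return min(max(requested_hz, 0.0), MAX_FLASH_HZ)

def schedule_led_pulse(requested_hz: float) -> dict:
    """Convert a (clamped) flash rate into a pulse period for the LED ring."""
    hz = safe_flash_rate(requested_hz)
    return {"rate_hz": hz, "period_ms": None if hz == 0 else 1000.0 / hz}
```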
Senpai's 13 planned screen layouts span the student robot display (10.1″ AMOLED), the teacher desktop dashboard, student learning adventure, parent mobile app, and admin interface. Each is designed for a distinct user, context, and purpose.
The same underlying session data surfaces entirely differently depending on who is looking at it and why. The design system enforces strict context separation — teacher intelligence, student adventure, parent transparency, and admin governance are never mixed on a single screen.
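As a sketch of that separation, the same session record can be projected into non-overlapping views per role. Field names are invented for illustration.

```python
# Illustrative sketch: projecting one underlying session record into strictly
# separated views per role, so teacher intelligence, student adventure, parent
# transparency, and admin governance never share a screen. Field names are
# placeholders for illustration.
SESSION = {
    "student_id": "anon-1042",
    "topic": "fractions",
    "minutes_engaged": 17,
    "misconceptions": ["treats denominators like whole numbers"],
    "xp_earned": 120,
    "device_health": "ok",
}

VIEW_FIELDS = {
    "teacher": ["topic", "minutes_engaged", "misconceptions"],  # teaching intelligence
    "student": ["topic", "xp_earned"],                          # learning adventure
    "parent":  ["topic", "minutes_engaged"],                    # plain-language transparency
    "admin":   ["device_health"],                               # governance, no learning data
}

def project(session: dict, role: str) -> dict:
    """Return only the fields the given role is allowed to see."""
    return {field: session[field] for field in VIEW_FIELDS[role]}
```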