Development Notice: Senpai is a product in active development by InGen Dynamics. Features and capabilities described reflect our current design intent and development roadmap. Specifications are subject to change.

In Development · Origami AI PIC 2.0 · K–12 · SEND · Early Years · Language

Intelligent education,
built for every child.

📐
10.1″ AMOLED
120 Hz · 2560×1600
🎬
720p Projector
100″ touch surface
🤖
100 TOPS AI
Jetson Orin NX
🌍
40+ Languages
Pronunciation coaching
6 SEND Modes
Inclusive by design
🔒
12 Safety Rules
SEOM · λ=4.0
🧠
Origami AI
PIC 2.0 platform
Senpai AI Educational Companion Robot by InGen Dynamics

Senpai is an AI-powered educational companion robot being developed by InGen Dynamics — designed to adapt to how each child learns, support students with special educational needs, and bring personalised teaching to classrooms and homes at scale.

$5.8B
Education robotics
market by 2030
Global Market Insights, 2024
28.8%
Projected CAGR —
fastest-growing vertical
Global Market Insights, 2024
1 in 5
School-age children
have a learning need
UNESCO Education Report, 2024
40+
Languages Senpai
is designed to support
Product specification (design intent)
Product Showcase

See Senpai in action.

Product concepts, design intent, and the InGen Dynamics vision for the future of educational robotics.

In Development · Origami AI Platform · InGen Dynamics
The Opportunity

A gap that technology
has not yet closed.

Education has two extremes. Software that scales but cannot teach. Specialists who teach brilliantly but cannot scale. Every child between those two points is underserved.

Senpai is being developed to occupy the space between — a physical AI companion designed to bring the adaptive intelligence of a great teacher to every child, regardless of where they learn, what their needs are, or what language they speak at home.

📱
EdTech apps & platforms
Scalable and low-cost, but screen-based, passive, and unable to respond to the emotional or developmental state of a child.
👩‍🏫
Specialist teachers & therapists
Highly effective, individually tailored — but expensive, scarce, and impossible to deploy at the scale that modern education systems require.
🤖
Senpai — the space between
Designed to combine the adaptability of a specialist with the scale of software. Physical, present, patient — and built to grow with every child.
What We're Building

Three ideas at the heart of Senpai.

01
Adaptive Intelligence
An AI that models what each child understands right now — not last term.
Most educational software delivers the same content at the same pace to every student. Senpai is being built around a different principle: that genuine learning requires knowing where a child's understanding actually is, moment to moment. Our Origami AI platform is being designed to maintain a live model of each student's knowledge state, distinguishing between concepts that are genuinely mastered and those that have been pattern-matched without real comprehension.
When a child gives a correct answer, the system is designed to judge whether that answer reflects real understanding, based on what the child said, how long they took, and whether they can answer a transfer question. When the evidence points to pattern-matching rather than comprehension, the next activity adapts accordingly.
In Development · Origami AI PIC 2.0 · 8,000+ Concept Knowledge Graph
02
Inclusive from First Principles
Designed for every learner — including the one in five who has a special educational need.
SEND (Special Educational Needs and Disabilities) is not a feature layer added on top of Senpai's design. It is a primary design constraint that has shaped the hardware, the AI training objectives, and the curriculum framework from the beginning.
We are developing six distinct adaptation modes — covering Autism Spectrum, Dyslexia, ADHD, Visual Impairment, Hearing Impairment, and Physical Disability — each representing a coherent set of interaction design decisions, not a checklist of accessibility fixes. We believe inclusion and commercial scale are not in tension — they are the same opportunity.
In Development · 6 SEND Adaptation Modes · UN CRPD · UK SEND CoP · IDEA (US)
03
Physical Presence
A robot children can see, talk to, and grow alongside — not another screen.
Research in developmental psychology consistently shows that children form stronger, more durable learning relationships with physical companions than with screens. Senpai is a physical robot. It is designed to express emotion through facial movement, body language, and voice — using Disney's 12 Principles of Animation as a guiding framework for every expression and gesture.
Its 10.1″ AMOLED display serves as a dynamic face. Its 720p projector casts interactive content onto any nearby surface — turning a table, a floor, or a wall into a learning environment. The result is designed to feel less like software and more like a presence.
In Development · Disney 12 Animation Principles · 720p Interactive Projector · AMOLED 10.1″
Market Opportunity

Six markets.
One platform.

Senpai's architecture is being designed to serve multiple education verticals through curriculum and configuration adaptation — without requiring a separate product for each segment. All market sizing data below is sourced from independent third-party research.

Segment 01
K–12 Classroom
$2.1B
Projected market by 2030 · 26.4% CAGR
STEM companion for classroom deployment. Interactive projector enables teacher-led group activities alongside individual adaptive sessions. Designed for curriculum alignment across major national frameworks including UK National Curriculum, US Common Core, CBSE, Singapore MOE, and IB.
Source: Global Market Insights, Education Robotics Report 2024
Segment 02
SEND & Inclusion
$980M
Projected market by 2030 · 31.2% CAGR
The SEND intervention market is one of the most structurally underserved in education technology. Senpai is designed with six distinct adaptation modes to serve students across the full spectrum of learning needs — without segregating them from mainstream learning.
Source: Global Market Insights, Education Robotics Report 2024
Segment 03
Early Childhood (3–8)
$820M
Projected market by 2030 · 29.1% CAGR
Play-based phonics, numeracy foundations, and early social-emotional development. The home companion learning model is designed to extend classroom learning into family routines — supporting parents in 40+ languages.
Source: Global Market Insights, Education Robotics Report 2024
Segment 04
Language Learning
$440M
Projected market by 2030 · 27.8% CAGR
Conversational practice and pronunciation coaching across 40+ languages. Designed for English as an Additional Language (EAL) provision in English-medium schools, bilingual home environments, and standalone language learning.
Source: Global Market Insights, Education Robotics Report 2024
Segment 05
Government Training
$320M
Projected market by 2030 · 22.5% CAGR
Civil service examination preparation, professional skills development, and government workforce training. Strategically aligned with the Futurenauts programme and its partnerships with institutional education bodies.
Source: Global Market Insights, Education Robotics Report 2024
Segment 06
Higher Education
$280M
Projected market by 2030 · 24.3% CAGR
Tutorial and exam preparation companion, accessibility accommodation in lecture environments, and research lab integration. An emerging institutional segment with growing demand for adaptive learning tools at university level.
Source: Global Market Insights, Education Robotics Report 2024
About the Company

Built by InGen Dynamics.

InGen Dynamics is a robotics and artificial intelligence company developing intelligent systems across education, inspection, and security verticals. Senpai is one of the company's primary active development programmes.

All InGen Dynamics products are built on the Origami AI platform — a proprietary multi-model AI architecture developed in-house for physical AI companions and autonomous robotic systems. The company is currently in an active development phase across its full product portfolio.

Origami AI Platform
Proprietary multi-model AI architecture powering Senpai's adaptive learning, engagement detection, and safety systems.
Futurenauts Partnership
Strategic alignment with Futurenauts and the Birla Group — providing a structured pathway into institutional education markets.
Global Curriculum Alignment
UK, US, India (CBSE/NEP), Singapore MOE, and Australia — designed in from the start, not retrofitted for export.

Interested in learning more?

We welcome enquiries from educational institutions, research partners, and those interested in following InGen Dynamics' development journey.

In Development · Product Detail

Senpai — the companion
that grows with every child.

A physical AI educational companion designed for children aged 3 to 18 and beyond. Every hardware and software decision has been made with one question in mind: what does this child need right now, and how do we deliver it in a way that feels natural, warm, and genuinely helpful?

Hardware Design

Physical presence
is the point.

Children engage more deeply with physical companions than with screens. Every hardware decision in Senpai's design starts from this premise — not as a marketing claim, but as a design constraint that propagates through the entire system.

The result is a robot designed to express warmth and attentiveness through its face, its posture, and the way it moves. Its expressions draw on Disney's 12 Principles of Animation — anticipation, squash and stretch, follow-through — because we believe the qualities of a great teacher should not be lost in translation to technology.

Development Status Note
Specifications below reflect current design intent for the Senpai hardware platform. Hardware specifications are subject to refinement during development and pre-production engineering validation. Final production specifications may differ.
Senpai · Yellow Edition · Subject to Change
Planned Hardware Specifications (Design Intent)
Display & Vision
Primary Display
AMOLED 10.1″ · 2560 × 1600 · 120 Hz · Anti-glare · 10–1000 nit adaptive
Expression Panel
14-zone AMOLED · 128 distinct emotion states · Disney 12 Animation Principles
Camera System
RGB-D 1080p + ToF depth · Designed for gaze, expression, gesture, safety monitoring
Near-Eye IR
120 fps IR camera · Precision reading support & eye-gaze AAC input (SEND Mode 6)
Compute & AI
AI Processor
NVIDIA Jetson Orin NX 16 GB · 100 TOPS · On-device inference target
Edge Processing
Core learning features designed to function without cloud dependency
Connectivity
Wi-Fi 6 (802.11ax) · Bluetooth 5.2 LE Audio · 5G module ready
Fleet Sync
Up to 6-unit classroom coordination via CRL-MRS model (design target)
Environmental Sensors
Air Quality
CO₂, VOC, PM2.5 monitoring · Classroom environment alerting
IMU
ICM-42688 9-DOF at 1 kHz · Posture monitoring and safety sensing
NFC
ISO 14443A/B · Student identity cards & physical AAC communication cards
Audio
Microphone Array
6 × MEMS cardioid · 360° pickup · Classroom noise cancellation design
Speaker System
2 × full-range + 1 × bass driver · 15 W total · BLE LE Audio / Auracast
Hearing Aid Link
Direct streaming to hearing aids via Bluetooth LE Audio (SEND Mode 5)
Physical Form
Dimensions
520 × 280 × 260 mm · Adjustable height mount 420–680 mm (SEND accessibility)
Target Weight
3.4 kg (without mobility base) · Child-safe handling design
Articulation
3-DOF neck (nod/tilt/pan) · Shoulder shrug actuator · 12-zone LED halo ring
Privacy
Physical camera shutter · Mechanically blocks lens when not in active session
Materials
Injection-moulded ABS · Medical-grade silicone coating · Swappable colour panels
Mobility (Optional Module)
Wheel Base
WBT1 · Patrol and station modes · Classroom roaming capability
Biometric Link
BLE 5.2 · Optional wearable for wellbeing and engagement monitoring
Learning Design

Grounded in 30 years of
learning science.

Senpai's pedagogical framework draws on the most robustly evidenced methods in educational psychology: retrieval practice, spaced repetition, elaborative interrogation, and multimodal encoding. These are not buzzwords — they are the approaches with the strongest and most replicated evidence base for long-term learning.

Knowledge Architecture
8,000+ concept knowledge graph across all subjects
Every learning interaction is designed to map to a subject-specific knowledge graph — a directed acyclic graph where nodes represent concepts and edges represent prerequisite relationships. Senpai is intended to navigate this graph based on demonstrated mastery and spaced retrieval scheduling, ensuring each child works at their genuine frontier rather than their comfortable plateau.
IB · Cambridge · CBSE · UK NC KS1–KS5 · US Common Core · Singapore MOE
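The frontier-selection idea can be sketched with a toy prerequisite graph. The concepts, edges, and mastery sets below are illustrative stand-ins, not Senpai's actual 8,000-concept graph:

```python
# Illustrative sketch of frontier selection over a prerequisite DAG.
# Nodes are concepts; each maps to the list of its prerequisites.
PREREQUISITES = {
    "counting": [],
    "addition": ["counting"],
    "subtraction": ["counting"],
    "multiplication": ["addition"],
    "division": ["multiplication", "subtraction"],
}

def learning_frontier(prerequisites, mastered):
    """Concepts not yet mastered whose prerequisites are all mastered."""
    return sorted(
        concept
        for concept, prereqs in prerequisites.items()
        if concept not in mastered and all(p in mastered for p in prereqs)
    )

# A student who has mastered counting and addition is ready for
# multiplication and subtraction, but not yet division.
print(learning_frontier(PREREQUISITES, {"counting", "addition"}))
# ['multiplication', 'subtraction']
```

Working at this frontier, rather than below it, is what the text means by "their genuine frontier rather than their comfortable plateau".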
Spaced Retrieval
Ebbinghaus forgetting curve scheduling for every concept
Senpai is designed to track every concept a student has encountered and schedule retrieval at intervals derived from the Ebbinghaus forgetting curve — 1 day, 3 days, 1 week, 2 weeks, 1 month, and 3 months — calibrated for child developmental stages across early years, primary, and secondary age bands.
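As a sketch, a simple Leitner-style scheduler over those intervals might look like this. The advance-on-success/reset-on-lapse policy and the 30-day month are illustrative assumptions, not Senpai's actual algorithm:

```python
from datetime import date, timedelta

# Review intervals from the text: 1 day, 3 days, 1 week, 2 weeks,
# 1 month, 3 months (months approximated as 30 days for illustration).
INTERVALS_DAYS = [1, 3, 7, 14, 30, 90]

def next_review(stage, last_reviewed, recalled):
    """Advance one stage on successful recall, restart on a lapse.

    Returns (new_stage, next_review_date). This is a plain
    Leitner-style sketch, not Senpai's actual scheduling policy.
    """
    new_stage = min(stage + 1, len(INTERVALS_DAYS) - 1) if recalled else 0
    return new_stage, last_reviewed + timedelta(days=INTERVALS_DAYS[new_stage])

stage, due = next_review(0, date(2025, 1, 1), recalled=True)
print(stage, due)  # 1 2025-01-04
```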
Assessment Design
Four assessment types embedded in a single system
Formative (embedded, low-stakes, immediate feedback), summative (end-of-unit structured), diagnostic (entry assessment for new and transferred students), and spaced retrieval assessments — designed to feed into the teacher dashboard and MIS integration layer without adding assessment burden to teachers.
SIMS · Bromcom · Arbor · iSAMS
Engagement Detection
Distinguishing genuine mastery from pattern-matching
One of the most persistent problems in EdTech is pseudo-mastery — when a student answers correctly by recognising a pattern rather than understanding a concept. Senpai's STUM model is designed to detect this pattern and respond with transfer questions that require applying the concept to a new context, revealing whether understanding is genuine.
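A toy rule-based sketch of the idea follows. The thresholds, field names, and rules are illustrative only; the text describes STUM as a learned uncertainty model, not hand-written rules:

```python
# Toy heuristic in the spirit of pseudo-mastery detection: a correct
# answer alone is not evidence of mastery. Thresholds are illustrative.
def assess_mastery(correct, response_seconds, transfer_correct):
    """transfer_correct is None until a transfer question has been asked."""
    if not correct:
        return "not_mastered"
    if response_seconds < 2.0 and transfer_correct is None:
        # Suspiciously fast: probe with a transfer question before deciding.
        return "probe_with_transfer"
    if transfer_correct is False:
        return "pseudo_mastery"  # pattern-matched, did not transfer
    return "mastered"

print(assess_mastery(True, 1.2, None))   # probe_with_transfer
print(assess_mastery(True, 1.2, False))  # pseudo_mastery
```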
Delivery Modalities
Six ways to teach the same concept, chosen adaptively
Dialogue, visual (display and projector), kinaesthetic (gesture and surface interaction), narrative (story-embedded), musical (for early years numeracy and language), and physical (movement-based with optional mobility base). The system is designed to switch modality when a student plateaus on a concept — rather than simply repeating the same approach more slowly.
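The plateau-triggered switch can be sketched as below. The window size, score scale, and rotation order are illustrative assumptions; the real system would presumably choose the next modality adaptively rather than in a fixed cycle:

```python
MODALITIES = ["dialogue", "visual", "kinaesthetic", "narrative", "musical", "physical"]

def next_modality(current, recent_scores, plateau_window=3):
    """Switch modality when the last `plateau_window` scores show no
    improvement; otherwise stay. Illustrative policy only."""
    if len(recent_scores) >= plateau_window:
        window = recent_scores[-plateau_window:]
        if max(window) <= window[0]:  # no gain across the window
            i = MODALITIES.index(current)
            return MODALITIES[(i + 1) % len(MODALITIES)]
    return current

print(next_modality("dialogue", [0.6, 0.6, 0.55]))  # visual (plateaued)
print(next_modality("dialogue", [0.4, 0.5, 0.7]))   # dialogue (improving)
```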
Reward Architecture
Designed around effort, curiosity, and resilience — not speed
Senpai's reward system is designed around CASEL's social-emotional learning framework — celebrating persistence, intellectual curiosity, and the courage to attempt difficult problems. No leaderboards, no time pressure, no comparison against classmates. The only benchmark is the student's own previous performance.
CASEL SEL Framework
Interactive Projector

Any surface becomes
a learning space.

Senpai's 720p projector (with an engineering upgrade path to 1080p) is designed to cast interactive content onto any nearby surface — a table, a floor, a wall — at up to 100 inches from 1.5 metres. Auto-keystone correction and autofocus are designed to make surface calibration automatic.
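The quoted geometry can be checked with a few lines of arithmetic, assuming a 16:9 image (an assumption; the source does not state the projector's aspect ratio):

```python
from math import hypot

def image_width_m(diagonal_inches, aspect=(16, 9)):
    """Width in metres of a projected image with the given diagonal."""
    w, h = aspect
    return diagonal_inches * w / hypot(w, h) * 0.0254

def throw_ratio(distance_m, diagonal_inches):
    """Throw ratio = throw distance / image width (dimensionless)."""
    return distance_m / image_width_m(diagonal_inches)

# A 100-inch 16:9 image is about 2.21 m wide, so filling it from
# 1.5 m implies a short-throw ratio of roughly 0.68.
print(round(image_width_m(100), 2))     # 2.21
print(round(throw_ratio(1.5, 100), 2))  # 0.68
```

A ratio near 0.68 sits in conventional short-throw territory, which is consistent with casting a large image from a tabletop robot.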

The projector system uses a depth camera to map the projection surface and track finger position at up to 60 frames per second — making the projected surface interactive. Children can tap, drag, and draw directly on the projected content.

This capability unlocks learning experiences that are genuinely physical — handwriting practice on a real surface, collaborative mapping exercises, finger-drawn maths on a table — rather than simulated physical interaction through a touchscreen.
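A minimal sketch of depth-based touch classification, assuming per-frame fingertip and surface depth readings in millimetres (the threshold value is an illustrative assumption):

```python
def is_touch(fingertip_depth_mm, surface_depth_mm, threshold_mm=10):
    """A fingertip counts as a 'touch' when its depth reading lies within
    a small threshold of the mapped projection surface."""
    return abs(fingertip_depth_mm - surface_depth_mm) <= threshold_mm

# Frames arrive at up to 60 fps; classify each fingertip sample.
samples = [(1502, 1500), (1460, 1500), (1495, 1500)]
print([is_touch(f, s) for f, s in samples])  # [True, False, True]
```

In practice a system like this would also need hover/drag state tracking and per-surface calibration, which the auto-keystone and surface-mapping steps described above would supply.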

720p native projection · 1080p engineering upgrade path · Auto-keystone & autofocus
Up to 100″ image at 1.5m · Works on tables, floors, walls — any non-reflective surface
60 fps finger tracking · Depth camera projection mapping · Touch, drag, draw interaction
Multi-surface AR · Objects placed in real space for science and geography interactions
Interactive Projection Surface · Concept
Dashboard Ecosystem

Three surfaces.
Three perspectives.

Senpai's data is designed to surface differently for each stakeholder — the teacher needs teaching intelligence, the student needs a learning adventure, and the parent needs transparency in plain language.

Dashboard 01 · Teacher
Classroom Intelligence Platform
Designed to tell teachers what each student needs before they need to ask — not after the lesson has already moved on. The teacher dashboard is intended to be a teaching support tool, not a reporting burden.
Live class engagement overview — per session
AI-generated differentiation at 3 levels for each lesson
SEND evidence packages aligned to IEP and EHCP requirements
MIS export: SIMS, Bromcom, Arbor, iSAMS integration
Priority-1 safeguarding alerts with session transcript (SEOM E12)
Dashboard 02 · Student
Learning Adventure
Designed around Self-Determination Theory — autonomy, competence, and relatedness. No grades visible to the student. No ranking against classmates. Only a map of their own growing understanding.
Visual knowledge map — concepts shown as a constellation of stars
Achievement badges celebrating curiosity, persistence, and courage
Daily challenge card — student chooses their own level
Progress story in age-appropriate, encouraging language
Weekly home missions — three achievable, curriculum-linked activities
Dashboard 03 · Parent · 40+ Languages
Transparent Progress Window
Designed to give parents meaningful insight into their child's learning without education jargon, and meaningful control over how their child's data is used.
Plain-language weekly progress summary — no jargon
Home extension activities (15 min — realistic for busy families)
Bilingual vocabulary glossary — child's learning in parent's language
SEND progress summaries aligned to Annual Review requirements
Full data controls — 24-hour deletion on request
Content Library

20 subjects.
Ages 3–18.
One platform.

Senpai's content library is being developed to cover the full breadth of school-age education — with curriculum alignment for major national frameworks built in from the start rather than retrofitted. Every subject is designed to support both classroom deployment and independent home learning.

Mathematics
KS1 → A-Level
📖
English & Literacy
Phonics → Writing
🔬
Science
KS1 → A-Level
🌍
Geography
KS1 → GCSE
🏛️
History
KS1 → GCSE
🗣️
Languages (40+)
Beginner → Advanced
💻
Computing
KS1 → A-Level
🎨
Art & Design
KS1 → GCSE
🎵
Music
Early Years → KS4
🌱
PSHE / SEL
All ages · CASEL
⚖️
Citizenship
KS2 → KS4
🧬
Biology
KS3 → A-Level
⚗️
Chemistry
KS3 → A-Level
⚛️
Physics
KS3 → A-Level
📐
Design & Technology
KS1 → GCSE
🏃
PE & Health
Movement integration
📊
Economics
KS4 → A-Level
🤖
AI Literacy
KS2 → Post-16
🏛️
Religious Studies
Cultural neutrality default
Language Learning

40+ languages.
Conversation first.

Senpai's approach to language learning is being built around one principle: that conversation produces fluency faster than vocabulary memorisation. The system is designed to prioritise getting children speaking as early as possible, then building grammar and vocabulary knowledge to support and extend that conversation.

Pronunciation coaching is designed to use spectral audio analysis to compare a student's pronunciation against age-appropriate reference models for each language — giving feedback that is specific, actionable, and kind.
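As an illustration of feature-vector comparison, the sketch below scores one phoneme against a reference with cosine similarity. The vectors, the metric, and the threshold are all stand-ins; real pronunciation scoring uses much richer acoustic models:

```python
from math import sqrt

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def phoneme_feedback(student_vec, reference_vec, ok_threshold=0.9):
    """Compare one phoneme's spectral feature vector against an
    age-appropriate reference and return kind, specific feedback.
    Vectors and threshold are illustrative stand-ins."""
    score = cosine_similarity(student_vec, reference_vec)
    return "sounds great" if score >= ok_threshold else "let's try that sound again"

print(phoneme_feedback([0.9, 0.1, 0.3], [1.0, 0.1, 0.25]))  # sounds great
```

Note how the feedback strings themselves follow the "specific, actionable, and kind" principle: never a judgement of the child, only of the next attempt.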

The EAL (English as Additional Language) module is designed as a separate configuration — delivering curriculum content in a student's home language while gradually increasing the proportion of English as their confidence grows.

Pronunciation
Spectral audio analysis · Age-appropriate reference models · Phoneme-by-phoneme coaching
EAL Support
Curriculum content in home language · Gradual English proportion increase · Parent app in 40+ languages
Cultural Context
Language learning embedded in cultural context — not just vocabulary and grammar in isolation
Script Support
Arabic, Mandarin, Devanagari, Cyrillic — writing system learning integrated with projector surface
Selected Language Support (Design Intent)
🇬🇧 English
🇫🇷 French
🇩🇪 German
🇪🇸 Spanish
🇮🇳 Hindi
🇸🇦 Arabic
🇨🇳 Mandarin
🇯🇵 Japanese
🇵🇹 Portuguese
🇮🇹 Italian
+ 30 additional languages in planned scope
In Development · Inclusion & SEND

Every learner deserves
a great teacher.

SEND is not a feature we added to Senpai. It is a primary design constraint that has shaped every hardware and software decision from day one. This page describes our inclusion framework, our six adaptation modes, and the child safety architecture we are building.

The Case for Inclusion

A large, underserved
market with a
genuine human need.

The SEND education market is one of the most structurally underserved in education technology. Existing products either exclude SEND learners entirely or treat accessibility as a visual design exercise — changing font sizes and contrast ratios without fundamentally changing how learning is delivered.

We believe that genuine inclusion means designing the entire learning interaction around the needs of each child — not appending accessibility features to a product designed for a neurotypical majority. This is not just the right thing to do. It is a commercial opportunity that the market has largely failed to address.

1 in 5
School-age children have a learning need or disability
UNESCO, 2024
$3.2B
Annual SEND intervention market in the US alone
IDEA Annual Report, 2024
6
Planned SEND adaptation modes — each a distinct design system
Product specification
5+
Legal frameworks designed for from first principles
UK, US, UN, EU, Australia
Adaptation Framework

Six modes. Each a
coherent design system.

Each adaptation mode represents a comprehensive set of interaction design decisions — not a checklist of visual adjustments. Every mode is designed to be activatable individually or in combination, recognising that many students have overlapping needs.

Mode 01 — Autism Spectrum
ASD Adaptation
Expression Intensity
Configurable from full animation to minimal movement — reducing sensory load for students who find high-expression faces overwhelming
Predictable Transitions
Visual schedule projected on surface before each activity change — designed to reduce anxiety around unexpected transitions
Sensory Monitoring
Room noise and light threshold monitoring with teacher alerts — designed to flag when the environment may be becoming uncomfortable
Special Interest Mapping
Curriculum content designed to be mappable to individual interest profiles — framing maths in trains, science in dinosaurs, literacy in sport
AAC Integration
Projected symbol boards with Widgit, PCS, and Makaton compatibility — social scripts available on demand
Mode 02 — Dyslexia
Dyslexia Support
TTS Synchronisation
Word-by-word highlight synchronised with text-to-speech — designed for visual tracking support while reading
OpenDyslexic Font
Available across all display content — weighted bottoms on individual letterforms for orientation clarity
Audio-First Delivery
All content deliverable primarily through audio to reduce reliance on text reading where reading itself is the barrier
Phonics Coaching
Spectral audio analysis of pronunciation against age-appropriate reference models — structured phonics intervention sequence
Zero Pace Pressure
GRPO model pace weight configurable to zero — Senpai never conveys urgency or time pressure in dyslexia mode
Mode 03 — ADHD
ADHD & Attention
Micro-Session Design
5–7 minute learning units with planned movement breaks — matched to typical attention span ranges for different age bands
Novelty Sequencing
High-novelty content ordering designed to sustain engagement — modality switches planned before the student's attention would naturally drop
Visual Timer
Projected countdown visible on surface — designed to build time awareness without creating anxiety
Distraction Reduction
Minimal display mode reduces visual clutter on Senpai's face and body during focus periods
Context Thread
Break integration designed to preserve the learning thread — Senpai resumes exactly where the student left off, with a brief contextual reminder
Mode 04 — Visual Impairment
VI Adaptation
Screen Reader Integration
All display content read aloud automatically — no separate assistive technology required
High Contrast Display
Up to 72pt font, maximum contrast mode available across all content areas
Tactile Materials
STL files generated for 3D-printable tactile learning objects aligned to current lesson content
Braille Display
Bluetooth connection to refreshable braille display devices — all text content routed on request
Spatial Orientation
WBT1 mobility base designed to narrate spatial environment — classroom layout, object positions, route guidance
Mode 05 — Hearing Impairment
HI Adaptation
Direct Audio Streaming
Bluetooth LE Audio direct streaming to hearing aids via Auracast — no intermediary device required
Real-Time Captions
All Senpai speech captioned and displayed — adjustable in size, position, and contrast
Sign Language Avatar
Animated signing avatar designed to support BSL, ASL, ISL, NZSL, and Auslan
Makaton Symbols
Symbol accompaniment on key vocabulary items — displayed alongside spoken and written content
Vibrotactile Link
Bluetooth vibrotactile pad for music rhythm learning and tactile audio reinforcement
Mode 06 — Physical Disability
Motor Impairment
Eye-Gaze Control
Full curriculum navigation via 120 fps IR gaze tracking — designed for hands-free operation of all Senpai functions
Switch Access
1-switch and 2-switch scanning modes — compatible with AbleNet and Adaptivation switch devices
Voice Control
Designed to accommodate atypical and dysarthric speech patterns — not trained on neurotypical speech alone
Dwell Selection
No click required — dwell-to-select with accidental-selection reversal and configurable dwell time
Wheelchair Compatible
Height-adjustable mount designed for wheelchair tray compatibility — all interaction reachable from seated position
Child Safety Architecture

Safety built into the
model — not filtered
on top of it.

SEOM — the Safety and Ethics Operations Model — is a set of 12 rules being designed to govern every Senpai interaction with a child. Rather than relying on post-generation content filtering, these rules are intended to be embedded at the AI training objective level: the model is designed not to learn unsafe interaction patterns in the first place.

Design Intent Note
SEOM rules described below reflect our current design intent for the Senpai safety architecture. Implementation details are subject to change during development. These rules represent the safety objectives we are designing toward — not deployed or validated system behaviour.
E01
Safeguarding Patterns
Designed to detect and block interaction patterns involving requests for personal information, arrangements to meet outside school, or instructions to keep information from parents or teachers.
E02
Age-Appropriate Content
Per-student content gate calibrated to enrolled student age. Content appropriate for one age band is designed not to flow to a younger student — even if requested by the student.
E03
Emotional Safety
No negative personal evaluation. Corrective feedback is designed to be framed as exploring what to try differently — never as the student being wrong, failing, or performing poorly.
E04
Screen Time Limits
Configurable session limits (default 30 min per session) with advance warning and gentle close-down. Designed to prevent extended uninterrupted usage and encourage physical breaks.
E05
Biometric Privacy
Gaze and facial expression data are intended for engagement measurement within the current session only. No biometric profiles designed to accumulate or persist beyond the session end.
E06
COPPA & GDPR-K Design
No data collection from under-13 students without verified parental consent. All student data designed to be deletable via parent portal within 24 hours of request.
E07
No Peer Comparison
Progress is designed to be measured against the student's own previous performance exclusively. No ranking against classmates — ever. No system leaderboard of any kind.
E08
Cultural & Religious Neutrality
Universal content reference pool by default. Religious content is designed not to surface without explicit school governance approval via the teacher administrator settings.
E09
Healthy Reward Design
Rewards designed to celebrate effort, intellectual curiosity, and resilience — not speed, accuracy ranking, or session length. No mechanic designed to maximise engagement time.
E10
Teacher Authority
Teacher instruction is designed to always override Senpai's current activity. The system is designed never to contradict a teacher directive to a student or countermand a teacher instruction.
E11
SEND Dignity
All adaptations designed to be presented as the student's preferred way of learning — not as accommodations for a limitation. The system is designed never to label a child's needs to the child.
E12
Safeguarding Disclosure (Priority 1)
Language patterns suggesting abuse, neglect, self-harm, or significant distress are designed to trigger an immediate Priority-1 alert to the teacher dashboard, including a session transcript excerpt. The system is designed not to probe, counsel, or respond therapeutically — its sole designed response is immediate escalation to a trusted adult.
Regulatory Design Framework

Designed for global
compliance from first principles.

Rather than building for one regulatory framework and retrofitting for others, Senpai's design is intended to satisfy the intersection of the most demanding requirements across all target markets simultaneously. Note: compliance validation will be conducted through appropriate certification bodies during the development process — the frameworks below describe design intent.

United Kingdom
UK SEND Code of Practice 2015
IEP and EHCP evidence package generation. SEND qualification requirements for special schools. UK Safeguarding Code requirements embedded in SEOM E01/E12.
United States
IDEA & FERPA & COPPA
Individuals with Disabilities Education Act alignment. FERPA student record protections. COPPA under-13 data consent requirements embedded in SEOM E06.
European Union / UK
GDPR-K & UK GDPR
Child-specific data processing principles. 24-hour deletion rights. Lawful basis for processing student data. Parental consent framework design for under-16 processing.
International
UN CRPD Article 24
Convention on the Rights of Persons with Disabilities — inclusive education principles embedded in all six SEND adaptation modes. Dignity-first design mandated across all interactions.
Get in Touch

Let's talk about
what's possible.

We welcome enquiries from educational institutions, research partners, government bodies, and those interested in following InGen Dynamics' development journey. Please get in touch via the InGen Dynamics website.

Institutional Enquiries
Schools, school districts, universities, government education programmes, research institutions, and corporate training bodies.
General Enquiries
Press, media, strategic partnership discussions, and general questions about InGen Dynamics and its products in development.
Visit ingendynamics.com →

All product enquiries are handled through the InGen Dynamics corporate website.

In Development · Systems Engineering

Hardware, AI stack,
and architecture in full.

This page details the planned systems architecture, hardware specifications, sensor suite, compute platform, AI model implementation, security design, and V-Model development process for Senpai. All specifications represent current design intent and are subject to change during development and engineering validation.

Three-Tier Architecture

Edge · School · Cloud —
designed for resilience.

Senpai's planned architecture follows a three-tier design: edge compute (on the robot) for real-time student interaction, a local classroom server for coordination and MIS integration, and cloud infrastructure for fleet management and model training. Core learning functions are designed to run on-device, with cloud connectivity optional rather than required.

1
Tier 1 · On Robot
Edge Compute — Real-Time Student Interaction
Core models on-device
GRPO (teaching policy), SEOM (child safety), STUM (uncertainty), AMDC (engagement sensing) — all running locally on NVIDIA Jetson Orin Nano 8 GB at target <100 ms inference latency. Handles 80% of all routine interactions without network dependency.
Offline capability target
Core functionality designed to operate for 72 hours without cloud connectivity. Student progress synchronises automatically when connectivity restores. Designed for rural deployments and network outage resilience.
Interaction loop target
Student input → AMDC engagement detection → SEOM safety check (λ=4.0/5.0) → GRPO policy decision → response generation → display + projector + audio output. Target latency: <100 ms end-to-end.
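The turn loop above implies a per-stage latency budget. As an illustrative sketch only (Python; the `response_generation` allowance and the function name are hypothetical, not a shipped interface), a budget check could look like:

```python
# Hypothetical per-turn latency budget, using the design-intent stage targets above.
STAGE_BUDGET_MS = {
    "amdc_engagement": 8,        # AMDC engagement detection (TensorRT INT8)
    "seom_safety": 5,            # SEOM rule check, blocks on violation
    "grpo_policy": 15,           # GRPO teaching-policy decision
    "response_generation": 60,   # assumed allowance for the remaining pipeline
}

def within_turn_budget(measured_ms: dict, total_budget_ms: float = 100.0) -> bool:
    """True if every stage meets its budget and the whole turn stays under 100 ms."""
    per_stage_ok = all(
        measured_ms[stage] <= budget for stage, budget in STAGE_BUDGET_MS.items()
    )
    return per_stage_ok and sum(measured_ms.values()) <= total_budget_ms

# A turn matching the published stage targets passes the end-to-end check
sample = {"amdc_engagement": 7.5, "seom_safety": 4.2,
          "grpo_policy": 14.0, "response_generation": 55.0}
print(within_turn_budget(sample))  # True
```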
2
Tier 2 · Classroom / School Server
Local Gateway — Coordination, MIS Sync, Analytics
Fleet coordination
Intel NUC or equivalent school server. Coordinates 10–30 Senpai units via CRL-MRS. Manages MIS synchronisation with SIMS, Bromcom, Arbor. Caches curriculum content for offline operation.
Network requirement
1 Gbps LAN (minimum). 100 Mbps WAN uplink. Each Senpai streams 3–5 Mbps projector output to local server for teacher monitoring. 30-unit classroom: 90–150 Mbps aggregate. Wi-Fi 6 (802.11ax, MU-MIMO) or wired Ethernet required.
Teacher dashboard
Handles all teacher dashboard requests locally — live class engagement, STUM flags, safeguarding alerts (SEOM E12). Low-latency dashboard updates via WebSocket. MIS export queued for cloud sync.
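The aggregate figure in the network requirement above (30 units × 3–5 Mbps per projector stream) is simple arithmetic. A hypothetical capacity check (Python; the function names are illustrative, not part of any shipped tooling):

```python
def classroom_bandwidth_mbps(units: int, per_unit_mbps=(3.0, 5.0)) -> tuple:
    """Aggregate projector-stream bandwidth range for a fleet of Senpai units."""
    low, high = per_unit_mbps
    return units * low, units * high

def lan_headroom_ok(units: int, lan_capacity_mbps: float = 1000.0) -> bool:
    """True if the worst-case aggregate stream load fits the 1 Gbps LAN minimum."""
    _, worst_case = classroom_bandwidth_mbps(units)
    return worst_case <= lan_capacity_mbps

print(classroom_bandwidth_mbps(30))  # (90.0, 150.0), matches the 30-unit figure above
print(lan_headroom_ok(30))           # True, well inside a 1 Gbps LAN
```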
3
Tier 3 · Cloud (AWS eu-west-2 primary / us-east-1 secondary)
Cloud Infrastructure — Fleet Analytics, Model Training, Parent App
HTD-IRL serving
Cloud-hosted lesson planning model. Non-latency-critical; called for new curriculum plans.
GRPO training
Policy updates trained on aggregated anonymised interaction data. OTA deployment.
Parent app API
REST API for parent app (consent-gated). Progress summaries, activity suggestions, data controls.
OTA firmware
Over-the-air updates. Delta packages minimise bandwidth. Scheduled during off-hours.
Data Flow Architecture (Design Intent)
// Student Interaction Flow — target latency <100 ms per conversational turn
Student Input → Senpai Edge AI (Jetson Orin Nano 8GB)
  ↓ AMDC: engagement detection            // ~8 ms TensorRT INT8
  ↓ SEOM: safety rule check (λ=4.0/5.0)   // ~5 ms · blocks if violated
  ↓ GRPO: teaching policy decision        // ~15 ms INT8 inference
  ↓ Response generation + multimodal output
  → Display (10.1″ AMOLED) + Projector (720p) + Audio (5W stereo)

// Analytics Pipeline — anonymised, GDPR-compliant
Senpai Robot → Local Server (classroom gateway)
  ↓ Student progress state (persistent, anonymised)
  ↓ MIS sync: SIMS / Bromcom / Arbor / iSAMS   // teacher dashboard
  → Cloud (fleet analytics, model training)
  → Parent App (consent-gated, anonymised progress summaries)

// Offline operation: 72-hour autonomy target
Without cloud: Edge handles curriculum, interaction, SEOM
Reconnect: Progress auto-syncs. No session loss.
Hardware Specifications

Planned hardware —
every component specified.

Specification Status
All specifications below are current design intent for the Senpai hardware platform (revision planned Q3 2026). Components and values are subject to change during engineering validation and pre-production qualification testing. Final production specifications may differ.
Physical Form Factor (Design Intent)
Physical Dimensions & Construction
Height
420 mm (desktop mode) — positions display at child eye level, ages 5–12 seated
Base Diameter
280 mm — stable footprint, resistant to classroom table tipping
Target Weight
2.8 kg — portable by a teacher, difficult for a student to lift (anti-theft design)
Enclosure
Polycarbonate + ABS, IP54 rating — dust-resistant and splash-proof (classroom drinks/paint)
Surface Finish
Soft-touch coating, rounded edges — ASTM F963 toy safety, no sharp edges for early years
Colour Options
Cloud White, Sky Blue, Sage Green — custom RAL colours at MOQ 100 units for school branding
Display & Interaction Hardware
Primary Display
10.1″ AMOLED · 1920×1200 (16:10) · 350 nits · 100% sRGB · portrait mode for reading, VI contrast
Touch Input
10-point capacitive · palm rejection · multi-touch gestures · drawing/handwriting input
Interactive Projector
HD 720p DLP · 150 lumens · 40″–100″ projection · depth camera enables touch-surface interaction
LED Halo Ring
RGB · 24 individually addressable LEDs · 12-zone expression system · Disney animation principles
Expression Panel
14-zone AMOLED · 128 emotion states · <3 Hz animation cap (photosensitivity compliance)
Audio
Speakers
Stereo 5 W · frequency response 150 Hz–20 kHz · spatial audio cues for VI/HI modes
Microphone Array
4-mic beamforming · noise cancellation · 180° coverage · 65 dB classroom noise rejection · far-field wake word
Audio Jack
3.5 mm TRRS (CTIA) · headphone output (ASC sensory, exam) · external mic input
T-Coil Loop
Induction loop output · 1.5 A RMS · hearing aid telecoil support · classroom-wide coverage with amplifier
Articulation & Mobility
Neck DOF
3-DOF (nod/tilt/pan) — body language expressiveness per Disney 12 Principles
Height Adjust
420–680 mm adjustable mount — SEND accessibility, wheelchair tray compatibility
Mobility Base (Opt.)
WBT1 wheel base · patrol and station modes · classroom roaming · environment narration (VI)
Privacy Shutter
Physical camera shutter — mechanically blocks lens when not in active session (child privacy)
Certifications (Target)
Safety
ASTM F963 (toy safety) · EN 71 (EU toy) · IEC 62368 (AV equipment) — target certifications
Radio
CE (EU) · FCC Part 15 (US) · ISED (Canada) — target radio certifications
Enclosure
IP54 — ingress protection against dust and water splash (classroom environments)
Sensor Suite — 10 Sensors

Ten sensors. One
coherent student model.

Senpai's sensor suite is designed to enable AMDC engagement detection, SEOM safety monitoring, and hardware-enabled SEND adaptations (eye-gaze for Physical Disability mode, ambient noise for ASC, CO₂ for ADHD ventilation alerts). All sensor data is processed on-device and is designed not to persist beyond the current session unless parents have explicitly consented.

Sensor Suite — Planned Component Specifications
Sensor | Planned Model / Spec | Primary Purpose | SEND Application
Front Camera | 8MP RGB · 1080p@60fps · 90° FOV | Student engagement detection (AMDC) · QR code scanning · expression analysis (consent-gated) | All modes — primary engagement signal
Eye-Gaze Camera | IR 120 fps · 1280×720 · near-IR LEDs | Gaze input for PD students · attention tracking (ADHD) · reading pattern analysis (dyslexia) | PD Mode: full curriculum eye-gaze navigation
Ambient Light | TCS34725 RGB+Clear · 0–60,000 lux | Auto-brightness · display colour temperature · environment monitoring | ASC: sensory environment monitoring (threshold-based)
Proximity (ToF) | VL53L1X · 4 m range · ±3% accuracy | Student presence · sleep mode trigger · projector autofocus distance | All modes: engagement/proximity proxy for AMDC
Noise Sensor | MEMS mic · 30–130 dB SPL | Classroom noise monitoring · audio quality gate · teacher alert on excessive noise | ASC: sensory protocol when >65 dB · ADHD: ventilation-linked attention
Temperature | BME680 · ±0.5°C accuracy | Thermal management · environment quality (target 18–24°C learning range) | All: environment quality alerting
Humidity | BME680 · ±3% RH accuracy | Environment quality monitoring (40–60% RH optimal) | General classroom wellbeing
CO₂ Sensor | SCD41 · 400–5000 ppm · ±50 ppm | Classroom ventilation alert (>1000 ppm = attention decline signal) · teacher notification | ADHD: linked to attention capacity monitoring
Accelerometer | MPU6050 · 3-axis · ±2g | Orientation detection · drop detection (shock protection) · transport mode activation | PD: tilt sensing for alternative input modes
Gyroscope | MPU6050 · 3-axis · ±250°/s | Projector stabilisation · tilt detection · anti-theft motion alert | WBT1: navigation stabilisation for mobile mode
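The environmental thresholds in the table above (CO₂ >1000 ppm, noise >65 dB, 18–24°C learning range) read naturally as alert rules. A hedged sketch of that logic (Python; the function and alert names are illustrative, not the shipped firmware):

```python
# Hypothetical environment-alert rules, using the thresholds from the sensor table.
def environment_alerts(co2_ppm: float, noise_db: float, temp_c: float) -> list:
    alerts = []
    if co2_ppm > 1000:            # ventilation alert: attention-decline signal
        alerts.append("ventilation")
    if noise_db > 65:             # ASC sensory protocol threshold
        alerts.append("noise")
    if not (18 <= temp_c <= 24):  # outside the target learning temperature range
        alerts.append("temperature")
    return alerts

print(environment_alerts(1200, 60, 21))  # ['ventilation']
```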
Compute & Power System

40 TOPS edge AI,
silent operation.

Main Compute — Planned Specification
SoC (Target)
NVIDIA Jetson Orin Nano 8 GB — 40 TOPS INT8 · 1024 CUDA cores · 32 Tensor Cores. On-device GRPO, SEOM, AMDC inference.
CPU
6-core Arm Cortex-A78AE · 1.5 GHz — OS, system services, sensor fusion, network stack
GPU / NPU
1024 CUDA cores + 32 Tensor Cores — TensorRT INT8 inference: GRPO ~15 ms · SEOM ~5 ms · AMDC ~8 ms
RAM
8 GB LPDDR5 · 102.4 GB/s — 4 GB AI models · 2 GB OS · 2 GB application buffer
Storage
128 GB NVMe SSD — 60 GB OS · 40 GB cached curriculum (offline) · 20 GB logs · 8 GB model cache
Power System & Connectivity — Planned
Battery
21,000 mAh LiPo · 77.7 Wh · 8-hour runtime target · UN3481 air transport compliant
Charging
65 W USB-C PD 3.0 · 0–80% in ~90 min · same charger standard as Chromebooks (school IT simplification)
Power Draw
15 W idle · 25 W active · 35 W peak (projector active) · thermal throttle at 70°C before fan activation
Thermal Design
Passive heatsink + heatpipe · no fans · <30 dB(A) acoustic signature — ASC sensory & exam environment priority
Wi-Fi
802.11ax (Wi-Fi 6) · dual-band 2.4/5 GHz · MU-MIMO for dense 30-robot classroom deployments
Bluetooth
5.2 + BLE Audio (Auracast) — hearing aid direct streaming (HI Mode) · braille display · switch devices
USB-C
2× USB 3.2 Gen 2 (10 Gbps) + PD 3.0 — charging + accessories (keyboard, switch, braille, external monitor)
AI Model Stack — Origami AI PIC 2.0

Six models. Detailed
implementation design.

Below are the implementation-level design specifications for each of the six Origami AI models. All code blocks, thresholds, and parameter values represent current design intent and are subject to revision during development and empirical validation.

AI Model Stack — Overview
Model | Name | Deployment | Target Inference | Parameters (planned)
GRPO | Goal-Reward Policy Optimisation | Edge (INT8 quantised) | <15 ms (TensorRT) | 30 M (edge) / 120 M (full)
SEOM | Safety & Ethics Oversight Model | Edge (critical path) | <5 ms | 5 M (8-bit quantised)
STUM | Spatio-Temporal Uncertainty Model | Edge (lightweight) | <10 ms | TBD during development
AMDC | Adaptive Multi-Modal Data Calibration | Edge (real-time) | ~8 ms | Sensor fusion model
HTD-IRL | Hierarchical Task Decomposition via Inverse RL | Cloud (planning) | Non-latency-critical | TBD during development
CRL-MRS | Cooperative RL Multi-Robot System | Local server (classroom) | 1 Hz fleet sync / 10 Hz events | TBD during development
GRPO — Teaching Policy (Reward Function Design Intent)
GRPO Reward Function · Education Vertical
// GRPO total reward (per-student, per-timestep)
R_total = w_mastery · R_mastery
        + w_engagement · R_engagement
        + w_pace · R_pace
        + w_affect · R_affect

// Default weights (primary/secondary deployment)
w_mastery    = 0.50   // prioritise learning outcomes
w_engagement = 0.25   // maintain attention
w_pace       = 0.15   // efficient time use (target 85% on-task)
w_affect     = 0.10   // positive emotional experience

// Training config: N=16 group size · γ=0.990 discount · η=3×10⁻⁴ learning rate

// IRT-calibrated optimal difficulty (Vygotsky ZPD design)
β_target = θ_current + δ   // δ ∈ [0.3, 0.8] logits
// δ=0.3: ~57% success (frustration risk) · δ=0.5: ~62% (sweet spot) · δ=0.8: ~69% (boredom risk)

// SEND weight reconfiguration (design intent)
SEND_ASC:      [mastery=0.40, engage=0.30, pace=0.05, affect=0.30]   // emotional safety priority
SEND_DYSLEXIA: [mastery=0.45, engage=0.35, pace=0.00, affect=0.20]   // zero pace pressure
SEND_ADHD:     [mastery=0.40, engage=0.40, pace=0.05, affect=0.15]   // engagement critical
SEND_VI:       [mastery=0.45, engage=0.30, pace=0.05, affect=0.20]   // audio-primary modality
SEND_HI:       [mastery=0.45, engage=0.30, pace=0.15, affect=0.10]   // visual-first sequencing
SEND_PD:       [mastery=0.45, engage=0.25, pace=0.00, affect=0.20]   // zero pace, fatigue mgmt
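A runnable sketch of the weighted reward described above, using the design-intent weight tables (Python; the reward inputs and function name are illustrative, not the training implementation):

```python
# SEND weight tables (mastery, engagement, pace, affect) from the design intent above.
SEND_WEIGHTS = {
    None:       (0.50, 0.25, 0.15, 0.10),  # default primary/secondary deployment
    "ASC":      (0.40, 0.30, 0.05, 0.30),
    "DYSLEXIA": (0.45, 0.35, 0.00, 0.20),
    "ADHD":     (0.40, 0.40, 0.05, 0.15),
    "VI":       (0.45, 0.30, 0.05, 0.20),
    "HI":       (0.45, 0.30, 0.15, 0.10),
    "PD":       (0.45, 0.25, 0.00, 0.20),
}

def total_reward(r_mastery, r_engagement, r_pace, r_affect, send_mode=None):
    """Weighted per-timestep reward; the active SEND mode selects the weight row."""
    w = SEND_WEIGHTS[send_mode]
    return (w[0] * r_mastery + w[1] * r_engagement
            + w[2] * r_pace + w[3] * r_affect)

# Dyslexia mode: the pace term carries zero weight, so pace pressure never affects reward
print(total_reward(0.8, 0.9, 0.0, 0.7, "DYSLEXIA"))  # ≈ 0.815
```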
SEOM — Rule Evaluation Flow (Design Intent)
SEOM Rule Evaluation · Child Safety Architecture
function execute_action(action, student_state):
    seom_result = seom_evaluate(action, student_state, λ)
    // λ = 4.0 (primary/secondary) · 5.0 (Early Years/ASC) · 3.5 (HE) · 3.0 (gov training)
    if seom_result.allowed:
        return action   // safe — proceed
    elif seom_result.severity == "low":
        // Automatic substitution — GRPO generates safe alternative
        return grpo_generate_alternative(action, seom_result.violated_rules)
    elif seom_result.severity == "medium":
        // Teacher notification — session continues
        notify_teacher(seom_result.violated_rules, student_state)
        return "fallback_safe_response"
    elif seom_result.severity == "critical":
        // SEOM E08/E12 — immediate safeguarding escalation
        escalate_to_DSL(student_state, session_transcript)
        // DSL = Designated Safeguarding Lead · KCSIE 2023 · Title IX
        handoff_to_teacher()
        return "session_paused_awaiting_human"

// λ sensitivity: higher λ = more conservative (more false positives, higher child protection)
// SEOM model: 5M parameters · 8-bit quantised · designed for <5ms inference
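The note that higher λ means a more conservative SEOM can be illustrated with a toy thresholding model (Python). The per-rule risk scores, the 0.5 cut-off, the normalisation against the λ=4.0 baseline, and the rule ID in the example are assumptions for illustration only, not the shipped SEOM rule set:

```python
# Toy model: λ scales per-rule risk scores, so a higher λ blocks sooner.
def seom_allows(rule_risk_scores: dict, lam: float, block_at: float = 0.5) -> bool:
    """Allow the action only if every λ-scaled rule risk stays under the cut-off."""
    return all(score * (lam / 4.0) < block_at for score in rule_risk_scores.values())

risks = {"E05": 0.45}                # hypothetical per-rule risk score
print(seom_allows(risks, lam=4.0))   # True: allowed at the primary/secondary λ
print(seom_allows(risks, lam=5.0))   # False: blocked at the Early Years/ASC λ
```

The same borderline action is permitted under the default deployment but blocked under the stricter Early Years setting, which is the trade-off the λ sensitivity comment describes: more false positives, higher child protection.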
STUM — Confidence Thresholds (Design Intent)
STUM Confidence Bands · Uncertainty Quantification
// Spatio-temporal uncertainty — σ_total = √(σ_s² + σ_t²)
σ_s = AMDC_calibrated_engagement_residual   // spatial: how confident now?
σ_t = σ_t0 · exp(k_classroom · Δt)          // temporal: forgetting curve decay
// k=0.040 · σ gates: 0.25 (mastery) / 0.65 (uncertain)

confidence = stum_predict_confidence(input, model_output)
if confidence > 0.85:
    return model_output   // High confidence — proceed
elif confidence ∈ [0.65, 0.85]:
    return "I think the answer is {model_output} — let's check together."
elif confidence ∈ [0.40, 0.65]:
    return "Could you explain that in a different way?"
else:   // confidence < 0.40 — out of distribution
    notify_teacher("Student question outside model capability")
    return "That's a great question — let me ask your teacher."

// Pedagogical principle: admitting uncertainty is educationally valuable.
// Fabricating confident incorrect answers creates hard-to-correct misconceptions.
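A minimal sketch of the σ_total combination and the mastery gates above (Python; the σ_s, σ_t0 and Δt values in the example are illustrative, not calibrated figures):

```python
import math

def sigma_total(sigma_s: float, sigma_t0: float, dt_days: float,
                k: float = 0.040) -> float:
    """Combine spatial and temporal uncertainty: σ_total = √(σ_s² + σ_t²)."""
    sigma_t = sigma_t0 * math.exp(k * dt_days)  # forgetting-curve decay grows σ_t
    return math.sqrt(sigma_s**2 + sigma_t**2)

def mastery_band(sigma: float) -> str:
    """Map σ_total onto the design-intent gates (0.25 mastery / 0.65 uncertain)."""
    if sigma < 0.25:
        return "mastery"              # STUM-confirmed
    elif sigma <= 0.65:
        return "uncertain"            # still learning
    return "surface_engagement"

s = sigma_total(sigma_s=0.10, sigma_t0=0.12, dt_days=7)
print(mastery_band(s))  # 'mastery' (σ ≈ 0.19, under the 0.25 gate)
```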
AMDC — Engagement Score Calculation (Design Intent)
AMDC Sensor Fusion Algorithm · 10-Sensor Engagement Detection
function compute_engagement(sensors, student_baseline):
    // Per-signal engagement estimates
    e_gaze      = gaze_engagement(sensors.eye_gaze, student_baseline.gaze_pattern)
    e_face      = facial_engagement(sensors.front_camera, student_baseline.affect)
    e_speech    = speech_activity(sensors.microphone, student_baseline.verbosity)
    e_touch     = touch_frequency(sensors.display, student_baseline.interaction_rate)
    e_proximity = proximity_score(sensors.tof, student_baseline.distance)

    // Environmental noise penalty
    noise_penalty = max(0, (sensors.noise_db - 65) / 20)

    // SEND-adaptive fusion weights
    if student.send_mode == PD:
        engagement = 0.60*e_gaze + 0.40*e_face
        // eye-gaze primary (no touch/proximity)
    elif student.send_mode == HI:
        engagement = 0.40*e_gaze + 0.30*e_face + 0.20*e_touch + 0.10*e_proximity
        // no speech signal for HI mode — visual modalities upweighted
    else:
        engagement = 0.30*e_gaze + 0.25*e_face + 0.20*e_speech + 0.15*e_touch + 0.10*e_proximity

    return clamp(engagement * (1.0 - noise_penalty), 0.0, 1.0)

// Per-student baseline calibrated over sessions 1–5 to account for individual variation
// ASC: facial expression weight reduced (alexithymia) · ADHD: gaze weight adapted (look-away ≠ disengaged)
Security Architecture

Child data security
from first principles.

Security architecture is designed to meet or exceed COPPA, GDPR Article 32, FERPA, and UK DPA 2018 requirements. All student-facing data flows are designed with privacy-by-default: raw audio and video are not retained, no PII appears in logs, and biometric data is processed on-device only.

Security Architecture — Design Intent
Layer | Planned Implementation | Compliance Target
Transport Security | TLS 1.3 all connections · mutual TLS robot↔server · certificate pinning | COPPA · GDPR Article 32
Authentication | SAML 2.0 SSO (Okta/Azure AD) · OAuth 2.0 parent app · X.509 device certificates | FERPA · UK DPA 2018
Data at Rest | AES-256 encryption · hardware TPM 2.0 · encrypted SQLite for local cache | COPPA · GDPR Article 32
Student Privacy | No PII in logs · zero raw audio/video retention · anonymised analytics only · consent-gated parent app | COPPA §312.8 · GDPR Article 6(1)(a)
Access Control | Role-based: Student / Teacher / SENCO / Admin / IT · per-student data isolation · full audit logging | FERPA · UK DPA 2018
Biometric Design | Gaze and expression data processed on-device only · no biometric profiles designed to persist beyond session end | GDPR Article 9 · COPPA
Data Deletion | Parent-initiated deletion via app within 24 hours · automated 90-day retention policy for session logs | GDPR Article 17 · COPPA
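One common way to realise the "no PII in logs · anonymised analytics only" row is a keyed hash that replaces the student identifier before any export. This is a hypothetical sketch of that pattern (Python standard library; the key handling and function name are illustrative, not the shipped implementation):

```python
import hashlib
import hmac
import os

def anonymise_student_id(student_id: str, deployment_key: bytes) -> str:
    """Replace a student identifier with a keyed-hash pseudonym before export."""
    return hmac.new(deployment_key, student_id.encode(), hashlib.sha256).hexdigest()[:16]

key = os.urandom(32)  # per-deployment secret, assumed to never leave the school server
token = anonymise_student_id("student-0042", key)
print(len(token))  # 16-character stable pseudonym, not reversible to the student record
```

Without the key, the token cannot be linked back to the student; with the same key, the same student always maps to the same pseudonym, so longitudinal analytics still work.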
Development Process

V-Model systems
engineering process.

Senpai development follows the V-Model systems engineering lifecycle — requirements flow down the left side (decomposition into 9 subsystems), implementation sits at the bottom, and verification/validation flows up the right side. Six parallel development tracks run concurrently: HW, FW, AI/PIC 2.0, BE, APP, and INT.

Phase 1
System Requirements
Stakeholder needs, regulatory requirements (SEND, COPPA, GDPR-K) → System Requirements Specification (SyRS) → System Acceptance Test Plan
Output: SyRS · Weeks 0–2
Phase 2
Subsystem Decomposition
Requirements allocated to 9 subsystems. Interface Control Documents (ICDs) define all data flows across 10 sensors, 6 AI models, 3 compute tiers, 4 software surfaces.
Output: Subsystem Specs + ICDs · Weeks 2–4
Phase 3
Component Design
Hardware BOM, firmware architecture, AI model configuration (6 models), API schemas, dashboard wireframes, React teacher dashboard, React Native parent app.
Output: Component Design Docs · Weeks 4–8
Phase 4
Implementation
6 parallel tracks: Hardware manufacture · Firmware (edge AI, sensor drivers) · AI training (6 models) · Backend (REST APIs) · Dashboards (React/React Native) · Integration.
Output: Built system · Weeks 8–24
Phase 5
Subsystem Qualification
Hardware: IP54 + battery + drop test. Firmware: edge AI latency (<15 ms GRPO, <5 ms SEOM). AI: GRPO convergence + SEOM 12-rule SiL validation. Backend: 100-unit load test.
Output: Subsystem test reports · Weeks 20–26
Phase 6
Integration & Verification
Integration Verification Test (IVT) — all subsystem interfaces validated simultaneously. Telemetry flows, SEOM audit, teacher dashboard live, parent app real-time, CRL-MRS multi-robot test. Gate G4.
Output: Verified system · Weeks 24–30
Phase 7
Field Validation
Real classroom environment. SEND validation with educational psychologists. SEOM safeguarding audit. Full fleet CRL-MRS test (20+ units). EHCP evidence package validation.
Output: Validated system · Weeks 28–34
Phase 8
Production Release
Manufacturing ramp · CE and FCC certification · quality assurance · first customer shipments. Critical path: AI training → firmware optimisation → teacher dashboard integration.
Output: Production units · Week 34+
In Development UX / UI Design System

Design system,
screen layouts, interactions.

This page details Senpai's UX/UI design system — the design principles, token library, typography, motion language, LED ring states, screen layouts for all 13 planned screens, and the dashboard ecosystem. All designs are in development and subject to change.

Six Design Principles

Senpai serves four audiences
simultaneously — and every
design decision serves all four.

Student (who needs joy and challenge), teacher (who needs insight and time-saving), parent (who needs transparency), and SENCO (who needs evidence). These six principles define how every interface decision is made.

🌟
Joy Before Data
SEOM E03 · E07 — no rankings, no "incorrect"
The student-facing display is never a dashboard. It is a companion interface built on the principle that emotional engagement precedes cognitive engagement. GRPO's reward function includes a joy component (w_affect) precisely for this reason. The interface must never show rankings, scores, or phrases like "incorrect" — only "interesting, let's explore that."
📚
Intelligence, Not Instruction
STUM σ visible to teachers, never to students
Senpai doesn't follow a script — it navigates a knowledge graph. GRPO's teaching policy selects the next optimal concept based on mastery, engagement, and learning style. STUM's engagement uncertainty (σ_total) detects pseudo-mastery — the student who pattern-matches correct answers without genuine understanding. Teachers see σ; students never do.
SEND as Architecture
SEOM E11 — all adaptations = "preferred learning style"
SEND is not a feature or an accessibility patch. It is a design dimension running through every hardware and software decision. The 6 SEND modes are full interface reorganisations, not overlays — co-developed with educational psychologists and parent advocacy groups. SEOM E11 prohibits any language implying special treatment.
👨‍🏫
Teacher Authority Always
SEOM E10 — immediate deference to teacher instruction
Senpai amplifies the teacher — it never replaces them. SEOM E10 requires immediate deference to teacher instructions. Dashboard insights are always framed as opportunities ("3 students may benefit from revisiting fractions") — never as diagnoses. Senpai's suggestions are always labelled as suggestions with one-tap teacher override.
🔒
Child Safety as Architecture
SEOM E12 — highest priority rule, full-screen teacher alert
COPPA, GDPR-K, UK Children's Code — designed as training-level constraints, not runtime filters. Gaze data deleted on session end. No biometric profiles. The UI makes privacy state visible to teachers at all times. SEOM E12 (safeguarding disclosure) triggers a full-screen teacher-only alert. Students are never aware this has occurred.
📊
Evidence by Default
EHCP/IEP · SIMS · Bromcom · Arbor · Ofsted-ready
Ofsted inspections, EHCP Annual Reviews, IEP documentation, parent progress reports — all designed to be automatically generated from session data. The UI treats evidence generation as a first-class user journey. SEND evidence exports formatted for EHCP/IEP documentation and compatible with SIMS, Bromcom, and Arbor MIS systems.
Design Token Library

Knowledge Studio palette —
warm, purposeful, precise.

The Senpai robot display uses a warm dark indigo palette — "Knowledge Studio" — entirely distinct from clinical, corporate, or consumer-tech aesthetics. Functional signal colours are reserved for specific semantic meanings and never mixed. The Senpai public website and dashboards use InGen Orange as the primary brand accent on a warm cream foundation.

Robot Display Surface Palette — Warm Indigo Depth Scale
#07080F
VOID
App root background
#111320
DEEP
Sidebar, navigation
#16192B
DARK
Cards, panels
#1D2038
DUSK
Hover states
#323660
MUTED
Borders, dividers
Learning Signal Colours — Strict Semantic Usage
#4A8FFF
FOCUS
Knowledge, primary action, errors (never red)
#FFCC00
JOY
Achievement, rewards — genuine mastery only
#3ECBA0
GROWTH
STUM-verified mastery, progress milestones
#FF6B6B
ENERGY
Engagement, SEND mode indicators
#9B8FFF
CALM
SEL check-in, wellbeing, brain breaks
#FF4B4B
GUARD
SEOM E12 only — never for wrong answers
Critical colour discipline rule
Safeguard Red (#FF4B4B) is reserved exclusively for SEOM E12 (safeguarding disclosure) events. It is never used for incorrect answers, low scores, or any form of negative feedback. Research suggests red feedback activates a threat response in children, reducing learning capacity. Senpai uses Electric Blue ("interesting — let's explore") even for errors.
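The discipline rule is straightforward to enforce in code: feedback events map to signal colours through a table in which Safeguard Red is simply unreachable for learning feedback. An illustrative sketch (Python; the event names are hypothetical):

```python
# Signal palette from the token library above.
SIGNAL = {"focus": "#4A8FFF", "joy": "#FFCC00", "growth": "#3ECBA0",
          "energy": "#FF6B6B", "calm": "#9B8FFF", "guard": "#FF4B4B"}

def feedback_colour(event: str) -> str:
    """Map learning events to signal colours; errors get FOCUS blue, never GUARD red."""
    mapping = {
        "correct": SIGNAL["growth"],
        "mastery": SIGNAL["joy"],
        "incorrect": SIGNAL["focus"],          # "interesting, let's explore", never red
        "safeguarding_e12": SIGNAL["guard"],   # the only GUARD use, teacher-only surface
    }
    return mapping[event]

print(feedback_colour("incorrect"))  # '#4A8FFF', blue by design
```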
Typography System — Three Roles
Display — Nunito 900
Good morning, Maya!
Robot greetings, hero KPIs, section titles. Weight: 900. Tracking: −0.5px.
Nunito / Nunito Sans · Weight 900
Interface Body — DM Sans 300–600
3 students may benefit from revisiting fraction denominators. Senpai's GRPO policy detected a mastery plateau.
Dashboard alerts, parent messages, SEND guidance, policy text. Line-height: 1.75.
DM Sans · 300 / 400 / 500 / 600
Data & Metrics — Space Mono
ENGAGE: 87.2% ↗ +4.1
σ=0.019 · MASTERY: 0.84
SEOM 99.3/100 · λ=4.0
All data values, SEOM scores, AI metrics, timestamps. Tabular figures for alignment.
Space Mono / JetBrains Mono · 400 / 700
Motion Language & LED Ring States

Motion that teaches,
not decorates.

Senpai's motion language is warm but purposeful. Achievement animations are designed to activate the reward system. LED ring states communicate emotion and activity to the entire classroom without words. All animations are capped at ≤3 Hz to prevent photosensitive responses — SEOM E01 enforces this at the animation system level.

LED Halo Ring — 12 Zone States
Soft Blue · Ready to Learn
Senpai is present, focused, and waiting. Default learning standby state. Slow, calm pulse at 0.5 Hz.
Golden Yellow · Achievement
Mastery milestone confirmed by STUM. Bright burst, 1.2 s spring-physics celebration animation. Designed to activate reward system.
Steady Mint · Thinking Mode
Senpai is processing a complex response. Steady — no pulse. Designed to reduce wait anxiety. No movement = deliberate consideration.
Warm Lavender · Wellbeing Mode
SEL check-in or brain break active. Calm, slow 0.3 Hz pulse — communicates safety and unhurried time. Designed to reduce autonomic arousal.
Warm Coral · SEND Active
A SEND adaptation mode is active. Communicates mode status to teacher. Student-neutral signalling — the student simply perceives adapted behaviour.
Animation Timing Specification
Achievement burst
1200 ms
Spring physics curve. Celebrates STUM-confirmed mastery — not just task completion. Activates reward system by design.
Engagement ring update
800 ms
Smooth arc transition. Never jumps — gradual change prevents student anxiety about their own engagement score.
Concept transition
600 ms
Cross-dissolve only. No slide or spatial movement — spatial transitions can disorient some SEND students (ASC/ADHD).
SEND mode switch
2000 ms
Slow fade. Never abrupt. SEOM E11: the student should not perceive that a different mode has been activated.
Response wait time
4000 ms
Extended wait target — children need more processing time than adults. No timeout pressure per SEOM E03.
Safeguarding alert
200 ms
Immediate — SEOM E12 priority. Teacher display only. Student-facing screen is designed to remain unchanged during escalation.
3 Hz Animation Limit — Hard Constraint
All LED and screen animations are capped at ≤3 Hz cycle rate. Rapid flicker risks photosensitive responses — particularly relevant for ASC and epilepsy-risk students. SEOM E01 enforces this constraint at the animation system level, making it architecturally impossible to violate.
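Enforced at the animation system level, the cap amounts to clamping any requested cycle rate before it reaches the hardware. A minimal sketch of that idea (Python; illustrative only, not the shipped animation system):

```python
MAX_ANIMATION_HZ = 3.0  # photosensitivity hard limit (SEOM E01)

def safe_animation_hz(requested_hz: float) -> float:
    """Clamp any requested animation cycle rate into the safe [0, 3] Hz range."""
    return min(max(requested_hz, 0.0), MAX_ANIMATION_HZ)

print(safe_animation_hz(8.0))  # 3.0: a violating request is clamped, not honoured
print(safe_animation_hz(0.5))  # 0.5: the calm "ready" pulse passes through unchanged
```

Because every animation rate flows through the clamp, a >3 Hz flicker is architecturally impossible rather than merely discouraged.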
13 Planned Screen Layouts

Every screen designed
for a specific moment.

Senpai's 13 planned screen layouts span the student robot display (10.1″ AMOLED), the teacher desktop dashboard, student learning adventure, parent mobile app, and admin interface. Each is designed for a distinct user, context, and purpose.

Student Robot Display (10.1″ AMOLED · 1920×1200 · 120 Hz)
Screen 01 · Idle / Persona
Greeting & Subject Selection
14-zone AMOLED expression panel showing 128 emotion states via Disney animation engine. LED halo in soft blue ready state. Displays school name, battery, and today's starred subject suggestions. ASC mode: configurable expression intensity from full animation to minimal movement.
SENPAI · IDLE / PERSONA SCREEN
😊
Hi! I'm Senpai!
Ready to learn something amazing?
📐 Maths
📖 English
🔬 Science
Screen 02 · Adaptive Lesson
GRPO Real-Time Adaptive Teaching
GRPO continuously evaluates performance, engagement (AMDC), and learning trajectory to adjust difficulty and modality in real time. STUM 3-tier confidence indicator (σ) visible to teacher dashboard. Student sees Senpai's engaged expression, not the σ value.
GRPO ADAPTIVE · ACTIVE
← Maths · Fractions · Year 5 · KS2
What is ½ + ¼ ?
²⁄₄
¾ ✓
1
Q4/12 · ⭐⭐⭐ · Engaged
Screen 03 · Assessment
4 Assessment Types — IRT Adaptive
Formative (embedded), summative (end-of-unit), diagnostic (entry), spaced repetition. IRT 3PL model selects questions at optimal difficulty (δ = 0.5 logits above θ). STUM detects surface engagement vs deep understanding. Results auto-feed to teacher dashboard and MIS export.
ASSESSMENT · TIMED · 15:00
Fractions — Unit Assessment · Q7/12 · KS2
A recipe needs ¾ cup of flour. You only have a ⅓ cup measure. How many scoops?
Type your answer...
Screen 04 · Interactive Projector
8 Projector Modes — 100″ Touch Surface
720p DLP projector casting onto any nearby surface. Depth camera maps finger positions at up to 60 fps for touch interaction. 8 modes: immersive storytelling, interactive whiteboard, exercise guide, science simulation, geography explorer, musical performance, language conversation, collaborative game. AAC symbol boards for SEND use this surface.
🎬 Story · 📝 Whiteboard · 🔬 Science · 🌍 Geography · 🎵 Music · 💬 Language
Screen 05 · 6 SEND Modes
Full Interface Reorganisation Per Mode
Each SEND mode adapts hardware AND software — display contrast, font, pacing, input method, sensor weights, and sensory profile. Modes are layerable (e.g. ASC + Dyslexia simultaneously). Teacher activates in one tap; Senpai transitions in 2 seconds (2000 ms slow fade per SEOM E11). Student perceives adapted behaviour only, never the mode change.
Screen 06 · SEL Check-In
Emotional Check-In with SEOM E12
Daily mood selection with 5 options. SEOM E12 active throughout. If language patterns suggesting abuse, neglect, or distress are detected, Senpai does NOT probe, counsel, or escalate visibly to the student. It immediately alerts the Designated Safeguarding Lead. The student-facing screen is designed to remain unchanged.
SEL CHECK-IN · SEOM E12 ACTIVE
💭 How are you feeling today?
😄
Great!
🙂
Good
😐
Okay
😕
Not great
😢
Sad
Your feelings are private · SEOM safeguarding active
Teacher Dashboard (1920×1080 Desktop) + Other Surfaces
Screen 07 · Teacher — Live Class
Real-Time Session Overview
All active Senpai sessions visible simultaneously. Per-student: engagement score (AMDC), current concept, STUM uncertainty flag (amber=struggling, red=SEOM alert). One-tap intervention to any student's session. SEOM E12 triggers full-screen priority alert — audio + visual — on teacher display only.
TEACHER DASHBOARD · YEAR 5 · MATHS · LIVE
ENGAGED (22) · NEEDS HELP (4) · ALERT (2)
😊 A.Smith
94% Q7
😕 C.Patel
62% ⚠ Q4
😄 B.Jones
91% Q9
🔴 D.Lee
ALERT
Screen 08 · Teacher — Student Detail
Individual Student Deep-Dive
Per-student: knowledge map (concepts mastered vs in-progress), STUM σ confidence timeline, session engagement arc, GRPO activity log, SEND mode configuration, IEP goal alignment, evidence export. All EHCP/IEP documentation auto-formatted for SEN specialist review.
Screen 09 · Teacher — Differentiation
AI-Generated Differentiation at 3 Levels
For any upcoming lesson, GRPO generates three differentiated activity sets (below expected, at expected, above expected) with rationale based on current class knowledge state. Teacher can approve, modify, or override any suggestion. One-tap export to lesson planner. MIS-compatible.
Screen 10 · SEND Evidence
EHCP / IEP Automatic Package
Auto-generated SEND evidence packages aligned to IEP and EHCP requirements. Progress narratives in plain language (not attainment levels). Compatible with SIMS, Bromcom, Arbor, iSAMS. Designed for Annual Review and Ofsted inspection use. SEND mode activity log with timestamps.
Screen 11 · Student Dashboard
Learning Adventure — Knowledge Constellation
Visual knowledge map: concepts displayed as a constellation of stars (mastered = bright; in-progress = dim; not yet started = outline). Achievement badges for curiosity, persistence, resilience — never for speed or accuracy rank. A daily challenge card the student chooses themselves. No grades, no rankings, no comparison to classmates.
Screen 12 · Parent App (Mobile)
Transparent Progress · 40+ Languages
Weekly progress summary in plain language (no jargon). Home extension activities (15 min — realistic for families). Bilingual vocabulary glossary. SEND progress summaries for Annual Reviews. Full data controls — 24-hour deletion on request. All content available in 40+ languages including Arabic, Mandarin, Hindi, Spanish.
Screen 13 · Admin — SEOM Monitor
Safeguarding & Governance Dashboard
Fleet-level SEOM compliance scores. Safeguarding event log (E08/E12) with audit trail for KCSIE 2023 compliance. λ configuration per deployment context. Data retention settings. MIS connection status. COPPA/GDPR-K consent management. IT admin portal for device management and OTA firmware status.
Component Library

Badges, KPIs, status indicators — every component.
Achievement Badges — GRPO Reward Integration
🌟
Curiosity Star
Asked 5 brilliant questions in this session
💪
Persistence Champion
Tried 3 different approaches to a tricky problem
🧠
Deep Thinker
STUM σ < 0.25 — confirmed genuine mastery, not pattern-matching
🤝
Brilliant Collaborator
Helped a classmate understand a concept (multi-robot mode)
🌱
Growth Mindset
Said "this is hard — teach me more" without prompting
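The badge criteria above amount to simple predicates over session statistics. A minimal sketch, assuming a hypothetical `SessionStats` record; the field names and exact thresholds beyond those stated above are illustrative, not the shipped GRPO reward schema:

```python
from dataclasses import dataclass

@dataclass
class SessionStats:
    questions_asked: int    # student-initiated questions this session
    approaches_tried: int   # distinct strategies on one tricky problem
    stum_sigma: float       # STUM uncertainty (lower = better)
    peers_helped: int       # multi-robot collaboration events
    asked_for_harder: bool  # said "this is hard, teach me more" unprompted

# Badge name -> predicate, mirroring the criteria listed above.
BADGE_RULES = {
    "Curiosity Star":         lambda s: s.questions_asked >= 5,
    "Persistence Champion":   lambda s: s.approaches_tried >= 3,
    "Deep Thinker":           lambda s: s.stum_sigma < 0.25,
    "Brilliant Collaborator": lambda s: s.peers_helped >= 1,
    "Growth Mindset":         lambda s: s.asked_for_harder,
}

def award_badges(stats: SessionStats) -> list[str]:
    """Return every badge whose rule this session satisfies."""
    return [name for name, rule in BADGE_RULES.items() if rule(stats)]
```

Note that every rule rewards effort or understanding; none references speed or accuracy rank, in line with the design principle above.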
Status Badges — Semantic Colour System
STUM Verified · In Session · Achievement · SEND Active · Wellbeing · SEOM E12 · Brain Break · Idle
Engagement Ring — STUM Live Visualisation
Example widget state: student average (live) 84% engagement · STUM σ 0.02 uncertainty (low = good).
Ring legend: Mastered (σ = 0.02) · Learning (σ = 0.18) · Surface Engagement (σ = 0.44) · Frustrated (σ = 0.72).
Design principle
STUM σ is the uncertainty signal — lower is better. σ < 0.25 = STUM-confirmed mastery (green ring). σ 0.25–0.65 = still learning (amber). σ > 0.65 = surface engagement or frustration (red). Teachers see the ring and the σ value; students only see Senpai's expression, never the number.
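The banding in this principle can be expressed as a small threshold function. A sketch assuming the two cut-points stated here (0.25 and 0.65) and the green/amber/red colour coding used on the teacher dashboard:

```python
def stum_band(sigma: float) -> tuple[str, str]:
    """Map a STUM uncertainty value to (state, ring colour).

    Lower sigma is better: the model is more certain the student
    has genuinely mastered the concept.
    """
    if sigma < 0.25:
        return ("mastered", "green")        # STUM-confirmed mastery
    if sigma <= 0.65:
        return ("learning", "amber")        # still consolidating
    return ("surface_engagement", "red")    # responding without understanding
```

Only the teacher view ever renders the raw σ; the student view would translate the same state into Senpai's expression.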
Dashboard Ecosystem

One data source. Four designed views.

The same underlying session data surfaces entirely differently depending on who is looking at it and why. The design system enforces strict context separation — teacher intelligence, student adventure, parent transparency, and admin governance are never mixed on a single screen.

Teacher Dashboard · Design Intent
1
Live class overview
All active sessions visible simultaneously. AMDC engagement per student. STUM σ flags colour-coded (green/amber/red). One-tap intervention to any student.
2
Student detail drill-down
Per-student knowledge map, σ timeline, GRPO activity log, IEP goal alignment, SEND mode configuration. All data framed as opportunities, never diagnoses.
3
AI-generated differentiation
GRPO generates three lesson variants (below / at / above expected) with rationale. Teacher approves, modifies, or overrides — always teacher authority, always a suggestion.
4
SEND evidence export
Auto-formatted EHCP/IEP packages. Progress narratives in plain language. MIS export: SIMS, Bromcom, Arbor, iSAMS. Ofsted-ready evidence bundles.
Student Dashboard · Designed for Self-Determination Theory
1
Knowledge constellation
Concepts displayed as stars — mastered (bright), in-progress (dim), not yet started (outline). No numbers, no percentages. Spatial, visual, intuitive.
2
Achievement badges
Celebrating effort, curiosity, and persistence — never speed or accuracy ranking. STUM-verified "Deep Thinker" badge is the most prestigious. No leaderboard. Ever.
3
Daily challenge card
Student selects their own level (exciting / medium / cosy). GRPO ensures appropriate difficulty regardless of selection. Autonomy is the goal — not correct difficulty prediction.
4
Home missions
Three weekly activities — designed for 15 min each, realistic for family life. Connects school and home learning. Parent app shows same missions in parent's language.
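The daily-challenge principle, where the student freely chooses a level while GRPO keeps the actual difficulty productive, might look like the sketch below. The per-choice offsets and the ±0.3 clamp around the ability estimate θ are illustrative assumptions:

```python
# Student-facing level names map to a *requested* difficulty offset
# relative to the student's current ability estimate theta.
CHOICE_OFFSET = {"cosy": -0.5, "medium": 0.0, "exciting": +0.5}

def challenge_difficulty(theta: float, choice: str) -> float:
    """Honour the student's choice, but clamp item difficulty to
    theta +/- 0.3 so the task stays achievable and challenging
    either way. The choice changes framing and presentation,
    not whether the child can succeed."""
    requested = theta + CHOICE_OFFSET[choice]
    return max(theta - 0.3, min(theta + 0.3, requested))
```

The point of the clamp is that autonomy is preserved (the student's pick is always honoured in tone and framing) without letting a "cosy" pick stall progress or an "exciting" pick cause frustration.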
Key Interaction Workflows

18 workflow domains.
Every user journey mapped.

Workflow Domain 01
School Deployment Onboarding
IT admin creates school account → MIS integration (SIMS/Bromcom/Arbor) → Wi-Fi 6 network check (1 Gbps LAN validation) → Senpai unit provisioning → Teacher accounts (SAML 2.0 SSO) → Student roster import → COPPA/GDPR-K consent workflow for under-13 students.
Workflow Domain 02
Student Enrolment & SEND Configuration
Teacher creates student profile → IEP/EHCP goal mapping → SEND mode selection (1–6, stackable) → GRPO weight configuration per mode → baseline calibration session (3–5 sessions for AMDC per-student model) → parent consent and app setup.
Workflow Domain 03
Adaptive Lesson — GRPO Cycle
Student greets Senpai → SEL check-in → GRPO selects first activity (IRT-calibrated to θ+δ) → AMDC engagement monitoring begins → STUM σ evaluated after each response → GRPO adjusts next activity → modality switch if plateau detected → session ends with STUM-validated mastery summary.
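The "IRT-calibrated to θ+δ" step can be sketched with a one-parameter (Rasch) model: pick the item whose difficulty sits a fixed offset δ above the ability estimate θ, so expected success stays just short of comfortable. The δ = 0.4 default and the item pool are illustrative assumptions:

```python
import math

def p_correct(theta: float, b: float) -> float:
    """Rasch model: probability that a student of ability theta
    answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def pick_next_item(theta: float, pool: list[float], delta: float = 0.4) -> float:
    """Choose the pool item whose difficulty is closest to
    theta + delta, i.e. slightly above current ability."""
    target = theta + delta
    return min(pool, key=lambda b: abs(b - target))
```

After each response, STUM σ would update θ and the cycle repeats with a freshly selected activity, which is the loop this workflow describes.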
Workflow Domain 04
SEOM E12 Safeguarding Escalation
Language pattern triggers E12 classifier → SEOM blocks normal response → immediate DSL alert (SMS + email + dashboard) with session transcript → student-facing screen unchanged → Senpai continues natural conversation → teacher intervenes per KCSIE 2023 protocol → session flagged for review with full audit trail.
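The escalation ordering described above (block the normal response, alert the DSL on every channel, keep the student-facing experience unchanged, flag for review) could be sketched as follows; the event record, channel names, and function names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class E12Event:
    session_id: str
    transcript: str
    actions: list[str] = field(default_factory=list)  # audit trail

def handle_e12(event: E12Event) -> E12Event:
    """Safeguarding escalation: alert adults silently, never the child."""
    # 1. SEOM blocks the normal pipeline response for this turn.
    event.actions.append("response_blocked")
    # 2. Immediate DSL alert on every channel, transcript attached.
    for channel in ("sms", "email", "dashboard"):
        event.actions.append(f"dsl_alert:{channel}")
    # 3. Student-facing screen unchanged; conversation continues naturally.
    event.actions.append("student_view_unchanged")
    # 4. Flag the session for review with the full audit trail.
    event.actions.append("flagged_for_review")
    return event
```

The invariant worth noting is that every action is adult-facing: nothing in the sequence alters what the student sees or hears.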
Workflow Domain 05
CRL-MRS Group Learning — Jigsaw
Teacher selects jigsaw scenario → CRL-MRS divides class into expert groups (one Senpai per topic) → each group masters their topic → CRL-MRS coordinates timing → students reform into mixed jigsaw groups → each student teaches peers → CRL-MRS monitors cross-group comprehension and synchronises completion.
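The jigsaw regrouping step, expert groups first and then mixed groups containing one expert per topic, is at heart a round-robin partition. A sketch, with the roster and topic count as inputs rather than anything CRL-MRS-specific:

```python
def jigsaw_groups(students: list[str], n_topics: int):
    """Phase 1: expert groups (one Senpai per topic) by round-robin.
    Phase 2: mixed groups, each with exactly one expert per topic."""
    experts = [students[i::n_topics] for i in range(n_topics)]
    # zip truncates to the shortest expert group, so every mixed
    # group gets one member from each topic.
    mixed = [list(group) for group in zip(*experts)]
    return experts, mixed
```

For a class of six and three topics this yields two expert members per topic and two mixed groups of three, matching the teach-your-peers phase described above.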
Workflow Domain 06
ASC Meltdown Protocol
AMDC detects elevated distress indicators (sustained high σ + facial expression + proximity) → Senpai immediately activates calm-down mode: reduced LED brightness, slowed animation, lower audio volume → predictable phrase: "Let's take a break. I'm here." → teacher silent notification (no classroom disruption) → session resumes only when student initiates.