MIR-01: Adaptive ML Infrastructure – Privacy-Preserving Authoring & Real-Time Feedback Loops for Children's AI

Brief description of the organization

Mirie is an early-stage AI startup building a white-label, voice-interactive companion platform for children's media IP holders. We provide a "Safe by Design" infrastructure layer that enables brands to deploy engaging, conversational AI characters while maintaining a rigorous regulatory moat. Our architecture is built to exceed global standards, including COPPA, PIPEDA, and the emerging OPC Children's Privacy Code.

We are building an AI-powered authoring suite that allows brand partners to transform static IP into dynamic, voice-interactive experiences. Our core innovation is a Contingent Adaptive Feedback Engine that uses ML to modulate dialogue complexity, emotional tone, and safety constraints in real time based on the child's developmental profile.


Problem area

Modern LLMs and speech-to-text (STT) systems are trained on adult data and lack "Safe by Design" guardrails. The central engineering challenge is twofold:

  • Vertical Orchestration: Building a multi-tenant system that isolates the Brand Management Plane (authoring), the Parent Dashboard (oversight), and the Child Interaction Layer (execution).
  • Adaptive ML Mediation: Developing an engine that performs real-time Named Entity Recognition (NER) to redact PII at the edge and utilizes Contingent Adaptive Feedback to modulate AI response complexity based on parent-defined age bands.
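
To make the redaction step concrete, the sketch below uses rule-based recognizers (regexes for phone numbers and email addresses) as a stand-in for the on-device NER model; function names such as `detect_pii` and `redact` are illustrative, not part of an existing Mirie API. The key property it demonstrates is that typed placeholders replace PII spans before a transcript ever leaves the device.

```python
import re
from typing import NamedTuple

class Span(NamedTuple):
    start: int
    end: int
    label: str

# Rule-based recognizers stand in for the on-device NER model;
# a production system would run a small fine-tuned model here.
PATTERNS = {
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def detect_pii(text: str) -> list[Span]:
    """Return all detected PII spans, ordered by position."""
    spans = []
    for label, pattern in PATTERNS.items():
        for m in pattern.finditer(text):
            spans.append(Span(m.start(), m.end(), label))
    return sorted(spans)

def redact(text: str) -> str:
    """Replace detected PII spans with typed placeholders so only
    the redacted transcript is forwarded to cloud models."""
    out, cursor = [], 0
    for span in detect_pii(text):
        out.append(text[cursor:span.start])
        out.append(f"[{span.label}]")
        cursor = span.end
    out.append(text[cursor:])
    return "".join(out)
```

In a real pipeline the same interface would sit behind a compact NER model (e.g. a distilled transformer) running at the edge, with the regex layer kept as a cheap first pass.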

Main objectives

  • ML-Driven Redaction & Safety: Build a local/edge ML module for real-time PII detection in child speech to ensure zero leakage to cloud models.
  • Generative Authoring Suite: Develop an LLM orchestration layer that allows brands to author character behaviors with built-in consistency and safety checks.
  • Contingent Logic Engine: Implement a feedback loop that programmatically tunes AI vocabulary and "No-Go Zones" based on real-time engagement and developmental metadata.
  • Vertical Isolation: Ensure 100% data siloing between brand partners using secure, logic-based routing.
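
One minimal way to express the Contingent Logic Engine is as a lookup from parent-defined age bands to machine-enforceable generation policy. The band boundaries, vocabulary tiers, word limits, and topic lists below are invented placeholders; in the real system they would come from the Parent Dashboard, not hard-coded values.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InteractionPolicy:
    vocab_tier: str           # word list used to filter/simplify responses
    max_words_per_turn: int   # caps dialogue complexity
    no_go_topics: frozenset   # parent-defined "No-Go Zones"

# Illustrative age bands only; real bands are parent-configured.
AGE_BANDS = {
    (4, 6):   InteractionPolicy("basic",    15, frozenset({"violence", "contact_info"})),
    (7, 9):   InteractionPolicy("standard", 30, frozenset({"violence"})),
    (10, 12): InteractionPolicy("extended", 50, frozenset()),
}

def policy_for_age(age: int) -> InteractionPolicy:
    """Resolve a child's developmental profile to a generation policy."""
    for (lo, hi), policy in AGE_BANDS.items():
        if lo <= age <= hi:
            return policy
    raise ValueError(f"age {age} outside supported bands")
```

The feedback loop described above would then adjust these policy fields over time from engagement metrics, rather than leaving them static per band.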

Scope of work

  • Algorithm Development: Research ML techniques for identifying sensitive content in low-resource (child voice) environments.
  • System Architecture: Build the three-vertical infrastructure (Brand/Parent/Child) using secure, immutable configurations.
  • HCI & Scaffolding: Apply Edith Law's research to ensure the AI provides "adaptive scaffolding," helping children learn through interaction rather than simply providing answers.
  • Infrastructure Scaling: Implement high-throughput NLP pipelines for real-time feedback, drawing on Jimmy Lin's methodologies for low-latency processing.
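
The vertical-isolation requirement can be sketched as logic-based routing in which every read and write is scoped to the authenticated tenant. The toy in-memory store below is only an illustration of the invariant (one brand can never address another brand's records); a real deployment would enforce the same rule at the API gateway and database layers.

```python
class TenantIsolationError(Exception):
    """Raised when a request would cross a tenant boundary."""

class SiloedStore:
    """Toy in-memory store: all access is keyed by tenant, so a
    record written under one brand is invisible to every other."""

    def __init__(self):
        self._silos: dict[str, dict[str, object]] = {}

    def put(self, tenant: str, key: str, value: object) -> None:
        self._silos.setdefault(tenant, {})[key] = value

    def get(self, tenant: str, key: str) -> object:
        silo = self._silos.get(tenant, {})
        if key not in silo:
            # Deliberately indistinguishable from "does not exist":
            # no cross-tenant probing is possible.
            raise TenantIsolationError(f"record not visible to tenant {tenant!r}")
        return silo[key]
```

The same scoping discipline applies across the Brand, Parent, and Child planes: each plane authenticates separately and can only route into its own silo.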

Deliverables

  • New protocols/processes
  • Report: a technical deep dive on ML safety architecture and compliance with the OPC Children's Privacy Code
  • A "Safe by Design" API and authoring specification for white-label integration
  • A functional demo of the three-vertical architecture with real-time ML mediation and adaptive feedback

Team meeting frequency

Bi-weekly.


Skills and training required

  • Machine Learning & NLP: Experience with Large Language Models (LLMs), fine-tuning (PEFT/LoRA), and Named Entity Recognition (NER) for privacy redaction.
  • Systems Architecture: Understanding of multi-tenancy, data siloing, and secure API design to manage the Brand, Parent, and Child verticals.
  • Human-Computer Interaction (HCI): Interest in "Safe by Design" principles and building "adaptive scaffolding" for children, inspired by the research of Edith Law and Leah Zhang-Kennedy.
  • Full-Stack Development: Proficiency in building secure, authenticated dashboards for parents and brand administrators.
  • Ethics in AI: A strong interest in navigating the regulatory landscape of the OPC Children’s Privacy Code and COPPA.

Resources required 

  • Computing Resources: Cloud GPU credits (AWS/GCP/Azure) for model inference and potential fine-tuning of small language models.
  • Specialized Software: Access to developer environments (GitHub, VS Code), and conversational AI APIs (e.g., Gemini API, LangChain).
  • Research Access: Documentation regarding current and emerging privacy regulations (OPC, PIPEDA, CCPA) provided by the Mirie team.
  • Testing Sandbox: A secure staging environment to validate the "Vertical Isolation" and "Adaptive Feedback" logic.

NDA or a commercialization agreement for this project?

Yes