🧠 Neuroacoustic MVP — Technical Scope
Goal: Build a system that generates personalised, neuroscience-informed sound experiences (Focus / Calm / Sleep / High) using AI-generated music and entrainment DSP, tuned via user inputs or wearables.
Duration: 4–6 weeks
Stack: Node.js (API) · React (frontend) · Python (DSP) · ElevenLabs Music API · Docker · AWS/Supabase
1️⃣ MVP OBJECTIVES
- 🎵 AI-generated base music for each desired state using ElevenLabs Music API.
- 🧬 Neuroscience entrainment layer (binaural or isochronic beats) blended over music.
- ⚙️ Real-time adjustment of entrainment parameters (frequency, intensity).
- 📊 User session tracking (selected state, time, completion).
- 💡 Simple adaptive loop using HRV or self-feedback.
- 💻 Modern web interface (React) to select a state and play generated sound.
2️⃣ USER FLOW (MVP version)
- User opens web app → selects mental state (Focus / Calm / Sleep / High).
- Frontend calls Node.js API → requests base music generation from ElevenLabs.
- Node receives music → sends to Python DSP microservice to add neuro layer.
- Node returns final track URL (S3/CDN or data URI) → React plays audio.
- User gives optional feedback (e.g. “felt calmer”, “too intense”) → stored to DB.
- Optimiser adjusts parameters for next session.
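To make the hand-off concrete, here is a minimal sketch of the generate → render → return pipeline. It is written in Python for illustration (the production orchestrator is Node), generate_music() is a stub standing in for the ElevenLabs Music API call, and the DSP host and JSON field names are assumptions:

```python
import requests

DSP_URL = "http://dsp:8000/render"  # placeholder host for the Python DSP service

def generate_music(state: str, duration_sec: int) -> str:
    """Stub for the ElevenLabs Music API call; returns a URL to the base track."""
    raise NotImplementedError

def create_session(state: str, duration_sec: int, diff_hz: float) -> str:
    base_url = generate_music(state, duration_sec)
    resp = requests.post(DSP_URL, json={
        "musicUrl": base_url,          # assumed field name for the base track
        "diffHz": diff_hz,
        "mixDb": -18,
        "mode": "binaural",
    }, timeout=120)
    resp.raise_for_status()
    return resp.json()["trackUrl"]     # final mixed track handed to the player
```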
3️⃣ CORE FEATURES (Phase 1 MVP)
| Feature | Description | Owner |
| --- | --- | --- |
| State Selector UI | Simple React interface (4 buttons: Focus, Calm, Sleep, High) | Frontend |
| AI Music Generation | Generate music from ElevenLabs Music API via Node | Backend |
| DSP Processor (Entrainment) | Python microservice adds binaural or isochronic beats based on preset frequency | DSP |
| Adaptive Parameters | Backend logic adjusts diffHz & mix level based on state or user feedback | Node |
| Session Storage | Save user ID, state, timestamp, settings, feedback | Backend (Supabase) |
| Web Player | HTML5/React audio player with progress and loop | Frontend |
| Admin Dashboard (optional) | Simple session list (user ID, state, feedback) | Backend |
4️⃣ TECH STACK
🟢 Frontend (React)
- React + Vite + Tailwind
- Web Audio API for volume control and optional local modulation
- Socket.io-client for live updates (future)
- Deployed on Vercel / Netlify
🟣 Backend (Node.js)
- Express or Fastify REST API
- Routes:
  - POST /session → generate and return track URL
  - POST /feedback → log user feedback
- Integrations:
- ElevenLabs Music API
- DSP microservice (HTTP)
- Supabase (DB + auth)
- Deployed on AWS ECS or Railway
🔵 DSP Microservice (Python)
- FastAPI + Pydub/NumPy
- Endpoint: /render
  - Input: base music URL, diffHz, mixDb, mode
  - Output: mixed track URL or data URI
- Default modes:
- Focus: 10 Hz (alpha)
- Calm: 7 Hz (theta)
- Sleep: 4 Hz (delta)
- High: 16 Hz (beta)
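As a concrete reference for /render, below is a minimal NumPy sketch of the two modes. The 200 Hz carrier, the PRESETS map, and all function names are illustrative assumptions; decoding/encoding the base track (e.g. with Pydub) and the FastAPI wrapper are omitted.

```python
import numpy as np

PRESETS = {"focus": 10.0, "calm": 7.0, "sleep": 4.0, "high": 16.0}  # diffHz per state

def entrainment_layer(n_samples: int, sr: int, diff_hz: float, mode: str = "binaural",
                      carrier_hz: float = 200.0) -> np.ndarray:
    """Return a stereo (n_samples, 2) beat signal in [-1, 1]."""
    t = np.arange(n_samples) / sr
    if mode == "binaural":
        # Slightly detuned sine in each ear; the brain perceives the difference tone.
        left = np.sin(2 * np.pi * carrier_hz * t)
        right = np.sin(2 * np.pi * (carrier_hz + diff_hz) * t)
    else:
        # Isochronic: one tone pulsed on/off at diff_hz, identical in both ears.
        gate = 0.5 * (1 + np.sign(np.sin(2 * np.pi * diff_hz * t)))
        left = right = np.sin(2 * np.pi * carrier_hz * t) * gate
    return np.stack([left, right], axis=1)

def mix(base: np.ndarray, sr: int, state: str, mix_db: float = -18.0,
        mode: str = "binaural") -> np.ndarray:
    """Blend the beat under the music at mix_db relative to full scale."""
    gain = 10 ** (mix_db / 20)  # e.g. -18 dB -> ~0.126 linear gain
    beat = entrainment_layer(len(base), sr, PRESETS[state], mode) * gain
    return np.clip(base + beat, -1.0, 1.0)
```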
🟠 Database
- Supabase (PostgreSQL + API)
- Tables:
  - users(id, name, preferences)
  - sessions(id, user_id, state, params, result_url, timestamp)
  - feedback(session_id, rating, notes)
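For illustration, logging a session and its feedback with the supabase-py client might look like the sketch below; the environment variable names are placeholders, and the timestamp is assumed to be a column default.

```python
import os
from supabase import create_client

supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_KEY"])

def log_session(user_id: str, state: str, params: dict, result_url: str) -> str:
    """Insert a row into sessions and return the generated session id."""
    row = supabase.table("sessions").insert({
        "user_id": user_id, "state": state,
        "params": params, "result_url": result_url,
    }).execute()
    return row.data[0]["id"]

def log_feedback(session_id: str, rating: int, notes: str = "") -> None:
    supabase.table("feedback").insert({
        "session_id": session_id, "rating": rating, "notes": notes,
    }).execute()
```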
⚫ Optional Analytics
- Simple metrics: total sessions, average duration, state popularity
- Logged via Supabase or Google Analytics
5️⃣ API CONTRACTS (simplified)
🎵 POST /session
Request:
json{ "userId": "123", "state": "focus", "durationSec": 120 }
Response:
json{ "sessionId": "abc123", "trackUrl": "https://cdn.anomate.ai/tracks/focus_abc123.wav", "parameters": { "mode": "binaural", "diffHz": 10, "mixDb": -18 } }
💬 POST /feedback
Request:
json{ "sessionId": "abc123", "rating": 4, "comment": "Felt very focused after 3 min" }
Response:
json{ "ok": true }
6️⃣ TIMELINE (4–6 weeks)
| Week | Deliverables | Owner |
| --- | --- | --- |
| 1 | Set up repo, Docker, ElevenLabs API, baseline React UI | Dev Lead |
| 2 | Implement /session route + DSP service (static presets) | Backend + DSP |
| 3 | Integrate audio playback + basic feedback form | Frontend |
| 4 | Deploy on staging (AWS/Vercel) + DB connection | DevOps |
| 5 | Add adaptive parameter tuning (simple rule-based) | Backend |
| 6 | QA, user testing, polish UX, deploy MVP | All |
7️⃣ MVP OUTPUT EXAMPLE
User selects “Focus”
→ System generates music with 10 Hz binaural beat at -18 dB mix
→ Plays in browser
→ After session, user reports “good focus”
→ Optimiser slightly increases duration next time
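The rule-based adjustment described above could be as simple as the following sketch; the thresholds, step sizes, and bounds are illustrative assumptions (the 30-minute cap mirrors the session timeout in the constraints below).

```python
def adjust(params: dict, rating: int) -> dict:
    """rating: 1-5 user feedback; returns settings for the next session."""
    nxt = dict(params)
    if rating >= 4:
        # It worked: go slightly longer, capped at 30 minutes.
        nxt["durationSec"] = min(params["durationSec"] + 60, 30 * 60)
    elif rating <= 2:
        # Too intense or ineffective: back off gently.
        nxt["mixDb"] = max(params["mixDb"] - 3, -30)   # quieter entrainment layer
        nxt["diffHz"] = max(params["diffHz"] - 1, 1)   # gentler beat frequency
    return nxt
```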
8️⃣ NEXT PHASE (POST-MVP)
Once MVP is validated:
- Add WHOOP / Muse / Apple Health integrations for HRV, EEG.
- Personalise entrainment dynamically (Bayesian optimiser).
- Add user profile AI that remembers what worked best.
- Mobile app wrapper (React Native).
- Paid subscriptions and session library.
⚠️ 9️⃣ Notes / Constraints
- ElevenLabs currently outputs full tracks; ensure the API plan supports the desired call rate.
- Keep the entrainment layer at or below -18 dB relative to the music to avoid auditory fatigue.
- Include disclaimer: “For relaxation & focus purposes only, not medical treatment.”
- Implement volume guard and session timeout (max 30 min).
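Hypothetical server-side guards for the last two constraints (names and defaults are assumptions):

```python
MAX_SESSION_SEC = 30 * 60  # session timeout: 30 minutes
MAX_MIX_DB = -18.0         # never louder than this relative to the music bed

def sanitize(params: dict) -> dict:
    """Clamp client-supplied parameters before rendering."""
    safe = dict(params)
    safe["mixDb"] = min(params.get("mixDb", MAX_MIX_DB), MAX_MIX_DB)
    safe["durationSec"] = min(params.get("durationSec", 120), MAX_SESSION_SEC)
    return safe
```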
✅ 10️⃣ Deliverable Summary
| Deliverable | Description |
| --- | --- |
| 🎛️ Web app (React) | Select state, play session, give feedback |
| 🔗 Node.js API | Handle sessions, call ElevenLabs, integrate DSP |
| 🧠 Python DSP | Add binaural/isochronic modulation |
| 💾 Database | Store sessions & feedback |
| 🌐 Hosted demo | Accessible MVP URL |
| 📘 Documentation | Setup + API spec + state parameter table |