diff --git a/PROMPT.md b/PROMPT.md
index be9115b1..685d7fe4 100644
--- a/PROMPT.md
+++ b/PROMPT.md
@@ -328,18 +328,135 @@ When documenting features, verify against actual source:
---
-## Diagram Guidelines
+## NO ASCII DIAGRAMS — MANDATORY
-For SVG diagrams in `src/assets/`:
+**NEVER use ASCII art diagrams in documentation. ALL diagrams must be SVG.**
+
+### Prohibited ASCII Patterns
```
-- Transparent background
-- Dual-theme support (light/dark CSS)
-- Width: 1040-1400px
-- Font: Arial, sans-serif
-- Colors: Blue #4A90E2, Orange #F5A623, Purple #BD10E0, Green #7ED321
+❌ NEVER USE:
+┌─────────┐ ╔═══════╗ +-------+
+│ Box │ ║ Box ║ | Box |
+└─────────┘ ╚═══════╝ +-------+
+
+├── folder/ │ → ▶ ──►
+│ └── file ▼ ← ◀ ◄──
```
+### What to Use Instead
+
+| Instead of... | Use... |
+|---------------|--------|
+| ASCII box diagrams | SVG diagrams in `src/assets/` |
+| ASCII flow charts | SVG with arrows and boxes |
+| ASCII directory trees | Markdown tables or SVG |
+| ASCII tables with borders | Markdown tables |
+| ASCII progress indicators | SVG or text description |
+
+### Directory Trees → Tables
+
+```markdown
+❌ WRONG:
+mybot.gbai/
+├── mybot.gbdialog/
+│ └── start.bas
+└── mybot.gbot/
+ └── config.csv
+
+✅ CORRECT:
+| Path | Description |
+|------|-------------|
+| `mybot.gbai/` | Root package |
+| `mybot.gbdialog/` | BASIC scripts |
+| `mybot.gbdialog/start.bas` | Entry point |
+| `mybot.gbot/` | Configuration |
+| `mybot.gbot/config.csv` | Bot settings |
+```
+
+### Rationale
+
+- ASCII breaks in different fonts/editors
+- Not accessible for screen readers
+- Cannot adapt to dark/light themes
+- Looks unprofessional in modern docs
+- SVGs scale perfectly at any size
+
+---
+
+## SVG Diagram Guidelines — Theme Transparency
+
+All SVG diagrams MUST be theme-agnostic and work with light/dark modes.
+
+### Required Structure
+
+Every SVG must include:
+
+1. **CSS classes for text** (not hardcoded colors)
+2. **Dark mode media query** in the SVG's `<style>` block
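+
+A minimal skeleton illustrating both requirements (class names and colors here are examples, not required values):
+
+```xml
+<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 800 400">
+  <style>
+    .label { font-family: Arial, sans-serif; fill: #1E293B; }
+    @media (prefers-color-scheme: dark) {
+      .label { fill: #E2E8F0; }
+    }
+  </style>
+  <text x="20" y="40" class="label">Themed text</text>
+</svg>
+```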
+
+### Standard Gradients (use these IDs)
+
+| Gradient ID | Colors | Purpose |
+|-------------|--------|---------|
+| `primaryGrad` | `#6366F1` → `#8B5CF6` | Primary/purple elements |
+| `cyanGrad` | `#06B6D4` → `#0EA5E9` | Cyan/info elements |
+| `greenGrad` | `#10B981` → `#34D399` | Success/green elements |
+| `orangeGrad` | `#F59E0B` → `#FBBF24` | Warning/orange elements |
+
+### Background Rules
+
+- Use `fill="#FAFBFC"` for main background (not pure white)
+- Add dot pattern overlay: `fill="url(#dots)"` with `opacity="0.08"`
+- Cards use `fill="#FFFFFF"` with `stroke="#E2E8F0"` for definition
+- Use `filter="url(#cardShadow)"` for card depth
+
+### DO NOT
+
+- ❌ Hardcode text colors without CSS class
+- ❌ Use pure black (`#000000`) for text
+- ❌ Forget the `@media (prefers-color-scheme: dark)` block
+- ❌ Create new gradient IDs when standard ones exist
+- ❌ Use ASCII art diagrams when SVG is appropriate
+
+### DO
+
+- ✅ Use CSS classes for ALL text elements
+- ✅ Include dark mode media query in every SVG
+- ✅ Use standard gradient IDs consistently
+- ✅ Test SVGs in both light and dark browser modes
+- ✅ Convert ASCII diagrams to proper SVGs
+- ✅ Place SVGs in `src/assets/chapter-XX/` directories
+
+### Dimensions
+
+- Width: 600-1200px (responsive)
+- Use `style="max-width: 100%; height: auto;"` when embedding
+
+### Reference
+
+See `src/16-appendix-docs-style/svg.md` for complete guidelines.
+
---
## Translation Workflow
diff --git a/README.md b/README.md
index 2698e633..4e531668 100644
--- a/README.md
+++ b/README.md
@@ -10,26 +10,7 @@ A strongly-typed, self-hosted conversational platform focused on convention over
## 🏗️ Architecture
-```
-┌─────────────────────────────────────────────────────────────────────────┐
-│ General Bots Platform │
-├─────────────────────────────────────────────────────────────────────────┤
-│ │
-│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
-│ │ botapp │ │ botui │ │ botserver │ │
-│ │ (Tauri) │───▶│ (Pure Web) │───▶│ (API) │ │
-│ │ Desktop │ │ HTMX/HTML │ │ Rust │ │
-│ └─────────────┘ └─────────────┘ └──────┬──────┘ │
-│ │ │
-│ ┌───────────┴───────────┐ │
-│ │ │ │
-│ ┌─────▼─────┐ ┌──────▼─────┐ │
-│ │ botlib │ │ botbook │ │
-│ │ (Shared) │ │ (Docs) │ │
-│ └───────────┘ └────────────┘ │
-│ │
-└─────────────────────────────────────────────────────────────────────────┘
-```
+
---
diff --git a/src/04-gbui/how-to/add-kb-documents.md b/src/04-gbui/how-to/add-kb-documents.md
index 6b768791..d78b9d80 100644
--- a/src/04-gbui/how-to/add-kb-documents.md
+++ b/src/04-gbui/how-to/add-kb-documents.md
@@ -6,24 +6,7 @@
---
-```
-┌─────────────────────────────────────────────────────────────────────────┐
-│ │
-│ ┌─────────────────────────────────────────────────────────────────┐ │
-│ │ │ │
-│ │ 📚 ADD DOCUMENTS TO KNOWLEDGE BASE │ │
-│ │ │ │
-│ │ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ │ │
-│ │ │ Step │───▶│ Step │───▶│ Step │───▶│ Step │ │ │
-│ │ │ 1 │ │ 2 │ │ 3 │ │ 4 │ │ │
-│ │ │Prepare │ │ Upload │ │ Index │ │ Test │ │ │
-│ │ │ Docs │ │ Files │ │ KB │ │ KB │ │ │
-│ │ └─────────┘ └─────────┘ └─────────┘ └─────────┘ │ │
-│ │ │ │
-│ └─────────────────────────────────────────────────────────────────┘ │
-│ │
-└─────────────────────────────────────────────────────────────────────────┘
-```
+
---
@@ -57,36 +40,7 @@ Before you begin, make sure you have:
A **Knowledge Base (KB)** is a collection of documents that your bot uses to answer questions. When a user asks something, the bot searches through these documents to find relevant information.
-```
-┌─────────────────────────────────────────────────────────────────────────┐
-│ HOW KNOWLEDGE BASE WORKS │
-├─────────────────────────────────────────────────────────────────────────┤
-│ │
-│ User asks: "What is our refund policy?" │
-│ │ │
-│ ▼ │
-│ ┌─────────────────────────────────────────────────────────────┐ │
-│ │ 🔍 Semantic Search │ │
-│ │ Searches through all documents in the knowledge base │ │
-│ └────────────────────────┬────────────────────────────────────┘ │
-│ │ │
-│ ┌───────────────────┼───────────────────┐ │
-│ ▼ ▼ ▼ │
-│ ┌─────────┐ ┌─────────┐ ┌─────────┐ │
-│ │policies │ │ FAQ │ │ terms │ │
-│ │ .pdf │ │ .docx │ │ .md │ │
-│ └────┬────┘ └─────────┘ └─────────┘ │
-│ │ │
-│ ▼ Found match! │
-│ ┌─────────────────────────────────────────────────────────────┐ │
-│ │ "Refunds are available within 30 days of purchase..." │ │
-│ └─────────────────────────────────────────────────────────────┘ │
-│ │ │
-│ ▼ │
-│ Bot answers with context from the document │
-│ │
-└─────────────────────────────────────────────────────────────────────────┘
-```
+
---
diff --git a/src/04-gbui/how-to/connect-whatsapp.md b/src/04-gbui/how-to/connect-whatsapp.md
index 6fe98d9b..d27ce68f 100644
--- a/src/04-gbui/how-to/connect-whatsapp.md
+++ b/src/04-gbui/how-to/connect-whatsapp.md
@@ -6,24 +6,7 @@
---
-```
-┌─────────────────────────────────────────────────────────────────────────┐
-│ │
-│ ┌─────────────────────────────────────────────────────────────────┐ │
-│ │ │ │
-│ │ 📱 CONNECT WHATSAPP TO YOUR BOT │ │
-│ │ │ │
-│ │ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ │ │
-│ │ │ Step │───▶│ Step │───▶│ Step │───▶│ Step │ │ │
-│ │ │ 1 │ │ 2 │ │ 3 │ │ 4 │ │ │
-│ │ │ Meta │ │ Create │ │Configure│ │ Test │ │ │
-│ │ │ Account │ │ App │ │ Bot │ │ Channel │ │ │
-│ │ └─────────┘ └─────────┘ └─────────┘ └─────────┘ │ │
-│ │ │ │
-│ └─────────────────────────────────────────────────────────────────┘ │
-│ │
-└─────────────────────────────────────────────────────────────────────────┘
-```
+
---
@@ -56,38 +39,7 @@ Before you begin, make sure you have:
## Understanding WhatsApp Integration
-```
-┌─────────────────────────────────────────────────────────────────────────┐
-│ HOW WHATSAPP INTEGRATION WORKS │
-├─────────────────────────────────────────────────────────────────────────┤
-│ │
-│ User sends message on WhatsApp │
-│ │ │
-│ ▼ │
-│ ┌─────────────────────────────────────────────────────────────┐ │
-│ │ WhatsApp Cloud API │ │
-│ │ (Meta's servers receive message) │ │
-│ └────────────────────────┬────────────────────────────────────┘ │
-│ │ │
-│ │ Webhook │
-│ ▼ │
-│ ┌─────────────────────────────────────────────────────────────┐ │
-│ │ General Bots Server │ │
-│ │ (Your bot processes the message) │ │
-│ └────────────────────────┬────────────────────────────────────┘ │
-│ │ │
-│ │ API Call │
-│ ▼ │
-│ ┌─────────────────────────────────────────────────────────────┐ │
-│ │ WhatsApp Cloud API │ │
-│ │ (Sends reply to user) │ │
-│ └────────────────────────┬────────────────────────────────────┘ │
-│ │ │
-│ ▼ │
-│ User receives bot response on WhatsApp │
-│ │
-└─────────────────────────────────────────────────────────────────────────┘
-```
+
---
diff --git a/src/13-devices/README.md b/src/13-devices/README.md
index 4153b4ac..ac7ec4f5 100644
--- a/src/13-devices/README.md
+++ b/src/13-devices/README.md
@@ -13,26 +13,7 @@ General Bots can run on any device, from mobile phones to minimal embedded hardw
- **Education** - Classroom assistants, lab equipment interfaces
- **Healthcare** - Patient check-in, medication reminders
-```
-┌─────────────────────────────────────────────────────────────────────────────┐
-│ Embedded GB Architecture │
-├─────────────────────────────────────────────────────────────────────────────┤
-│ │
-│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
-│ │ Display │ │ botserver │ │ llama.cpp │ │
-│ │ LCD/OLED │────▶│ (Rust) │────▶│ (Local) │ │
-│ │ TFT/HDMI │ │ Port 8088 │ │ Port 8080 │ │
-│ └──────────────┘ └──────────────┘ └──────────────┘ │
-│ │ │ │ │
-│ │ │ │ │
-│ ┌──────▼──────┐ ┌──────▼──────┐ ┌──────▼──────┐ │
-│ │ Keyboard │ │ SQLite │ │ TinyLlama │ │
-│ │ Buttons │ │ (Data) │ │ GGUF │ │
-│ │ Touch │ │ │ │ (~700MB) │ │
-│ └─────────────┘ └─────────────┘ └─────────────┘ │
-│ │
-└─────────────────────────────────────────────────────────────────────────────┘
-```
+
## What's in This Chapter
@@ -43,6 +24,10 @@ General Bots can run on any device, from mobile phones to minimal embedded hardw
- [Supported Hardware](./hardware.md) - SBCs, displays, and peripherals
- [Quick Start](./quick-start.md) - Deploy in 5 minutes
- [Local LLM](./local-llm.md) - Offline AI with llama.cpp
+- [Buying Guide](./buying-guide.md) - Choose your first SBC
+
+### Robotics
+- [Humanoid Robots](./humanoid.md) - Build intelligent humanoids with CV, LLM, and movement control
### Deployment Options
@@ -52,3 +37,4 @@ General Bots can run on any device, from mobile phones to minimal embedded hardw
| **Raspberry Pi** | IoT, displays, terminals | 1GB+ RAM |
| **Orange Pi** | Full offline AI | 4GB+ RAM for LLM |
| **Industrial** | Factory, retail, healthcare | Any ARM/x86 SBC |
+| **Humanoid Robots** | Service, reception, research | Servo kit + compute board |
diff --git a/src/13-devices/buying-guide.md b/src/13-devices/buying-guide.md
index 97740b02..2b7b286a 100644
--- a/src/13-devices/buying-guide.md
+++ b/src/13-devices/buying-guide.md
@@ -16,24 +16,7 @@ A Single Board Computer (SBC) is a complete computer on a single circuit board.
### Decision Flowchart
-```
- ┌─────────────────────────┐
- │ What's your budget? │
- └───────────┬─────────────┘
- │
- ┌───────────────────┼───────────────────┐
- ▼ ▼ ▼
- Under $30 $30-80 $80-150
- │ │ │
- ▼ ▼ ▼
- ┌───────────────┐ ┌───────────────┐ ┌───────────────┐
- │ Orange Pi │ │ Raspberry Pi │ │ Orange Pi 5 │
- │ Zero 3 │ │ 4 / 5 │ │ (with NPU) │
- │ │ │ │ │ │
- │ Basic display │ │ Full desktop │ │ Local AI/LLM │
- │ Simple tasks │ │ Most projects │ │ Edge AI │
- └───────────────┘ └───────────────┘ └───────────────┘
-```
+
### Recommended Starter Kits
@@ -107,65 +90,61 @@ This board has a 6 TOPS NPU for accelerated AI inference!
Perfect first project - monitor and log temperature!
-```
-□ Orange Pi Zero 3 (1GB) $20
-□ 16GB microSD card $5
-□ 5V 2A power supply $5
-□ DHT22 temperature sensor $6
-□ 0.96" OLED display (I2C) $6
-□ Jumper wires (female-female) $3
- ─────────────
- Total: $45
-```
+| Item | Price |
+|------|-------|
+| Orange Pi Zero 3 (1GB) | $20 |
+| 16GB microSD card | $5 |
+| 5V 2A power supply | $5 |
+| DHT22 temperature sensor | $6 |
+| 0.96" OLED display (I2C) | $6 |
+| Jumper wires (female-female) | $3 |
+| **Total** | **$45** |
### Smart Doorbell ($70)
AI-powered doorbell with notifications!
-```
-□ Raspberry Pi Zero 2 W $15
-□ Pi Camera Module $25
-□ Push button $1
-□ Piezo buzzer $2
-□ LED (with resistor) $1
-□ 16GB microSD card $5
-□ 5V 2.5A power supply $8
-□ Case $5
-□ Jumper wires $3
- ─────────────
- Total: $70
-```
+| Item | Price |
+|------|-------|
+| Raspberry Pi Zero 2 W | $15 |
+| Pi Camera Module | $25 |
+| Push button | $1 |
+| Piezo buzzer | $2 |
+| LED (with resistor) | $1 |
+| 16GB microSD card | $5 |
+| 5V 2.5A power supply | $8 |
+| Case | $5 |
+| Jumper wires | $3 |
+| **Total** | **$70** |
### Offline AI Assistant ($150)
Run AI completely offline - no internet needed!
-```
-□ Orange Pi 5 (8GB RAM) $89
-□ 128GB NVMe SSD $20
-□ 12V 3A power supply $12
-□ 7" HDMI touchscreen $45
-□ USB microphone $10
-□ Case with fan $15
-□ Jumper wires $3
- ─────────────
- Total: ~$195
-```
+| Item | Price |
+|------|-------|
+| Orange Pi 5 (8GB RAM) | $89 |
+| 128GB NVMe SSD | $20 |
+| 12V 3A power supply | $12 |
+| 7" HDMI touchscreen | $45 |
+| USB microphone | $10 |
+| Case with fan | $15 |
+| Jumper wires | $3 |
+| **Total** | **~$195** |
### Voice-Controlled Lights ($55)
Control your lights by talking!
-```
-□ Raspberry Pi 4 (2GB) $35
-□ 4-channel relay module $6
-□ USB microphone $8
-□ 16GB microSD card $5
-□ 5V 3A power supply $8
-□ Jumper wires $3
- ─────────────
- Total: ~$65
-```
+| Item | Price |
+|------|-------|
+| Raspberry Pi 4 (2GB) | $35 |
+| 4-channel relay module | $6 |
+| USB microphone | $8 |
+| 16GB microSD card | $5 |
+| 5V 3A power supply | $8 |
+| Jumper wires | $3 |
+| **Total** | **~$65** |
## Where to Buy (By Region)
diff --git a/src/13-devices/hardware.md b/src/13-devices/hardware.md
index f8d6d416..5e2e3401 100644
--- a/src/13-devices/hardware.md
+++ b/src/13-devices/hardware.md
@@ -39,22 +39,7 @@
The Orange Pi 5 with RK3588S is ideal for embedded LLM:
-```
-┌─────────────────────────────────────────────────────────────┐
-│ Orange Pi 5 - Best for Offline AI │
-├─────────────────────────────────────────────────────────────┤
-│ CPU: Rockchip RK3588S (4x A76 + 4x A55) │
-│ NPU: 6 TOPS (Neural Processing Unit) │
-│ GPU: Mali-G610 MP4 │
-│ RAM: 4GB / 8GB / 16GB LPDDR4X │
-│ Storage: M.2 NVMe + eMMC + microSD │
-│ │
-│ LLM Performance: │
-│ ├─ TinyLlama 1.1B Q4: ~8-12 tokens/sec │
-│ ├─ Phi-2 2.7B Q4: ~4-6 tokens/sec │
-│ └─ With NPU (rkllm): ~20-30 tokens/sec │
-└─────────────────────────────────────────────────────────────┘
-```
+
## Displays
@@ -68,13 +53,7 @@ For text-only interfaces:
| HD44780 20x4 | 20 chars × 4 lines | I2C/GPIO | More context |
| LCD2004 | 20 chars × 4 lines | I2C | Industrial |
-**Example output on 16x2:**
-```
-┌────────────────┐
-│> How can I help│
-│< Processing... │
-└────────────────┘
-```
+**Example output on 16x2:** Simple text display showing user prompt and bot status.
### OLED Displays
@@ -138,17 +117,7 @@ For low-power, readable in sunlight:
### Buttons & GPIO
-```
-┌─────────────────────────────────────────────┐
-│ Simple 4-Button Interface │
-├─────────────────────────────────────────────┤
-│ │
-│ [◄ PREV] [▲ UP] [▼ DOWN] [► SELECT] │
-│ │
-│ GPIO 17 GPIO 27 GPIO 22 GPIO 23 │
-│ │
-└─────────────────────────────────────────────┘
-```
+
## Enclosures
diff --git a/src/13-devices/humanoid.md b/src/13-devices/humanoid.md
new file mode 100644
index 00000000..0f48bda4
--- /dev/null
+++ b/src/13-devices/humanoid.md
@@ -0,0 +1,654 @@
+# Humanoid Robotics with General Bots
+
+Build custom humanoid robots powered by General Bots, using computer vision from botmodels, LLM intelligence from botserver, and BASIC keywords for movement control.
+
+## Overview
+
+General Bots transforms humanoid robot kits into intelligent conversational assistants capable of:
+
+- **Natural conversation** via LLM integration
+- **Visual recognition** using botmodels CV pipelines
+- **Expressive movement** through BASIC keyword commands
+- **Autonomous behavior** with scheduled tasks and triggers
+
+
+
+## Supported Robot Kits
+
+### Consumer Kits
+
+| Kit | DOF | Height | Price | Best For |
+|-----|-----|--------|-------|----------|
+| **Unitree G1** | 23 | 127cm | $16,000 | Research, commercial |
+| **Unitree H1** | 19 | 180cm | $90,000 | Industrial, research |
+| **UBTECH Walker S** | 41 | 145cm | Contact | Enterprise service |
+| **Fourier GR-1** | 40 | 165cm | $100,000+ | Research |
+| **Agility Digit** | 16 | 175cm | Lease | Warehouse, logistics |
+
+### DIY/Maker Kits
+
+| Kit | DOF | Height | Price | Best For |
+|-----|-----|--------|-------|----------|
+| **Poppy Humanoid** | 25 | 83cm | $8,000 | Education, research |
+| **InMoov** | 22+ | 170cm | $1,500+ | Makers, open source |
+| **Reachy** | 14 | Torso | $15,000 | HRI research |
+| **NimbRo-OP2X** | 20 | 95cm | $30,000 | RoboCup, research |
+| **ROBOTIS OP3** | 20 | 51cm | $11,000 | Education, competition |
+
+### Affordable Entry Points
+
+| Kit | DOF | Height | Price | Best For |
+|-----|-----|--------|-------|----------|
+| **ROBOTIS MINI** | 16 | 27cm | $500 | Learning, hobby |
+| **Lynxmotion SES-V2** | 17 | 40cm | $800 | Education |
+| **Hiwonder TonyPi** | 16 | 35cm | $400 | Raspberry Pi projects |
+| **XGO-Mini** | 12 | 20cm | $350 | Quadruped + arm |
+
+## Architecture
+
+### System Components
+
+| Component | Role | Technology |
+|-----------|------|------------|
+| **botserver** | Brain - LLM, conversation, decisions | Rust, Port 8088 |
+| **botmodels** | Eyes - CV, face recognition, object detection | Python, ONNX |
+| **BASIC Runtime** | Motor control - movement sequences | Keywords → servo commands |
+| **Hardware Bridge** | Translation layer | GPIO, Serial, CAN bus |
+
+### Communication Flow
+
+| Step | From | To | Protocol |
+|------|------|----|----------|
+| 1 | Microphone | botserver | Audio stream / STT |
+| 2 | botserver | LLM | HTTP API |
+| 3 | LLM response | BASIC interpreter | Internal |
+| 4 | BASIC keywords | Hardware bridge | Serial/GPIO |
+| 5 | Hardware bridge | Servos | PWM/CAN |
+| 6 | Camera | botmodels | Video stream |
+| 7 | botmodels | botserver | Detection events |
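+
+Step 4 is where BASIC keywords become concrete commands. A hedged sketch of that translation (the command text mirrors the keywords documented below; the function itself is illustrative, not the actual bridge API):
+
+```python
+def parse_servo_move(line: str) -> tuple[int, int, int]:
+    """Parse a BASIC line like 'SERVO MOVE 0, 90, 50' into (channel, angle, speed)."""
+    prefix = "SERVO MOVE "
+    if not line.startswith(prefix):
+        raise ValueError("not a SERVO MOVE command")
+    channel, angle, speed = (int(part.strip()) for part in line[len(prefix):].split(","))
+    return channel, angle, speed
+```
+
+A serial or CAN bridge would then frame the resulting tuple as bytes for the servo controller.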
+
+## Hardware Setup
+
+### Computing Options
+
+| Option | Specs | Use Case |
+|--------|-------|----------|
+| **Jetson Orin Nano** | 8GB, 40 TOPS | Full CV + local LLM |
+| **Raspberry Pi 5** | 8GB, no NPU | Cloud LLM, basic CV |
+| **Orange Pi 5** | 8GB, 6 TOPS NPU | Local LLM, basic CV |
+| **Intel NUC** | 32GB, x86 | Maximum performance |
+
+### Servo Controllers
+
+| Controller | Channels | Interface | Best For |
+|------------|----------|-----------|----------|
+| **PCA9685** | 16 PWM | I2C | Small robots |
+| **Pololu Maestro** | 6-24 | USB/Serial | Hobby servos |
+| **Dynamixel U2D2** | 254 | USB/TTL/RS485 | Dynamixel servos |
+| **ODrive** | 2 BLDC | USB/CAN | High-torque joints |
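+
+Whichever controller you choose, a servo angle ultimately becomes a pulse width. A minimal conversion sketch for a PCA9685-class PWM chip, assuming a 50 Hz frame, a 500-2500 µs pulse range, and 12-bit (4096-tick) resolution; check your servos' datasheet for the real pulse range:
+
+```python
+def angle_to_ticks(angle: float, freq_hz: int = 50,
+                   min_us: int = 500, max_us: int = 2500) -> int:
+    """Map a 0-180 degree servo angle to a 12-bit PWM tick count."""
+    if not 0 <= angle <= 180:
+        raise ValueError("angle out of range")
+    pulse_us = min_us + (angle / 180.0) * (max_us - min_us)
+    period_us = 1_000_000 / freq_hz  # 20,000 us per frame at 50 Hz
+    return round(pulse_us * 4096 / period_us)
+```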
+
+### Sensor Integration
+
+| Sensor | Purpose | Interface |
+|--------|---------|-----------|
+| USB Camera | Vision, face detection | USB/CSI |
+| IMU (MPU6050) | Balance, orientation | I2C |
+| ToF (VL53L0X) | Distance sensing | I2C |
+| Microphone Array | Voice input, direction | USB/I2S |
+| Force Sensors | Grip feedback | ADC |
+
+## BASIC Keywords for Robotics
+
+### Servo Control
+
+```basic
+' Initialize servo controller
+SERVO INIT "PCA9685", address: 0x40
+
+' Move single servo
+SERVO MOVE channel, angle, speed
+SERVO MOVE 0, 90, 50 ' Channel 0 to 90° at speed 50
+
+' Move multiple servos simultaneously
+SERVO SYNC moves
+SERVO SYNC [[0, 90], [1, 45], [2, 135]]
+
+' Read servo position
+position = SERVO READ channel
+```
+
+### Predefined Movements
+
+```basic
+' Humanoid poses
+ROBOT POSE "stand"
+ROBOT POSE "sit"
+ROBOT POSE "wave"
+ROBOT POSE "bow"
+ROBOT POSE "point", direction: "left"
+
+' Walking
+ROBOT WALK steps, direction
+ROBOT WALK 5, "forward"
+ROBOT WALK 3, "backward"
+ROBOT TURN degrees
+ROBOT TURN 90 ' Turn right 90°
+ROBOT TURN -45 ' Turn left 45°
+
+' Arm movements
+ROBOT ARM "left", action: "raise"
+ROBOT ARM "right", action: "extend"
+ROBOT GRIP "left", "open"
+ROBOT GRIP "right", "close"
+
+' Head movements
+ROBOT HEAD "nod"
+ROBOT HEAD "shake"
+ROBOT HEAD "look", pan: 30, tilt: -15
+```
+
+### Motion Sequences
+
+```basic
+' Define custom sequence
+SEQUENCE DEFINE "greeting"
+ ROBOT POSE "stand"
+ WAIT 500
+ ROBOT HEAD "look", pan: 0, tilt: 0
+ ROBOT ARM "right", action: "wave"
+ WAIT 1000
+ ROBOT POSE "bow"
+ WAIT 800
+ ROBOT POSE "stand"
+SEQUENCE END
+
+' Play sequence
+SEQUENCE PLAY "greeting"
+
+' Interrupt sequence
+SEQUENCE STOP
+```
+
+### Balance and Safety
+
+```basic
+' Enable balance control
+ROBOT BALANCE ON
+ROBOT BALANCE OFF
+
+' Set safety limits
+ROBOT LIMITS torque: 80, speed: 70
+
+' Emergency stop
+ROBOT STOP
+
+' Check stability
+stable = ROBOT STABLE
+IF NOT stable THEN
+ ROBOT POSE "stable"
+END IF
+```
+
+## Computer Vision Integration
+
+### Face Detection and Recognition
+
+```basic
+' Start face detection
+CV START "face_detection"
+
+' Wait for face
+face = CV DETECT "face", timeout: 5000
+
+IF face THEN
+ ' Get face details
+ name = face.identity
+ emotion = face.emotion
+ direction = face.direction
+
+ ' React to person
+ ROBOT HEAD "look", pan: direction.x, tilt: direction.y
+
+ IF name <> "unknown" THEN
+ TALK "Hello " + name + "!"
+ SEQUENCE PLAY "greeting"
+ ELSE
+ TALK "Hello! I don't think we've met."
+ END IF
+END IF
+```
+
+### Object Detection
+
+```basic
+' Detect objects in view
+objects = CV DETECT "objects"
+
+FOR EACH obj IN objects
+ IF obj.class = "cup" THEN
+ ' Point at the cup
+ ROBOT ARM "right", action: "point"
+ ROBOT HEAD "look", pan: obj.x, tilt: obj.y
+ TALK "I see a " + obj.color + " cup"
+ END IF
+NEXT
+```
+
+### Gesture Recognition
+
+```basic
+' Enable gesture detection
+CV START "gesture_detection"
+
+' React to gestures
+gesture = CV DETECT "gesture", timeout: 10000
+
+SELECT CASE gesture.type
+ CASE "wave"
+ SEQUENCE PLAY "wave_back"
+ CASE "thumbs_up"
+ ROBOT POSE "thumbs_up"
+ TALK "Great!"
+ CASE "come_here"
+ ROBOT WALK 3, "forward"
+ CASE "stop"
+ ROBOT STOP
+END SELECT
+```
+
+### Following Behavior
+
+```basic
+' Follow a person
+CV START "person_tracking"
+
+WHILE TRUE
+ person = CV TRACK "person"
+
+ IF person THEN
+ ' Keep person centered
+ IF person.x < -20 THEN
+ ROBOT TURN -10
+ ELSE IF person.x > 20 THEN
+ ROBOT TURN 10
+ END IF
+
+ ' Maintain distance
+ IF person.distance < 1.0 THEN
+ ROBOT WALK 1, "backward"
+ ELSE IF person.distance > 2.0 THEN
+ ROBOT WALK 1, "forward"
+ END IF
+ ELSE
+ ' Lost tracking, look around
+ ROBOT HEAD "look", pan: 45, tilt: 0
+ WAIT 1000
+ ROBOT HEAD "look", pan: -45, tilt: 0
+ WAIT 1000
+ END IF
+
+ WAIT 100
+WEND
+```
+
+## LLM-Driven Behavior
+
+### Conversational Movement
+
+```basic
+' System prompt for embodied AI
+system_prompt = "You are a helpful humanoid robot assistant. "
+system_prompt = system_prompt + "When responding, include movement commands in brackets. "
+system_prompt = system_prompt + "Available: [wave], [nod], [shake_head], [bow], [point_left], [point_right], [think]"
+
+SET SYSTEM PROMPT system_prompt
+
+' Process conversation with movements
+response = LLM "How can I help you today?"
+
+' Parse and execute movements
+movements = EXTRACT_BRACKETS response
+FOR EACH move IN movements
+ SELECT CASE move
+ CASE "wave"
+ SEQUENCE PLAY "wave"
+ CASE "nod"
+ ROBOT HEAD "nod"
+ CASE "shake_head"
+ ROBOT HEAD "shake"
+ CASE "bow"
+ ROBOT POSE "bow"
+ CASE "point_left"
+ ROBOT ARM "left", action: "point"
+ CASE "point_right"
+ ROBOT ARM "right", action: "point"
+ CASE "think"
+ ROBOT HEAD "look", pan: 15, tilt: 15
+ END SELECT
+NEXT
+
+' Speak cleaned response
+clean_response = REMOVE_BRACKETS response
+TALK clean_response
+```
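+
+The `EXTRACT_BRACKETS` and `REMOVE_BRACKETS` helpers above are assumed for illustration; in a general-purpose language the same parsing is a pair of regex calls. A sketch:
+
+```python
+import re
+
+def extract_brackets(text: str) -> list[str]:
+    """Return gesture tokens, e.g. 'wave' from '[wave] Hello!'."""
+    return re.findall(r"\[(\w+)\]", text)
+
+def remove_brackets(text: str) -> str:
+    """Strip gesture tokens and tidy the leftover whitespace."""
+    return re.sub(r"\s*\[\w+\]\s*", " ", text).strip()
+```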
+
+### Autonomous Behavior Loop
+
+```basic
+' Main behavior loop
+WHILE TRUE
+ ' Check for voice input
+ input = HEAR timeout: 5000
+
+ IF input <> "" THEN
+ ' Process with LLM
+ response = LLM input
+ ProcessResponse response
+ ELSE
+ ' Idle behaviors
+ idle_action = RANDOM 1, 5
+
+ SELECT CASE idle_action
+ CASE 1
+ ' Look around
+ ROBOT HEAD "look", pan: RANDOM -30, 30, tilt: 0
+ CASE 2
+ ' Check for faces
+ face = CV DETECT "face", timeout: 1000
+ IF face THEN
+ TALK "Hello there!"
+ SEQUENCE PLAY "greeting"
+ END IF
+ CASE 3
+ ' Slight movement
+ ROBOT POSE "relax"
+ CASE 4, 5
+ ' Do nothing
+ END SELECT
+ END IF
+
+ WAIT 1000
+WEND
+```
+
+## Configuration
+
+### Robot Definition File
+
+Create `robot.csv` in your `.gbot` folder:
+
+| name | value |
+|------|-------|
+| robot_type | humanoid |
+| controller | PCA9685 |
+| controller_address | 0x40 |
+| servo_count | 16 |
+| balance_enabled | true |
+| cv_enabled | true |
+| cv_model | yolov8n |
+| face_model | arcface |
+
+### Servo Mapping
+
+Create `servos.csv` to map joints to channels:
+
+| joint | channel | min_angle | max_angle | home | inverted |
+|-------|---------|-----------|-----------|------|----------|
+| head_pan | 0 | -90 | 90 | 0 | false |
+| head_tilt | 1 | -45 | 45 | 0 | false |
+| left_shoulder | 2 | -90 | 180 | 0 | false |
+| left_elbow | 3 | 0 | 135 | 90 | false |
+| left_wrist | 4 | -90 | 90 | 0 | false |
+| right_shoulder | 5 | -90 | 180 | 0 | true |
+| right_elbow | 6 | 0 | 135 | 90 | true |
+| right_wrist | 7 | -90 | 90 | 0 | true |
+| left_hip | 8 | -45 | 45 | 0 | false |
+| left_knee | 9 | 0 | 90 | 0 | false |
+| left_ankle | 10 | -30 | 30 | 0 | false |
+| right_hip | 11 | -45 | 45 | 0 | true |
+| right_knee | 12 | 0 | 90 | 0 | true |
+| right_ankle | 13 | -30 | 30 | 0 | true |
+| left_grip | 14 | 0 | 90 | 0 | false |
+| right_grip | 15 | 0 | 90 | 0 | false |
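+
+When the bridge applies this mapping, each commanded joint angle should be inverted and clamped before it reaches the servo. A sketch of that step (field names mirror the columns above; the function itself is illustrative):
+
+```python
+def to_servo_angle(command_deg: float, min_angle: float,
+                   max_angle: float, inverted: bool) -> float:
+    """Apply the servos.csv inversion flag, then clamp to the joint's limits."""
+    angle = -command_deg if inverted else command_deg
+    return max(min_angle, min(max_angle, angle))
+```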
+
+### Motion Files
+
+Create `motions.csv` for predefined sequences:
+
+| name | frames |
+|------|--------|
+| wave | [{"t":0,"joints":{"right_shoulder":90}},{"t":500,"joints":{"right_shoulder":120}},{"t":1000,"joints":{"right_shoulder":90}}] |
+| bow | [{"t":0,"joints":{"head_tilt":0}},{"t":500,"joints":{"head_tilt":-30}},{"t":1500,"joints":{"head_tilt":0}}] |
+| nod | [{"t":0,"joints":{"head_tilt":0}},{"t":200,"joints":{"head_tilt":-15}},{"t":400,"joints":{"head_tilt":10}},{"t":600,"joints":{"head_tilt":0}}] |
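+
+A playback engine turns keyframes like these into a continuous angle per joint. A minimal linear-interpolation sketch over the `wave` frames above (it assumes every frame lists the joint; a real player would also handle missing joints and easing):
+
+```python
+def joint_angle(frames: list[dict], joint: str, t_ms: float) -> float:
+    """Linearly interpolate a joint's angle at time t_ms between keyframes."""
+    keys = [(f["t"], f["joints"][joint]) for f in frames]
+    if t_ms <= keys[0][0]:
+        return keys[0][1]
+    for (t0, a0), (t1, a1) in zip(keys, keys[1:]):
+        if t_ms <= t1:
+            return a0 + (a1 - a0) * (t_ms - t0) / (t1 - t0)
+    return keys[-1][1]
+
+wave = [{"t": 0, "joints": {"right_shoulder": 90}},
+        {"t": 500, "joints": {"right_shoulder": 120}},
+        {"t": 1000, "joints": {"right_shoulder": 90}}]
+```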
+
+## Build Guides
+
+### Beginner: Desktop Companion
+
+**Budget: ~$600**
+
+| Component | Model | Price |
+|-----------|-------|-------|
+| Robot Kit | ROBOTIS MINI | $500 |
+| Compute | Raspberry Pi 5 4GB | $60 |
+| Camera | Pi Camera v3 | $25 |
+| Microphone | USB mic | $15 |
+
+**Capabilities:**
+- Voice conversation via cloud LLM
+- Basic face detection
+- Gesture responses
+- Desktop assistant
+
+### Intermediate: Service Robot
+
+**Budget: ~$10,000**
+
+| Component | Model | Price |
+|-----------|-------|-------|
+| Robot Kit | Poppy Humanoid | $8,000 |
+| Compute | Jetson Orin Nano | $500 |
+| Camera | Intel RealSense D435 | $300 |
+| Microphone | ReSpeaker Array | $100 |
+| Speakers | USB powered | $50 |
+
+**Capabilities:**
+- Local LLM (Llama 3.2 3B)
+- Full body tracking
+- Object manipulation
+- Walking and navigation
+
+### Advanced: Research Platform
+
+**Budget: ~$20,000+**
+
+| Component | Model | Price |
+|-----------|-------|-------|
+| Robot Kit | Unitree G1 EDU | $16,000 |
+| Compute | Onboard + Workstation | $3,000 |
+| Sensors | LiDAR + Depth cameras | $1,000 |
+
+**Capabilities:**
+- Full autonomous navigation
+- Complex manipulation
+- Multi-modal interaction
+- Research-grade motion control
+
+## Safety Considerations
+
+### Physical Safety
+
+- Always have emergency stop accessible
+- Set conservative torque limits initially
+- Test movements at reduced speed first
+- Ensure stable base before walking
+- Keep clear area around robot
+
+### Software Safety
+
+```basic
+' Always wrap motion code in error handling
+ON ERROR GOTO SafetyStop
+
+' Motion code here...
+
+END
+
+SafetyStop:
+ ROBOT STOP
+ ROBOT POSE "safe"
+ LOG ERROR "Emergency stop: " + ERROR MESSAGE
+ TALK "I need to stop for safety"
+```
+
+### Operating Guidelines
+
+| Situation | Action |
+|-----------|--------|
+| Unknown obstacle detected | Stop, assess, navigate |
+| Balance threshold exceeded | Widen stance or sit |
+| Person too close | Step back, warn verbally |
+| Battery low | Announce, return to charger |
+| Communication lost | Safe pose, wait for reconnect |
+
+## Example: Reception Robot
+
+Complete example of a reception desk humanoid:
+
+```basic
+' reception-robot.bas
+' A humanoid reception assistant
+
+' Initialize systems
+SERVO INIT "PCA9685", address: 0x40
+CV START "face_detection"
+ROBOT POSE "stand"
+ROBOT BALANCE ON
+
+' Set personality
+prompt = "You are a friendly reception robot at TechCorp. "
+prompt = prompt + "Greet visitors, answer questions about the company, "
+prompt = prompt + "and direct them to the right department. "
+prompt = prompt + "Be warm and professional. Use [wave], [nod], [point_left], [point_right] for gestures."
+SET SYSTEM PROMPT prompt
+
+' Load company knowledge
+USE KB "company-info"
+
+' Main loop
+WHILE TRUE
+ ' Watch for approaching people
+ face = CV DETECT "face", timeout: 2000
+
+ IF face THEN
+ ' Turn to face person
+ ROBOT HEAD "look", pan: face.direction.x, tilt: face.direction.y
+
+ ' Check if known
+ IF face.identity <> "unknown" THEN
+ TALK "Welcome back, " + face.identity + "!"
+ SEQUENCE PLAY "wave"
+ ELSE
+ TALK "Hello! Welcome to TechCorp. How can I help you today?"
+ SEQUENCE PLAY "greeting"
+ END IF
+
+ ' Enter conversation mode
+ ConversationLoop
+ END IF
+
+ ' Idle animation
+ IdleBehavior
+
+ WAIT 500
+WEND
+
+SUB ConversationLoop
+ silence_count = 0
+
+ WHILE silence_count < 3
+ input = HEAR timeout: 10000
+
+ IF input <> "" THEN
+ silence_count = 0
+
+ ' Process with LLM
+ response = LLM input
+
+ ' Execute gestures
+ ExecuteGestures response
+
+ ' Speak response
+ TALK REMOVE_BRACKETS response
+
+ ' Check for directions
+            IF CONTAINS response, "[point_left]" THEN
+                TALK "The elevators are to your left"
+            ELSE IF CONTAINS response, "[point_right]" THEN
+ TALK "You'll find that department to your right"
+ END IF
+ ELSE
+ silence_count = silence_count + 1
+
+ IF silence_count = 2 THEN
+ TALK "Is there anything else I can help with?"
+                ROBOT HEAD "look", pan: 0, tilt: 10
+ END IF
+ END IF
+ WEND
+
+ TALK "Have a great day!"
+ SEQUENCE PLAY "wave"
+END SUB
+
+SUB IdleBehavior
+ action = RANDOM 1, 10
+
+ SELECT CASE action
+ CASE 1
+ ROBOT HEAD "look", pan: RANDOM -20, 20, tilt: 0
+ CASE 2
+ ROBOT POSE "relax"
+ CASE 3, 4, 5, 6, 7, 8, 9, 10
+ ' Mostly stay still
+ END SELECT
+END SUB
+
+SUB ExecuteGestures response
+ IF CONTAINS response, "[wave]" THEN
+ SEQUENCE PLAY "wave"
+ END IF
+ IF CONTAINS response, "[nod]" THEN
+ ROBOT HEAD "nod"
+ END IF
+ IF CONTAINS response, "[point_left]" THEN
+ ROBOT ARM "left", action: "point"
+ END IF
+ IF CONTAINS response, "[point_right]" THEN
+ ROBOT ARM "right", action: "point"
+ END IF
+END SUB
+```
+
+## Resources
+
+### Robot Kits
+
+- [Unitree Robotics](https://www.unitree.com/) - G1, H1 humanoids
+- [ROBOTIS](https://www.robotis.com/) - OP3, MINI, Dynamixel
+- [Poppy Project](https://www.poppy-project.org/) - Open source humanoid
+- [InMoov](https://inmoov.fr/) - 3D printable humanoid
+- [Pollen Robotics](https://www.pollen-robotics.com/) - Reachy
+
+### Components
+
+- [Dynamixel](https://www.robotis.us/dynamixel/) - Smart servos
+- [ODrive](https://odriverobotics.com/) - BLDC motor control
+- [Intel RealSense](https://www.intelrealsense.com/) - Depth cameras
+- [NVIDIA Jetson](https://developer.nvidia.com/embedded/jetson-modules) - Edge AI
+
+### Learning
+
+- [ROS 2 Humanoid](https://docs.ros.org/) - Robot Operating System
+- [MoveIt](https://moveit.ros.org/) - Motion planning
+- [OpenCV](https://opencv.org/) - Computer vision
+- [MediaPipe](https://mediapipe.dev/) - Body/hand tracking
\ No newline at end of file
diff --git a/src/13-devices/local-llm.md b/src/13-devices/local-llm.md
index 3feb4199..02b47cc0 100644
--- a/src/13-devices/local-llm.md
+++ b/src/13-devices/local-llm.md
@@ -4,24 +4,7 @@ Run AI inference completely offline on embedded devices. No internet, no API cos
## Overview
-```
-┌─────────────────────────────────────────────────────────────────────────────┐
-│ Local LLM Architecture │
-├─────────────────────────────────────────────────────────────────────────────┤
-│ │
-│ User Input ──▶ botserver ──▶ llama.cpp ──▶ Response │
-│ │ │ │
-│ │ ┌────┴────┐ │
-│ │ │ Model │ │
-│ │ │ GGUF │ │
-│ │ │ (Q4_K) │ │
-│ │ └─────────┘ │
-│ │ │
-│ SQLite DB │
-│ (sessions) │
-│ │
-└─────────────────────────────────────────────────────────────────────────────┘
-```
+<img src="../assets/chapter-13/local-llm-architecture.svg" alt="Local LLM Architecture" style="width: 100%;" />
## Recommended Models
diff --git a/src/13-devices/mobile.md b/src/13-devices/mobile.md
index b7bd1877..488d8402 100644
--- a/src/13-devices/mobile.md
+++ b/src/13-devices/mobile.md
@@ -6,27 +6,7 @@ Deploy General Bots as the primary interface on Android and HarmonyOS devices, t
BotDevice transforms any Android or HarmonyOS device into a dedicated General Bots system, removing manufacturer bloatware and installing GB as the default launcher.
-```
-┌─────────────────────────────────────────────────────────────────────────────┐
-│ BotDevice Architecture │
-├─────────────────────────────────────────────────────────────────────────────┤
-│ │
-│ ┌──────────────────────────────────────────────────────────────────┐ │
-│ │ BotDevice App (Tauri) │ │
-│ ├──────────────────────────────────────────────────────────────────┤ │
-│ │ botui/ui/suite │ Tauri Android │ src/lib.rs (Rust) │ │
-│ │ (Web Interface) │ (WebView + NDK) │ (Backend + Hardware) │ │
-│ └──────────────────────────────────────────────────────────────────┘ │
-│ │ │
-│ ┌─────────────────────────┴────────────────────────────┐ │
-│ │ Android/HarmonyOS System │ │
-│ │ ┌─────────┐ ┌──────────┐ ┌────────┐ ┌─────────┐ │ │
-│ │ │ Camera │ │ GPS │ │ WiFi │ │ Storage │ │ │
-│ │ └─────────┘ └──────────┘ └────────┘ └─────────┘ │ │
-│ └───────────────────────────────────────────────────────┘ │
-│ │
-└─────────────────────────────────────────────────────────────────────────────┘
-```
+<img src="../assets/chapter-13/botdevice-architecture.svg" alt="BotDevice Architecture" style="width: 100%;" />
## Supported Platforms
@@ -226,38 +206,23 @@ adb reboot
## Project Structure
-```
-botdevice/
-├── Cargo.toml # Rust/Tauri dependencies
-├── tauri.conf.json # Tauri config → botui/ui/suite
-├── build.rs # Build script
-├── src/lib.rs # Android entry point
-│
-├── icons/
-│ ├── gb-bot.svg # Source icon
-│ ├── icon.png # Main icon (512x512)
-│ └── */ic_launcher.png # Icons by density
-│
-├── scripts/
-│ ├── generate-icons.sh # Generate PNGs from SVG
-│ └── create-bootanimation.sh
-│
-├── capabilities/
-│ └── default.json # Tauri permissions
-│
-├── gen/android/ # Generated Android project
-│ └── app/src/main/
-│ ├── AndroidManifest.xml
-│ └── res/values/themes.xml
-│
-└── rom/ # Installation tools
- ├── install.sh # Interactive installer
- ├── scripts/
- │ ├── debloat.sh # Remove bloatware
- │ └── build-magisk-module.sh
- └── gsi/
- └── README.md # GSI instructions
-```
+| Path | Description |
+|------|-------------|
+| `Cargo.toml` | Rust/Tauri dependencies |
+| `tauri.conf.json` | Tauri config → botui/ui/suite |
+| `build.rs` | Build script |
+| `src/lib.rs` | Android entry point |
+| `icons/gb-bot.svg` | Source icon |
+| `icons/icon.png` | Main icon (512x512) |
+| `icons/*/ic_launcher.png` | Icons by density |
+| `scripts/generate-icons.sh` | Generate PNGs from SVG |
+| `scripts/create-bootanimation.sh` | Boot animation generator |
+| `capabilities/default.json` | Tauri permissions |
+| `gen/android/` | Generated Android project |
+| `rom/install.sh` | Interactive installer |
+| `rom/scripts/debloat.sh` | Remove bloatware |
+| `rom/scripts/build-magisk-module.sh` | Magisk module builder |
+| `rom/gsi/README.md` | GSI instructions |
## Offline Mode
diff --git a/src/16-appendix-docs-style/svg.md b/src/16-appendix-docs-style/svg.md
index dd4f9795..deed09ed 100644
--- a/src/16-appendix-docs-style/svg.md
+++ b/src/16-appendix-docs-style/svg.md
@@ -38,6 +38,104 @@ Use standard HTML image syntax with responsive styling:
---
+## SVG Theme Transparency Guidelines
+
+All SVGs MUST be theme-agnostic and work with light/dark modes. Follow these requirements:
+
+### Required CSS Media Query
+
+Every SVG must include a `<style>` block with a `@media (prefers-color-scheme: dark)` query that overrides the text-color classes (class names here are illustrative; colors follow the palette below):
+
+```xml
+<style>
+  .title-text { fill: #1E1B4B; }
+  .main-text { fill: #334155; }
+  .secondary-text { fill: #64748B; }
+  .mono-text { fill: #475569; }
+
+  @media (prefers-color-scheme: dark) {
+    .title-text { fill: #F1F5F9; }
+    .main-text { fill: #E2E8F0; }
+    .secondary-text { fill: #94A3B8; }
+    .mono-text { fill: #CBD5E1; }
+  }
+</style>
+```
+
+### Background Transparency
+
+- Use `fill="#FAFBFC"` for light backgrounds (subtle, not pure white)
+- Add dot pattern overlay for texture: `fill="url(#dots)"` with low opacity
+- Cards use `fill="#FFFFFF"` with border strokes for definition
+- **NEVER** use pure black (`#000000`) backgrounds
+
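+The dot-pattern overlay mentioned above can be defined once in `<defs>` and layered over the background fill; a minimal sketch (dot size, spacing, and color are illustrative):
+
+```xml
+<defs>
+  <pattern id="dots" width="16" height="16" patternUnits="userSpaceOnUse">
+    <circle cx="2" cy="2" r="1" fill="#CBD5E1" opacity="0.4"/>
+  </pattern>
+</defs>
+<rect width="100%" height="100%" fill="#FAFBFC"/>
+<rect width="100%" height="100%" fill="url(#dots)"/>
+```
+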
+### Color Palette (Theme-Safe)
+
+| Purpose | Light Mode | Dark Mode Adaptation |
+|---------|------------|---------------------|
+| Title text | `#1E1B4B` | `#F1F5F9` (via CSS) |
+| Main text | `#334155` | `#E2E8F0` (via CSS) |
+| Secondary text | `#64748B` | `#94A3B8` (via CSS) |
+| Mono/code text | `#475569` | `#CBD5E1` (via CSS) |
+| Card backgrounds | `#FFFFFF` | Keep white (contrast) |
+| Borders | `#E2E8F0` | Keep (works in both modes) |
+
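+Every `<text>` element should carry one of these palette classes so the dark-mode query can restyle it; a minimal sketch (class names and copy are illustrative):
+
+```xml
+<text x="40" y="36" class="title-text" font-family="Arial, sans-serif" font-size="20">Request Flow</text>
+<text x="40" y="64" class="main-text" font-family="Arial, sans-serif" font-size="14">botserver routes each message</text>
+```
+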
+### Standard Gradients
+
+Use these gradient IDs consistently across all SVGs:
+
+```xml
+<defs>
+  <!-- Shared linearGradient definitions live here; reuse the standard IDs
+       rather than defining new per-diagram gradients. -->
+</defs>
+```
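+
+Shapes then reference a gradient by its ID via `fill="url(#id)"`; for example, assuming a standard ID named `blueGradient` is defined in `<defs>`:
+
+```xml
+<rect x="40" y="80" width="220" height="64" rx="8" fill="url(#blueGradient)" stroke="#E2E8F0"/>
+```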
+
+### DO NOT
+
+- ❌ Hardcode text colors without CSS class
+- ❌ Use pure black (`#000000`) or pure white (`#FFFFFF`) for text
+- ❌ Forget the `@media (prefers-color-scheme: dark)` block
+- ❌ Use opaque backgrounds that don't adapt
+- ❌ Create new gradient IDs when standard ones exist
+
+### DO
+
+- ✅ Use CSS classes for all text elements
+- ✅ Include dark mode media query in every SVG
+- ✅ Use standard gradient IDs from the palette
+- ✅ Test SVGs in both light and dark browser modes
+- ✅ Use subtle shadows with low opacity (`0.15`)
+- ✅ Keep white cards for contrast in both modes
+
+---
+
## Conversation Examples (WhatsApp Style)
All conversation examples throughout the book use the WhatsApp-style HTML format. This provides a familiar, visually consistent representation of bot interactions.
diff --git a/src/assets/chapter-04/kb-semantic-search-flow.svg b/src/assets/chapter-04/kb-semantic-search-flow.svg
new file mode 100644
index 00000000..da18698b
--- /dev/null
+++ b/src/assets/chapter-04/kb-semantic-search-flow.svg
@@ -0,0 +1,144 @@
+
diff --git a/src/assets/chapter-04/step-flow-4-steps.svg b/src/assets/chapter-04/step-flow-4-steps.svg
new file mode 100644
index 00000000..acf88c59
--- /dev/null
+++ b/src/assets/chapter-04/step-flow-4-steps.svg
@@ -0,0 +1,110 @@
+
diff --git a/src/assets/chapter-04/whatsapp-integration-flow.svg b/src/assets/chapter-04/whatsapp-integration-flow.svg
new file mode 100644
index 00000000..6d2907df
--- /dev/null
+++ b/src/assets/chapter-04/whatsapp-integration-flow.svg
@@ -0,0 +1,152 @@
+
diff --git a/src/assets/chapter-13/botdevice-architecture.svg b/src/assets/chapter-13/botdevice-architecture.svg
new file mode 100644
index 00000000..079ed763
--- /dev/null
+++ b/src/assets/chapter-13/botdevice-architecture.svg
@@ -0,0 +1,159 @@
+
diff --git a/src/assets/chapter-13/budget-decision-tree.svg b/src/assets/chapter-13/budget-decision-tree.svg
new file mode 100644
index 00000000..99fb85db
--- /dev/null
+++ b/src/assets/chapter-13/budget-decision-tree.svg
@@ -0,0 +1,152 @@
+
diff --git a/src/assets/chapter-13/embedded-architecture.svg b/src/assets/chapter-13/embedded-architecture.svg
new file mode 100644
index 00000000..72ec782b
--- /dev/null
+++ b/src/assets/chapter-13/embedded-architecture.svg
@@ -0,0 +1,164 @@
+
diff --git a/src/assets/chapter-13/gpio-button-interface.svg b/src/assets/chapter-13/gpio-button-interface.svg
new file mode 100644
index 00000000..81994587
--- /dev/null
+++ b/src/assets/chapter-13/gpio-button-interface.svg
@@ -0,0 +1,89 @@
+
diff --git a/src/assets/chapter-13/humanoid-architecture.svg b/src/assets/chapter-13/humanoid-architecture.svg
new file mode 100644
index 00000000..a4521832
--- /dev/null
+++ b/src/assets/chapter-13/humanoid-architecture.svg
@@ -0,0 +1,234 @@
+
diff --git a/src/assets/chapter-13/local-llm-architecture.svg b/src/assets/chapter-13/local-llm-architecture.svg
new file mode 100644
index 00000000..5340972e
--- /dev/null
+++ b/src/assets/chapter-13/local-llm-architecture.svg
@@ -0,0 +1,182 @@
+
diff --git a/src/assets/chapter-13/orange-pi-5-specs.svg b/src/assets/chapter-13/orange-pi-5-specs.svg
new file mode 100644
index 00000000..d5d37c57
--- /dev/null
+++ b/src/assets/chapter-13/orange-pi-5-specs.svg
@@ -0,0 +1,128 @@
+
diff --git a/src/assets/platform-architecture.svg b/src/assets/platform-architecture.svg
new file mode 100644
index 00000000..c426b530
--- /dev/null
+++ b/src/assets/platform-architecture.svg
@@ -0,0 +1,162 @@
+