AI Denoising with ZUNA
ZUNA is a diffusion-based foundation model trained on large-scale EEG data. It removes noise artifacts from brain recordings while preserving the underlying neural signals.
What Gets Cleaned
| Artifact Type | Source | Effect on EEG |
|---|---|---|
| Eye blinks | Eyelid movement | Large frontal spikes (50–200 µV) |
| Saccades | Eye movement | Lateral voltage shifts |
| Muscle (EMG) | Jaw, forehead, neck | High-frequency broadband noise |
| Line noise | Power outlets | 50/60 Hz sinusoidal interference |
| Drift | Electrode impedance changes | Slow baseline wander |
| Movement | Head or cable motion | Large transient artifacts |
What Gets Preserved
- Neural oscillations — Alpha (8–13 Hz), beta (13–30 Hz), theta (4–8 Hz), delta (1–4 Hz), gamma (30+ Hz)
- Event-related potentials — P300, N170, N400, and other cognitive markers
- Resting-state patterns — Eyes-open vs eyes-closed alpha differences
- Task-related changes — Frequency power shifts during cognitive tasks
Processing Pipeline
Raw CSV → Parse channels → Resample to 256 Hz → Filter → Epoch (5s segments)
→ Normalize → ZUNA GPU inference → Denormalize → Reconstruct continuous signal
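The epoching step in the pipeline above can be sketched in NumPy. This is an illustrative sketch only, not the service's actual implementation; the shapes follow the 256 Hz sampling rate and 5-second epoch length stated below, and `epoch_signal` is a hypothetical name:

```python
import numpy as np

SFREQ = 256                          # target sampling rate (Hz)
EPOCH_SEC = 5                        # epoch length in seconds
EPOCH_SAMPLES = SFREQ * EPOCH_SEC    # 1,280 samples per epoch

def epoch_signal(data: np.ndarray) -> np.ndarray:
    """Split a (channels, samples) array into non-overlapping 5 s epochs.

    Returns an array of shape (n_epochs, channels, EPOCH_SAMPLES).
    Any trailing partial epoch is dropped for simplicity.
    """
    n_channels, n_samples = data.shape
    n_epochs = n_samples // EPOCH_SAMPLES
    trimmed = data[:, : n_epochs * EPOCH_SAMPLES]
    return trimmed.reshape(n_channels, n_epochs, EPOCH_SAMPLES).transpose(1, 0, 2)
```

The real pipeline also overlaps adjacent epochs before reconstruction; this sketch shows only the basic segmentation.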
Technical Details
- Epoch size: 5 seconds (1,280 samples at 256 Hz)
- Overlap: Adjacent epochs overlap so the continuous signal can be reconstructed without boundary artifacts
- Normalization: Per-epoch z-score normalization before inference, reversed after
- GPU: NVIDIA A100 (80 GB VRAM)
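The per-epoch z-score normalization, and its reversal after inference, can be sketched as follows (an illustrative sketch; the function names are hypothetical):

```python
import numpy as np

def zscore_epoch(epoch: np.ndarray, eps: float = 1e-8):
    """Per-channel z-score within one epoch: subtract mean, divide by std.

    Returns the normalized epoch plus the (mean, std) statistics
    needed to reverse the transform after inference.
    """
    mean = epoch.mean(axis=-1, keepdims=True)
    std = epoch.std(axis=-1, keepdims=True) + eps  # eps avoids divide-by-zero
    return (epoch - mean) / std, (mean, std)

def denormalize(normed: np.ndarray, stats) -> np.ndarray:
    """Reverse the z-score using the stored statistics."""
    mean, std = stats
    return normed * std + mean
```

Storing the per-epoch statistics is what lets the denoised output be mapped back to the original µV scale.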
Using Denoising
From the Recordings Page
- Record live EEG or upload a CSV file
- Click “🧹 Denoise Recording”
- Wait for processing (5–15 seconds if the GPU is warm)
- View the before/after comparison — raw signal on the left, denoised on the right
- Download the cleaned CSV or send to the Analysis page
From the API
For programmatic access, POST to /api/eeg-models/inference:
    curl -X POST https://usefusion.ai/api/eeg-models/inference \
      -H "Authorization: Bearer YOUR_TOKEN" \
      -H "Content-Type: application/json" \
      -d '{
        "task": "denoise",
        "eeg": {
          "csv_content": "CP3,C3,F5,PO3,PO4,F6,C4,CP4\n12.3,8.1,...",
          "device_type": "neurosity",
          "sfreq": 256
        },
        "output_format": "json"
      }'
See the API Reference for full endpoint documentation.
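For Python users, the same request can be issued with the standard library. This is a sketch: `build_denoise_payload` and `denoise` are hypothetical helper names, and the payload simply mirrors the curl example above:

```python
import json
import urllib.request

API_URL = "https://usefusion.ai/api/eeg-models/inference"

def build_denoise_payload(csv_content: str, device_type: str = "neurosity",
                          sfreq: int = 256) -> dict:
    """Assemble the JSON body expected by the inference endpoint."""
    return {
        "task": "denoise",
        "eeg": {
            "csv_content": csv_content,
            "device_type": device_type,
            "sfreq": sfreq,
        },
        "output_format": "json",
    }

def denoise(csv_content: str, token: str) -> dict:
    """POST a denoise request and return the parsed JSON response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_denoise_payload(csv_content)).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.load(resp)
```

Substitute your own token and CSV content; the generous timeout allows for GPU wake-up (see GPU Management below).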
GPU Management
The ZUNA model runs on a dedicated GPU VM that auto-manages its lifecycle:
Auto-Sleep
The VM deallocates after 30 minutes of inactivity. This stops compute billing while preserving the disk. The GPU status indicator on the recordings page shows current state.
Auto-Wake
When you trigger denoising and the GPU is sleeping:
- The backend detects the VM is deallocated
- It sends a start command and returns HTTP 202
- The frontend automatically retries every 15 seconds (up to 6 times)
- You see live status updates: “GPU is starting up. Auto-retrying in 15s…”
- Once the VM is ready (~90 seconds), your denoising request processes normally
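Your own scripts can mimic the frontend's retry loop by treating HTTP 202 as "GPU waking, try again". A sketch of the polling logic (the 202 semantics, 15-second interval, and 6-retry limit follow the description above; everything else is illustrative):

```python
import time

def request_with_wake_retry(send_request, max_retries: int = 6,
                            interval_s: float = 15.0):
    """Call send_request() until it returns a non-202 response.

    send_request is any callable returning (status_code, body).
    A 202 means the GPU VM is still starting, so we wait and retry,
    mirroring the frontend's 15 s polling (up to 6 retries).
    """
    for attempt in range(max_retries + 1):
        status, body = send_request()
        if status != 202:
            return status, body
        if attempt < max_retries:
            time.sleep(interval_s)
    raise TimeoutError("GPU did not become ready after retries")
```

Passing the request as a callable keeps the retry policy independent of whichever HTTP client you use.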
If you know you’ll need the GPU soon, click “Wake GPU” in the status bar at the top of the recordings page. This starts the VM immediately so it’s ready when you finish recording.
Status Indicators
| Indicator | Meaning |
|---|---|
| 🟢 Green | GPU ready — denoising will start immediately |
| 🟡 Yellow (pulsing) | GPU starting up — will be ready in ~90s |
| ⚫ Gray | GPU sleeping — will auto-wake on first request |
Before / After Comparison
After denoising, the recordings page shows a side-by-side comparison:
- Left panel (red label) — Your original raw recording with all artifacts visible
- Right panel (green label) — The ZUNA-denoised output
Both use the same amplitude scale and time axis. You can adjust the scale (±50, ±100, ±200 µV) and zoom/pan with the scroll bar to inspect specific time windows.
This comparison helps you:
- Verify that artifacts were removed (look for smoothed-out eye blink spikes)
- Confirm neural features are preserved (alpha oscillations should remain intact)
- Decide if the denoising quality is sufficient for your analysis
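To put a number on the visual check, you can compare alpha-band power before and after denoising; if the oscillations were preserved, the two values should be similar. A minimal sketch using NumPy's FFT (the 8–13 Hz band follows the table above; `band_power` is a hypothetical helper):

```python
import numpy as np

def band_power(signal: np.ndarray, sfreq: float, lo: float, hi: float) -> float:
    """Mean power spectral density of a 1-D signal within [lo, hi] Hz."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / sfreq)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    mask = (freqs >= lo) & (freqs <= hi)
    return float(psd[mask].mean())

# Example usage on one channel (arrays assumed to come from the raw
# and denoised CSVs):
#   alpha_raw = band_power(raw_channel, 256, 8, 13)
#   alpha_clean = band_power(denoised_channel, 256, 8, 13)
```

A large drop in alpha power after denoising would be a warning sign that neural content was removed along with the artifacts.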
Other ZUNA Tasks
Beyond denoising, ZUNA supports two additional tasks:
Reconstruct
Recover missing or corrupted EEG channels using the spatial relationships between channels that the model learned during training. Useful when one electrode has poor contact.
Upsample
Expand a low-density recording (e.g., 4-channel Muse) to estimate what a higher-density montage would look like. This is experimental and best used for exploratory visualization.
Both tasks use the same API endpoint with task: "reconstruct" or task: "upsample". See API Reference for details.
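Switching tasks only requires changing the `task` field in the request body shown in the curl example above; for instance (an illustrative payload, truncated CSV as before):

```python
reconstruct_payload = {
    "task": "reconstruct",   # or "upsample"
    "eeg": {
        "csv_content": "CP3,C3,F5,PO3,PO4,F6,C4,CP4\n12.3,8.1,...",
        "device_type": "neurosity",
        "sfreq": 256,
    },
    "output_format": "json",
}
```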