Analysis scripts let you attach Python code to a quest that runs against collected datasets. Use them to automate quality checks on EEG recordings, compute summary metrics from experiment trials, or process prompt responses — without downloading data manually.
## How It Works

Each analysis script has:
| Field | Required? | Description |
|---|---|---|
| Name | ✅ Yes | Display name for the script |
| Description | Optional | What the script does |
| Code | ✅ Yes | Python code to execute |
| Trigger Type | ✅ Yes | When the script runs (see below) |
| Dataset Types | Optional | Filter which dataset types the script receives |
| Active | ✅ Yes | Enable/disable without deleting |
## Trigger Types

Scripts can be triggered in different ways:

| Trigger | When it runs |
|---|---|
| `manual` | On-demand — click “Run” in the dashboard |
| `after_experiment` | After a participant completes an experiment |
| `after_dataset` | After a new dataset is uploaded |
| `after_prompt_response` | After a prompt response is submitted |
| `scheduled` | On a cron schedule |
Currently, only `manual` execution is available in the UI. Automatic triggers (`after_experiment`, `scheduled`, etc.) will be enabled in a future update.
## Data Flow
When a script executes:
- The server queries all datasets for the quest (filtered by the script’s `datasetTypes`, if set)
- For each dataset, the actual file content is downloaded from blob storage
- Everything is sent to the Python executor as structured input
- The executor runs the script and returns output + stdout
- Results are saved as an execution record with status, output, errors, and timing
```
Quest Datasets (DB) → Blob Storage (download) → Python Executor → Execution Record
```
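The structured input handed to the executor can be pictured roughly as follows. This is a sketch only: the field names follow the documented script variables and dataset keys, but the exact wire format is an assumption, and all values are hypothetical.

```python
import json

# Sketch of the executor's structured input; the exact wire format
# is an assumption, and every value below is a made-up placeholder.
executor_input = {
    "quest_guid": "q-123",
    "triggered_by": "manual",
    "dataset_count": 1,
    "datasets": [
        {
            "id": 42,
            "type": "brain_recordings",
            "content": "timestamp,ch1\n0,1.0\n",   # raw file content
            "userGuid": "u-456",
            "timestamp": 1700000000,
            "experimentName": None,
            "deviceType": "muse",
            "provider": "fusion",
            "fileName": "recordings/42.csv",        # hypothetical blob path
        }
    ],
}

print(json.dumps(executor_input)[:60])
```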
## Writing Scripts

Scripts run in a Python 3.11 environment with `numpy`, `pandas`, `scipy`, and `json` pre-imported. Your code receives several variables automatically:

### Available Variables
| Variable | Type | Description |
|---|---|---|
| `datasets` | `list[dict]` | List of dataset objects (see below) |
| `dataset_count` | `int` | Number of datasets |
| `quest_guid` | `str` | GUID of the quest |
| `triggered_by` | `str` | How the script was triggered (`manual`, `after_experiment`, etc.) |
| `np` | module | NumPy |
| `pd` | module | Pandas |
| `json` | module | JSON |
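A minimal script using these variables might just count datasets by type. In a real script, `datasets` is injected by the executor; the sample list below stands in for it so the snippet is self-contained:

```python
# Sample stand-in for the injected `datasets` variable.
datasets = [
    {"id": 1, "type": "brain_recordings", "content": "..."},
    {"id": 2, "type": "experiment_trials", "content": "..."},
    {"id": 3, "type": "brain_recordings", "content": None},
]

# Tally datasets by their `type` key.
counts = {}
for ds in datasets:
    counts[ds["type"]] = counts.get(ds["type"], 0) + 1

output = {"status": "success", "counts": counts}
print(f"Processed {len(datasets)} datasets: {counts}")
```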
### Dataset Object

Each item in the `datasets` list is a dictionary:

| Key | Type | Description |
|---|---|---|
| `id` | `int` | Dataset row ID |
| `type` | `str` | `brain_recordings`, `experiment_trials`, `prompt_responses`, etc. |
| `content` | `str \| None` | Raw file content (CSV, JSON). `None` if the file couldn’t be downloaded. |
| `userGuid` | `str` | Participant identifier |
| `timestamp` | `int` | Unix timestamp of the recording |
| `experimentName` | `str \| None` | Name of the experiment (if applicable) |
| `deviceType` | `str \| None` | EEG device type (`muse`, `neurosity`, etc.) |
| `provider` | `str \| None` | Data source (`fusion`, etc.) |
| `fileName` | `str \| None` | Blob storage path |
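Because `content` can be `None` and its format depends on `type`, it is worth guarding before parsing. A small helper along these lines is one way to do it (`parse_dataset` is an illustrative name, not a platform API, and the JSON-vs-CSV split by type is an assumption):

```python
import json
from io import StringIO

import pandas as pd


def parse_dataset(ds):
    """Parse a dataset object's raw content, or return None if unusable.

    Illustrative helper; assumes prompt/onboarding responses are JSON
    and recordings/trials are CSV.
    """
    if ds.get("content") is None:
        return None  # file couldn't be downloaded
    if ds["type"] in ("prompt_responses", "onboarding_responses"):
        return json.loads(ds["content"])
    return pd.read_csv(StringIO(ds["content"]))


sample = {"id": 7, "type": "experiment_trials", "content": "rt,correct\n512,1\n430,0\n"}
df = parse_dataset(sample)
print(df.shape)  # two rows, two columns
```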
### Returning Results

Set the `output` variable to return structured results. Anything printed to stdout is also captured.

```python
output = {"status": "success", "results": my_results}
```
### Example: EEG Quality Check

```python
from io import StringIO

results = []
for ds in datasets:
    if ds["type"] != "brain_recordings" or ds["content"] is None:
        continue
    df = pd.read_csv(StringIO(ds["content"]))
    channels = [c for c in df.columns if c != "timestamp"]
    quality = {}
    for ch in channels:
        vals = df[ch].values
        quality[ch] = {
            "mean": float(np.mean(vals)),
            "std": float(np.std(vals)),
            "is_flat": bool(np.std(vals) < 0.001),  # flat signal = bad contact
        }
    results.append({
        "dataset_id": ds["id"],
        "experiment": ds.get("experimentName"),
        "n_samples": len(df),
        "channels": quality,
        "pass": all(not q["is_flat"] for q in quality.values()),
    })

output = {"status": "success", "results": results}
print(f"Checked {len(results)} recordings")
```
### Example: Experiment Reaction Times

```python
from io import StringIO

# Initialize output so it is defined even if no dataset matches.
output = {"status": "no_data"}

for ds in datasets:
    if ds["type"] != "experiment_trials" or ds["content"] is None:
        continue
    df = pd.read_csv(StringIO(ds["content"]))
    if "rt" in df.columns:
        rt = df["rt"].dropna()
        # Each matching dataset overwrites `output`; the final result
        # reflects the last one processed.
        output = {
            "mean_rt": float(rt.mean()),
            "median_rt": float(rt.median()),
            "std_rt": float(rt.std()),
            "n_trials": len(rt),
            "outliers": int((rt > rt.mean() + 3 * rt.std()).sum()),
        }
        print(f"Mean RT: {output['mean_rt']:.1f}ms across {output['n_trials']} trials")
```
## Using the Dashboard
Analysis scripts are managed from the quest detail page, below the dataset sections.
### Creating a Script

1. Open your quest in the dashboard
2. Scroll to the Analysis Scripts section
3. Click New Script
4. Enter a name, select a trigger type, and write your Python code in the Monaco editor
5. Click Save Script
### Running a Script

1. Find the script in the list
2. Click Run
3. The execution status and output appear in the expandable execution history below the script
### Viewing Execution History
Click on a script row to expand it and see recent executions with:
- Status (completed / failed)
- Execution time
- Output (stdout + structured output)
- Error messages and Python tracebacks (if failed)
## Filtering by Dataset Type

If your script only needs certain data types, set the Dataset Types field when creating the script. For example, an EEG quality script should filter to `brain_recordings` only — this avoids loading irrelevant prompt responses or experiment trials.
Supported dataset types:

- `brain_recordings` — EEG data from Muse or Neurosity
- `experiment_trials` — jsPsych experiment results
- `prompt_responses` — Answers to recurring prompts
- `onboarding_responses` — Onboarding form answers
## Billing

Each script execution costs credits, billed as `script_run` to the quest’s organization. See the Pricing page for current costs.

Credits are deducted before execution, and they are still consumed if the script fails. Test your script logic on a small dataset before running it against large ones.
## Permissions

To create, edit, or run analysis scripts, you need the `data.run_script` permission in the quest’s organization. Organization admins have this by default. See Members & Permissions for details.
## Limits

| Limit | Value |
|---|---|
| Execution timeout | 5 minutes |
| Datasets per execution | 50 (most recent) |
| Scripts per quest | Unlimited |
## API Reference
Analysis scripts are also available via the REST API:
| Method | Endpoint | Description |
|---|---|---|
| POST | `/api/quest/analysis-scripts` | Create a script |
| GET | `/api/quest/analysis-scripts?questId=...` | List scripts for a quest |
| GET | `/api/quest/analysis-scripts/:guid` | Get a single script |
| PUT | `/api/quest/analysis-scripts/:guid` | Update a script |
| DELETE | `/api/quest/analysis-scripts/:guid` | Delete a script |
| POST | `/api/quest/analysis-scripts/:guid/execute` | Execute a script |
| GET | `/api/quest/analysis-scripts/executions?scriptGuid=...` | Get execution history |
### Execute Request Body

```json
{
  "triggeredBy": "manual",
  "datasetId": 42,
  "inputValues": {
    "threshold": 0.5
  }
}
```
All fields are optional. `datasetId` restricts execution to a single dataset (useful for event-driven triggers). `inputValues` maps to the script’s configured input variables.
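As a sketch, the execute endpoint can be called from Python with only the standard library. The base URL, script GUID, and any auth headers below are placeholders; substitute the values for your deployment:

```python
import json
import urllib.request

BASE_URL = "https://example.com"  # placeholder; use your deployment's URL


def build_execute_request(guid, triggered_by="manual", dataset_id=None, input_values=None):
    """Build a POST request for /api/quest/analysis-scripts/:guid/execute.

    All body fields except `triggeredBy` are omitted when not provided,
    matching the documented optional fields.
    """
    body = {"triggeredBy": triggered_by}
    if dataset_id is not None:
        body["datasetId"] = dataset_id
    if input_values is not None:
        body["inputValues"] = input_values
    return urllib.request.Request(
        f"{BASE_URL}/api/quest/analysis-scripts/{guid}/execute",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},  # add auth headers as required
        method="POST",
    )


req = build_execute_request("abc-123", dataset_id=42, input_values={"threshold": 0.5})
print(req.full_url)
# Sending is omitted here; e.g. urllib.request.urlopen(req) once auth is configured.
```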
## Explore