Experiment Ordering
Experiments in a quest are displayed in the order they appear in the quest editor. You can reorder them using the up/down arrow buttons on each experiment card.
- Each experiment card shows a numbered badge indicating its position
- Use the up arrow (↑) and down arrow (↓) to move an experiment earlier or later in the sequence
- Participants see experiments as horizontally scrollable cards on the run page
- The active experiment is highlighted and clicking a card selects it
Reordering experiments can invalidate dependencies: if experiment B depends on experiment A and you move B before A, the now-invalid dependency is automatically removed.
Dependencies
Dependencies let you lock an experiment until one or more prerequisite experiments have been completed. This is useful for sequential protocols where later experiments rely on earlier ones.

Setting Dependencies
- Click Edit on an experiment card
- Open the Ordering Options disclosure at the bottom of the editor
- Under Required Experiments (Dependencies), check the experiments that must be completed first
- Only experiments that come before the current one in the list can be selected as dependencies

How Dependencies Work at Runtime
When a participant opens a quest with dependencies configured:
- Experiments with unmet dependencies show a lock icon and cannot be selected
- A locked experiment displays the message “Complete the required experiments first”
- Once all required experiments are completed, the lock is removed and the experiment becomes clickable
- Experiments with no dependencies are always available
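The lock rule above can be sketched as a small check (a sketch only; the field names are assumptions, not the actual implementation):

```python
def is_locked(experiment, completed_ids):
    """An experiment stays locked until every one of its required
    experiments (dependencies) has been completed this session."""
    return any(dep not in completed_ids for dep in experiment["dependencies"])

# Experiment B requires A; no-dependency experiments are never locked.
exp_b = {"id": "B", "dependencies": ["A"]}
print(is_locked(exp_b, set()))    # True: A not completed yet
print(is_locked(exp_b, {"A"}))    # False: prerequisite met, lock removed
```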
Validation Rules
The editor enforces several rules to keep dependencies valid:

| Rule | Description |
|---|---|
| Order-aware | You can only depend on experiments that come before the current one in the list |
| No circular dependencies | If A depends on B, then B cannot depend on A (detected automatically) |
| Cleanup on delete | If a dependency experiment is deleted, it is automatically removed from all other experiments’ dependency lists |
| Cleanup on reorder | If reordering causes a dependency to now come after the dependent, it is automatically removed |
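Taken together, the cleanup rules amount to a single order-aware pass over the experiment list; here is a sketch under an assumed data model (the real editor's may differ). Note that the order-aware rule alone already rules out circular dependencies, since a cycle would require at least one forward reference:

```python
def clean_dependencies(experiments):
    """Drop any dependency that points at a deleted experiment or at
    one that now comes after the dependent (e.g. after a reorder)."""
    position = {exp["id"]: i for i, exp in enumerate(experiments)}
    for i, exp in enumerate(experiments):
        exp["dependencies"] = [
            dep for dep in exp["dependencies"]
            if dep in position and position[dep] < i
        ]
    return experiments

exps = [
    {"id": "A", "dependencies": ["B"]},       # forward reference: invalid
    {"id": "B", "dependencies": ["A", "X"]},  # "X" was deleted
]
print(clean_dependencies(exps))
# A's dependency on B is dropped; B keeps "A" but loses "X"
```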
Auto-Trigger
When Auto-trigger next experiment is enabled on an experiment, the system automatically advances the participant to the next available experiment as soon as the current one is completed. To enable it:
- Click Edit on an experiment card
- Open the Ordering Options disclosure
- Check Auto-trigger next experiment
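The advance step might look like the following sketch (assumed data model; not the actual implementation):

```python
def next_available(experiments, completed_ids, current_index):
    """After completing the current experiment, scan forward for the
    first experiment that is not yet completed and has all of its
    dependencies met. Return None if nothing is available."""
    for exp in experiments[current_index + 1:]:
        if exp["id"] in completed_ids:
            continue
        if all(dep in completed_ids for dep in exp["dependencies"]):
            return exp
    return None

exps = [
    {"id": "A", "dependencies": []},
    {"id": "B", "dependencies": ["A"]},
    {"id": "C", "dependencies": []},
]
print(next_available(exps, {"A"}, 0)["id"])  # "B": its dependency is met
print(next_available(exps, set(), 0)["id"])  # "C": B is still locked
```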
Completion Tracking
Experiment completion is tracked per session. When a participant reloads the quest run page, all completion state resets and experiments start fresh. Within a single session:
- Completing an experiment marks it with a green checkmark
- Dependent experiments are unlocked as their prerequisites are met
- Auto-trigger fires immediately after completion if enabled
Assignment Editor
The assignment editor is an advanced feature for counterbalancing, conditional assignment, or randomization. It lets you write a script that determines which experiment a participant is assigned to.

How It Works
- You write a Python script in the assignment editor
- You map onboarding question responses to script input variables
- When a participant joins, the script runs with their onboarding answers as inputs
- The script output determines which experiment the participant sees
Configuration
The assignment configuration has the following fields:

| Field | Description |
|---|---|
| Script | Python code to execute |
| Language | Script language (python or javascript) |
| Variable Mapping | Maps an onboarding question (source ID) to a placeholder variable name |
Variable Mapping
For each variable:
- Select a source — an onboarding question
- Enter a placeholder name — the variable name used in the script
- The participant’s answer to that question is injected as the variable’s value when the script runs
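One way to picture the injection step is the following sketch; the mechanism, the placeholder name `age`, and the question ID are hypothetical, and the platform's actual implementation is not specified here:

```python
import contextlib
import io

# Hypothetical inputs: a script, a mapping from placeholder names to
# onboarding question IDs, and one participant's responses.
script = "print(0 if int(age) < 30 else 1)"
mapping = {"age": "q_age"}      # placeholder name -> source question ID
responses = {"q_age": "42"}     # participant's onboarding answers

# Build the script's namespace from the mapping, run the script, and
# capture its printed output as the assignment result.
namespace = {name: responses[qid] for name, qid in mapping.items()}
buffer = io.StringIO()
with contextlib.redirect_stdout(buffer):
    exec(script, namespace)
assigned = buffer.getvalue().strip()
print(assigned)  # "1": age 42 is not under 30
```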
Example: Random Assignment
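A minimal sketch of such a script, assuming (as in the other sketches here) that the printed value is taken as the index of the assigned experiment:

```python
import random

# Flip a fair coin between two conditions (experiments 0 and 1).
condition = random.choice([0, 1])
print(condition)
```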
This pattern randomly assigns participants to one of two experiment conditions.

Example: Age-Based Assignment
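A sketch, assuming an onboarding age question has been mapped to a placeholder named `age` (the name and the age cutoff are illustrative):

```python
# `age` would normally be injected by the variable mapping;
# it is stubbed here so the sketch runs standalone.
age = "34"

# Route participants under 30 to experiment 0, everyone else to 1.
condition = 0 if int(age) < 30 else 1
print(condition)  # 1
```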
This pattern assigns different experiments based on participant age (collected during onboarding).

Example: Counterbalancing
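A sketch, assuming a running participant counter is available through the variable mapping (the placeholder name `participant_index` is hypothetical; alternation needs state that a single participant's answers alone do not provide):

```python
# `participant_index` would normally be injected via variable mapping;
# it is stubbed here so the sketch runs standalone.
participant_index = 7

# Alternate between conditions A (experiment 0) and B (experiment 1).
condition = participant_index % 2
print(condition)  # 1: odd-numbered participants get condition B
```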
This pattern assigns participants to alternating conditions.

How Assignment Data Is Used
The assignment script output is stored alongside the quest’s assignment configuration. The server fetches onboarding responses for assignment scripts via the dataset API with type `onboarding_responses`.
Assignment scripts currently support Python and JavaScript. Python is the most common choice.
Tips
- Use the up/down arrows to set experiment order, then add dependencies to enforce that order at runtime
- Enable auto-trigger on sequential experiments to create a guided flow
- Experiments without any ordering options configured behave exactly as before — all are available, no locks
- Use assignment scripts for between-subjects designs where different groups see different experiments
- Combine with onboarding questions to collect the variables your assignment script needs
- Test your assignment script thoroughly before publishing — incorrect assignment logic can compromise your study design

