When a quest has multiple experiments, you can control the order participants see them, set dependencies between experiments, enable auto-trigger to advance automatically, and use assignment scripts for counterbalancing.

Experiment Ordering

Experiments in a quest are displayed in the order they appear in the quest editor. You can reorder them using the up/down arrow buttons on each experiment card.
Experiment cards showing numbered badges and up/down reorder buttons
  • Each experiment card shows a numbered badge indicating its position
  • Use the up arrow (˄) and down arrow (˅) to move an experiment earlier or later in the sequence
  • Participants see experiments as horizontally scrollable cards on the run page
  • The active experiment is highlighted and clicking a card selects it
Reordering experiments can invalidate dependencies: if experiment B depends on experiment A and you move B before A, that dependency is automatically removed.
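The cleanup-on-reorder behavior can be sketched as follows. This is a minimal illustration of the idea, not the product's actual code; the function and field names are assumptions:

```python
def move_up(experiments, index):
    """Swap an experiment with its predecessor, then drop any
    dependency that now points at a later position."""
    if index == 0:
        return experiments
    experiments[index - 1], experiments[index] = (
        experiments[index], experiments[index - 1]
    )
    # After the swap, an experiment may only depend on earlier ones.
    position = {exp["id"]: i for i, exp in enumerate(experiments)}
    for exp in experiments:
        exp["deps"] = [d for d in exp["deps"]
                       if position[d] < position[exp["id"]]]
    return experiments

# Experiment B depends on A; moving B before A removes the dependency.
exps = [
    {"id": "A", "deps": []},
    {"id": "B", "deps": ["A"]},
]
move_up(exps, 1)
print(exps[0])  # {'id': 'B', 'deps': []}
```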

Dependencies

Dependencies let you lock an experiment until one or more prerequisite experiments have been completed. This is useful for sequential protocols where later experiments rely on earlier ones.

Setting Dependencies

  1. Click Edit on an experiment card
  2. Open the Ordering Options disclosure at the bottom of the editor
  3. Under Required Experiments (Dependencies), check the experiments that must be completed first
  4. Only experiments that come before the current one in the list can be selected as dependencies
Experiment editor showing the Ordering Options disclosure with dependency checkboxes and auto-trigger

How Dependencies Work at Runtime

When a participant opens a quest with dependencies configured:
  • Experiments with unmet dependencies show a lock icon and cannot be selected
  • A locked experiment displays the message “Complete the required experiments first”
  • Once all required experiments are completed, the lock is removed and the experiment becomes clickable
  • Experiments with no dependencies are always available
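At runtime the lock check reduces to a set-containment test. A minimal sketch (the function and field names are assumptions, not the platform's API):

```python
def is_locked(experiment, completed_ids):
    """An experiment is locked while any of its required
    experiments is missing from the set of completed IDs."""
    return any(dep not in completed_ids for dep in experiment["deps"])

exp_b = {"id": "B", "deps": ["A"]}
print(is_locked(exp_b, completed_ids=set()))   # True  — A not yet done
print(is_locked(exp_b, completed_ids={"A"}))   # False — unlocked
```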

Validation Rules

The editor enforces several rules to keep dependencies valid:
  • Order-aware: You can only depend on experiments that come before the current one in the list
  • No circular dependencies: If A depends on B, then B cannot depend on A (detected automatically)
  • Cleanup on delete: If a dependency experiment is deleted, it is automatically removed from all other experiments’ dependency lists
  • Cleanup on reorder: If reordering causes a dependency to come after its dependent, it is automatically removed
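Circular-dependency detection is a standard reachability check on the dependency graph. A sketch using depth-first search (assumed logic, not the editor's actual implementation):

```python
def would_create_cycle(deps, source, target):
    """Return True if adding 'source depends on target' creates a
    cycle, i.e. source is already reachable from target via deps."""
    stack, seen = [target], set()
    while stack:
        node = stack.pop()
        if node == source:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(deps.get(node, []))
    return False

deps = {"B": ["A"]}  # B already depends on A
print(would_create_cycle(deps, "A", "B"))  # True  — A -> B closes a loop
print(would_create_cycle(deps, "C", "A"))  # False — C -> A is fine
```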

Auto-Trigger

When Auto-trigger next experiment is enabled on an experiment, the system will automatically advance the participant to the next available experiment as soon as the current one is completed. To enable it:
  1. Click Edit on an experiment card
  2. Open the Ordering Options disclosure
  3. Check Auto-trigger next experiment
This is useful for guided protocols where participants should move through experiments in a fixed sequence without having to manually select the next one.
Combine dependencies with auto-trigger for a fully guided, linear experiment flow — participants are automatically advanced through a locked sequence.
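Conceptually, auto-trigger advances to the next experiment that is both incomplete and unlocked. A hedged sketch of that selection logic (names are assumptions):

```python
def next_available(experiments, completed):
    """Return the first experiment, in display order, that is not
    completed and whose dependencies are all satisfied."""
    for exp in experiments:
        if exp["id"] in completed:
            continue
        if all(dep in completed for dep in exp["deps"]):
            return exp["id"]
    return None

exps = [
    {"id": "intro", "deps": []},
    {"id": "main", "deps": ["intro"]},
    {"id": "followup", "deps": ["main"]},
]
print(next_available(exps, completed={"intro"}))  # main
```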

Completion Tracking

Experiment completion is tracked per session. When a participant reloads the quest run page, all completion state resets and experiments start fresh. Within a single session:
  • Completing an experiment marks it with a green checkmark
  • Dependent experiments are unlocked as their prerequisites are met
  • Auto-trigger fires immediately after completion if enabled
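Because completion is per session, the state behaves like a plain in-memory set: nothing is persisted, so a reload starts from empty again. A sketch (class and method names are assumptions):

```python
class SessionState:
    """Per-session completion tracking; discarded on page reload."""

    def __init__(self):
        self.completed = set()

    def complete(self, experiment_id):
        self.completed.add(experiment_id)

    def is_unlocked(self, experiment):
        return all(dep in self.completed for dep in experiment["deps"])

session = SessionState()
session.complete("intro")
print(session.is_unlocked({"id": "main", "deps": ["intro"]}))  # True

# A reload constructs fresh state: previous completions are gone.
session = SessionState()
print(session.is_unlocked({"id": "main", "deps": ["intro"]}))  # False
```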

Assignment Editor

The assignment editor is an advanced feature for counterbalancing, conditional assignment, or randomization. It lets you write a script that determines which experiment a participant gets assigned to.

How It Works

  1. You write a Python script in the assignment editor
  2. You map onboarding question responses to script input variables
  3. When a participant joins, the script runs with their onboarding answers as inputs
  4. The script output determines which experiment the participant sees
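The four steps above can be sketched end to end. This illustrates the mechanism only; the helper name, question IDs, and placeholder names are assumptions, not the platform's actual runtime:

```python
import contextlib
import io

def run_assignment(script, variable_mapping, onboarding_answers):
    """Inject mapped onboarding answers as variables, execute the
    script, and return whatever it prints as the assignment."""
    namespace = {
        placeholder: onboarding_answers[source_id]
        for source_id, placeholder in variable_mapping.items()
    }
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(script, namespace)
    return buffer.getvalue().strip()

script = 'print("young_adult_protocol" if int(age) < 25 else "adult_protocol")'
mapping = {"q_age": "age"}    # onboarding question ID -> placeholder name
answers = {"q_age": "22"}     # participant's onboarding response
print(run_assignment(script, mapping, answers))  # young_adult_protocol
```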

Configuration

The assignment configuration has:
  • Script: The script source code to execute
  • Language: Script language (python or javascript)
  • Variable Mapping: Maps an onboarding question (source ID) to a placeholder variable name

Variable Mapping

For each variable:
  1. Select a source — an onboarding question
  2. Enter a placeholder name — the variable name used in the script
  3. The participant’s answer to that question is injected as the variable’s value when the script runs

Example: Random Assignment

Randomly assign participants to one of two experiment conditions:
import random

# No variables needed — purely random
condition = random.choice(["control", "treatment"])
print(condition)

Example: Age-Based Assignment

Assign different experiments based on participant age (collected during onboarding):
# Variable mapping: onboarding question "How old are you?" → age
age = int(age)  # injected from onboarding response

if age < 25:
    print("young_adult_protocol")
elif age < 65:
    print("adult_protocol")
else:
    print("older_adult_protocol")

Example: Counterbalancing

Deterministically split participants into two balanced conditions based on a hashed participant ID:
import hashlib

# Variable mapping: onboarding question "participant ID" → participant_id
hash_val = int(hashlib.md5(participant_id.encode()).hexdigest(), 16)
condition = "A" if hash_val % 2 == 0 else "B"
print(condition)

How Assignment Data Is Used

The assignment script output is stored alongside the quest’s assignment configuration. The server fetches onboarding responses for assignment scripts via the dataset API with type onboarding_responses.
Assignment scripts currently support Python and JavaScript. Python is the most common choice.

Tips

  • Use the up/down arrows to set experiment order, then add dependencies to enforce that order at runtime
  • Enable auto-trigger on sequential experiments to create a guided flow
  • Experiments without any ordering options configured behave exactly as before — all are available, no locks
  • Use assignment scripts for between-subjects designs where different groups see different experiments
  • Combine with onboarding questions to collect the variables your assignment script needs
  • Test your assignment script thoroughly before publishing — incorrect assignment logic can compromise your study design