Nov 26, 2024

Public workspaceTesting Framework Protocol

  • 1Neural Circuits, Consciousness and Cognition Research Group, Max Planck Institute of Empirical Aesthetics, Frankfurt am Main, Germany;
  • 2Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, 6500 HB, the Netherlands;
  • 3Sagol School of Neuroscience, Tel Aviv University;
  • 4Department of Neurology, NYU Grossman School of Medicine, New York, USA;
  • 5Canadian Institute for Advanced Research (CIFAR), Brain, Mind, and Consciousness Program, Toronto, ON, Canada;
  • 6Sagol School of Neuroscience, Tel-Aviv University, Tel Aviv, Israel;
  • 7School of Psychological Sciences, Tel-Aviv University, Tel Aviv, Israel
Protocol CitationAlex Lepauvre, Rony Hirschorn, Lucia Melloni, mudrikli 2024. Testing Framework Protocol. protocols.io https://dx.doi.org/10.17504/protocols.io.36wgqjmbxvk5/v1
License: This is an open access protocol distributed under the terms of the Creative Commons Attribution License,  which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
Protocol status: Working
We use this protocol and it's working
Created: January 24, 2023
Last Modified: November 26, 2024
Protocol Integer ID: 75766
Funders Acknowledgements:
Templeton World Charity Foundation
Grant ID: TWCF0389
Abstract
This protocol provides a step-by-step guide for testing and validating experimental setups in visual event-related research. It includes Testing Preparation required to conduct the protocol, Visual Angle Check for stimulus size accuracy, Controlled and Uncontrolled Event Content Logging to verify the correctness of event and response logs, Controlled and Uncontrolled Event Timing Logging to assess timing precision, Validating Experimental Design Parameters to confirm design adherence, and Peripheral Testing for devices like EEG, MEG, and eye trackers. By standardizing setup testing, this protocol enhances reproducibility, robustness, and cross-study integration. Shared on protocols.io, it supports transparent and reliable research practices.
Materials
Photodiode
Contact microphone
Testing preparation
Adjust the experiment scripts to conduct the testing protocol
Present a black square on a corner of the screen that switches to white on the same frame as the stimulus onset and then turns back to black after three frames.
Add functionality to record sound from a microphone during the execution of the experiment, capturing button-press "click" sounds to assess response box latencies.
Create a pre-defined response sequence to be executed during the test run and compared against the log file to identify any logging issues. The sequence must sample all possible answers, as well as unexpected button presses, to assess the robustness of the experiment.
Prepare for a test run:
Attach a photodiode recording device (see material) over the displayed black/white square flashed upon stimulus onset and record the measured voltage to a file for later processing
Place a contact microphone (see materials) on the response devices used to record the sound made by button presses and ensure quietness in the room (avoid speaking, opening/closing doors, etc.) to facilitate later processing stages
Prepare to take notes of the events presented on the screen for assessing logging content accuracy. In the case of a fast-paced experiment, set up a camera to capture the screen so that the presented events can be annotated afterwards at a slower pace.

Visual angle check
The visual angle of each stimulus of interest was tested by measuring the size of the stimulus presented on the screen with a ruler. Each measured size was converted to degrees of visual angle using this converter: https://www.sr-research.com/visual-angle-calculator/ together with the participant's expected distance from the display. The eccentricity was computed using the same converter (if applicable).
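The conversion performed by the online calculator can also be reproduced offline. The sketch below uses the standard formula for the full visual angle, theta = 2 * atan(size / (2 * d)); the function names and the example screen dimensions (1920 px / 53 cm, viewed from 60 cm) are illustrative assumptions, not values from this protocol.

```python
import math

def visual_angle_deg(size_cm: float, distance_cm: float) -> float:
    """Full visual angle (degrees) subtended by a stimulus of size_cm
    viewed from distance_cm: theta = 2 * atan(size / (2 * d))."""
    return math.degrees(2 * math.atan2(size_cm, 2 * distance_cm))

def pixels_to_cm(size_px: float, screen_px: int, screen_cm: float) -> float:
    """Convert a size in pixels to cm using the screen's physical size,
    assuming square pixels along the measured dimension."""
    return size_px * screen_cm / screen_px

# Example: a stimulus drawn 200 px wide on a 1920 px / 53 cm wide screen,
# viewed from 60 cm
width_cm = pixels_to_cm(200, 1920, 53.0)
width_dva = visual_angle_deg(width_cm, 60.0)
```

Comparing `width_dva` against the ruler-based measurement converted through the same formula gives the expected-vs-measured values requested in the fields below.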
Obtain Screen Height and Width in pixels
Required
Screen height (px)
Required
Screen width (px)

Measure screen height and width in cm
Required
Screen height (cm)
Required
Screen width (cm)

Distance between participant nasion and screen
Required
d (cm)

Measured stimuli sizes (if more than one, comma separated)
Required
Expected width (d.v.a.)
Required
Measured width (d.v.a.)
Required
Expected height (d.v.a.)
Required
Measured height (d.v.a.)

Eccentricity (if more than one, comma separated)
Required
Expected horizontal offset (d.v.a.)
Required
Measured horizontal offset (d.v.a.)
Required
Expected vertical offset (d.v.a.)
Required
Measured vertical offset (d.v.a.)

Controlled event content logging
Describe method used to test:
Note
The experimenter ran the experiment in the exact same way it would have run for a subject. As the experiment progressed, the experimenter noted down all features of each stimulus being presented in the order they were presented. Each noted event was compared to the log file to ensure that logging was correct.
Example of notes that can be taken during the experiment. Importantly, each feature of the presentation relevant to the experiment should be noted
Number of tested events conditions
Note
Conditions here refer to any unique combination of experimental factors in the case of nested designs. If the presented stimuli belong to different categories (e.g. faces and objects) and are presented in different contexts (e.g. task relevant and irrelevant), a condition is 'stimuli of a given category in a given context' (i.e. task relevant faces and task irrelevant faces count as two different conditions).
Required
M =



Number of tested events per condition:
Required
N =

Confirmation that the test was performed and that no discrepancies remain
I hereby confirm that the test was conducted and that no discrepancies remain




Uncontrolled events content logging
The experiment was conducted in full while a human actuator executed a pre-defined sequence of button presses. The content and timing of each press were saved in the log file. In addition, a contact microphone was placed close to the keys to be pressed and recorded on the experimental computer. The response devices' latencies were computed as the difference between the onset of the button press detected by the response device and the onset detected in the recorded and parsed sound file.
Compare the logged response descriptions against the planned response sequence. Inaccurate response logging would appear as a discrepancy between the two, for example the log file recording a response as "No" when in fact the button mapped to "Yes" was pressed. Such issues must be addressed before data collection; no discrepancies may remain in the report below.
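The comparison against the planned sequence can be scripted rather than done by eye. A minimal sketch, assuming responses are represented as ordered lists of labels (the function name is illustrative):

```python
def check_response_log(planned, logged):
    """Compare logged responses against the planned response sequence.
    Returns a list of (index, planned, logged) mismatches; an accurate
    log yields an empty list."""
    mismatches = [
        (i, p, l) for i, (p, l) in enumerate(zip(planned, logged)) if p != l
    ]
    # Length differences (missed or spurious log entries) also count
    if len(planned) != len(logged):
        mismatches.append(("length", len(planned), len(logged)))
    return mismatches
```

An empty return value corresponds to the "no discrepancies remain" confirmation below.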
Number of tested response types:
Required
M =

Number of tested responses per response type:
Required
N =

Confirmation that the test was performed and that no discrepancies remain (between the logged responses and the planned response sequence)
Required
I hereby confirm that the test was conducted and that no discrepancies remain

Controlled event timing logging
Describe method used to test:

Note
A black square (RGB: 0, 0, 0) in the bottom right corner of the screen was turned to white (RGB: 255, 255, 255) on the exact same frame as an event was displayed, held white for 3 frames, and then turned back to black. A photodiode device was placed on top of this square and its signal was recorded to a file.

Select threshold for binarization:
k =

Binarize the signal:
b[i] = 1 if s[i] > k, else 0
Where s is the recorded signal

Compute discrete difference on the binarized photodiode signal:
d[i] = b[i+1] - b[i], for i = 1, ..., n-1
Where n is the number of samples in the signal
Detect event observed onsets (t_obs): the samples at which d[i] = 1
Compute Δt_obs:
Δt_obs[j] = t_obs[j+1] - t_obs[j]
Where n is the number of detected photodiode events and j = 1, ..., n-1
Compute Δt_log:
Δt_log[j] = t_log[j+1] - t_log[j]
Where t_log contains the log file time stamps of all events (n) in the log file
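The binarization, discrete difference, and onset detection steps can be sketched as follows. This is a minimal illustration assuming the photodiode trace is a 1-D array sampled at a known rate `fs`; the function names are placeholders, not part of the protocol:

```python
import numpy as np

def detect_onsets(signal, k, fs):
    """Binarize a photodiode trace at threshold k, take the discrete
    difference, and return the time (s) of each low-to-high crossing
    (the detected event onsets, t_obs)."""
    b = (np.asarray(signal) > k).astype(int)   # binarized signal b
    d = np.diff(b)                             # discrete difference
    onset_samples = np.where(d == 1)[0] + 1    # first supra-threshold sample
    return onset_samples / fs

def interval_inaccuracy(t_obs, t_log):
    """Mean and SD (s) of the difference between photodiode and
    log-file inter-event intervals, which cancels the clock offset
    between the two recordings."""
    delta = np.diff(t_obs) - np.diff(t_log)
    return float(np.mean(delta)), float(np.std(delta))
```

Comparing inter-event intervals rather than absolute time stamps avoids having to align the photodiode recording clock with the experiment clock.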

Controlled event timing logging
Compute and report the log file average timing inaccuracies (μ)
Required
mu (s) =

Compute and report the log file timing inaccuracies standard deviation (σ)
Required
sig (s) =


Uncontrolled events timing
The experiment was conducted in full while a human actuator executed a pre-defined sequence of button presses. The content and timing of each press were saved in the log file. In addition, a contact microphone was placed close to the keys to be pressed and recorded on the experimental computer. The response devices' latencies were computed as the difference between the onset of the button press detected by the response device and the onset detected in the recorded and parsed sound file.
Select threshold for binarization:
k =

Binarize the signal:
b[i] = 1 if a[i] > k, else 0
Where a is the recorded audio signal containing the button press sounds

Compute discrete difference on the binarized audio signal:
d[i] = b[i+1] - b[i], for i = 1, ..., n-1
Where n is the number of samples in the signal
Detect event observed onsets (t_obs): the samples at which d[i] = 1
Compute Δt_obs:
Δt_obs[j] = t_obs[j+1] - t_obs[j]
Where n is the number of detected audio events and j = 1, ..., n-1

Compute Δt_log:
Δt_log[j] = t_log[j+1] - t_log[j]
Where t_log contains the log file time stamps of response events (n) in the log file
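For the audio trace, the same binarize-and-difference approach applies, with one practical caveat: a microphone signal swings both positive and negative, so rectifying it before thresholding is a reasonable pre-processing step (this rectification is an assumption of the sketch, not part of the protocol text). Function names are illustrative:

```python
import numpy as np

def detect_click_onsets(audio, k, fs):
    """Rectify the contact-microphone trace (assumption: clicks have
    both polarities), binarize at threshold k, and return the time (s)
    of each low-to-high crossing."""
    b = (np.abs(np.asarray(audio)) > k).astype(int)
    d = np.diff(b)
    return (np.where(d == 1)[0] + 1) / fs

def response_latencies(t_logged, t_clicks):
    """Per-press response device latency: logged press time minus
    acoustically detected click time (one pair per press, both in the
    same clock)."""
    return np.asarray(t_logged) - np.asarray(t_clicks)
```

The per-press latencies feed directly into the mean and standard deviation fields below.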

Compute and report the log file average timing inaccuracies (μ)
Required
mu (s) =

Compute and report the log file timing inaccuracies standard deviation (σ)
Required
sig (s) =


Validating experimental design parameters
Test Experiment counter-balancing
Describe method used to test:

Note
To test that the counter-balancing was as expected, the total number of trials and the number of trials per condition in a full experimental run were counted from the log file and compared to the numbers of trials expected according to the experimental design
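Counting trials per condition from the log file can be scripted. A minimal sketch, assuming each log row yields a condition label (the condition names in the test are illustrative):

```python
from collections import Counter

def check_counterbalancing(logged_conditions, expected_per_condition):
    """Count trials per condition in the log file and compare against
    the design. Returns {condition: (expected, observed)} for every
    discrepancy; an empty dict means the design was respected."""
    observed = Counter(logged_conditions)
    issues = {}
    for cond, n_exp in expected_per_condition.items():
        if observed.get(cond, 0) != n_exp:
            issues[cond] = (n_exp, observed.get(cond, 0))
    # Conditions present in the log but absent from the design
    for cond in observed:
        if cond not in expected_per_condition:
            issues[cond] = (0, observed[cond])
    return issues
```

The sum of the expected counts gives the expected total number of trials requested below.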

Total number of trials


Required
Expected number of trials (total)
Required
Observed number of trials (total)



Number of trials per condition

Required
Number of conditions
Required
Expected number of trials (per condition)
Required
Observed number of trials (per condition)

Test observed stimulus duration against the expected stimulus duration

Describe the method used:

Note
To test the observed duration of the stimuli against the expected duration, we compared the duration of each stimulus calculated based on the photodiode recording against the expected duration stored in the log file.


Compute observed stimulus duration:
duration[j] = t_offset[j] - t_onset[j]
Where t_onset[j] and t_offset[j] are the photodiode-detected onset and offset times of event j

Compute and report the mean difference between observed and expected durations (μ):

Required
mu (s) =

Compute and report the standard deviation of the difference between observed and expected durations (σ):
Required
sig (s) =
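The duration check reuses the binarized photodiode trace: onsets are low-to-high crossings and offsets are high-to-low crossings. A minimal sketch under the assumption that every detected onset has a matching offset (function names are illustrative):

```python
import numpy as np

def observed_durations(signal, k, fs):
    """Observed stimulus durations (s) from the photodiode trace:
    offset time (high-to-low crossing) minus onset time (low-to-high
    crossing) for each detected event."""
    b = (np.asarray(signal) > k).astype(int)
    d = np.diff(b)
    onsets = np.where(d == 1)[0] + 1
    offsets = np.where(d == -1)[0] + 1
    return (offsets - onsets) / fs

def duration_error(observed, expected):
    """Mean and SD (s) of observed minus expected durations."""
    diff = np.asarray(observed) - np.asarray(expected)
    return float(np.mean(diff)), float(np.std(diff))
```

A stimulus that remains on screen one frame too long shows up as a duration error of roughly one refresh period (e.g. ~16.7 ms at 60 Hz).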

Peripherals that will be used to collect data during the experiment should be tested and reported accordingly in the step case below.
Step case

Eyetracker
7 steps

This section should be filled out if an eyetracker device was used to record participants' gaze
The experiment was conducted in full while recording data on the eyetracker. For each event, a trigger was sent from the experimental computer to the device's computer to identify the event's content and timing. The triggers' timing accuracy was assessed by comparing them to the photodiode signal, and their content accuracy by comparing them to the event content in the log file.
Compute finite difference of each Eyetracker trigger time stamp (Δt_trig):
Δt_trig[j] = t_trig[j+1] - t_trig[j]
Compute and report the device triggers average timing inaccuracies (μ):
μ = mean(Δt_trig - Δt_photo)
Where Δt_photo is the finite difference of photodiode onsets computed in step 3.4

Required
mu (s) =

Compute the device triggers timing inaccuracies standard deviation (σ)

Required
sig (s) =

Compare the information to the log file using scripted test
Note
All triggers recorded in one run of the experiment should be parsed and automatically compared to the information stored in the log file. All events in one run should therefore be tested.
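The scripted content comparison can be sketched as below, assuming both the parsed triggers and the log file rows are represented as per-event dictionaries; the feature names in the test (`event_id`, `category`) are hypothetical placeholders for whatever the triggers actually encode in a given study:

```python
def compare_trigger_content(triggers, log_rows, features):
    """Scripted content check: for each event, compare selected feature
    fields parsed from the eyetracker triggers against the log file
    rows. Returns (event index, feature, trigger value, log value) for
    every mismatch; an accurate recording yields an empty list."""
    mismatches = []
    for i, (trig, row) in enumerate(zip(triggers, log_rows)):
        for f in features:
            if trig.get(f) != row.get(f):
                mismatches.append((i, f, trig.get(f), row.get(f)))
    return mismatches
```

Running this over all events of one experimental run implements the "all events in one run should be tested" requirement above.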

Tested event count (should be equal to the total number of events in one experimental run):

Required
N =

Confirmation that the test was performed and that no discrepancies remain (between the trigger contents and the log file events)
Required
I hereby confirm that the test was conducted and that no discrepancies remain