1 Introduction
In economics, conducting laboratory experiments to determine causal relationships has become increasingly fashionable over the last few decades. As a result of the spread of computerized laboratories at major universities and research institutions, a large number of laboratory experiments are nowadays conducted using computers. Urs Fischbacher's experimental software z-Tree (Fischbacher 2007) is popular around the globe and is among the most widely used experimental software packages, having been cited more than 8000 times as of 2019 according to Google Scholar.
z-Tree is versatile and yet easy to learn; starting with version 3.5.0, users can even record the current time on the client machine. Part of z-Tree's popularity is presumably due to its simplicity and the convenience of creating even complex experiments with very little programming effort. The analysis of video footage created during these experiments, however, has so far been relatively inconvenient, since z-Tree's capability to interact with other software packages was limited until recently. Consider, for example, an experimenter interested in the facial expressions of subjects after they observe a certain screen, such as a donation decision in a dictator game. To perform this analysis, experimenters so far needed to employ software to record videos, create and export the time stamps (i.e., the times at which the events of interest occur) in z-Tree (using table dumpers), and manually match time stamps and video footage in the program of their choice. These and similar methods can be very tedious for the experimenter, since they rely on manually repeating a large number of tasks. This is especially burdensome when experimenters have to deal with a large number of subjects and sessions. One such situation concerns the analysis of emotions using FaceReader™.
FaceReader™ is a software package developed by the Dutch company Noldus (http://www.noldus.com) that analyzes facial expressions from photographs and videos with respect to six basic emotional states. The software lays a grid of more than 500 key points over images of each participant's face and identifies emotions from the distinct muscle movements associated with changes in emotional state. According to Noldus, the software describes subjects' emotional states as well as trained annotators do. Researchers in economics have recently started to use FaceReader™ to investigate how emotional states correlate with economic decision-making. For example, Noussair and Nguyen (2014) investigate the role of emotions in decision-making under risk, and Breaban and Noussair (2017) look at emotions in the context of asset markets. Furthermore, van Leeuwen et al. (2017) investigate the relationship between anger and rejections in ultimatum games, and Breaban et al. (2016) describe a positive correlation between more negative emotional states and greater prudence.
Cap is the first tool that allows experimenters to create a fully automated connection between z-Tree and FaceReader™ in a straightforward way. It is barely noticeable to subjects, and the imprecision it introduces is limited to a delay of only a few milliseconds. As we will point out later, Cap can easily be implemented to improve experimenters' workflow and has possible applications beyond its original purpose. Cap has already found its first applications; for example, Kugler et al. (2018) use the tool to synchronize decisions from a trust game programmed in z-Tree with videos of participants' facial expressions.
2 Cap
Cap is a bundled software package and comes with two additional stand-alone applications: Config and Project. The governing principle of Cap is to repeatedly read a specific pixel in the top left corner of each participant's screen (for an example screen, see Fig. 1). Whenever this pixel changes color, Cap creates a time stamp in a csv file. Cap handles each participant separately, so that the progress of participants who are in different stages of the experiment is still recorded precisely. This matters, for example, if participants in a dictator game arrive at the donation decision screen at different points in time, say, due to a preceding questionnaire. This procedure requires a minimum of additional programming effort by the experimenter. On the one hand, the tool Config renders hand-written configuration files (i.e., files that define the association of colors and time stamps) unnecessary, as it offers experimenters a graphical interface to define the colors Cap will react to and to attach labels to events. On the other hand, additional programming in z-Tree is limited to adding a small box of 20 × 20 pixels in the top left corner of the screen.
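To illustrate the governing principle, the following minimal sketch shows what such a polling loop could look like in Python. It is for exposition only and is not Cap's actual source code; the pixel coordinates, polling interval, and file layout are assumptions.

```python
# Minimal sketch of Cap's governing principle (illustrative only, not Cap's code):
# poll one pixel inside the 20 x 20 px marker box in the top left corner of the
# screen and append a time stamp to a csv file whenever the pixel changes color.
import csv
import time
from datetime import datetime

from PIL import ImageGrab  # Pillow; screen capture works natively on Windows

PIXEL = (10, 10)       # assumed: a point inside the marker box
POLL_INTERVAL = 0.01   # assumed: poll every 10 ms

def watch_marker_pixel(outfile="markers.csv"):
    last_color = None
    with open(outfile, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "r", "g", "b"])
        while True:
            # grab only the marker box, then read the pixel's RGB value
            r, g, b = ImageGrab.grab(bbox=(0, 0, 20, 20)).getpixel(PIXEL)[:3]
            if (r, g, b) != last_color:
                writer.writerow([datetime.now().isoformat(), r, g, b])
                f.flush()  # make the time stamp durable immediately
                last_color = (r, g, b)
            time.sleep(POLL_INTERVAL)
```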
In a typical experiment, experimenters will use Config while they program their experiment in z-Tree. A set of colors (using the RGB scheme) has to be defined in Config, and rectangles with the corresponding colors have to be placed in the top left corner of the z-Tree screen. A virtually unlimited number of markers (i.e., color-event combinations) can be defined to distinguish all parts of interest within the experiment. If the same part occurs more than once, markers can be re-used. Note that using the RGB scheme also enables users to vary the color of the rectangle only marginally (e.g., from 250, 250, 250 to 252, 250, 250) in such a way that Cap recognizes the difference but the participant does not. This is especially valuable in settings where the experimenter does not want participants to know (or guess) what the instances of interest are.
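As a hypothetical illustration, the configuration file produced by Config might map colors to event labels along these lines (the actual column layout of Config's output is assumed here for exposition):

```csv
r,g,b,label
250,250,250,instructions
252,250,250,donation_decision
200,220,240,questionnaire
```

Note how the first two colors differ by only two units in the red channel, a difference Cap can detect but participants cannot.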
After Config has been set up correctly, Cap will not only create video files of participants (including starting and stopping the recording at prespecified color changes), but will also create the necessary time stamps. For Cap to function properly, it simply needs to be executed on all clients (just like z-Leaf, the client program of z-Tree). After the start of the experiment, Cap runs in the background, invisible to subjects.
As soon as video files and time stamps have been collected from the client computers, Project helps to automatically create a new project in FaceReader™ (i.e., a file that allows batch processing of the analysis of facial expressions from the video material). Not only will this project automatically contain entries for all subjects, but it will also load the video files and time stamps. Especially in projects with a large number of subjects, the automated processing of data using Project can substantially reduce experimenters' workload.
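The pairing step that Project automates can be pictured with the sketch below; it only matches each participant's video with its marker file, and the file-naming scheme is an assumption (the frx serialization itself is specific to FaceReader™ and therefore not sketched here).

```python
# Sketch of the pairing step Project automates (illustrative only).
from pathlib import Path

def collect_participants(folder):
    """Pair each client's video with its marker csv, ready for a FaceReader project."""
    pairs = []
    for video in sorted(Path(folder).glob("client_*.avi")):
        markers = video.with_suffix(".csv")  # assumed: markers share the basename
        if markers.exists():
            pairs.append({"participant": video.stem,
                          "video": str(video),
                          "markers": str(markers)})
    return pairs
```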
To generate the desired output in a typical experiment, an experimenter would follow these steps: First, when programming the experiment, ensure that all screens in z-Tree include the small rectangular box with the desired RGB color code. Second, using Config, generate the configuration csv file defining the markers for these color codes. Third, store the csv file in the same folder as Cap and make sure that all client computers can access this folder. Fourth, when running the experiment, make sure that all clients run Cap and z-Leaf to generate the marker and video file for each subject. Fifth, collect all files and load them in Project, which in turn creates an frx file to be loaded in FaceReader™. Sixth, let FaceReader™ execute the frx file and save the resulting data file, which indicates the strength of each emotion at every point in time; Cap adds a column to this file indicating the event markers.
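The resulting data file could then look roughly like the following (hypothetical values; the actual FaceReader™ column names and sampling rate may differ):

```csv
time,neutral,happy,sad,angry,surprised,scared,disgusted,marker
00:00:00.040,0.81,0.02,0.01,0.03,0.05,0.01,0.02,
00:00:00.080,0.78,0.03,0.01,0.04,0.06,0.01,0.02,donation_decision
00:00:00.120,0.74,0.05,0.01,0.04,0.08,0.01,0.02,donation_decision
```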
Cap can be used free of charge. However, we kindly ask you to cite this paper in any academic publication or presentation for which Cap has been used (citeware). The most recent version of the package and its documentation can be downloaded from http://mucap.david-schindler.de.
3 Advantages, limitations, and possible extensions
The biggest advantage of Cap for experimenters is that it reduces the time-consuming manual work needed to link video footage and experimental data. This can prove extremely valuable in settings with a large number of subjects. While other options require repeating many small steps, Cap offers a worry-free solution that automates many of the necessary tasks and enhances workflow. Another big advantage of Cap is the ability to record only the parts of interest. In a long experiment consisting of several parts, perhaps only the facial expressions in one part, for example the donation decision in a dictator game, are of interest to the experimenter. Since FaceReader™ needs computational power to process the video files, superfluous footage drastically increases the time FaceReader™ needs to complete the analysis. In our experience, an average workstation needs about 3 min to process a 1-min video file. Many experiments nowadays involve more than 100 participants and last up to 2 h. If a researcher were interested in only 15 min of the recordings, the use of Cap could reduce computation time drastically, from 600 h to only 75 h.
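To see where these numbers come from: 100 participants × 120 min of video each × 3 min of processing per minute of video = 36,000 min, or 600 h of computation; restricting the recordings to the 15 min of interest yields 100 × 15 × 3 = 4,500 min, or 75 h.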
An important issue for experimenters is likely the imprecision resulting from the use of Cap. Although we have not measured the exact latency Cap induces, we are very confident that the incongruence of time stamp and video frame is less than one frame (i.e., less than 40 ms). Apart from its many advantages, using Cap may have some downsides, which we would like users to be aware of. Although we tried to keep the additional programming very limited, using Cap will always mean additional work for the experimenter. We think our solution is very efficient in requiring as little effort as possible, but colored rectangles still have to be implemented in z-Tree, and the system needs to run both z-Tree and Cap, which might be more burdensome than using z-Tree alone. On the Cap website, we provide a template treatment that can help reduce programming effort. Additionally, Cap is a program written for users of Microsoft Windows. We are not able to offer the package for any other operating system, but Cap is compatible with Windows versions from XP up to 10. Given that z-Tree requires Windows as an operating system as well, we consider this constraint non-binding. Running Cap on other operating systems is probably possible using emulation.
While Cap was developed to work with FaceReader™, it is potentially interesting to use the tool in other environments or with different software packages wherever time stamps in csv format are needed. Beyond the application presented here, one could think of linking experimental data to a variety of physiological measures, such as eye-tracking or skin conductance. Since most of these physiological measures are optimized for analyzing individual behavior, Cap can offer a simple extension for the analysis of interactive behavior, especially in large groups. Moreover, the principle of placing colored rectangles on screen can be implemented in basically any experimental software that supports such customization, so Cap is not limited to use with z-Tree.
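As a hypothetical example of such an extension, aligning Cap's marker file with any other time-stamped measure reduces to a merge on time stamps; the file names and column layouts below are assumptions.

```python
# Sketch: attach Cap's event markers to another time-stamped measure,
# e.g., a skin-conductance recording (file names/columns are assumptions).
import pandas as pd

markers = pd.read_csv("markers.csv", parse_dates=["timestamp"])
conductance = pd.read_csv("skin_conductance.csv", parse_dates=["timestamp"])

# label each physiological sample with the most recent preceding marker
merged = pd.merge_asof(
    conductance.sort_values("timestamp"),
    markers.sort_values("timestamp"),
    on="timestamp",
    direction="backward",
)
```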
The principle of changing colors to create precise time stamps has since been picked up by Perakakis et al. (2019), who suggest using external photo sensors to feed changes in luminosity directly to external devices. While their approach is equally promising in terms of effectiveness, it entails setup costs, both monetary (purchasing photo sensors) and non-monetary (installing and maintaining such a system). Most importantly, however, it does not offer the degree of automation that Cap provides for the specific task of connecting FaceReader™ and z-Tree.
4 Conclusion
In this paper we have introduced Cap, a software package for experimental economists to link FaceReader™ and z-Tree. We have explained the governing principle of Cap and discussed its advantages and weaknesses. Furthermore, we have suggested possible extensions. We heavily rely on users’ feedback to improve and enhance the program and therefore ask all users to contribute by sending in ideas.