
μCap: connecting FaceReader™ to z-Tree

Published online by Cambridge University Press:  01 January 2025

Leonard Doyle
Affiliation:
Ludwig Maximilian University of Munich, Munich, Germany
David Schindler*
Affiliation:
Department of Economics, Tilburg University, PO Box 90153, 5000 LE Tilburg, The Netherlands

Abstract

μCap is a software package (citeware) for economic experiments that enables experimenters to analyze the emotional states of subjects using z-Tree and FaceReader™. μCap can create videos of subjects on client computers based on the stimuli shown on screen and restrict the recording to relevant time frames. Another feature of μCap is the creation of time stamps in csv format at prespecified screens (or at prespecified points in time) during the experiment, measured on the client computer. The software makes it easy to import these markers into FaceReader™. μCap is the first program that significantly simplifies the process of connecting z-Tree and FaceReader™, with the additional benefit of very high precision. This paper describes the usage and underlying principles of μCap as well as its advantages and limitations. Furthermore, we give a brief outlook on how μCap can be useful in other contexts.

Type
Original Paper
Creative Commons
This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Copyright
Copyright © The Author(s) 2019

1 Introduction

In economics, conducting laboratory experiments to determine causal relationships has become increasingly fashionable over the last few decades. With computerized laboratories now widespread at major universities and research institutions, a large number of laboratory experiments are conducted using computers. Urs Fischbacher’s experimental software z-Tree (Fischbacher 2007) is popular around the globe and is among the most widely used experimental software packages, having been cited more than 8000 times as of 2019 according to Google Scholar.

z-Tree is versatile and yet easy to learn; starting with version 3.5.0, users can even record the current time on the client machine. Part of z-Tree’s popularity is presumably due to its simplicity and the convenience of creating even complex experiments with very little programming effort. The analysis of video footage created during such experiments, however, has so far been relatively inconvenient, since z-Tree’s capability to interact with other software packages was limited until recently. Consider, for example, an experimenter interested in subjects’ facial expressions after they observe a certain screen, say a donation decision in a dictator game. To perform this analysis, experimenters so far needed to employ separate software to record videos, record the time stamps (i.e., the times at which the events of interest occur) in z-Tree and export them using table dumpers,[1] and manually match time stamps and video footage in a program of their choice. Such methods are tedious for the experimenter, since they rely on manually repeating a large number of tasks, which is especially burdensome with a large number of subjects and sessions. One such situation is the analysis of emotions using FaceReader™.

FaceReader™ is a software package developed by the Dutch company Noldus (http://www.noldus.com) that analyzes facial expressions in photographs and videos with respect to six basic emotional states. The software lays a grid of more than 500 key points over images of each participant’s face and identifies emotions by the distinct muscle movements associated with changes in emotional state. According to Noldus, the software describes subjects’ emotional states as well as trained human annotators do. Researchers in economics have recently started to use FaceReader™ to investigate how emotional states correlate with economic decision-making. For example, Noussair and Nguyen (2014) investigate the role of emotions in decision-making under risk, and Breaban and Noussair (2017) look at emotions in the context of asset markets. Furthermore, van Leeuwen et al. (2017) investigate the relationship between anger and rejections in ultimatum games, and Breaban et al. (2016) describe a positive correlation between more negative emotional states and greater prudence.

μCap is the first tool that allows experimenters to create a fully automated connection between z-Tree and FaceReader™ in a straightforward way. It is barely noticeable to subjects, and the imprecision it introduces amounts to a delay of only a few milliseconds. As we will point out later, μCap can easily be integrated into experimenters’ workflow and has possible applications beyond its original purpose. μCap has found first applications, such as Kugler et al. (2018), who use the tool to synchronize decisions from a trust game programmed in z-Tree with videos of participants’ facial expressions.

2 μCap

μCap is a bundled software package and comes with two additional separate applications: μConfig and μProject. The governing principle of μCap is to repeatedly read a specific pixel in the top left corner of each participant’s screen (for an example screen, see Fig. 1). Whenever this pixel changes color, μCap creates a time stamp in a csv file.[2] μCap handles each participant separately, so that participants’ progress through the stages of the experiment is recorded precisely even when they are out of sync. This matters, for example, if participants in a dictator game arrive at the donation decision screen at different points in time, say due to a preceding questionnaire. The procedure requires a minimum of additional programming effort by the experimenter. The tool μConfig renders hand-written configuration files (i.e., files that define the association of colors and time stamps) unnecessary, as it offers experimenters a graphical interface to define the colors μCap reacts to and to attach labels to events. The additional programming in z-Tree is limited to adding a small box of 20 × 20 pixels in the top left corner of the screen.[3]
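To illustrate the polling principle, the following minimal Python sketch watches the marker region of the screen and appends a time stamp whenever the observed pixel changes color. This is not μCap’s actual implementation (μCap is a compiled Windows program whose internals are not shown here); the file name, pixel position, and 10 ms polling interval are illustrative assumptions.

```python
# Minimal sketch of the polling principle (illustrative, not muCap's code):
# watch one screen pixel and append a time stamp to a csv file whenever
# its color changes. Runs until interrupted.
import csv
import time
from datetime import datetime

from PIL import ImageGrab  # Pillow; grabs the screen on Windows

PIXEL = (10, 10)       # assumed position inside the 20 x 20 marker box
POLL_INTERVAL = 0.01   # ~100 polls per second (assumption)

def watch_pixel(out_path: str = "markers.csv") -> None:
    last_color = None
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "r", "g", "b"])
        while True:
            # Grab only the marker region to keep each poll cheap.
            region = ImageGrab.grab(bbox=(0, 0, 20, 20))
            color = region.getpixel(PIXEL)[:3]
            if color != last_color:
                writer.writerow([datetime.now().isoformat(), *color])
                f.flush()  # persist immediately in case the client crashes
                last_color = color
            time.sleep(POLL_INTERVAL)
```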

Fig. 1 Participant’s screen when using μCap; red arrow added to highlight the colored rectangle (color figure online)

In a typical experiment, experimenters will use μConfig while they program their experiment in z-Tree. A set of colors (in the RGB scheme) has to be defined in μConfig, and rectangles with the corresponding colors have to be placed in the top left corner of the z-Tree screen. A virtually unlimited number of markers (i.e., color–event combinations) can be defined to distinguish all parts of interest within the experiment. If the same part occurs more than once, markers can be re-used. Note that the RGB scheme also enables users to vary the color of the rectangle only marginally (e.g., from 250, 250, 250 to 252, 250, 250) such that μCap recognizes the difference but the participant does not. This can be especially valuable when the experimenter does not want participants to know (or guess) which instances are of interest.
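The marker definitions created in μConfig are stored as a csv configuration file; its exact format is documented in the manual, so the following mapping is only a hypothetical illustration of the idea, including two colors that differ imperceptibly to participants:

```text
R,G,B,label
250,250,250,instructions
252,250,250,donation_decision
250,250,252,questionnaire
```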

After μConfig has been set up correctly, μCap will not only create video files of the participants (including starting and stopping the recording at prespecified color changes)[4] but also create the necessary time stamps. For μCap to function properly, it simply needs to be executed on all clients (just like z-Leaf, the client program of z-Tree). After the start of the experiment, μCap runs in the background, invisible to subjects.

As soon as the video files and time stamps have been collected from the client computers,[5] μProject helps to automatically create a new project in FaceReader™ (i.e., a file that allows batch processing of the facial-expression analysis of the video material). The project not only registers all subjects automatically but also loads the corresponding video files and time stamps. Especially in projects with a large number of subjects, the automated data processing offered by μProject can substantially reduce the experimenters’ workload.

To generate the desired output, an experimenter would typically follow these steps:[6] First, when programming the experiment, ensure that all screens in z-Tree include the small rectangular box with the desired RGB color code. Second, using μConfig, generate the configuration csv file defining the markers for these color codes. Third, store the csv file in the same folder as μCap and make sure that all client computers can access this folder. Fourth, when running the experiment, make sure that all clients run μCap and z-Leaf to generate the marker and video file for each subject. Fifth, collect all files and load them into μProjectCreator, which in turn creates an frx file to be loaded into FaceReader™. Sixth, let FaceReader™ execute the frx file and save the resulting data file, which indicates the strength of each emotion at every point in time; μCap adds a column to this file indicating the event markers.
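Conceptually, the collection step boils down to pairing each subject’s video with its marker file. The Python sketch below writes such a pairing as a plain manifest; it is not μProject’s actual code, the proprietary frx format is not reproduced, and the file extensions, naming scheme, and folder name are assumptions for illustration.

```python
# Illustrative sketch (not muProject's code): pair each subject's video
# with its marker csv and write a simple manifest a batch analysis could
# consume. Assumes videos and marker files share a base name per subject.
from pathlib import Path

def build_manifest(session_dir: str, out_path: str = "manifest.csv") -> None:
    rows = ["subject,video,markers"]
    for video in sorted(Path(session_dir).glob("*.avi")):  # assumed extension
        markers = video.with_suffix(".csv")
        if markers.exists():
            rows.append(f"{video.stem},{video},{markers}")
    Path(out_path).write_text("\n".join(rows) + "\n")

build_manifest("session_2019_01_15")  # hypothetical session folder
```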

μCap can be used free of charge. However, we kindly ask you to cite this paper in any academic publication or presentation in which μCap has been used (citeware). The most recent version of the package and its documentation can be downloaded from http://mucap.david-schindler.de.

3 Advantages, limitations, and possible extensions

The biggest advantage of μCap for experimenters is the reduction of time-consuming manual work needed to link video footage and experimental data. This can prove extremely valuable in settings with a large number of subjects. While other options require repeating many small steps, μCap offers a worry-free solution that automates many of the necessary tasks and enhances workflow. Another big advantage of μCap is the ability to record only the parts of interest. In a long experiment consisting of several parts, perhaps only the facial expressions in one part, for example the donation decision in a dictator game, are of interest to the experimenter. Since FaceReader™ needs computational power to process the video files, superfluous footage drastically increases the time FaceReader™ needs to complete the analysis. In our experience, an average workstation needs about 3 min to process a 1-min video file. Many experiments nowadays have more than 100 participants and last up to 2 h. If a researcher were interested in only 15 min of the recordings, using μCap would reduce computation time drastically, from 600 h to only 75 h (see the short calculation below).
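The back-of-the-envelope calculation behind these figures, made explicit in a few lines of Python, uses the paper’s own numbers (100 participants, 2 h sessions, 15 min of relevant footage, roughly 3 min of computation per minute of video):

```python
# Processing-time estimate from the example above.
participants = 100
ratio = 3  # minutes of FaceReader computation per minute of video

full_session_hours = participants * 120 * ratio / 60   # record entire 2 h sessions
relevant_only_hours = participants * 15 * ratio / 60   # record only the 15 min of interest

print(full_session_hours, relevant_only_hours)  # 600.0 vs. 75.0
```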

An important issue for experimenters is the imprecision resulting from the use of μCap. Although we have not measured the exact latency μCap induces, we are confident that the mismatch between time stamp and video frame is less than one frame (i.e., less than 40 ms).[7] Apart from its many advantages, using μCap may have some downsides that users should be aware of. Although we tried to keep the additional programming very limited, using μCap will always mean extra work for the experimenter. We think that our solution requires as little effort as possible, but colored rectangles still have to be implemented in z-Tree, and the system needs to run both z-Tree and μCap, which may be more burdensome than using z-Tree alone. On the μCap website, we provide a template treatment that can help reduce the programming effort. Additionally, μCap is a program written for users of Microsoft Windows. We are unable to offer the package for any other operating system, but μCap is compatible with Windows versions from XP up to Windows 10. Given that z-Tree also requires Windows, we do not consider this constraint binding. Running μCap on other operating systems may be possible using emulation.

While μCap was developed to work with FaceReader™, it is potentially interesting to use the tool in other environments or with different software packages wherever time stamps in csv format are needed. Beyond the application presented here, one could think of linking experimental data to a variety of physiological measures, such as eye tracking or skin conductance. Since most of these physiological measures are optimized for analyzing individual behavior, μCap can offer a simple extension for the analysis of interactive behavior, especially in large groups. Moreover, the principle of placing colored rectangles on screen can be implemented in virtually any experimental software that supports such customization, so μCap is not limited to use with z-Tree.

The principle of changing colors to create precise time stamps has since been picked up by Perakakis et al. (2019), who suggest using external photo sensors to feed changes in luminosity directly to external devices. While their approach is equally promising in terms of effectiveness, it comes with setup costs, both monetary (purchasing photo sensors) and non-monetary (installing and maintaining such a system). Most importantly, however, it does not offer the degree of automation that μCap provides for the specific task of connecting FaceReader™ and z-Tree.

4 Conclusion

In this paper we have introduced μCap, a software package for experimental economists that links FaceReader™ and z-Tree. We have explained the governing principle of μCap, discussed its advantages and weaknesses, and suggested possible extensions. We rely heavily on users’ feedback to improve and enhance the program and therefore ask all users to contribute by sending in ideas.

Footnotes

This paper resulted from work during our joint time at MELESSA. Many thanks to the whole team for offering such a friendly and constructive environment. We thank René Cyranek and Martin Kocher for their encouragement and input. Additionally, we are grateful for the many valuable comments we received from Florentin Krämer, Konstantin Lucks, Simeon Schudy and Mark Westcott.

1 As noted above, this is only possible since version 3.5.0. With earlier versions, additional workarounds would be needed (e.g., switching lights on and off to vary brightness).

2 csv is short for comma-separated values, a simple file format for storing tabular data.

3 Although μCap only monitors one specific pixel, different screen types and resolutions might make it necessary to enlarge the monitored area to a rectangle of 20 × 20 pixels. This takes up only a small part of the screen but is very likely to include the pixel of interest. We recommend that every lab pre-test the optimal rectangle size and position before first use.

4 An obvious prerequisite for the recording feature is that a camera is installed on the respective computer. μCap was designed such that any standard webcam will do; no specific camera type or drivers are required.

5 At the University of Munich, we created a tool (MELESSA Video Collector) to automate this process; it is, however, specifically tailored to our IT infrastructure. If you are interested in the tool, please get in touch.

6 A more detailed description can be found in the user’s manual accompanying the software package.

7 The screen sensor monitors the pixel of interest at 50–100 frames per second, which induces a latency of 7–20 ms. A few milliseconds may be added by Microsoft Windows-specific latencies. We understand that there are settings in which this latency is unacceptable (and in those cases μCap should not be used); we, however, deem it small enough for most applications involving z-Tree. We also recommend testing the actual latency in your IT environment before using the tool, as the delay may depend on features of your infrastructure.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

Breaban, A., & Noussair, C. N. (2017). Emotional state and market behavior. Review of Finance, 22(1), 279–309. https://doi.org/10.1093/rof/rfx022
Breaban, A., van de Kuilen, G., & Noussair, C. N. (2016). Prudence, emotional state, personality, and cognitive ability. Frontiers in Psychology, 7, 1688. https://doi.org/10.3389/fpsyg.2016.01688
Fischbacher, U. (2007). z-Tree: Zurich toolbox for ready-made economic experiments. Experimental Economics, 10(2), 171–178. https://doi.org/10.1007/s10683-006-9159-4
Kugler, T., Ye, B., Motro, D., & Noussair, C. N. (2018). On trust and disgust: Evidence from face reading and virtual reality. Working paper.
Noussair, C. N., & Nguyen, Y. (2014). Risk aversion and emotions. Pacific Economic Review, 19(3), 296–312. https://doi.org/10.1111/1468-0106.12064
Perakakis, P., Guinot-Saporta, J., Jaber-Lopez, T., García-Gallego, A., & Georgantzis, N. (2019). A technical note on the precise timing of behavioral events in economic experiments. Journal of Behavioral and Experimental Finance, 21, 10–14. https://doi.org/10.1016/j.jbef.2018.08.002
van Leeuwen, B., Noussair, C. N., Offerman, T., Suetens, S., van Veelen, M., & van de Ven, J. (2017). Predictably angry: Facial cues provide a credible signal of destructive behavior. Management Science, 64(7), 3352–3364. https://doi.org/10.1287/mnsc.2017.2727