
Detecting anomalies in data on government violence

Published online by Cambridge University Press:  15 July 2021

Kanisha D. Bond
Affiliation:
Department of Political Science, Binghamton University, Binghamton, NY, USA
Courtenay R. Conrad*
Affiliation:
Department of Political Science, University of California, Merced, CA, USA
Dylan Moses
Affiliation:
Department of Political Science, University of California, Merced, CA, USA
Joel W. Simmons
Affiliation:
School of Foreign Service, Georgetown University, Washington, DC, USA
*Corresponding author. Email: [email protected]

Abstract

Can data on government coercion and violence be trusted when the data are generated by the state itself? In this paper, we investigate the extent to which data from the California Department of Corrections and Rehabilitation (CDCR) regarding the use of force by corrections officers against prison inmates between 2008 and 2017 conform to Benford's Law. Following a growing data forensics literature, we expect misreporting of the use of force in California state prisons to cause the observed data to deviate from Benford's distribution. Statistical hypothesis tests and further investigation of CDCR data—which show both temporal and cross-sectional variance in conformity with Benford's Law—are consistent with misreporting of the use of force by the CDCR. Our results suggest that data on government coercion generated by the state should be inspected carefully before being used to test hypotheses or make policy.

Type
Research Note
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © The Author(s), 2021. Published by Cambridge University Press on behalf of the European Political Science Association

Although prison officials see force as important for maintaining order inside correctional facilities (e.g., Marquardt, 1986; Boin and Van Duin, 1995), abuse has long been a point of concern among prisoners, rights advocates, and system officials. In the United States, reports of irregular documentation and excessive guard behavior have come from prisons and jails nationwide despite a host of legal and ethical mandates guiding federal, state, and local efforts to improve facility management and decrease the use of force against incarcerated citizens. Unfortunately, the accuracy, validity, and reliability of publicly-available information from within prisons—across regime type and political context—have long been questioned (e.g., Gartner and Macmillan, 1995; Davenport, 2009).

Despite the advantages of media-based and/or crowd-sourced data for “open-source surveillance” and chronicling police-related violence vis-à-vis the un-incarcerated public (Finch et al., 2019), these alternatives are not always practical for understanding violence between inmates and correctional officers inside American prisons and jails. Penal institutions are among the most opaque in any political system (McDowall and Loftin, 2009). With violence both among and against inmates occurring far from public view, crowdsourcing data from incarcerated citizens would be both difficult and ethically fraught, given the potential for retaliation against whistleblowers (e.g., Robbins, 2016) and the political pressure to downgrade reported offenses (Seidman and Couzens, 1974; Maltz, 2007). Nonetheless, while the quality of prison oversight is often uncertain and accountability processes are rife with principal-agent problems (Armstrong, 2014), data on intra-prison violence are often singularly available from these institutions themselves.

Where the accuracy and validity of official reports may be in question and alternatives are impractical, unethical, or absent, assessing the reliability of institutionally-reported data is imperative. In this paper, we show how researchers can leverage Benford's Law (Benford, 1938), applying it to monthly count data from the California Department of Corrections and Rehabilitation (CDCR) on officers' uses of force against inmates under its jurisdiction in 37 CDCR institutions from 2008 to 2017. Specifically, we compare the distribution of the numerals that occupy the first digit of these counts to the distribution that Benford's Law expects. Our results point to persistent irregularities. We find agreement among four tests of conformity—the $\chi^2$ goodness-of-fit test and three alternatives that are less influenced by sample size—that the data do not conform to Benford's Law. Statistical hypothesis tests and further investigation of the data show temporal and cross-sectional variance in conformity with Benford's Law and are consistent with misreporting of the use of force by the CDCR. Importantly, these findings hold when accounting for the multiple comparisons problem (Benjamini and Hochberg, 1995) and when using techniques informed by Bayesian analysis (Sellke et al., 2001; Pericchi and Torres, 2011). Our results suggest that data on government coercion generated by the state should be inspected carefully before being used to test hypotheses or make policy.

Testing Benford's Law on CDCR uses of force

Benford's Law theorizes that the numerals occupying the first digit in a number—including counts of CDCR uses of force—are distributed $P(d) = \log_{10}\left(1 + \frac{1}{d}\right)$ for all $d \in \{1, \ldots, 9\}$.Footnote 1 A wide array of phenomena follow the distribution—stock market prices (Ley, 1996), the winning bids in auctions (Giles, 2007), and the populations of US counties (Hill, 1995, p. 355).Footnote 2 Given widespread adherence to Benford's distribution, deviations from it are often understood as evidence of irregular or fraudulent data (Varian, 1972). The intuition behind forensics investigations is that when humans manipulate or falsify data, they are typically unable to do so in a way that adheres to the Benford distribution because of ignorance of the law's existence, psychological biases, strategic considerations, and inconsistent record keeping. Thus, deviation from Benford's Law is understood as indicating human interference in the collection of data. Although legitimate data generating processes can naturally produce some deviations from Benford's Law,Footnote 3 deviations indicate—at a minimum—that data should be inspected closely before being used for hypothesis testing.
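As a concrete illustration, the expected first-digit proportions and the extraction of leading digits from a vector of counts reduce to a few lines of code. This is a minimal sketch in Python; the function names are ours, not drawn from the authors' replication materials.

```python
# A minimal sketch, assuming nothing beyond NumPy: the Benford first-digit
# probabilities and a helper that extracts leading digits from count data.
import numpy as np

def benford_probs():
    """Expected proportion of each leading digit d in {1, ..., 9}."""
    d = np.arange(1, 10)
    return np.log10(1 + 1 / d)

def first_digits(counts):
    """Leading digit of each positive count; zeros are dropped."""
    counts = np.asarray(counts)
    counts = counts[counts > 0]             # zero has no leading digit
    magnitude = np.floor(np.log10(counts))  # order of magnitude of each count
    return (counts // 10 ** magnitude).astype(int)

print(benford_probs().round(3))         # [0.301 0.176 0.125 ... 0.046]
print(first_digits([142, 9, 87, 305]))  # [1 9 8 3]
```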

Since 2008, the 35 institutions overseen by the California Department of Corrections and Rehabilitation have been mandated to collect and publish use-of-force data. Pursuant to California Code of Regulations Title 15, §3268.1 (California Code of Regulations, Title 15, 3268.1, 1999), “any employee who uses force or observes a staff use of force” is mandated to report details of the incident. Published monthly by the CDCR's Office of Research, each report includes counts of the use of force in each institution, disaggregated by type of force across ten categories. Force can be applied against a single inmate or a group of inmates, and more than one type of force can be used per incident.Footnote 4

We test statistical hypotheses about the extent to which CDCR data conform to Benford's Law and probe subsamples of the data to shed light on practical hypotheses about the extent to which CDCR data are fraudulent.Footnote 5 We use four statistical tests to evaluate the conformity of CDCR use-of-force data with Benford's Law. The first is the $\chi^2$ goodness-of-fit test, the most commonly-used test in existing scholarship. The test statistic is calculated as

(1)$$\chi^2 = \sum_{i=1}^{k} \frac{(O_i - E_i)^2}{E_i},$$

where $O_{i}$ is the observed frequency of digit i, and $E_{i}$ is the frequency expected under the Benford distribution. The statistic is compared to critical values from a standard $\chi^{2}$ table with 8 degrees of freedom: 15.5 at the 95 percent level and 20.1 at the 99 percent level.
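A sketch of this test under the same illustrative setup as above: `digits` is an array of observed leading digits (for instance, the output of `first_digits`), and the expected frequencies follow Equation 1. The SciPy call is standard; the wrapper function is ours.

```python
# A sketch of Equation 1, assuming `first_digits` from the snippet above.
import numpy as np
from scipy import stats

def benford_chi2(digits):
    digits = np.asarray(digits)
    observed = np.bincount(digits, minlength=10)[1:10]           # counts of digits 1-9
    expected = len(digits) * np.log10(1 + 1 / np.arange(1, 10))  # Benford frequencies
    chi2 = np.sum((observed - expected) ** 2 / expected)
    p_value = stats.chi2.sf(chi2, df=8)                          # 8 degrees of freedom
    return chi2, p_value
```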

A drawback of the $\chi^{2}$ test is that its considerable power in large samples means that even slight deviations or just a handful of outliers can result in large test statistics and correspondingly small p-values (Cho and Gaines, 2007). Accordingly, our second test draws from Cho and Gaines (2007) and calculates the Euclidean distance of the observed data from the Benford distribution:

(2)$$d = \sqrt{\sum_{i=1}^{k} (p_i - b_i)^2}.$$

In Equation 2, $p_{i}$ is the proportion of observations with numeral i in the first digit, and $b_{i}$ is the proportion expected by Benford's Law. As Cho and Gaines (2007) note, d is not derived from the hypothesis testing framework, so there are no critical values to compare it against. Instead, they recommend calculating $d^{\ast} = \frac{d}{1.036}$, where 1.036 is the maximum possible Euclidean distance between observed data and the Benford distribution—that is, the value that occurs when 9 is the only numeral in the first digit. Thus, $d^{\ast}$ ranges between 0 and 1, where larger values indicate greater deviation. For the sake of comparison, Cho and Gaines (2007, Table 1) show that 0.02 is a realistically small value of $d^{\ast}$ that would indicate conformity, while values approaching 0.1 indicate more extensive deviation. Given this, we use $d^{\ast} = 0.06$ as a useful heuristic; values below it indicate data fairly consistent with Benford's Law; values above it indicate nonconformity.
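Continuing the sketch, the statistic in Equation 2 and its normalized version $d^{\ast}$ reduce to a few array operations (again illustrative code, not the authors' replication materials):

```python
# A sketch of Equation 2 and the normalization d* = d / 1.036, where 1.036
# is the maximum possible distance from the Benford distribution (all 9s).
import numpy as np

def cho_gaines_d_star(digits):
    digits = np.asarray(digits)
    p = np.bincount(digits, minlength=10)[1:10] / len(digits)  # observed proportions
    b = np.log10(1 + 1 / np.arange(1, 10))                     # Benford proportions
    d = np.sqrt(np.sum((p - b) ** 2))
    return d / 1.036  # in [0, 1]; ~0.02 suggests conformity, ~0.1 deviation
```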

Table 1. Tests of conformity to Benford's Law by year

Note: Cells display test statistics in the $\chi ^2$ and $d^{\ast }_N$ columns, proportions in the $d^\ast$ column, and the first digit mean and bootstrapped confidence interval in the Digit Mean Test columns.

Our third test is also based on Euclidean distance. Following Morrow (2014), we calculate

(3)$$d^{\ast}_N = \sqrt{N} \cdot \sqrt{\sum_{i=1}^{k} (p_i - b_i)^2}.$$

The advantage of $d^{\ast}_{N}$ over $d^{\ast}$ is that it has critical values that allow for hypothesis testing. Specifically, Morrow (2014, pp. 4–5) shows that 1.33 is the critical value to reject the null hypothesis that the data are distributed per Benford's Law with 95 percent confidence, and 1.57 is sufficient to do so at the 99 percent confidence level.
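The corresponding sketch simply scales the distance by $\sqrt{N}$ so the result can be compared to Morrow's critical values; as before, the function is a hypothetical illustration:

```python
# A sketch of Equation 3; values above 1.33 (95 percent) or 1.57 (99 percent)
# reject the null hypothesis of conformity with Benford's Law.
import numpy as np

def morrow_d_star_n(digits):
    digits = np.asarray(digits)
    n = len(digits)
    p = np.bincount(digits, minlength=10)[1:10] / n  # observed proportions
    b = np.log10(1 + 1 / np.arange(1, 10))           # Benford proportions
    return np.sqrt(n) * np.sqrt(np.sum((p - b) ** 2))
```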

Our fourth test is the simplest. If a set of numbers follows Benford's Law, then the numerals occupying the first digits will have a mean value of 3.441. Hicken and Mebane (2015) recommend calculating the mean from the observed data and using nonparametric bootstrapping to assess whether the 95 percent confidence interval contains that value.
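A minimal sketch of this digit mean test resamples the observed first digits with replacement and checks whether the bootstrapped 95 percent interval covers 3.441; the resample count and seed below are arbitrary choices of ours.

```python
# A sketch of the digit mean test via nonparametric bootstrap.
import numpy as np

def digit_mean_test(digits, n_boot=10_000, seed=0):
    rng = np.random.default_rng(seed)
    digits = np.asarray(digits)
    boot_means = np.array([
        rng.choice(digits, size=len(digits), replace=True).mean()
        for _ in range(n_boot)
    ])
    lo, hi = np.percentile(boot_means, [2.5, 97.5])
    # Final element is True if the CI contains the Benford mean of 3.441.
    return digits.mean(), (lo, hi), lo <= 3.441 <= hi
```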

Empirical results

In testing whether the CDCR data conform to Benford's Law, we limit our analyses to evaluating the aggregated total uses of force, because several of the force types have counts that are too small for reliable inferences about their digit distributions. The dashed line in Figure 1 plots the proportions with which each numeral appears in the first digit of the total data, across all 35 institutions for the period from 2008 to 2017. The solid line plots the proportions expected by Benford's Law. It is evident that the observed data exhibit some inconsistencies with the expected distribution.

Figure 1. Observed and expected first digit distributions.

Aggregating CDCR data across all institutions over the entire decade conceals important nuance; further interrogation of the data is warranted to determine potential explanations for these anomalies. We rely on theoretical expectations about subsets of the data to probe our results (Cleary and Thibodeau, 2015, p. 212; Rauch et al., 2015, p. 262). In addition to the aggregated data being inconsistent with Benford's Law, we might also expect the data to exhibit particular temporal and cross-sectional patterns if the CDCR consistently manipulated its use-of-force data.

Consider the effect of changes in California's legal or political context on the behavior of prison guards. During the time period covered by our data, two major policy interventions reformed when corrections officers could use force and how uses of force were to be reported. In August 2010, the CDCR filed notice that it had adopted and implemented required statewide use-of-force reforms associated with a 1995 US District Court ruling that correctional staff at Pelican Bay State Prison routinely used unusual and excessive force against inmates and were negligent in reporting it (Madrid v. Gomez, 889 F. Supp. 1146 (N.D. Cal. 1995)). A second set of policy reforms went into effect in 2016, when the state further clarified staff reporting responsibilities, updated official reporting procedures, and revised the definitions of acceptable use-of-force options in the CDCR Department of Operations Manual.

If these reforms had the intended effects of clarifying the conditions under which force should be used and of improving how uses of force are reported, we might expect the data to conform more closely to Benford's Law at two discrete moments. The first is in 2010, when the CDCR implemented use-of-force reforms. If these reforms worked, data from 2010 onward might adhere more closely to the distribution than data from 2008 and 2009. The second moment is in 2016, when the second set of reforms came into effect. These additional reforms might improve the data's conformity to the Benford distribution even beyond any improvement following the 2010 reforms. Figure 2 graphs the observed and expected distributions by year and shows significant temporal variation. Compared to 2008 and 2009, data from 2010 onward appear to conform more closely with the Benford distribution; results from 2016 and 2017 appear to conform still more closely to Benford's Law.
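Year-by-year checks of this kind amount to grouping the monthly counts and rerunning the conformity tests on each subset. A sketch, reusing the helper functions sketched above; the DataFrame `df` and its columns `year` and `total_force` are hypothetical stand-ins for the CDCR monthly reports, not the CDCR's own field names.

```python
# A sketch of the by-year analysis, assuming `first_digits`, `benford_chi2`,
# and `cho_gaines_d_star` from the snippets above. The file name and column
# names are hypothetical, e.g. df = pd.read_csv("cdcr_force.csv").
import pandas as pd

for year, group in df.groupby("year"):
    digits = first_digits(group["total_force"].to_numpy())
    chi2, p = benford_chi2(digits)
    print(year, round(chi2, 1), round(p, 3), round(cho_gaines_d_star(digits), 3))
```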

Figure 2. Digit distributions by year.

Table 1 shows tests of conformity to Benford's Law by year. The first row in the second column reports the $\chi^{2}$ test statistic when the data are aggregated across all years and all institution types. The test statistic is very large, allowing us to reject the null hypothesis that the observed data follow Benford's distribution. The Cho and Gaines (2007) Euclidean distance proportion ($d^{\ast}$) is more conservative than the $\chi^{2}$ test, but it too indicates nonconformity. Like the $\chi^{2}$, Morrow's Euclidean distance test ($d^{\ast}_{N}$) yields a large test statistic that rejects the null at the 99 percent level of confidence. The last two columns show similar results. If the observed data were consistent with Benford's Law, the mean of the first digit would equal 3.441. In these data, the observed mean is 2.96, with a 95 percent confidence interval that does not include the Benford value.

Table 1 also allows assessment of how conformity changes over time. First, though the $\chi^{2}$ test statistics are very large early in the period, they decline sharply toward the end of it. According to the $\chi^{2}$ test, in 2014, 2016, and 2017, we cannot reject the null hypothesis that the data conform to Benford's Law. The two Euclidean distance tests and the digit mean tests also identify 2016 and 2017 as years in which the data deviate little from Benford's expectations. A second feature of Table 1 is the notable improvement in conformity that occurred in 2010. Compared to 2009, the $\chi^{2}$ statistic in 2010 is about half the size, while the two distance tests also yield considerably smaller values. These results suggest that the 2010 reforms may have had some effect in improving how force is reported. Conformity improves even more in 2016 and 2017, which coincides with the second set of reforms implemented in 2016.

In addition to temporal variance, we investigate nonconformance with Benford's Law across institution type as indicative of purposeful misreporting of CDCR use-of-force data.Footnote 6 The CDCR divides its institutions into four types. New inmates arrive in the CDCR system at reception centers, where their placement needs are recorded and they are assigned to one of 34 institutions. Inmates can remain in reception centers for up to 120 days. While in reception, inmates are assigned classification scores that facilitate placement at an appropriate institution. High security institutions house the CDCR's violent male offenders; general population facilities house minimum to medium custody male inmates while providing them with opportunities for participation in vocational and academic programs. Female institutions house women offenders across classification scores. Across institution types, we might expect heterogeneity in guard incentives to misreport (or fail to report) violence.

Figure 3 graphs the digit proportions for each institution type and plots them against the Benford distribution. The most glaring result is the poor fit of high security institutions, where digits 1–4 are almost uniformly distributed. General population and female institutions also fail to conform to Benford's Law; reception institutions adhere more closely to expectations. Table 2 reports the four statistical tests described above to determine whether there is heterogeneity with regard to institution type. In every case—across every test and every institution type—our tests show nonconformity with the expectations of Benford's Law. In the second column, which shows the results of the $\chi^{2}$ test, and the fourth column, which shows the results of Morrow's Euclidean distance test ($d^{\ast}_{N}$), we are universally able to reject the null hypothesis that the observed data follow Benford's distribution. The Cho and Gaines (2007) Euclidean distance proportion ($d^{\ast}$) and the test of the mean of the first digit similarly indicate reason to be concerned with potential irregularities in CDCR-reported data across institution type. Although the data do not conform to expectations for any institution type, we see the lowest conformance with Benford's Law in high security male prisons, presumably locations where guards face high incentives to engage in violence and misreport their behavior. We see the highest conformance with Benford's Law in reception centers—locations where inmates are held for up to 120 days before being assigned to more permanent locations.Footnote 7

Figure 3. Digit distribution by institution type.

Table 2. Tests of conformity to Benford's first digit distribution

Note: Cells display test statistics in the $\chi ^2$ and $d^{\ast }_N$ columns, proportions in the $d^\ast$ column, and the first digit mean and bootstrapped confidence interval in the Digit Mean Test columns.

Conclusion

In 2018, Dr. Michael Golding, chief psychiatrist for the CDCR, alleged that prison officials “played around with how they counted things and they played around with time frames” in their reporting of information regarding inmate medical and psychiatric care (Stanton, 2018). This sort of “playing around” with counts and timing, if an endemic problem across CDCR institutions and across reporting areas, could account for the violations of Benford's Law we have observed. Our results suggest that there are irregularities in the CDCR's reporting of use-of-force data. The irregularities are less persistent following reforms intended to limit violence and streamline the reporting of violence, and they are most persistent in high security male prisons. What does this variance tell us about the likely origin of the misreporting of CDCR data?

From an organizational behavior perspective, our analyses highlight systematic differences over time and space in how officials seem to understand the expected risks, costs, and benefits of misreporting. Prison officials—both guards and their superiors—face incentives to misreport instances of the use of force (Seidman and Couzens, 1974; Maltz, 2007). Misreporting occurs for several reasons: because interactions with inmates are private, because what constitutes force is sometimes unclear, and because there is inadequate infrastructure to catch violators. That conformance with Benford's Law increased in 2010 and 2016 suggests that the reforms mattered: they made clearer what constitutes a violation and institutionalized a higher standard of reporting by increasing oversight (e.g., Cook and Fortunato, 2019). That reported data get “better” with oversight suggests purposive CDCR misreporting, although the data do not allow us to determine whether it originates with guard reports or with those compiling aggregate prison data.

The contributions of this work are not limited to scholars of criminology and criminal justice. Social scientists with a variety of substantive interests are often forced to rely on state-supplied data. Although scholars have adduced that governments—particularly nondemocratic ones—release data on state-citizen interactions strategically (Hollyer et al., 2018), few have explored the content of such data for signs of manipulation by the actors responsible for their generation and release. How can researchers be confident of the veracity of their data? What constitutes proper use of data pertaining to coercion that is obtained from the coercive organization itself? The forensics approach employed here can serve as a template that researchers use to more deeply and directly engage these important questions.

Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/psrm.2021.40.

Acknowledgements

Thanks to the California Department of Corrections and Rehabilitation (CDCR) for providing publicly available use-of-force data, and to several anonymous reviewers and the Editor for helpful comments on previous drafts.

Footnotes

1 Not all data are expected to conform to Benford's Law. CDCR data meet the conformity requirements of Nigrini (2011).

2 For explanations of Benford's Law, see Miller (2015).

3 Mebane (2013) shows that strategic voting can make unproblematic election returns data violate Benford's Law.

4 For descriptives, please refer to our Supplemental Appendix.

5 For more on statistical and practical hypotheses, see Cleary and Thibodeau (2015, p. 203).

6 Note that the distribution of CDCR institution types does not vary over time.

7 In our Supplemental Appendix, we show the robustness of our results to Bayesian-influenced techniques (Sellke et al., 2001; Pericchi and Torres, 2011) and accounting for the multiple comparisons problem (Benjamini and Hochberg, 1995).

References

Armstrong, A (2014) No prisoner left behind: enhancing public transparency of penal institutions. Stanford Law and Policy Review 25, 435–478.
Benford, F (1938) The law of anomalous numbers. Proceedings of the American Philosophical Society, pp. 551–572.
Benjamini, Y and Hochberg, Y (1995) Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society. Series B (Methodological) 57, 289–300.
Boin, RA and Van Duin, MJ (1995) Prison riots as organizational failures: a managerial perspective. The Prison Journal 75, 357–379.
Cho, WKT and Gaines, BJ (2007) Breaking the (Benford) law: statistical fraud detection in campaign finance. The American Statistician 61, 218–223.
Cleary, RJ and Thibodeau, JC (2015) Benford's law as a bridge between statistics and accounting. In Miller, SJ (ed.), Benford's Law. Princeton, NJ: Princeton University Press, p. 203.
Cook, SJ and Fortunato, D (2019) Police (agency) problem: police use of deadly force. Working paper.
Davenport, C (2009) Media Bias, Perspective, and State Repression: The Black Panther Party. New York: Cambridge University Press.
Finch, BK, Beck, A, Brian Burghart, D, Johnson, R, Klinger, D and Thomas, K (2019) Using crowd-sourced data to explore police-related deaths in the United States (2000–2017): the case of Fatal Encounters. Open Health Data 6, 1–8.
Gartner, R and Macmillan, R (1995) The effect of victim-offender relationship on reporting crimes of violence against women. Canadian Journal of Criminology 37, 393.
Giles, DE (2007) Benford's law and naturally occurring prices in certain eBay auctions. Applied Economics Letters 14, 157–161.
Hicken, A and Mebane, WR Jr (2015) A guide to election forensics. Mimeograph. University of Michigan.
Hill, TP (1995) A statistical derivation of the significant-digit law. Statistical Science 10, 354–363.
Hollyer, JR, Rosendorff, BP and Vreeland, JR (2018) Information, Democracy, and Autocracy: Economic Transparency and Political Instability. New York: Cambridge University Press.
Ley, E (1996) On the peculiar distribution of the US stock indexes' digits. The American Statistician 50, 311–313.
Maltz, M (2007) Missing UCR data and divergence of the NCVS and UCR trends. In Addington, L and Lynch, J (eds), Understanding Crime Statistics: Revisiting the Divergence of the NCVS and UCR. New York: Cambridge University Press, pp. 269–294.
Marquardt, JW (1986) Prison guards and the use of physical coercion as a mechanism of prisoner control. Criminology 24, 347–366.
McDowall, D and Loftin, C (2009) Do US city crime rates follow a national trend? The influence of nationwide conditions on local crime patterns. Journal of Quantitative Criminology 25, 307–324.
Mebane, W Jr (2013) Election Forensics. Working book manuscript.
Miller, SJ (2015) Benford's Law. Princeton, NJ: Princeton University Press.
Morrow, J (2014) Benford's Law, families of distributions and a test basis. Working paper.
Nigrini, MJ (2011) Data-Driven Forensic Investigation: Using Microsoft Access and Excel to Detect Fraud and Data Irregularities. Hoboken, NJ: John Wiley & Sons.
Pericchi, L and Torres, D (2011) Quick anomaly detection by the Newcomb-Benford law, with applications to electoral processes data from the USA, Puerto Rico, and Venezuela. Statistical Science 26, 502–516.
Rauch, B, Göttsche, M, Brähler, G and Engel, S (2015) Measuring the quality of European statistics. In Miller, SJ (ed.), Benford's Law. Princeton, NJ: Princeton University Press, p. 262.
Robbins, T (2016) ‘I was terrified’: inmates say they paid a brutal price for a guard's injury. The New York Times, November 15. Available at https://www.nytimes.com/2016/11/15/nyregion/new-york-prison-inmates-guards-beatings.html (accessed July 23, 2019).
Seidman, D and Couzens, M (1974) Getting the crime rate down: political pressure and crime reporting. Law & Society Review 8, 457–493.
Sellke, T, Bayarri, MJ and Berger, JO (2001) Calibration of p values for testing precise null hypotheses. The American Statistician 55, 62–71.
Stanton (2018) Secret prison report alleges poor treatment of inmates, misleading reports on care. The Sacramento Bee, October 31. Available at https://www.sacbee.com/news/local/crime/article230015364.html#storylink=cpy (accessed July 1, 2021).
Varian, HR (1972) Benford's law. The American Statistician 26, 65.