In this chapter, we explore several important statistical models. Statistical models allow us to perform statistical inference—the process of selecting models and making predictions about the underlying distributions—based on the data we have. Many approaches exist, from the stochastic block model and its generalizations to the edge observer model, the exponential random graph model, and the graphical LASSO. As we show in this chapter, such models help us understand our data, but using them may at times be challenging, either computationally or mathematically. For example, the model must often be specified with great care, lest it seize on a drastically unexpected network property or fall victim to degeneracy. Or the model must make implausibly strong assumptions, such as conditionally independent edges, leading us to question its applicability to our problem. Or even our data may be too large for the inference method to handle efficiently. As we discuss, the search continues for better, more tractable statistical models and more efficient, more accurate inference algorithms for network data.
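For readers unfamiliar with the conditional-independence assumption mentioned above, the following minimal sketch (in Python, with illustrative block sizes and edge probabilities that are not taken from the chapter) generates a network from a stochastic block model, in which every edge is an independent Bernoulli draw given the block labels:

```python
import numpy as np

# Sketch of a stochastic block model (SBM) with conditionally independent
# edges: given the block labels z, each edge (i, j) is drawn independently
# with probability B[z_i, z_j]. All parameters here are illustrative.
rng = np.random.default_rng(0)

n = 60
z = np.repeat([0, 1, 2], n // 3)              # three equal-sized blocks
B = np.array([[0.30, 0.02, 0.02],             # dense within blocks,
              [0.02, 0.30, 0.02],             # sparse between blocks
              [0.02, 0.02, 0.30]])

P = B[z][:, z]                                # n x n edge-probability matrix
upper = np.triu(rng.random((n, n)) < P, k=1)  # independent draws, upper triangle
A = (upper | upper.T).astype(int)             # symmetric adjacency, no self-loops

print(A.sum() // 2, "edges among", n, "nodes")
```

Inference reverses this generative step: given only the adjacency matrix, the task is to recover the block labels and the block-level probabilities.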
This chapter explains weighting in a manner that allows us to appreciate both the power and vulnerability of the technique and, by extension, other techniques that rely on similar assumptions. Once we understand how weighting works, we will better understand when it works. This chapter opens by discussing weighting in general terms. The subsequent sections get more granular. Sections 3.2 and 3.3 cover widely used weighting techniques: cell-weighting and raking. Section 3.4 covers variable selection, a topic that may well be more important than weighting technique. Section 3.5 covers the effect of weighting on precision, a topic that frequently gets lost in reporting on polls. This chapter mixes intuitive and somewhat technical descriptions of weighting. The technical details in Sections 3.2 and 3.3 can be skimmed by readers focused on the big picture of how weighting works.
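As a rough illustration of the raking technique covered in Section 3.3, the sketch below applies iterative proportional fitting to two hypothetical weighting variables; the sample counts, category labels, and population margins are invented for illustration and are not taken from the chapter:

```python
import numpy as np

# Minimal raking (iterative proportional fitting) sketch for two
# categorical weighting variables; all numbers are made up.
sample = np.array([[200.0, 300.0],   # rows: sex (M, F)
                   [250.0, 250.0]])  # cols: age (young, old)
row_targets = np.array([480.0, 520.0])  # population counts by sex
col_targets = np.array([550.0, 450.0])  # population counts by age

weights = np.ones_like(sample)
for _ in range(100):
    # scale rows so the weighted sample matches the sex margins
    row_factors = row_targets / (weights * sample).sum(axis=1)
    weights *= row_factors[:, None]
    # scale columns so the weighted sample matches the age margins
    col_factors = col_targets / (weights * sample).sum(axis=0)
    weights *= col_factors[None, :]
    if np.allclose((weights * sample).sum(axis=1), row_targets, rtol=1e-8):
        break

print(weights)  # cell-level adjustment factors applied to respondents
```

Cell-weighting, by contrast, adjusts each joint cell directly to its population share, which requires knowing the full joint distribution of the weighting variables rather than only their margins.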
Bovine tuberculosis (bTB) is prevalent among livestock and wildlife in many countries including New Zealand (NZ), a country which aims to eradicate bTB by 2055. This study evaluates predictions related to the numbers of livestock herds with bTB in NZ from 2012 to 2021 inclusive using both statistical and mechanistic (causal) modelling. Additionally, this study made predictions for the numbers of infected herds between 2022 and 2059. This study introduces a new graphical method representing the causal criteria of strength of association, such as R², and the consistency of predictions, such as mean squared error. Mechanistic modelling predictions were, on average, more frequently (3 of 4) unbiased than statistical modelling predictions (1 of 4). Additionally, power model predictions were, on average, more frequently (3 of 4) unbiased than exponential model predictions (1 of 4). The mechanistic power model, along with annual updating, had the highest R² and the lowest mean squared error of predictions. It also exhibited the closest approximation to unbiased predictions. Notably, significantly biased predictions were all underestimates. Based on the mechanistic power model, the biological eradication of bTB from New Zealand is predicted to occur after 2055. Disease eradication planning will benefit from annual updating of future predictions.
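To make the comparison between model forms concrete, the sketch below fits generic power and exponential declines to synthetic annual herd counts and compares their mean squared error and R²; the data and functional forms are illustrative assumptions, not the study's actual specifications or surveillance data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical yearly counts of infected herds (synthetic, for illustration).
years = np.arange(2012, 2022)
t = years - 2011                      # years since a nominal start point
herds = np.array([80, 66, 57, 50, 44, 40, 36, 33, 31, 29], dtype=float)

def power_model(t, a, b):
    return a * t ** (-b)              # generic power-law decline

def exp_model(t, a, k):
    return a * np.exp(-k * t)         # generic exponential decline

for name, f, p0 in [("power", power_model, (80.0, 0.5)),
                    ("exponential", exp_model, (80.0, 0.1))]:
    params, _ = curve_fit(f, t, herds, p0=p0)
    pred = f(t, *params)
    mse = np.mean((herds - pred) ** 2)
    r2 = 1 - np.sum((herds - pred) ** 2) / np.sum((herds - herds.mean()) ** 2)
    print(f"{name}: MSE={mse:.2f}, R^2={r2:.3f}")
```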
This paper proposes a methodology to define and quantify the precision uncertainties in aerothermodynamic cycle model comparisons. The total uncertainty depends on biases and random errors commonly found in such comparisons. These biases and random errors are classified and discussed based on observations found in the literature. The biases account for effects such as differences in model inputs, the configurations being simulated, and thermodynamic packages. Random errors account for the effects of the physics modeling and numerical methods used in cycle models. The methodology is applied to a comparison of two cycle models, designated as the model subject to comparison and the reference model, respectively. The former is the so-called Aerothermodynamic Generic Cycle Model developed in-house at the Laboratory of Applied Research in Active Control, Avionics and AeroServoElasticity (LARCASE); the latter is an equivalent model programmed in the Numerical Propulsion System Simulation (NPSS). The proposed methodology is intended to quantify the effects of biases and random errors on different cycle parameters of interest, such as thrust and specific fuel consumption, among others. Each bias and random error is determined by deliberately isolating it from the effects of the other biases and random errors. The methodology presented in this paper can be extended to other cycle model comparisons. Moreover, the uncertainty figures derived in this work are recommended for use in other model comparisons when no better reference is available.
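As a generic illustration of how bias and random-error estimates might be combined into a total uncertainty for a parameter such as thrust, the sketch below uses a root-sum-square combination with invented percentages; this is an assumed, simplified formulation, not the paper's exact methodology:

```python
import math

# Generic sketch: combine elemental bias estimates (root-sum-square) with a
# coverage-factor-scaled random scatter into one total uncertainty figure.
# All numbers are illustrative assumptions.
def total_uncertainty(bias_terms, random_sigma, coverage_k: float = 2.0) -> float:
    systematic = math.sqrt(sum(b ** 2 for b in bias_terms))
    return math.sqrt(systematic ** 2 + (coverage_k * random_sigma) ** 2)

# e.g. thrust comparison: input bias 0.3 %, thermodynamic-package bias 0.2 %,
# configuration bias 0.1 %, random (physics/numerics) scatter 0.15 % (1-sigma)
u = total_uncertainty([0.3, 0.2, 0.1], random_sigma=0.15)
print(f"total uncertainty ~ {u:.2f} % of the reference thrust value")
```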
The Virtual Environment for Radiotherapy Training (VERT) is a simulator used to train radiotherapy students cost-effectively with limited risk. VERT is available as a two-dimensional (2D) and a more costly three-dimensional (3D) stereoscopic resource. This study aimed to identify the specific benefits afforded by stereoscopic visualisation for student training in skin apposition techniques.
Method:
Eight participants completed six electron skin apposition setups in both 2D and 3D views of VERT using a 7 cm × 10 cm rectangular applicator set up to 100 cm focus skin distance (FSD). The standard deviation (SD) of the mean distance from each corner of the applicator to the virtual patient’s skin surface [which we define as apposition precision (AP)] was measured along with the time taken to achieve each setup. Participants then completed a four-question Likert-style questionnaire concerning their preferences and perceptions of the 2D and 3D views.
Results:
There was little difference in mean setup times with 218·43 seconds for 2D and 211·29 seconds for 3D (3·3% difference). There was a similarly small difference in AP with a mean SD of 5·61 mm for 2D and 5·79 mm for 3D (3·2% difference) between views. The questionnaire results showed no preference for the 3D view over the 2D.
Conclusion:
These findings suggest that the 2D and 3D views result in similar setup times and precision, with no user preference for the 3D view. It is recommended that the 2D version of VERT be utilised in similar situations, with a reduced logistical and financial impact.
Determination of sample size (the number of replications) is a key step in the design of an observational study or randomized experiment. Statistical procedures for this purpose are readily available. Their treatment in textbooks is often somewhat marginal, however, and frequently the focus is on just one particular method of inference (significance test, confidence interval). Here, we provide a unified review of approaches and explain their close interrelationships, emphasizing that all approaches rely on the standard error of the quantity of interest, most often a pairwise difference of two means. The focus is on methods that are easy to compute, even without a computer. Our main recommendation based on standard errors is summarized as what we call the 1-2-3 rule for a difference of two treatment means.
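As a minimal illustration of the standard-error reasoning the review builds on (not the authors' 1-2-3 rule itself), the sketch below computes the standard error of a difference of two treatment means and the number of replicates needed to reach a target standard error; the residual standard deviation and target are assumed values:

```python
import math

# Standard error of a difference of two treatment means with n replicates
# per treatment and a common residual standard deviation sigma.
def se_difference(sigma: float, n: int) -> float:
    return sigma * math.sqrt(2.0 / n)

# Smallest n per treatment so that the SE of the difference does not
# exceed the chosen target.
def n_for_target_se(sigma: float, target_se: float) -> int:
    return math.ceil(2.0 * sigma ** 2 / target_se ** 2)

sigma = 4.0    # assumed residual SD (illustrative)
target = 2.0   # desired SE of the treatment difference (illustrative)
n = n_for_target_se(sigma, target)
print(n, se_difference(sigma, n))   # 8 replicates give SE = 2.0
```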
We define a linguistic distribution as the range of values for a quantitative linguistic variable across the texts in a corpus. An accurate parameter estimate means that the measures based on the corpus are close to the actual values of a parameter in the domain. Precision refers to whether or not the corpus is large enough to reliably capture the distribution of a particular linguistic feature. Distribution considerations relate to the question of how many texts are needed. The answer will vary depending on the nature of the linguistic variable of interest. Linguistic variables can be categorized broadly as linguistic tokens (rates of occurrence for a feature) and linguistic types (the number of different items that occur). The distribution considerations for linguistic tokens and linguistic types are fundamentally different. Corpora can be “undersampled” or “oversampled” – neither of which is desirable. Statistical measures can be used to evaluate corpus size relative to research goals – one set of measures enables researchers to determine the required sample size for a new corpus, while another provides a means to determine precision for an existing corpus. The adage “bigger is better” aptly captures our best recommendation for studies of words and other linguistic types.
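As a generic illustration of the sample-size reasoning for linguistic tokens (a sketch with assumed numbers, not the chapter's own procedure), the following computes how many texts are needed to estimate a mean rate of occurrence to within a chosen tolerable error:

```python
import math

# Required number of texts to estimate a mean rate of occurrence within a
# tolerable error d at ~95% confidence: n = (1.96 * sd / d) ** 2, where sd
# is the between-text standard deviation of the feature's rate.
def texts_needed(sd: float, tolerable_error: float, z: float = 1.96) -> int:
    return math.ceil((z * sd / tolerable_error) ** 2)

# e.g. nouns per 1,000 words: assumed between-text SD of 30,
# target precision of +/- 5 occurrences per 1,000 words
print(texts_needed(sd=30.0, tolerable_error=5.0))   # about 139 texts
```

Estimating linguistic types (e.g., vocabulary size) does not stabilise in the same way as token rates, which is why the chapter treats the two cases as fundamentally different.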
Location and navigation services based on global navigation satellite systems (GNSS) are needed for real-time high-precision positioning applications in relevant economic sectors, such as precision agriculture, transport, civil engineering or mapping. The number of real-time navigation users of GNSS networks has increased significantly around the world since the 1990s, and usage has exceeded initial expectations. Therefore, if the evolution of GNSS network users is monitored, the dynamics of market segments can be studied. Testing this hypothesis requires the treatment of large volumes of navigation data over several years and the continuous monitoring of customers. This paper focuses on managing massive numbers of GNSS user connections efficiently, in order to obtain analyses and statistics. A big data architecture and data analyses based on data mining algorithms have been implemented as the best way to approach the hypothesis. The results demonstrate the dynamics of users in different market segments and the increasing demand over the years; in particular, conclusions are obtained about trends, year-on-year correlation and the recovery of business volume after periods of economic crisis.
This chapter presents a general overview of sensor characterization from a system perspective, without any reference to a specific implementation. The systems are defined on the basis of input and output signal descriptions, and the overall architecture is discussed, showing how the information is transduced, limited, and corrupted by errors. One of the main points of this chapter is the characterization of the error model and how it can be used to evaluate the uncertainty of the measurement, along with its relationship with the resolution, precision, and accuracy of the overall system. Finally, the quantization process, which is at the base of any digital sensor system, is illustrated, interpreted, and included in the error model.
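The following minimal sketch illustrates the kind of error budgeting described here, using the standard uniform-quantization noise model and a root-sum-square combination with an assumed input-referred noise level; the numbers are illustrative and not drawn from the text:

```python
import math

# An N-bit quantizer over a full-scale range FS has step q = FS / 2**N and,
# for a busy input, an approximately uniform quantization error with
# standard deviation q / sqrt(12).
def quantization_sigma(full_scale: float, n_bits: int) -> float:
    q = full_scale / (2 ** n_bits)
    return q / math.sqrt(12.0)

# Independent error sources combine in root-sum-square fashion.
def total_input_referred_sigma(noise_sigma: float, quant_sigma: float) -> float:
    return math.sqrt(noise_sigma ** 2 + quant_sigma ** 2)

sigma_q = quantization_sigma(full_scale=3.3, n_bits=12)   # ~233 uV for a 3.3 V, 12-bit system
sigma_tot = total_input_referred_sigma(noise_sigma=150e-6, quant_sigma=sigma_q)
print(f"quantization sigma = {sigma_q*1e6:.0f} uV, total sigma = {sigma_tot*1e6:.0f} uV")
```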
Get up to speed with the fundamentals of electronic sensor design with this comprehensive guide, and discover powerful techniques to reduce the overall design timeline for your specific applications. Includes a step-by-step introduction to a generalized information-centric approach for designing electronic sensors, demonstrating universally applicable practical approaches to speed up the design process. Features detailed coverage of all the tools necessary for effective characterization and organization of the design process, improving overall process efficiency. Provides a coherent and rigorous theoretical framework for understanding the fundamentals of sensor design, to encourage an intuitive understanding of sensor design requirements. Emphasising an integrated interdisciplinary approach throughout, this is an essential tool for professional engineers and graduate students keen to improve their understanding of cutting-edge electronic sensor design.
Since the end of the Cold War the United States and other major powers have wielded their air forces against much weaker state and non-state actors. In this age of primacy, air wars have been contests between unequals and characterized by asymmetries of power, interest, and technology. This volume examines ten contemporary wars where air power played a major and at times decisive role. Its chapters explore the evolving use of unmanned aircraft against global terrorist organizations as well as more conventional air conflicts in Bosnia, Kosovo, Afghanistan, Iraq, Lebanon, Libya, Yemen, Syria, and against ISIS. Air superiority could be assumed in this unique and brief period in which great power competition was largely absent from the international system. However, the reliable and unchallenged employment of a spectrum of manned and unmanned technologies permitted in the age of primacy may not prove effective in future conflicts.
The Saudi-led intervention in Yemen is a valuable case study in the coercive use of air power. Saudi Arabia’s bombing campaign demonstrates the danger of employing a punishment approach against a subnational actor in a multi-sided internal conflict. Strategies of collective punishment, blockade, and decapitation have all malfunctioned against a stubborn and resilient Houthi adversary. The early audit from Yemen endorses a denial strategy, supports the growing orthodoxy that air attack is most effectively applied in support of ground forces, and offers insight on the relative utility of interdiction and close air support for that purpose. The Saudi-led coalition’s performance also underscores how difficult it is to achieve positive objectives with proxy warfare, regardless of air support. This chapter dissects the campaign, assesses its effectiveness, and draws lessons about air power’s ability to influence the outcome of similar complex civil war scenarios elsewhere.
Moscow’s air power success in Syria presents an opportunity to assess Russian inter- and intra-war adaptation in kinetic counterinsurgency. New technologies and tactics have enhanced the Russian Aerospace Force’s battlefield lethality and resilience but have not yet triggered a fundamental transition in operating concept. Russia’s air force has yet to actualize a reconnaissance-strike regime or advanced air-ground integration. Instead, situational and strategic factors appear to be more powerful contributors to its superior performance in the Syrian conflict. The way in which Russia has chosen to leverage its improvements in accurate munitions delivery, moreover, highlights key differences between its warfighting philosophy and that embraced by major Western powers. The resultant findings provide insight into Moscow’s coercive campaign logic, force-planning imperatives, and the likelihood that it might re-export the Syria model elsewhere.
Atom probe tomography (APT) is a technique that has expanded significantly in terms of adoption, dataset size, and quality during the past 15 years. The sophistication used to ensure ultimate analysis precision has not kept pace. The earliest APT datasets were small enough that deadtime and background considerations for processing mass spectrum peaks were secondary. Today, datasets can reach beyond a billion atoms, so high-precision data processing procedures and corrections need to be considered to attain reliable accuracy at the parts-per-million level. This paper considers options for mass spectrum ranging, deadtime corrections, and error propagation as applied to an extrinsic-silicon standard specimen to attain agreement for silicon isotopic fraction measurements across multiple instruments, instrument types, and acquisition conditions. Precision consistent with that predicted by counting statistics is attained, showing agreement in silicon isotope fraction measurements across multiple instruments, instrument platforms, and analysis conditions.
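As a back-of-the-envelope illustration of the counting-statistics limit referred to above (a sketch with assumed counts, not the paper's deadtime- and background-corrected procedure):

```python
import math

# If N_total ranged counts yield N_i counts of one isotope, the isotopic
# fraction p = N_i / N_total has a binomial standard error of
# sqrt(p * (1 - p) / N_total).
def isotope_fraction_with_se(n_isotope: int, n_total: int):
    p = n_isotope / n_total
    se = math.sqrt(p * (1.0 - p) / n_total)
    return p, se

# Illustrative numbers: ~1e9 ranged silicon atoms, 29Si natural abundance ~4.685 %.
p, se = isotope_fraction_with_se(46_850_000, 1_000_000_000)
print(f"fraction = {p:.6f} +/- {se:.6f}  ({se / p * 1e6:.0f} ppm relative)")
```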
As an empirical science, the study of animal behaviour involves measurement. When an animal engages in a series of actions, such as exploring or catching prey, the problem becomes that of identifying suitable components from that stream of action to use as markers suitable to score. The markers selected reflect the observer’s hypothesis of the organisation of the behaviour. Unfortunately, in most cases, researchers only provide heuristic descriptions of what they measure. To make the study of animal behaviour more scientific, the hypotheses underlying the decision of what to measure should be made explicit so as to allow them to be tested. Using hypothesis testing as a guiding framework, several principles that have been shown to be useful in identifying behavioural organisation are presented, providing a starting point in deciding what markers to select for measurement.
Poor-quality measurements are likely to yield meaningless or unrepeatable findings. High-quality measurements are characterised by validity and reliability. Validity relates to whether the right quantity is measured and is assessed by comparing a metric with a gold-standard metric. Reliability relates to whether measurements are repeatable and is assessed by comparing repeated measurements. The accuracy and precision with which measurements are made affect both validity and reliability. A major source of unreliability in behavioural data comes from the involvement of human observers in the measurement process. Where trade-offs are necessary, it is better to measure the right quantity somewhat unreliably than to measure the wrong quantity very reliably. Floor and ceiling effects can make measurements useless for answering a question, even if they are valid and reliable. Outlying data points should only be removed if they can be proved to be biologically impossible or to result from errors.
This article explores how Qin Dynasty bureaucrats attained accuracy and precision in producing and designing measuring containers. One of the salient achievements of the Qin empire was the so-called unification of measurement systems. Yet measurement systems and the technological methods employed to achieve accuracy and precision in ancient China have scarcely been explored in English-language scholarship. I will examine the material features of the containers and reconstruct the production methods with which the clay models, molds, and cores of the containers were prepared before casting. I also investigate the inscriptions on the containers to determine whether they were cast or engraved. In so doing, I supply the field of Qin history with additional solid evidence about how accuracy and precision were defined in the Qin empire.
The use of local knowledge observations to generate empirical wildlife resource exploitation data in data-poor, capacity-limited settings is increasing. Yet, there are few studies quantitatively examining their relationship with those made by researchers or natural resource managers. We present a case study comparing intra-annual patterns in effort and mobulid ray (Mobula spp.) catches derived from local knowledge and fisheries landings data at identical spatiotemporal scales in Zanzibar (Tanzania). The Bland–Altman approach to method comparison was used to quantify agreement, bias and precision between methods. Observations from the local knowledge of fishers and those led by researchers showed significant evidence of agreement, demonstrating the potential for local knowledge to act as a proxy, or complement, for researcher-led methods in assessing intra-annual patterns of wildlife resource exploitation. However, there was evidence of bias and low precision between methods, undermining any assumptions of equivalency. Our results underline the importance of considering bias and precision between methods as opposed to simply assessing agreement, as is commonplace in the literature. This case study demonstrates the value of rigorous method comparison in informing the appropriate use of outputs from different knowledge sources, thus facilitating the sustainable management of wildlife resources and the livelihoods of those reliant upon them.
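For readers unfamiliar with the Bland–Altman approach, the sketch below computes its core quantities, the bias as the mean paired difference and the 95% limits of agreement, from invented catch numbers that stand in for the paired local-knowledge and landings observations (they are not the Zanzibar data):

```python
import numpy as np

# Minimal Bland-Altman sketch: bias is the mean of the paired differences and
# the 95% limits of agreement are bias +/- 1.96 times their standard deviation.
fisher_reports = np.array([12, 8, 15, 20, 5, 9, 14, 11, 7, 16], dtype=float)
landings_data  = np.array([10, 9, 13, 22, 4, 8, 15, 10, 6, 18], dtype=float)

diffs = fisher_reports - landings_data

bias = diffs.mean()                      # systematic offset between methods
sd = diffs.std(ddof=1)                   # spread of disagreement (precision)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)
print(f"bias = {bias:.2f}, 95% limits of agreement = ({loa[0]:.2f}, {loa[1]:.2f})")
```

A bias near zero with wide limits of agreement corresponds to the pattern reported here: overall agreement, but low precision between the two methods.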
We undertook a strengths, weaknesses, opportunities, and threats (SWOT) analysis of Northern Hemisphere tree-ring datasets included in IntCal20 in order to evaluate their strategic fit with the demands of archaeological users. Case studies on wiggle-matching single tree rings from timbers in historic buildings and Bayesian modeling of series of results on archaeological samples from Neolithic long barrows in central-southern England exemplify the archaeological implications that arise when using IntCal20. The SWOT analysis provides an opportunity to think strategically about future radiocarbon (14C) calibration so as to maximize the utility of 14C dating in archaeology and safeguard its reputation in the discipline.