Edited by R. A. Bailey, University of St Andrews, Scotland; Peter J. Cameron, University of St Andrews, Scotland; Yaokun Wu, Shanghai Jiao Tong University, China
Eigenvalues of the Laplacian matrix of a graph have been widely used in studying connectivity and expansion properties of networks, and also in analyzing random walks on a graph. Independently, statisticians introduced various optimality criteria in experimental design, the goal being to obtain more accurate estimates of quantities of interest in an experiment. It turns out that the most popular of these optimality criteria for block designs are determined by the Laplacian eigenvalues of the concurrence graph, or of the Levi graph, of the design. The most important optimality criteria, called A (average), D (determinant) and E (extreme), are related to the conductance of the graph as an electrical network, the number of spanning trees, and the isoperimetric properties of the graphs, respectively. The number of spanning trees is also an evaluation of the Tutte polynomial of the graph, and is the subject of the Merino–Welsh conjecture relating it to acyclic and totally cyclic orientations, of interest in their own right. This chapter ties these ideas together, building on the work in [4] and [5].
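As a concrete illustration (a minimal numpy sketch, not code from the chapter), all three quantities can be read off the nontrivial Laplacian eigenvalues of a connected concurrence graph: the A-criterion depends on their harmonic mean, the D-criterion on their product, which by the matrix-tree theorem equals the number of vertices times the number of spanning trees, and the E-criterion on the smallest of them.

```python
import numpy as np

def laplacian_criteria(A):
    """Illustrative only: values underlying the A-, D- and E-criteria, computed
    from the nontrivial Laplacian eigenvalues of a connected graph with
    adjacency matrix A."""
    L = np.diag(A.sum(axis=1)) - A             # graph Laplacian
    eig = np.sort(np.linalg.eigvalsh(L))       # eigenvalues in increasing order
    nontrivial = eig[1:]                       # drop the single zero eigenvalue
    a_val = len(nontrivial) / np.sum(1.0 / nontrivial)  # harmonic mean (A)
    d_val = float(np.prod(nontrivial))         # = n * number of spanning trees (D)
    e_val = nontrivial[0]                      # smallest nontrivial eigenvalue (E)
    return a_val, d_val, e_val

# Example: the 4-cycle has Laplacian eigenvalues 0, 2, 2, 4 and 4 spanning trees.
C4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]], dtype=float)
print(laplacian_criteria(C4))   # (2.4, 16.0, 2.0) up to floating-point error
```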
It is known that without synchronization via a global clock one cannot obtain common knowledge by communication. Moreover, it is folklore that without communicating higher-level information one cannot obtain arbitrary higher-order shared knowledge. Here, we make this result precise in the setting of gossip where agents make one-to-one telephone calls to share secrets: we prove that “everyone knows that everyone knows that everyone knows all secrets” is unsatisfiable in a logic of knowledge for gossiping. We also prove that, given n agents, $2n-3$ calls are optimal to reach “someone knows that everyone knows all secrets” and that $n - 2 + \binom{n}{2}$ calls are optimal to reach “everyone knows that everyone knows all secrets.”
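As a small illustration of the setting (a first-order toy model only; the paper's bounds concern higher-order knowledge, which is not represented here), each call merges the secret sets of the two callers, and the classical schedule below informs everyone of all secrets in $2n-4$ calls for $n \ge 4$:

```python
def simulate(n, calls):
    """First-order gossip only: agent i starts knowing just secret i, and a
    call merges the secret sets of the two callers.  Returns how many agents
    know all n secrets after the given call sequence."""
    knows = [{i} for i in range(n)]
    for i, j in calls:
        union = knows[i] | knows[j]
        knows[i], knows[j] = union, set(union)
    return sum(len(k) == n for k in knows)

# Classical 2n - 4 schedule for n >= 4: outsiders call hub agent 0, four calls
# are made within the hub {0, 1, 2, 3}, then hub agent 1 calls the outsiders back.
n = 6
calls = [(0, k) for k in range(4, n)]          # n - 4 calls in
calls += [(0, 1), (2, 3), (0, 2), (1, 3)]      # 4 hub calls
calls += [(1, k) for k in range(4, n)]         # n - 4 calls out
print(len(calls), simulate(n, calls))          # 8 calls (= 2n - 4), 6 informed agents
```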
The amount of information is a crucial determinant of decision outcomes. But how much information one should collect before arriving at a decision depends on a cost–benefit trade-off: is the expected benefit of the decision accuracy gained from additional information higher than the cost of that information? To investigate this trade-off with temporal information costs, we developed a speed–accuracy trade-off paradigm with sample-based decisions, in which the total payoff was the product of the average payoff per decision and the number of decisions completed in a restricted period. Increasing the sample size n per decision served to increase the accuracy of choices, but also to decrease the number of completed choices. Yet, whereas the number of completed choices decreases linearly with increasing n, accuracy increases in a clearly sublinear fashion. As a consequence, the sample-based choice task calls for more weight to be given to speed than to accuracy. However, overly conservative sampling strategies prevented almost all participants from exploiting the speed advantage, despite various guiding interventions. Even when the task was enriched by the social aspect of a teammate or rival who demonstrated the optimal trade-off, participants remained too focussed on accuracy. We also investigated the cost–benefit trade-off with financial information costs, for which participants’ performance was less biased. We propose that this is related to how evaluable the information’s costs were relative to its benefits. Issues of adaptivity, in contrast with optimality, are addressed in a final discussion.
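A toy calculation (with made-up parameters, not the paper's payoff scheme) shows why the product structure favours speed: the number of completed choices falls roughly linearly in the sample size n, while accuracy saturates, so the payoff-maximising n is small.

```python
from math import erf, sqrt

def accuracy(n, d=1.0):
    """Probability of choosing the better of two options from n samples, under
    a normal sampling model with a hypothetical standardised effect size d."""
    return 0.5 * (1.0 + erf(d * sqrt(n) / sqrt(2.0)))

def total_payoff(n, total_time=600.0, time_per_sample=1.0, overhead=10.0, reward=1.0):
    # illustrative parameters: time budget, cost per sample, per-choice overhead
    completed = total_time / (overhead + n * time_per_sample)  # ~linear decrease in n
    return accuracy(n) * reward * completed

best_n = max(range(1, 101), key=total_payoff)
print(best_n, round(total_payoff(best_n), 1))   # the optimum sits at a very small n
```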
Recent work has derived the optimal policy for two-alternative value-based decisions, in which decision-makers compare the subjective expected reward of two alternatives. Under specific task assumptions, such as linear utility, linear cost of time and constant processing noise, the optimal policy is implemented by a diffusion process in which parallel decision thresholds collapse over time as a function of prior knowledge about average reward across trials. This policy predicts that the decision dynamics of each trial are dominated by the difference in value between alternatives and are insensitive to the magnitude of the alternatives (i.e., their summed values). This prediction clashes with empirical evidence showing magnitude-sensitivity even in the case of equal alternatives, and with ecologically plausible accounts of decision making. Previous work has shown that relaxing assumptions about linear utility or linear time cost can give rise to optimal magnitude-sensitive policies. Here we question the assumption of constant processing noise, in favour of input-dependent noise. The neurally plausible assumption of input-dependent noise during evidence accumulation has received strong support from previous experimental and modelling work. We show that including input-dependent noise in the evidence accumulation process results in a magnitude-sensitive optimal policy for value-based decision-making, even in the case of a linear utility function and a linear cost of time, for both single (i.e., isolated) choices and sequences of choices in which decision-makers maximise reward rate. Compared to explanations that rely on non-linear utility functions and/or non-linear cost of time, our proposed account of magnitude-sensitive optimal decision-making provides a parsimonious explanation that bridges the gap between various task assumptions and between various types of decision making.
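The qualitative effect can be reproduced with a toy accumulator (illustrative parameters and a fixed threshold, so this is not the optimal policy itself): when the noise variance scales with the summed value of the alternatives, equal-value trials with larger magnitudes terminate faster.

```python
import random

def decision_time(v_a, v_b, threshold=1.0, dt=0.001, noise_scale=0.5):
    """Illustrative sketch: accumulate the value difference with input-dependent
    noise until a fixed threshold is crossed; returns the decision time."""
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        sigma = noise_scale * (v_a + v_b) ** 0.5      # noise grows with summed value
        x += (v_a - v_b) * dt + sigma * random.gauss(0.0, dt ** 0.5)
        t += dt
    return t

random.seed(0)
for va, vb in [(1.0, 1.0), (4.0, 4.0)]:               # equal values, different magnitudes
    mean_t = sum(decision_time(va, vb) for _ in range(200)) / 200
    print(va, vb, round(mean_t, 2))                   # larger magnitude -> shorter times
```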
In motor lotteries the probability of success is inherent in a person’s ability to make a speeded pointing movement. By contrast, in traditional economic lotteries, the probability of success is explicitly stated. Decision making with economic lotteries has revealed many violations of rational decision making models. However, with motor lotteries people’s performance is often near optimal, and is well described by statistical decision theory. We report the results of an experiment testing whether motor planning decisions exhibit the attraction effect, a well-known axiomatic violation of some rational decision models. The effect occurs when changing the composition of a choice set alters preferences between its members. We provide the first demonstration that people do exhibit the attraction effect when choosing between motor lotteries. We also found that people exhibited a similar-sized attraction effect in motor and traditional economic paradigms. People’s near-optimal performance with motor lotteries is characterized by the efficiency of their decisions. In attraction effect experiments performance is instead characterized by the violation of an axiom. We discuss the extent to which axiomatic and efficiency measures can provide insight into the rationality of decision making.
Bayesian statistics offers a normative description of how a person should update their original beliefs (i.e., their priors) in light of new evidence (i.e., the likelihood). Previous research suggests that people tend to under-weight both their prior (base rate neglect) and the likelihood (conservatism), although this varies by individual and situation. Yet this work generally elicits people’s knowledge as single point estimates (e.g., x has a 5% probability of occurring) rather than as a full distribution. Here we demonstrate the utility of eliciting and fitting full distributions when studying these questions. Across three experiments, we found substantial variation in the extent to which people showed base rate neglect and conservatism, which our method allowed us to measure simultaneously at the level of the individual for the first time. While most people tended to disregard the base rate, they did so less when the prior was made explicit. Although many individuals were conservative, there was no apparent systematic relationship between base rate neglect and conservatism within each individual. We suggest that this method shows great potential for studying human probabilistic reasoning.
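One common way to quantify the two biases (a two-parameter log-odds weighting sketch; the experiments here fit full elicited distributions, which is richer than this) is to weight the prior and the likelihood separately when forming the posterior.

```python
from math import exp, log

def weighted_posterior(prior, likelihood_h, likelihood_not_h, alpha=1.0, beta=1.0):
    """P(H | data) with the prior log-odds weighted by alpha and the
    log-likelihood ratio weighted by beta.  alpha < 1 models base rate neglect,
    beta < 1 models conservatism, and alpha = beta = 1 recovers Bayes' rule."""
    log_odds = alpha * log(prior / (1 - prior)) + beta * log(likelihood_h / likelihood_not_h)
    return 1.0 / (1.0 + exp(-log_odds))

# Example: a 5% prior and data four times likelier under H (hypothetical numbers).
print(round(weighted_posterior(0.05, 0.8, 0.2), 3))                        # full Bayes: ~0.174
print(round(weighted_posterior(0.05, 0.8, 0.2, alpha=0.3, beta=0.6), 3))   # both biases pull toward 0.5
```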
Chapter 1 describes the main objectives of the book. It argues that uncertainties are omnipresent in all aspects of the design, analysis, construction, operation, and maintenance of structures and infrastructure systems. It sets three goals for the engineering of constructed facilities under conditions of uncertainty: safety, serviceability, and optimal use of resources. It then argues that probability theory and Bayesian statistics provide the proper mathematical framework for assessing safety and serviceability and for formulating optimal design under uncertainty. The chapter provides a brief review of the history and key developments of the field during the past 100 years. Also described is commercial and free software that can be used to carry out the kinds of analyses presented in the book. The chapter ends with a description of the organization of the book and outlines of the subsequent chapters.
Building on the success of Abadir and Magnus' Matrix Algebra in the Econometric Exercises Series, Statistics serves as a bridge between elementary and specialized statistics. Professors Abadir, Heijmans, and Magnus freely use matrix algebra to cover intermediate to advanced material. Each chapter contains a general introduction, followed by a series of connected exercises which build up knowledge systematically. The characteristic feature of the book (and indeed the series) is that all exercises are fully solved. The authors present many new proofs of established results, along with new results, often involving shortcuts that resort to statistical conditioning arguments.
This book covers the fundamental principles of environmental law and how they can be reframed from a rational-actor perspective. The tools of law and economics can be brought to bear on policy questions within environmental law. The approach taken in this book is to build on the existing consensus in international environmental law and to provide it with new analytical tools to improve the design of legal rules and to enable prospective modelling of the effects of rules in pre-implementation stages of evaluation and deliberation. Central to this approach is the Pigouvian idea of environmental injuries as economic externalities. The core of Pigou’s model is that costs of production which are excluded from the decision-making process will not be reflected in producers’ decisions, and so the costs of manufacturing will be perceived as lower than they actually are. The key is to ensure better decision making and to prevent environmental injuries by ‘internalising’ these cost externalities. Rational actors, forced to bear the costs of the injuries resulting from their production activities, will set optimal levels of production, minimising the costs of pollution injuries by reducing the incidence of those injuries.
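A toy calculation (hypothetical numbers, not taken from the book) makes the internalisation point concrete: a producer who ignores pollution damage over-produces, while a tax set equal to the marginal external cost brings the privately optimal output down to the socially optimal level.

```python
def best_output(price, private_mc, tax):
    """Increase output one unit at a time while the price still covers the
    producer's perceived marginal cost (private cost plus any tax)."""
    q = 0
    while price > private_mc(q) + tax(q):
        q += 1
    return q

price = 10.0
private_mc = lambda q: 0.5 * q        # rising private marginal cost (illustrative)
external_mc = lambda q: 0.5 * q       # marginal pollution damage, ignored privately
no_tax = lambda q: 0.0
pigou_tax = external_mc               # tax equal to the marginal external cost

print(best_output(price, private_mc, no_tax))      # 20 units: over-production
print(best_output(price, private_mc, pigou_tax))   # 10 units: the social optimum
```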
Human perceptual decisions are often described as optimal. Critics of this view have argued that claims of optimality are overly flexible and lack explanatory power. Meanwhile, advocates for optimality have countered that such criticisms single out a few selected papers. To elucidate the issue of optimality in perceptual decision making, we review the extensive literature on suboptimal performance in perceptual tasks. We discuss eight different classes of suboptimal perceptual decisions, including improper placement, maintenance, and adjustment of perceptual criteria; inadequate tradeoff between speed and accuracy; inappropriate confidence ratings; misweightings in cue combination; and findings related to various perceptual illusions and biases. In addition, we discuss conceptual shortcomings of a focus on optimality, such as definitional difficulties and the limited value of optimality claims in and of themselves. We therefore advocate that the field drop its emphasis on whether observed behavior is optimal and instead concentrate on building and testing detailed observer models that explain behavior across a wide range of tasks. To facilitate this transition, we compile the proposed hypotheses regarding the origins of suboptimal perceptual decisions reviewed here. We argue that verifying, rejecting, and expanding these explanations for suboptimal behavior – rather than assessing optimality per se – should be among the major goals of the science of perceptual decision making.
The keynote article (Goldrick, Putnam & Schwartz, 2016) discusses doubling phenomena occasionally found in code-switching corpora. Their analysis focuses on an English–Tamil sentence in which an SVO sequence in English is followed by a verb in Tamil, resulting in an apparent VOV structure:
In this paper we give necessary and sufficient optimality conditions for a vector optimization problem over cones involving support functions in the objective as well as the constraints, using cone-convex and other related functions. We also associate a unified dual to the primal problem and establish weak, strong and converse duality results. A number of previously studied problems appear as special cases.
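For orientation (generic notation, not necessarily the paper's exact formulation): for a compact convex set $C$, the support function appearing in the objective and the constraints is
$$ s(x \mid C) \;=\; \max_{y \in C} \langle x, y \rangle , $$
and weak duality for such a primal–dual pair states, schematically, that no feasible dual objective value dominates a feasible primal objective value with respect to the ordering cone $K$, i.e. $f(x) - f_D(y) \notin -K \setminus \{0\}$ for all feasible $x$ and $y$.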
The problem of detecting an abrupt change in the distribution of an arbitrary, sequentially observed, continuous-path stochastic process is considered, and the optimality of the CUSUM test is established with respect to a modified version of Lorden's criterion. We apply this result to the case in which a random drift emerges in a fractional Brownian motion, and we show that the CUSUM test optimizes Lorden's original criterion when a fractional Brownian motion with Hurst index H acquires a polynomial drift term with exponent H + 1/2.
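For reference (generic notation; the paper works with a modified version of this criterion), Lorden's classical formulation asks for a stopping time $T$ minimising the worst-case conditional detection delay subject to a bound on false alarms,
$$ J(T) \;=\; \sup_{\tau \ge 0}\ \operatorname{ess\,sup}\ \mathbb{E}_{\tau}\!\left[(T-\tau)^{+} \mid \mathcal{F}_{\tau}\right], \qquad \text{subject to } \mathbb{E}_{\infty}[T] \ge \gamma , $$
and the CUSUM rule stops when the drawdown of the log-likelihood ratio process $u_t$ exceeds a threshold $h$:
$$ S_t \;=\; u_t - \inf_{0 \le s \le t} u_s , \qquad T_{\mathrm{CUSUM}} \;=\; \inf\{\, t \ge 0 : S_t \ge h \,\}. $$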
For the efficient numerical solution of indefinite linear systems arising from curl conforming edge element approximations of the time-harmonic Maxwell equation, we consider local multigrid methods (LMM) on adaptively refined meshes. The edge element discretization is done by the lowest order edge elements of Nédélec’s first family. The LMM features local hybrid Hiptmair smoothers of Jacobi and Gauss–Seidel type which are performed only on basis functions associated with newly created edges/nodal points or those edges/nodal points where the support of the corresponding basis function has changed during the refinement process. The adaptive mesh refinement is based on Dörfler marking for residual-type a posteriori error estimators and the newest vertex bisection strategy. Using the abstract Schwarz theory of multilevel iterative schemes, quasi-optimal convergence of the LMM is shown, i.e., the convergence rates are independent of mesh sizes and mesh levels provided the coarsest mesh is chosen sufficiently fine. The theoretical findings are illustrated by the results of some numerical examples.
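The local smoothing idea can be sketched in a few lines (a schematic illustration only: the actual hybrid Hiptmair smoother also performs a correction in a scalar potential space and acts on Nédélec edge functions, which a generic matrix stand-in does not capture): a Gauss–Seidel sweep is applied only to the degrees of freedom touched by the latest refinement.

```python
import numpy as np

def local_gauss_seidel(A, b, x, local_dofs, sweeps=1):
    """Gauss-Seidel sweeps of A x = b restricted to the indices in local_dofs,
    e.g. DOFs on newly created edges/nodes or whose basis support changed."""
    for _ in range(sweeps):
        for i in local_dofs:
            residual_i = b[i] - A[i, :] @ x + A[i, i] * x[i]   # b_i - sum_{j != i} A_ij x_j
            x[i] = residual_i / A[i, i]
    return x

# Tiny demo on a generic matrix (a stand-in, not an edge-element system).
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = np.zeros(3)
print(local_gauss_seidel(A, b, x, local_dofs=[1, 2], sweeps=3))
```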
We prove the quasi-optimal convergence of a standard adaptive finite element method (AFEM) for a class of nonlinear elliptic second-order equations of monotone type. The adaptive algorithm is based on residual-type a posteriori error estimators, and Dörfler’s strategy is assumed for marking. We first prove a contraction property for a suitable definition of total error, analogous to the one used by Diening and Kreuzer (2008) and equivalent to the total error defined by Cascón et al. (2008). This contraction implies linear convergence of the discrete solutions to the exact solution in the usual $H^1$ Sobolev norm. Secondly, we use this contraction to derive the optimal complexity of the AFEM. The results are based on ideas from Diening and Kreuzer and extend the theory of Cascón et al. to a class of nonlinear problems which stem from strongly monotone and Lipschitz operators.
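Schematically (in generic notation, not the paper's exact constants), a contraction of this type says that there exist $\gamma > 0$ and $0 < \alpha < 1$ such that, for consecutive adaptive iterates,
$$ \left\| u - u_{k+1} \right\|_{H^1}^{2} + \gamma\, \eta_{k+1}^{2} \;\le\; \alpha^{2} \left( \left\| u - u_{k} \right\|_{H^1}^{2} + \gamma\, \eta_{k}^{2} \right), $$
where $u_k$ is the discrete solution and $\eta_k$ the residual-type estimator on the $k$-th mesh; iterating the inequality yields the linear convergence stated above.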
We consider the design of an effective and reliable adaptive finite element method (AFEM) for the nonlinear Poisson-Boltzmann equation (PBE). We first examine the two-term regularization technique for the continuous problem recently proposed by Chen, Holst and Xu based on the removal of the singular electrostatic potential inside biomolecules; this technique made possible the development of the first complete solution and approximation theory for the Poisson-Boltzmann equation, the first provably convergent discretization, and also allowed for the development of a provably convergent AFEM. However, in practical implementation, this two-term regularization exhibits numerical instability. Therefore, we examine a variation of this regularization technique which can be shown to be less susceptible to such instability. We establish a priori estimates and other basic results for the continuous regularized problem, as well as for Galerkin finite element approximations. We show that the new approach produces regularized continuous and discrete problems with the same mathematical advantages of the original regularization. We then design an AFEM scheme for the new regularized problem and show that the resulting AFEM scheme is accurate and reliable, by proving a contraction result for the error. This result, which is one of the first results of this type for nonlinear elliptic problems, is based on using continuous and discrete a priori $L^\infty$ estimates. To provide a high-quality geometric model as input to the AFEM algorithm, we also describe a class of feature-preserving adaptive mesh generation algorithms designed specifically for constructing meshes of biomolecular structures, based on the intrinsic local structure tensor of the molecular surface. All of the algorithms described in the article are implemented in the Finite Element Toolkit (FETK), developed and maintained at UCSD. The stability advantages of the new regularization scheme are demonstrated with FETK through comparisons with the original regularization approach for a model problem. The convergence and accuracy of the overall AFEM algorithm is also illustrated by numerical approximation of electrostatic solvation energy for an insulin protein.
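Schematically (a generic dimensionless form, not necessarily the scaling used in the article), the two-term regularization splits the potential as $u = G + u^{r}$, where $G$ is the sum of Coulomb potentials generated by the fixed point charges, so that the singular source term is removed and the regular component $u^{r}$ satisfies a problem with smooth data:
$$ -\nabla\cdot\big(\epsilon\,\nabla u\big) + \bar{\kappa}^{2}\sinh(u) = \sum_{i} q_i\,\delta_{x_i} \quad\Longrightarrow\quad -\nabla\cdot\big(\epsilon\,\nabla u^{r}\big) + \bar{\kappa}^{2}\sinh\big(u^{r} + G\big) = \nabla\cdot\big((\epsilon - \epsilon_m)\,\nabla G\big), $$
with $\epsilon_m$ the dielectric constant inside the molecule; the variation studied in the article adjusts this splitting to reduce the numerical instability mentioned above.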
This article analyzes the consequences of environmental tax policy under public debt stabilization constraint. A public sector of pollution abatement is financed by a tax on pollutant emissions and/or by public debt. At the same time, households can also invest in private pollution abatement activities. We show that the economy may be characterized by an environmental-poverty trap if debt is too large or public abatement is not sufficiently efficient with respect to the private one. However, there exists a level of public abatement and debt at which a stable steady state is optimal.
Can the output of human cognition be predicted from the assumption that it is an optimal response to the information-processing demands of the environment? A methodology called rational analysis is described for deriving predictions about cognitive phenomena using optimization assumptions. The predictions flow from the statistical structure of the environment and not the assumed structure of the mind. Bayesian inference is used, assuming that people start with a weak prior model of the world which they integrate with experience to develop stronger models of specific aspects of the world. Cognitive performance maximizes the difference between the expected gain and the cost of mental effort. (1) Memory performance can be predicted on the assumption that retrieval seeks a maximal trade-off between the probability of finding the relevant memories and the effort required to do so; (2) in categorization performance there is a similar trade-off between accuracy in predicting object features and the cost of hypothesis formation; (3) in causal inference the trade-off is between accuracy in predicting future events and the cost of hypothesis formation; and (4) in problem solving it is between the probability of achieving goals and the cost of both external and mental problem-solving search. The implementation of these rational prescriptions in a neurally plausible architecture is also discussed.
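A minimal sketch (illustrative numbers, in the spirit of the memory analysis described above) of the gain-versus-effort trade-off: candidates are examined in decreasing order of their probability of being relevant, and retrieval stops once the expected gain of the next candidate no longer covers its retrieval cost.

```python
def retrieve(need_probabilities, gain=10.0, cost_per_item=0.5):
    """Examine memory candidates in decreasing order of need probability and
    stop when the expected gain (p * gain) no longer exceeds the retrieval cost."""
    examined = []
    for p in sorted(need_probabilities, reverse=True):
        if p * gain <= cost_per_item:
            break
        examined.append(p)
    return examined

# Hypothetical need probabilities for five candidate memories.
print(retrieve([0.30, 0.12, 0.07, 0.04, 0.01]))   # -> [0.3, 0.12, 0.07]
```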
In this article, we provide a framework of bilingual grammar that offers a theoretical understanding of the socio-cognitive bases of code-switching in terms of five general principles that, individually or through interaction with each other, explain how and why specific instances of code-switching arise. We provide cross-linguistic empirical evidence to claim that these general sociolinguistic principles, stated as socio-cognitive constraints on code-switching, characterize multi-linguistic competence in so far as they are able to show how “local” functions of code-switching arise as specific instantiations of these “global” principles, or (products of) their interactions.