Throughout his intellectual career, Carnap developed original views on the nature of mathematical knowledge, its relation to logic, and the application of mathematics in the natural sciences. A general line of continuity in his philosophical work is the conviction that both mathematics and logic are formal or non-factual in nature. Carnap's formality thesis can be identified in different periods, connecting his early contributions to the foundations of geometry and general axiomatics in the 1920s with his later work on the general syntax of mathematical languages in Logical Syntax. Given the centrality of this idea, how precisely did Carnap understand the formality thesis concerning mathematical knowledge? How was the thesis characterized at different stages of his philosophical work? The aim of this chapter is to retrace the development of Carnap's thinking about the formality of logic and mathematics from the 1920s until the late 1930s. As we will see, despite his general adherence to the thesis, there were several significant shifts in his understanding, corresponding to changes in his conceptual framework. Specifically, one can identify a transition from a semantic account of formality, related to his study of axiomatic theories, to the syntactic formalism developed in Logical Syntax.
The sharp bound for the third Hankel determinant for the coefficients of the inverse function of convex functions is obtained, thus answering a recent conjecture concerning invariance of coefficient functionals for convex functions.
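For orientation (a standard definition supplied here, not spelled out in the abstract above), the third Hankel determinant of the Taylor coefficients of a function $f(z)=z+\sum_{n=2}^{\infty}a_{n}z^{n}$ is

$$H_{3}(1)=\begin{vmatrix} 1 & a_{2} & a_{3}\\ a_{2} & a_{3} & a_{4}\\ a_{3} & a_{4} & a_{5} \end{vmatrix},$$

with the determinant for the inverse function formed in the same way from its coefficients; the invariance in question concerns whether such coefficient functionals obey the same sharp bounds for a convex function and for its inverse.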
Both neuroscientists and literary translators aim to understand how invariance of meaning can arise across different forms. But literary translators must render experiences that are more subtle, more contextual, and more complete than a glimpse of a cat's tail. Thus, for the neuroscientist who seeks to understand invariances, the goals of literary translation match the highest levels of scientific aspiration. From a neuroscientific perspective, it is fascinating that literary translation is possible at all.
Symmetry is introduced as a basic notion of physics and, in particular, of soil mechanics. Isotropy and anisotropy are discussed. A special case of the isotropy of space is the principle of material frame indifference, which plays an eminent role in the development of constitutive equations. Geometric scaling is discussed together with the notion of a simple material, which is, often unconsciously, basic in geotechnical engineering. Invariance with respect to stress and time scales is discussed. Mechanical similarity and the associated Pi theorem are shown to be the basis for the evaluation of so-called physical simulations with model tests.
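As a minimal illustration of the Pi theorem (the example is ours, not the chapter's): the period of a pendulum depends on length, gravity, and mass, $T=f(\ell,g,m)$, giving $n=4$ variables in $k=3$ base dimensions, hence $n-k=1$ dimensionless group:

$$\Pi=T\sqrt{g/\ell}=\text{const},\qquad\text{so}\qquad T\propto\sqrt{\ell/g}.$$

Two systems are mechanically similar when their dimensionless groups coincide; this is the criterion by which model tests are evaluated against the full-scale prototype.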
We are now halfway through our book. It may appear to be a coincidence that insight and creative thinking appear at the “center” of a book on problem solving, a coincidence that the gestalt psychologists would surely have liked. Contemplating how the young Gauss solved an arithmetic problem, and considering how the mutilated-checkerboard problem is solved, while realizing that the solutions of these problems are analogous to how snowflakes look, may come as a surprise. It should not, because all three are based on symmetry and invariance. Before discussing how “ordinary” insight problems are solved, this chapter describes the insights that led Galileo, Archimedes, and Einstein to their scientific discoveries. Physicists have known for over a century that there would be no science based on the natural laws, and no natural laws in the first place, if there were no symmetry in nature. So what prompted this realization just over 100 years ago? In 1918, Emmy Noether formulated and proved the mathematical theorems that revolutionized physics. Her theorems showed how the conservation laws can be derived from the symmetries of the natural laws by applying a least-action principle. The review of symmetry in scientific discovery presented in this chapter sets the stage for a new formalism of problem solving that may apply not only to the sophisticated areas of science, but also to “ordinary” brain teasers, as well as to the TSP and the 15-puzzle.
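A compact illustration of the symmetry-conservation link (our gloss on Noether's first theorem, not a quotation from the chapter): if the Lagrangian has no explicit time dependence, the action is invariant under time translation and

$$\frac{\partial L}{\partial t}=0\;\Longrightarrow\;\frac{d}{dt}\!\left(\dot{q}\,\frac{\partial L}{\partial\dot{q}}-L\right)=0,$$

i.e. the energy is conserved along solutions of the least-action principle; translation and rotation invariance yield conservation of momentum and angular momentum in the same way.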
Chapter 5 presents methods for assessing structural reliability under incomplete probability information, i.e., when complete distributional information on the basic random variables is not available. First, second-moment methods are presented where the available information is limited to the means, variances, and covariances of the basic random variables. These include the mean-centered first-order second-moment (MCFOSM) method, the first-order second-moment (FOSM) method, and the generalized second-moment method. These methods lead to approximate computations of the reliability index as a measure of safety. Lack of invariance of the MCFOSM method relative to the formulation of the limit-state function is demonstrated. The FOSM method requires finding the “design point,” which is the point in a transformed standard outcome space that has minimum distance from the origin. An algorithm for finding this point is presented. Next, methods are presented that incorporate probabilistic information beyond the second moments, including knowledge of higher moments and marginal distributions. Last, a method is presented that employs the upper Chebyshev bound for any given state of probability information. The chapter ends with a discussion of the historical significance of the above methods as well as their shortcomings and argues that they should no longer be used in practice.
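The design-point search lends itself to a short sketch. The following is a generic HL-RF-style iteration written for illustration only; the function names and the example limit-state function are ours, not the chapter's.

import numpy as np

def find_design_point(G, grad_G, n, tol=1e-8, max_iter=100):
    # Seek the point on the limit-state surface G(u) = 0 that is
    # nearest the origin in standard normal space; its distance from
    # the origin is the FOSM reliability index beta.
    u = np.zeros(n)
    for _ in range(max_iter):
        g, dg = G(u), grad_G(u)
        # Project onto the linearization G(u) + dg . (u_new - u) = 0.
        u_new = (dg @ u - g) * dg / (dg @ dg)
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return u, np.linalg.norm(u)

# Illustrative linear limit-state function in two standard normal variables.
G = lambda u: 3.0 - u[0] - 2.0 * u[1]
grad_G = lambda u: np.array([-1.0, -2.0])
u_star, beta = find_design_point(G, grad_G, n=2)  # beta = 3/sqrt(5), about 1.342

For a linear limit-state function the iteration converges immediately and beta is exact; for nonlinear G it converges to the nearest point on the surface, which is the FORM approximation the chapter builds on.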
In Chapter 1, I start by analysing the role of generalisations in scientific practice. Law statements or generalisations are involved in one way or another in explanation, confirmation, manipulation or prediction. I argue that these practices require a particular reading of the generalisations involved, namely, as making claims about the behaviour of systems. These practices therefore presuppose the existence of systems or things. Next, I look at the modal surface structure associated with laws. I use the term ‘surface structure’ to indicate that this structure may or may not be reduced to non-modal facts – as the Humean has it. I will sideline the debate about whether Humeanism is a tenable philosophical position. The positive claim I advance is that the modal surface structure can be explicated in terms of invariance relations – where I take invariance to be a modal notion.
This chapter expands a little on the idea that gravity is geometry, and then describes how the geometry of space and time is a subject for experiment and theory in physics. In a gravitational field, all bodies with the same initial conditions will follow the same curve in space and time. Einstein’s idea was that this uniqueness of path could be explained in terms of the geometry of the four-dimensional union of space and time called spacetime. Specifically, he proposed that the presence of a mass such as Earth curves the geometry of spacetime nearby, and that, in the absence of any other forces, all bodies move on straight paths in this curved spacetime. We explore how simple three-dimensional geometries can be thought of as curved surfaces in a hypothetical four-dimensional Euclidean space. The key to a general description of geometry is to use differential and integral calculus to reduce all geometry to a specification of the distance between each pair of nearby points.
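The programme of reducing geometry to distances can be stated in one standard formula (supplied for orientation; the notation is not the chapter's): the squared distance between nearby points with coordinate separations $dx^{i}$ is

$$dS^{2}=\sum_{i,j}g_{ij}(x)\,dx^{i}dx^{j},$$

for example $dS^{2}=a^{2}(d\theta^{2}+\sin^{2}\!\theta\,d\phi^{2})$ on a sphere of radius $a$; once the metric $g_{ij}$ is specified, lengths, angles, and areas all follow by integration.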
This chapter covers the Special Theory of Relativity, introduced by Einstein in a pair of papers in 1905, the same year in which he postulated the quantization of radiation energy and showed how to use observations of diffusion to measure constants of microscopic physics. Special relativity revolutionized our ideas of space, time, and mass, and it gave the physicists of the twentieth century a paradigm for the incorporation of conditions of invariance into the fundamental principles of physics.
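The paradigmatic invariance condition can be written down directly (a standard formula, added here for orientation): the spacetime interval between two events,

$$s^{2}=-c^{2}(\Delta t)^{2}+(\Delta x)^{2}+(\Delta y)^{2}+(\Delta z)^{2},$$

takes the same value in every inertial frame related by a Lorentz transformation, and a proposed fundamental law is acceptable only if it respects this invariance.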
Building on Nozick's invariantism about objectivity, I propose to define scientific objectivity in terms of counterfactual independence. I will argue that such a counterfactual independence account is (a) able to overcome the decisive shortcomings of Nozick's original invariantism and (b) applicable to three paradigmatic kinds of scientific objectivity (that is, objectivity as replication, objectivity as robustness, and objectivity as Mertonian universalism).
Although wisdom is a desirable life span developmental goal, researchers have often lacked brief and reliable construct measures. We examined whether an abbreviated set of items could be empirically derived from the popular 40-item five-factor Self-Assessed Wisdom Scale (SAWS).
Design:
Survey data from 709 respondents were randomly split into two subsamples and analyzed using confirmatory factor analysis (CFA).
Setting:
The survey was conducted online in Australia.
Participants:
The total sample consisted of 709 participants (mean age = 35.67 years; age range = 15–92 years), of whom 22% were male and 78% female.
Measurement:
The study analyzed the 40-item SAWS.
Results:
In Sample 1, the traditional five-factor structure of the 40-item SAWS did not fit the data. Exploratory factor analysis (EFA) on Sample 2 suggested an alternative model based on a 15-item, five-factor solution with the latent variables Reminiscence/Reflection, Humor, Emotional Regulation, Experience, and Openness. This model, which replicates the factor structure of the original 40-item SAWS with a short form of 15 items, was then confirmed on Sample 1 using a CFA that produced acceptable fit and measurement invariance across age groups (the nested invariance models such a comparison involves are sketched after this abstract).
Conclusions:
We suggest the abbreviated SAWS-15 can be useful as a measure of individual differences in wisdom, and we highlight areas for future research.
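For readers unfamiliar with measurement-invariance testing, here is a minimal sketch of the nested model hierarchy in standard notation (not taken from the study itself). With observed item $i$ in group $g$, latent factor $\xi_{g}$, loadings $\lambda_{ig}$, and intercepts $\tau_{ig}$,

$$x_{ig}=\tau_{ig}+\lambda_{ig}\,\xi_{g}+\varepsilon_{ig},$$

configural invariance requires only the same pattern of loadings across groups; metric invariance adds the constraint $\lambda_{i1}=\lambda_{i2}=\cdots$; scalar invariance additionally constrains the intercepts $\tau_{ig}$. Each step is evaluated by comparing the fit of the more constrained model with its predecessor (e.g., via $\Delta\chi^{2}$ or $\Delta$CFI).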
Since the inclusion of Internet Gaming Disorder (IGD) in the Diagnostic and Statistical Manual of Mental Disorders (5th ed.; DSM-5), the Internet Gaming Disorder Scale-Short Form (IGDS9-SF), a short nine-item test, has become one of the most widely used standardized instruments for its psychometric evaluation. This study presents a validation and psychometric evaluation of the Spanish version of the IGDS9-SF. A sample of 2173 videogame players of both genders, between 12 and 22 years old, was recruited through a randomized selection process from educational institutions in the city of Madrid. Participants completed the adapted version of the IGDS9-SF, the General Health Questionnaire (GHQ-12), and a scale of negative cognitions associated with videogame use, and provided sociodemographic data and frequency of videogame play. Exploratory and confirmatory analyses supported a unifactorial structure with sufficient reliability and internal consistency. In addition, the instrument showed good construct validity: scores on the IGDS9-SF were positively associated with gaming frequency, with general health problems, and, to a greater extent, with problematic cognitions about videogames. Factorial invariance was found with respect to the age of participants. However, even though the factorial structure was consistent across genders, neither metric nor scalar invariance was found; for this reason, we present one scoring scale for the whole sample and separate ones by gender. These results suggest that this Spanish version of the IGDS9-SF is a reliable and valid instrument, useful for evaluating the severity of IGD in Spanish students, and we provide a scoring scale for measurement purposes.
The short version of the Oxford-Liverpool Inventory of Feelings and Experiences (sO-LIFE) is a widely used measure of schizotypy. There is limited information, however, on how sO-LIFE scores compare across different countries. The main goal of the present study is to test the measurement invariance of sO-LIFE scores in a large sample of non-clinical adolescents and young adults from four European countries (UK, Switzerland, Italy, and Spain). The scores were obtained from validated versions of the sO-LIFE in the respective languages. The sample comprised 4190 participants (mean age = 20.87 years; SD = 3.71). The study of the internal structure, using confirmatory factor analysis, revealed that both the three-factor (positive schizotypy, cognitive disorganisation, and introvertive anhedonia) and four-factor (positive schizotypy, cognitive disorganisation, introvertive anhedonia, and impulsive nonconformity) models fitted the data moderately well. Multi-group confirmatory factor analysis showed that the three-factor model had partial strong measurement invariance across countries. Eight items were non-invariant across samples. Statistically significant differences in mean sO-LIFE scores were found by country. Reliability estimates, based on ordinal alpha, ranged from 0.75 to 0.87. Within the Item Response Theory framework, the sO-LIFE provides more information at the medium and high end of the latent trait. The current results provide further evidence in support of the psychometric properties of the sO-LIFE, offer new information about the cross-cultural equivalence of schizotypy, and support the use of this measure to screen for psychotic-like features and liability to psychosis in general population samples from different European countries.
Models of mortality often require constraints in order that parameters may be estimated uniquely. It is not difficult to find references in the literature to the “identifiability problem”, and papers often give arguments to justify the choice of particular constraint systems designed to deal with this problem. Many of these models are generalised linear models, and it is known that the fitted values (of mortality) in such models are identifiable, i.e., invariant with respect to the choice of constraint systems. We show that for a wide class of forecasting models, namely ARIMA $(p,\delta,q)$ models with a fitted mean and $\delta = 1$ or $2$, identifiability extends to the forecast values of mortality; this extended identifiability continues to hold when some model terms are smoothed. The results are illustrated with data on UK males from the Office for National Statistics for the age-period model, the age-period-cohort model, the age-period-cohort-improvements model of the Continuous Mortality Investigation and the Lee–Carter model.
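The identifiability at issue is easily seen in the age-period-cohort predictor (a standard formulation, given here for orientation rather than quoted from the paper):

$$\eta_{x,t}=\alpha_{x}+\kappa_{t}+\gamma_{t-x},$$

which is unchanged by the reparameterization $\alpha_{x}\mapsto\alpha_{x}+a-\delta x$, $\kappa_{t}\mapsto\kappa_{t}+b+\delta t$, $\gamma_{c}\mapsto\gamma_{c}-a-b-\delta c$ for any constants $a,b,\delta$. The parameters are therefore identified only up to such transformations, while the fitted values $\eta_{x,t}$, and, per the paper's result, the forecast values, are invariant.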
Suppose one has a set of data that arises from a specific distribution with unknown parameter vector. A natural question to ask is the following: what value of this vector is most likely to have generated these data? The answer to this question is provided by the maximum-likelihood estimator (MLE). Likelihood and related functions are the subject of this chapter. It will turn out that we have already seen some examples of MLEs in the previous chapters. Here, we define likelihood, the score vector, the Hessian matrix, the information-matrix equivalence, parameter identification, the Cramér–Rao lower bound and its extensions, profile (concentrated) likelihood and its adjustments, as well as the properties of MLEs (including conditions for existence, consistency, and asymptotic normality) and the score (including martingale representation and local sufficiency). Applications are given, including some for the normal linear model.
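In symbols, the chapter's central objects can be summarized as follows (standard definitions, not an excerpt): for data $y$ with density $f(y;\theta)$, the log-likelihood, score, and information matrix are

$$\ell(\theta)=\log f(y;\theta),\qquad s(\theta)=\frac{\partial\ell}{\partial\theta},\qquad \mathcal{I}(\theta)=-\mathbb{E}\!\left[\frac{\partial^{2}\ell}{\partial\theta\,\partial\theta^{\prime}}\right]=\mathbb{E}\!\left[s\,s^{\prime}\right],$$

the final equality being the information-matrix equivalence. The Cramér–Rao lower bound states that any unbiased estimator $\hat{\theta}$ satisfies $\operatorname{var}(\hat{\theta})\geq\mathcal{I}(\theta)^{-1}$, a bound the MLE attains asymptotically under regularity conditions.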
We concluded the previous chapter by introducing two methods of inference concerning the parameter vector. Since the Bayesian approach was one of them, we focus here on the competing frequentist or classical approach and its attempt to draw conclusions about the value of this vector. We introduce hypothesis testing, test statistics and their critical regions, size, and power. We then introduce desirable properties (unbiasedness, uniformly most powerful tests, consistency, invariance with respect to some class of transformations, similarity, admissibility) that help us find optimal tests. The Neyman–Pearson lemma and its extensions are introduced. Likelihood ratio (LR), Wald (W), score, and Lagrange multiplier (LM) tests are introduced for general hypotheses, including inequality hypotheses for the parameter vector. Monotone LR and the Karlin–Rubin theorem are studied, as is Neyman structure and its role in finding optimal tests. The exponential family features prominently in the applications. Finally, distribution-free (nonparametric) tests are studied and linked to results in earlier chapters.
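The three classical statistics just mentioned take the following standard forms (notation ours, matching the preceding sketch of likelihood objects): for $H_{0}\colon\theta=\theta_{0}$,

$$\mathrm{LR}=2\{\ell(\hat{\theta})-\ell(\theta_{0})\},\qquad \mathrm{W}=(\hat{\theta}-\theta_{0})^{\prime}\,\mathcal{I}(\hat{\theta})\,(\hat{\theta}-\theta_{0}),\qquad \mathrm{LM}=s(\theta_{0})^{\prime}\,\mathcal{I}(\theta_{0})^{-1}\,s(\theta_{0}),$$

all three being asymptotically equivalent and $\chi^{2}$-distributed under $H_{0}$ with degrees of freedom equal to the number of restrictions.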
Let the function $f$ be analytic in $\mathbb{D}=\{z:|z|<1\}$ and given by $f(z)=z+\sum _{n=2}^{\infty }a_{n}z^{n}$. For $0<\beta\leq 1$, denote by ${\mathcal{C}}(\beta)$ the class of strongly convex functions. We give sharp bounds for the initial coefficients of the inverse function of $f\in {\mathcal{C}}(\beta)$, showing that these estimates are the same as those for functions in ${\mathcal{C}}(\beta)$, thus extending a classical result for convex functions. We also give invariance results for the second Hankel determinant $H_{2}=|a_{2}a_{4}-a_{3}^{2}|$, the first three coefficients of $\log (f(z)/z)$, and Fekete–Szegö theorems.
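For orientation, the standard inversion formulas relating the two coefficient sequences (not quoted from the paper): if $F=f^{-1}$ near the origin and $F(w)=w+\sum_{n=2}^{\infty}A_{n}w^{n}$, then

$$A_{2}=-a_{2},\qquad A_{3}=2a_{2}^{2}-a_{3},\qquad A_{4}=-\left(5a_{2}^{3}-5a_{2}a_{3}+a_{4}\right),$$

so sharp bounds on the $a_{n}$ carry over to the $A_{n}$, which is the sense in which estimates for the inverse function can mirror those for the class itself.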
Emotional intelligence (EI) measures have spread across many countries and cultures, and the need for valid and robust measures that can support research in international settings is on the current agenda. This study aimed to assess the measurement invariance of a widely used self-report EI measure, the Emotional Skills and Competence Questionnaire (ESCQ), in two cultural contexts (Portugal vs. Croatia). The ESCQ, a 42-item self-report EI scale comprising three dimensions (Perceive and Understand Emotion, Express and Label Emotion, and Manage and Regulate Emotion), was administered to 1,188 Portuguese and Croatian secondary students. The results showed that the ESCQ had satisfactory reliability and that the three-factor structure was replicated in both country samples. Configural (χ2 = 308.71, df = 220, p < .01; RMSEA = .030, CFI = .956, TLI = .948), partial metric (Δχ2 = 9.102, Δdf = 10, p = .522; ΔCFI = −.01, ΔRMSEA = .002), and scalar (Δχ2 = 15.290, Δdf = 21, p = .083; ΔCFI = .001, ΔRMSEA = .006) invariance were supported across groups. This cross-cultural invariance study highlighted cultural particularities related to emotional competence in the Portuguese and Croatian contexts and contributes to awareness of the validity of cross-cultural studies in the emotional abilities field.
This study evaluated the measurement invariance of the Strengths and Difficulties Questionnaire (SDQ) self-report among adolescents from seven different nations.
Methods.
Data for 2367 adolescents, aged 13–18 years, from India, Indonesia, Nigeria, Serbia, Turkey, Bulgaria and Croatia were available for a series of factor analyses.
Results.
The five-factor model comprising the original SDQ scales (emotional symptoms, conduct problems, hyperactivity–inattention problems, peer problems, and prosocial behaviour) showed inadequate fit in all countries. A bifactor model with three specific factors (externalising, internalising, and prosocial) and one general problem factor yielded adequate fit in India, Nigeria, Turkey, and Croatia. The prosocial behaviour, emotional symptoms, and conduct problems factors were found to be common to all nations. However, in all seven countries, some items loaded saliently on factors other than those originally proposed, or only some items corresponded to the proposed factors.
Conclusions.
Because no commonly acceptable model, i.e., one with the same number of factors (dimensional invariance), held across all countries, it was not possible to perform tests of metric and scalar invariance. This indicates that the SDQ self-report models tested lack appropriate measurement invariance across adolescents from these seven nations, and the instrument needs to be revised before it is used for cross-country comparisons.
This article presents an empirical measurement invariance study in the substantive area of satisfaction evaluation in training programs. Specifically, it (I) provides an empirical solution to the lack of explicit measurement models of satisfaction scales, offering a way of analyzing and operationalizing the substantive theoretical dimensions; (II) outlines and discusses the analytical consequences of considering the effects of categorizing supposedly continuous variables, which are not usually taken into account; (III) presents empirical results from a measurement invariance study based on 5,272 participants’ responses to a training satisfaction questionnaire in three different organizations and in two different training methods, taking into account the factor structure of the measured construct and the ordinal nature of the recorded data; and (IV) describes the substantive implications in the area of training satisfaction evaluation, such as the usefulness of the training satisfaction questionnaire to measure satisfaction in different organizations and different training methods. It also discusses further research based on these findings.