Network science is a broadly interdisciplinary field, drawing on computer science, mathematics, statistics, and more. The data scientist working with networks thus needs a broad base of knowledge, as network data calls for, and is analyzed with, many computational and mathematical tools. One needs a good working knowledge of programming, including data structures and algorithms, to analyze networks effectively. Alongside graph theory, probability theory is the foundation for statistical modeling and data analysis. Linear algebra provides another foundation for network analysis and modeling, because matrices are often the most natural way to represent graphs. Although this book assumes that readers are familiar with the basics of these topics, here we review the computational and mathematical concepts and notation that will be used throughout the book. You can use this chapter as a starting point for catching up on the basics, or as a reference while delving into the book.
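As a quick illustration of the linear-algebra point (a sketch of our own, not code from the book), here is the adjacency-matrix representation of a small graph in Python; the node count and edge list are hypothetical:

```python
# Adjacency matrix A of an undirected graph: A[i, j] = 1 iff nodes i and j
# are connected. Degrees are then simply row sums, and powers of A count walks.
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]  # hypothetical toy graph
n = 4                                      # number of nodes

A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = 1
    A[j, i] = 1  # undirected: the matrix is symmetric

degrees = A.sum(axis=1)   # degree of each node
walks2 = A @ A            # (A^2)[i, j] counts walks of length 2 from i to j
print(degrees)            # [2 2 3 1]
print(walks2[0, 3])       # one length-2 walk from node 0 to node 3 (via 2)
```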
This article posits a theory of iterative stress that separates each facet of the stress map into its constituent parts, or ‘atoms’. Through the well-defined notion of complexity provided by Formal Language Theory, it is shown that this division of the stress map results in a more restrictive characterisation of iterative stress than a single-function analysis does. While the single-function approach masks the complexity of the atomic properties present in the pattern, the compositional analysis makes it explicit. It also demonstrates the degree to which, despite what appear to be significant surface differences in the patterns, the calculation of the stress function is largely the same, even between quantity-sensitive and quantity-insensitive patterns. These stress compositions are limited to one output-local function to iterate stress, and a small number of what I call edge-oriented functions to provide ‘cleanup’ when the iteration function alone fails to capture the pattern.
Existential rules form an expressive $\textsf{Datalog}$-based language for specifying ontological knowledge. The presence of existential quantification in rule heads, however, makes the main reasoning tasks undecidable. To overcome this limitation, a number of classes of existential rules guaranteeing the decidability of query answering have been proposed over the last two decades. Unfortunately, only some of these classes fully encompass $\textsf{Datalog}$, and this often comes at the price of higher computational complexity. Moreover, expressive classes are typically unable to exploit tools developed for classes of lower expressiveness. To mitigate these shortcomings, this paper introduces a novel general syntactic condition that allows us to define, systematically and uniformly, from any decidable class $\mathcal{C}$ of existential rules, a new class called $\textsf{Dyadic-}\mathcal{C}$ enjoying the following properties: (i) it is decidable; (ii) it generalizes $\textsf{Datalog}$; (iii) it generalizes $\mathcal{C}$; (iv) it can effectively exploit any reasoner for query answering over $\mathcal{C}$; and (v) its computational complexity does not exceed the higher of the complexities of $\mathcal{C}$ and $\textsf{Datalog}$.
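For readers unfamiliar with the formalism, a textbook-style existential rule (our illustration, not an example from the paper; the predicate names are hypothetical) looks as follows:

```latex
% A standard existential rule: every person has a parent, who is again a person.
\[
\forall x \, \bigl( \mathrm{Person}(x) \rightarrow
  \exists y \, \bigl( \mathrm{HasParent}(x,y) \wedge \mathrm{Person}(y) \bigr) \bigr)
\]
% Applying such a rule repeatedly (the "chase") creates an unbounded chain of
% fresh individuals; this non-termination is the usual intuition behind the
% undecidability of query answering for unrestricted existential rules.
```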
Under mild assumptions, we show that, for the Metropolis–Hastings independence sampler, the exact convergence rate in total variation is also exact in weaker Wasserstein distances. We develop new upper and lower bounds on the worst-case Wasserstein distance when the chain is initialized at a point, and we show that for an arbitrary point initialization the convergence rate is the same and matches the rate in total variation. We derive exact convergence expressions for more general Wasserstein distances when initialization is at a specific point. Using optimization, we construct a novel centered independent proposal and develop exact convergence rates in Bayesian quantile regression and many generalized linear model settings. We show that the exact convergence rate can be upper bounded in Bayesian binary response regression (e.g. logistic and probit) when the sample size and dimension grow together.
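For concreteness, here is a minimal sketch of a Metropolis–Hastings independence sampler in Python. The target and proposal densities below (a standard normal target, a wider normal proposal) are hypothetical illustrations, not the paper's setting:

```python
# Metropolis-Hastings *independence* sampler: the proposal q does not depend
# on the current state. Hypothetical choices: target = N(0, 1), proposal = N(0, 4).
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):    # log density of N(0, 1), up to a constant
    return -0.5 * x**2

def log_proposal(x):  # log density of N(0, 4), up to a constant
    return -0.5 * (x / 2.0)**2

def independence_sampler(x0, n_steps):
    x, chain = x0, []
    for _ in range(n_steps):
        y = 2.0 * rng.standard_normal()   # draw from the fixed proposal
        # acceptance ratio: pi(y) q(x) / (pi(x) q(y))
        log_alpha = (log_target(y) - log_target(x)
                     + log_proposal(x) - log_proposal(y))
        if np.log(rng.uniform()) < log_alpha:
            x = y
        chain.append(x)
    return np.array(chain)

samples = independence_sampler(x0=10.0, n_steps=5000)
print(samples[1000:].mean(), samples[1000:].std())  # roughly 0 and 1
```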
This paper surveys implicit characterizations of complexity classes by fragments of higher-order programming languages, with a special focus on type systems and subsystems of linear logic. Particular emphasis is put on Martin Hofmann’s contributions to the subject, which did much to shape the field.
This paper explores relational syllogistic logics, a family of logical systems for reasoning about relations in extensions of the classical syllogistic. These systems are all decidable. We prove completeness theorems and complexity results for a natural subfamily of relational syllogistic logics, parametrized by constructors for terms and for sentences.
In this paper we investigate the computational complexity of deciding whether the variety generated by a given finite idempotent algebra satisfies a special type of Maltsev condition that can be specified using a certain kind of finite labelled path. This class of Maltsev conditions includes several well-known conditions, such as congruence permutability and having a sequence of n Jónsson terms, for some given n. We show that for such “path defined” Maltsev conditions, the decision problem is polynomial-time solvable.
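For reference, the standard identities defining a sequence of Jónsson terms are the following (as usually stated; the parity convention varies by author):

```latex
% A sequence of Jonsson terms is a chain of ternary terms d_0, ..., d_n with:
\begin{align*}
  d_0(x,y,z) &\approx x, \qquad d_n(x,y,z) \approx z,\\
  d_i(x,y,x) &\approx x \quad \text{for all } i,\\
  d_i(x,x,y) &\approx d_{i+1}(x,x,y) \quad \text{for even } i,\\
  d_i(x,y,y) &\approx d_{i+1}(x,y,y) \quad \text{for odd } i.
\end{align*}
```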
This paper investigates the computational complexity of deciding if a given finite idempotent algebra has a ternary term operation $m$ that satisfies the minority equations $m(y,x,x)\approx m(x,y,x)\approx m(x,x,y)\approx y$. We show that a common polynomial-time approach to testing for this type of condition will not work in this case and that this decision problem lies in the class NP.
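Checking that a given ternary operation satisfies the minority equations is straightforward; the hard part is finding such an operation among the algebra's term operations. A minimal sketch of the check (ours, not the paper's procedure):

```python
# Verify the minority equations m(y,x,x) = m(x,y,x) = m(x,x,y) = y for a
# ternary operation m on a finite universe, by brute force over all pairs.
from itertools import product

def is_minority(m, universe):
    return all(m(y, x, x) == y and m(x, y, x) == y and m(x, x, y) == y
               for x, y in product(universe, repeat=2))

# Hypothetical examples on {0, 1}: XOR of the three arguments is a minority
# operation; the ternary maximum is not.
print(is_minority(lambda x, y, z: x ^ y ^ z, [0, 1]))     # True
print(is_minority(lambda x, y, z: max(x, y, z), [0, 1]))  # False
```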
The computation of gamblets is accelerated by localizing their computation in a hierarchical manner (using a hierarchy of distances), and the approximation errors caused by these localization steps are bounded based on three properties: nesting, the well-conditioned nature of the linear systems solved in the Gamblet Transform, and the exponential decay of the gamblets. These efficiently computed, accurate, and localized gamblets are shown to produce a Fast Gamblet Transform of near-linear complexity. Applications to the three primary classes of measurement functions in Sobolev spaces are developed.
This chapter is logical in character. Its focus is on the logical properties of one particular generic structure: the generic omega-sequence. I take the perspective internal to arithmetic, from which arithmetic investigates a single structure.
This article presents a proof that Buss’s $S_2^2$ can prove the consistency of a fragment of Cook and Urquhart’s PV from which induction has been removed but substitution has been retained. This improves on Beckmann’s result, which establishes the consistency of the system without substitution in the bounded arithmetic $S_2^1$.
Our proof relies on a notion of “computation” for the terms of PV. We first prove that, in the system under consideration, if an equation is provable and either its left- or right-hand side has a computation, then there is a corresponding computation for the other side. By carefully bounding the size of these computations, we obtain a proof of this theorem inside bounded arithmetic, from which the consistency of the system readily follows.
At first sight, this result seems to imply a separation of bounded arithmetic, because Buss and Ignjatović stated that the consistency of a fragment of PV without induction but with substitution cannot be proved in Buss’s $S_2^1$. However, their proof actually shows that it is impossible to prove the consistency of a system obtained by adding propositional logic and other axioms to a system such as ours. The system we consider, by contrast, is strictly equational, a property on which our proof relies.
The paper introduces a graph-theoretic variant of the general position problem: given a graph $G$, determine a largest set $S$ of vertices of $G$ such that no three vertices of $S$ lie on a common geodesic. Such a set is a max-gp-set of $G$ and its size is the gp-number $\text{gp}(G)$ of $G$. Upper bounds on $\text{gp}(G)$ in terms of different isometric covers are given and used to determine the gp-number of several classes of graphs. Connections between general position sets and packings are investigated and used to give lower bounds on the gp-number. It is also proved that the general position problem is NP-complete.
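To make the definitions concrete, here is a naive brute-force computation of the gp-number for tiny graphs (an illustrative sketch of ours, assuming the networkx library; it is exponential, as one expects for an NP-complete problem):

```python
# Enumerate every geodesic, record the vertex triples lying on a common one,
# then search for a largest triple-free vertex set.
from itertools import combinations
import networkx as nx

def gp_number(G):
    bad = set()  # triples that lie on a common geodesic
    for u, v in combinations(G.nodes, 2):
        for path in nx.all_shortest_paths(G, u, v):
            bad.update(combinations(sorted(path), 3))
    for size in range(G.number_of_nodes(), 1, -1):
        for S in combinations(sorted(G.nodes), size):
            if not any(t in bad for t in combinations(S, 3)):
                return size
    return G.number_of_nodes()  # graphs with fewer than 2 vertices

print(gp_number(nx.path_graph(5)))   # 2: the whole path is one geodesic
print(gp_number(nx.cycle_graph(5)))  # 3: e.g. {0, 1, 3} works in C_5
```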
In this paper we consider two natural notions of connectivity for hypergraphs: weak and strong. We prove that the strong vertex connectivity of a connected hypergraph is bounded by its weak edge connectivity, thereby extending a theorem of Whitney from graphs to hypergraphs. We find that, while determining a minimum weak vertex cut can be done in polynomial time and is equivalent to finding a minimum vertex cut in the 2-section of the hypergraph in question, determining a minimum strong vertex cut is NP-hard for general hypergraphs. Moreover, the problem of finding minimum strong vertex cuts remains NP-hard when restricted to hypergraphs with maximum edge size at most 3. We also discuss the relationship between strong vertex connectivity and the minimum transversal problem for hypergraphs, showing that there are classes of hypergraphs for which one of the problems is NP-hard, while the other can be solved in polynomial time.
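A sketch of the polynomial-time case mentioned above, again assuming networkx: build the 2-section (every hyperedge becomes a clique) and take a minimum vertex cut there. The toy hypergraph is hypothetical:

```python
# A minimum weak vertex cut of a hypergraph equals a minimum vertex cut of
# its 2-section, the graph obtained by making every hyperedge a clique.
from itertools import combinations
import networkx as nx

def two_section(hyperedges):
    G = nx.Graph()
    for e in hyperedges:
        G.add_nodes_from(e)
        G.add_edges_from(combinations(e, 2))  # clique on each hyperedge
    return G

# Hypothetical toy hypergraph whose two "sides" meet only through 3 and 4.
H = [{1, 2, 3}, {3, 4}, {4, 5, 6}]
G = two_section(H)
print(nx.minimum_node_cut(G))  # a minimum cut, e.g. {3} or {4}
```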
Fix a finite semigroup $S$ and let $a_{1},\ldots ,a_{k},b$ be tuples in a direct power $S^{n}$. The subpower membership problem (SMP) for $S$ asks whether $b$ can be generated by $a_{1},\ldots ,a_{k}$. For combinatorial Rees matrix semigroups we establish a dichotomy result: if the corresponding matrix is of a certain form, then the SMP is in P; otherwise it is NP-complete. For combinatorial Rees matrix semigroups with adjoined identity, we obtain a trichotomy: the SMP is either in P, NP-complete, or PSPACE-complete. This result yields various semigroups with PSPACE-complete SMP, including the six-element Brandt monoid, the full transformation semigroup on three or more letters, and semigroups of all $n$ by $n$ matrices over a field for $n\geq 2$.
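To make the problem concrete, the obvious closure algorithm for the SMP can be sketched as follows (our illustration only; its worst case blows up exponentially in $n$, consistent with the hardness results above):

```python
# Generate the subsemigroup of S^n spanned by the given tuples under
# coordinatewise multiplication, then test membership of b.
def smp(mult, generators, b):
    """mult: the binary operation of the finite semigroup S;
    generators, b: tuples over S of equal length n."""
    closed = set(generators)
    frontier = list(closed)
    while frontier:
        t = frontier.pop()
        for u in list(closed):
            for prod in (tuple(map(mult, t, u)), tuple(map(mult, u, t))):
                if prod not in closed:
                    closed.add(prod)
                    frontier.append(prod)
    return b in closed

# Hypothetical toy semigroup: S = ({0, 1}, max), a two-element semilattice.
print(smp(max, [(0, 1), (1, 0)], (1, 1)))  # True: (0,1) * (1,0) = (1,1)
print(smp(max, [(0, 1), (0, 0)], (1, 1)))  # False
```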
Among the myriad desirable properties discussed in the context of forgetting in Answer Set Programming, strong persistence naturally captures its essence. Recently, it has been shown that it is not always possible to forget a set of atoms from a program while obeying this property, and a precise criterion for what can be forgotten has been presented, accompanied by a class of forgetting operators that return the correct result whenever forgetting is possible. It remains an open question, however, what to do when we have to forget a set of atoms but cannot do so without violating this property. In this paper, we address this issue and investigate three natural alternatives for forgetting when strong persistence cannot be preserved, which turn out to correspond to the different possible relaxations of the characterization of strong persistence. Additionally, we discuss their preferred usage, shed light on the relation between forgetting and notions of relativized equivalence established earlier in the context of Answer Set Programming, and present a detailed study of their computational complexity.
Pointer analysis is a fundamental static program analysis for computing the set of objects that an expression can refer to. Decades of research have gone into developing pointer-analysis methods of varying precision and efficiency for programs that use different language features, but determining precisely how efficient a particular method is has been a challenge in itself.
For programs that use different language features, we consider methods for pointer analysis using Datalog and extensions to Datalog. When the rules are in Datalog, we show how to calculate precise time complexities from the rules, using a new algorithm that decomposes rules to obtain the best complexities. When extensions such as function symbols and universal quantification are used, we describe algorithms for implementing the extensions efficiently, together with the complexities of those algorithms.
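As a concrete illustration (a hedged sketch of ours, restricted to allocation and copy statements; not the paper's algorithm), the classic inclusion-based pointer analysis can be written as a Datalog-like fixpoint:

```python
# Inclusion-based ("Andersen-style") pointer analysis, in Datalog form:
#   pts(V, O) :- alloc(V, O).
#   pts(V, O) :- copy(V, W), pts(W, O).
# Below we simply iterate these rules to a fixpoint (naive evaluation).
def andersen(allocs, copies):
    """allocs: set of (var, obj) for 'var = new obj';
    copies: set of (dst, src) for 'dst = src'."""
    pts = set(allocs)                    # rule 1: pts(V, O) :- alloc(V, O)
    changed = True
    while changed:                       # naive fixpoint iteration
        changed = False
        for dst, src in copies:          # rule 2: propagate along copies
            for v, o in list(pts):
                if v == src and (dst, o) not in pts:
                    pts.add((dst, o))
                    changed = True
    return pts

# Hypothetical program: p = new O1; q = new O2; r = p; r = q
print(sorted(andersen({("p", "O1"), ("q", "O2")},
                      {("r", "p"), ("r", "q")})))
# [('p', 'O1'), ('q', 'O2'), ('r', 'O1'), ('r', 'O2')]
```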
We propose to strengthen Popper’s notion of falsifiability by adding the requirement that when an observation is inconsistent with a theory, there must be a ‘short proof’ of this inconsistency. We model the concept of a short proof using tools from computational complexity, and provide some examples of economic theories that are falsifiable in the usual sense but not with this additional requirement. We consider several variants of the definition of ‘short proof’ and several assumptions about the difficulty of computation, and study their different implications for the falsifiability of theories.
Presburger arithmetic is the first-order theory of the natural numbers with addition (but no multiplication). We characterize the sets definable by a Presburger formula as exactly the sets whose characteristic functions can be represented by rational generating functions; a geometric characterization of such sets is also given. In addition, if $p = (p_1, \ldots, p_n)$ is a tuple of some of the free variables in a Presburger formula, we can define a counting function $g(p)$ to be the number of solutions to the formula for a given $p$. We show that every counting function obtained in this way may be represented, equivalently, either as a piecewise quasi-polynomial or as a rational generating function. Finally, we translate known computational complexity results into this setting and discuss open directions.
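A small worked example of such a counting function (our illustration, not taken from the paper):

```latex
% For the Presburger formula
%   \varphi(x; p) := (x \le p) \wedge \exists y\, (x = y + y),
% the counting function g(p) = |\{ x : \varphi(x; p) \}| counts the even
% numbers in [0, p]:
\[
g(p) = \left\lfloor \frac{p}{2} \right\rfloor + 1 =
\begin{cases}
p/2 + 1, & p \text{ even},\\
(p+1)/2, & p \text{ odd},
\end{cases}
\qquad
\sum_{p \ge 0} g(p)\, t^{p} = \frac{1}{(1-t)(1-t^{2})},
\]
% i.e. g is a piecewise quasi-polynomial whose generating function is rational.
```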
X-ray pulsar navigation is a promising technology for autonomous spacecraft navigation. The key measurement in pulsar navigation is the time delay (phase delay). There are various methods for estimating the phase delay, but most have high computational complexity. In this paper, a new method for phase delay estimation is proposed, based on the time-shift property of the discrete Fourier transform (DFT). With this method, the time complexity can be greatly reduced, and a delta-function approximation can be used to decrease the computational cost further. Numerical simulations show that the proposed method is effective for phase delay estimation, and its reduced complexity makes it more suitable for on-board implementation.
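One standard DFT-based route to delay estimation (shown here as a hedged sketch, not necessarily the authors' exact method) uses the time-shift property: a circular delay of $d$ multiplies the $k$-th DFT coefficient by $e^{-2\pi i k d/N}$, so the circular cross-correlation can be computed with FFTs in $O(N \log N)$ and peaks at the delay:

```python
# Estimate a circular time shift between a template and an observed signal
# via FFT-based cross-correlation. The "pulsar profile" here is random data,
# purely for illustration.
import numpy as np

rng = np.random.default_rng(1)
N, true_delay = 1024, 37

template = rng.random(N)                   # stand-in for a pulsar profile
signal = np.roll(template, true_delay)     # circularly delayed observation
signal += 0.05 * rng.standard_normal(N)    # a little noise

# cross-correlation via the DFT: IFFT( conj(FFT(template)) * FFT(signal) )
xcorr = np.fft.ifft(np.conj(np.fft.fft(template)) * np.fft.fft(signal))
print(int(np.argmax(np.abs(xcorr))))       # 37
```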
We answer the following question posed by Lechuga: given a simply connected space $X$ with both $H^*(X;\mathbb{Q})$ and $\pi_*(X)\otimes\mathbb{Q}$ finite dimensional, what is the computational complexity of computing the cup length and the rational Lusternik–Schnirelmann category of $X$?
By a reduction from the problem of deciding whether a given graph is $k$-colourable for $k \geq 3$, we show that even stricter versions of these problems are NP-hard.