Jean-Daniel Boissonnat, Institut National de Recherche en Informatique et en Automatique (INRIA), Rocquencourt; Mariette Yvinec, Institut National de Recherche en Informatique et en Automatique (INRIA), Rocquencourt
Computational geometry aims to design the most efficient algorithms for solving geometric problems. For this, one must agree on clear criteria for estimating or measuring the efficiency of an algorithm, or for comparing two different algorithms. This chapter recalls a few basic notions related to the analysis of algorithms. These notions are fundamental to understanding the subsequent analyses given throughout this book. The first section recalls the definition of algorithmic complexity and the underlying model of computation used in the rest of this book. The second section introduces the notions of a lower bound for the complexity of a problem, and of the optimality of an algorithm.
The complexity of algorithms
The model of computation
From a practical standpoint, the performance of an algorithm can be evaluated by how much time and memory a program encoding this algorithm requires to run on a given machine. The running time and space both depend on the particular machine, on the programming language used, and even on the skills of the programmer. They therefore cannot be considered relevant measures of efficiency for comparing different algorithms, or different implementations of the same algorithm. In order to compare, one is forced to define a standard model of a computer on which algorithms are evaluated, called the model of computation. To define a model of computation is thus essentially to define the units of time and space.
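To make the idea concrete, here is a small Python sketch (ours, purely illustrative, not the book's formal model): counting elementary operations instead of measuring wall-clock time yields a cost that does not depend on the machine, the language, or the programmer, only on the size of the input.

def linear_search(a, x):
    # Count array probes made by a linear scan.
    probes = 0
    for v in a:
        probes += 1
        if v == x:
            break
    return probes

def binary_search(a, x):              # a is assumed sorted
    # Count array probes made by binary search.
    probes, lo, hi = 0, 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        probes += 1
        if a[mid] == x:
            break
        elif a[mid] < x:
            lo = mid + 1
        else:
            hi = mid - 1
    return probes

a = list(range(1_000_000))
print(linear_search(a, 999_999))      # about 10**6 probes: linear in n
print(binary_search(a, 999_999))      # about 20 probes: logarithmic in n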
Voronoi diagrams are very useful structures, frequently encountered in several disciplines because they model growth processes and distance relationships between objects: it is not surprising to see them appear in the study of crystal growth or in studies of the large-scale structures of the universe. In nature, they can be observed in crystalline structures, on the shell of a turtle, or on the neck of a reticulated giraffe.
Voronoi diagrams are very closely related to the geometric structures encountered so far: polytopes, triangulations, and arrangements. Their mathematical properties are particularly numerous and interesting. Chapter 17 is entirely devoted to Voronoi structures with a Euclidean metric, whereas other metrics are studied in chapter 18. Chapter 19 presents results specific to dimension 2 that have no analogue in higher dimensions.
Voronoi diagrams can also be used as data structures to solve numerous problems: nearest neighbors and motion planning are two outstanding instances. Several examples are given in the exercises and throughout chapter 19.
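As a small illustration of these two uses (a sketch of ours, assuming the external numpy and scipy libraries, which are of course not part of this book), the following fragment builds the Voronoi diagram of a handful of random sites and answers a nearest-neighbour query on the same sites.

import numpy as np
from scipy.spatial import KDTree, Voronoi

# Ten random sites in the unit square.
sites = np.random.rand(10, 2)

# Voronoi diagram of the sites: each cell is the set of points closer to
# its site than to any other site.
vor = Voronoi(sites)
print(len(vor.vertices), "Voronoi vertices")

# Nearest-neighbour query: the answer is the site whose Voronoi cell
# contains the query point.
tree = KDTree(sites)
dist, idx = tree.query([0.5, 0.5])
print("nearest site:", sites[idx], "at distance", dist)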
The goal of this and subsequent chapters is to introduce the algorithmic methods that are used most frequently to solve geometric problems. Generally speaking, computational geometry has recourse to all of the classical algorithmic techniques. Readers examining all the algorithms described in this book from a methodological point of view will distinguish essentially three methods: the incremental method, the divide-and-conquer method, and the sweep method.
The incremental method is perhaps the method most emphasized in this book. It is also the most natural method, since it consists of processing the input to the problem one item at a time. The algorithm starts by solving the problem for a small subset of the input, then maintains the solution as the remaining data are inserted one by one. In some cases, the algorithm may first sort the input, in order to take advantage of the fact that the data are sorted. In other cases, the order in which the data are processed does not matter, and is sometimes even deliberately random. In the latter case, we are dealing with the randomized incremental method, which is presented and analyzed at length in chapter 5. We therefore will not expand further on the incremental method in this chapter.
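Purely as a toy illustration of the paradigm (our example, not one of this book's algorithms), the following sketch maintains the smallest axis-aligned bounding box of planar points as they are inserted one at a time.

import random

def incremental_bbox(points):
    """Maintain the smallest axis-aligned bounding box while the points
    are inserted one by one: the incremental paradigm in miniature."""
    it = iter(points)
    x, y = next(it)                        # solve the problem for the first point
    xmin = xmax = x
    ymin = ymax = y
    for x, y in it:                        # insert the remaining points one by one
        xmin, xmax = min(xmin, x), max(xmax, x)
        ymin, ymax = min(ymin, y), max(ymax, y)
    return (xmin, ymin), (xmax, ymax)

pts = [(random.random(), random.random()) for _ in range(10)]
random.shuffle(pts)                        # the insertion order may even be random
print(incremental_bbox(pts))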
The divide-and-conquer method is one of the oldest methods for the design of algorithms, and its use goes well beyond geometry. In computational geometry, this method leads to very efficient algorithms for certain problems.
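One classical geometric instance (again a sketch of ours, not an algorithm taken from this chapter) is the divide-and-conquer computation of the closest pair of points in the plane, which runs in O(n log n) time.

import math

def closest_pair_distance(points):
    """Divide-and-conquer closest pair: sort by x, split, recurse on both
    halves, then check a vertical strip around the dividing line."""
    def solve(p):                              # p is sorted by x-coordinate
        n = len(p)
        if n <= 3:                             # small cases solved directly
            return min((math.dist(a, b)
                        for i, a in enumerate(p) for b in p[i + 1:]),
                       default=math.inf)
        mid = n // 2
        x_mid = p[mid][0]
        d = min(solve(p[:mid]), solve(p[mid:]))      # conquer the two halves
        # Merge step: only points within d of the dividing line can improve d,
        # and each needs to be compared with a constant number of neighbours
        # when the strip is sorted by y-coordinate.
        strip = sorted((q for q in p if abs(q[0] - x_mid) < d),
                       key=lambda q: q[1])
        for i, a in enumerate(strip):
            for b in strip[i + 1:i + 8]:
                d = min(d, math.dist(a, b))
        return d
    return solve(sorted(points))

print(closest_pair_distance([(0, 0), (3, 4), (1, 1), (5, 5), (0.5, 0.9)]))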
To triangulate a region is to describe it as the union of a collection of simplices whose interiors are pairwise disjoint. The region is then decomposed into elementary cells of bounded complexity. The words to triangulate and triangulation originate from the two-dimensional problem, but are commonly used in a broader context for regions and simplices of any dimension.
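To make the definition concrete, here is a small sketch (assuming the external numpy and scipy libraries, which are not part of this book) that triangulates the convex hull of a planar point set; the particular triangulation it produces happens to be the Delaunay triangulation.

import numpy as np
from scipy.spatial import Delaunay

points = np.random.rand(20, 2)          # twenty random points in the plane
tri = Delaunay(points)                  # a triangulation of their convex hull

# Each row of tri.simplices lists the indices of the three vertices of one
# triangle; the triangles have pairwise disjoint interiors and their union
# covers the convex hull of the points.
print(tri.simplices.shape[0], "triangles")
print(tri.simplices[0])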
Triangulations and related meshes are ubiquitous in domains where the ambient space needs to be discretized, for instance in order to interpolate functions of several variables, or to numerically solve multi-dimensional differential equations using finite-element methods. Triangulations are widely used in robotics to decompose the free configuration space of a robot, in computer vision to reconstruct three-dimensional objects from their cross-sections, and in computer graphics to solve windowing problems or to compute illumination when rendering an image. Finally, in computational geometry, the triangulation of a set of points, a planar map, a polygon, a polyhedron, an arrangement, or of any other spatial structure is often a prerequisite to running another algorithm on the data. This is the case, for instance, for algorithms that perform point location in a planar map using a hierarchy of triangulations, and for the numerous applications of triangulations to shortest path and visibility problems.
Triangulations form the topic of the next three chapters. Chapter 11 recalls the basic definitions related to triangulations, and studies the combinatorics of triangulations in dimensions 2 and 3.
Termination is an important property of term rewriting systems. For a finite terminating rewrite system, a normal form of a given term can be found by a simple depth-first search. If the system is also confluent, normal forms are unique, which makes the word problem for the corresponding equational theory decidable. Unfortunately, as shown in the first section of this chapter, termination is an undecidable property of term rewriting systems. This is true even if one allows only unary function symbols in the rules, or only one rewrite rule (but then with function symbols of arity greater than 1). In the restricted case of ground rewrite systems, i.e. rewrite systems whose rules contain no variables, termination does become decidable. In the second section of this chapter, we introduce the notion of a reduction order. These orders are an important tool for proving termination of rewrite systems. The main problem for a given rewrite system is to find an appropriate reduction order that shows its termination. It is therefore desirable to have a wide range of different possible reduction orders available. In the third and fourth sections of the chapter, we introduce two different ways of defining reduction orders.
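As an illustrative sketch only (the representation is ours, not the book's formalism), the following Python fragment applies ground rewrite rules to terms represented as nested tuples until no rule applies; the step bound is a safeguard against the non-terminating systems discussed above.

def rewrite_once(term, rules):
    """Perform one rewrite step at the outermost matching position
    (root first, then arguments left to right).
    Returns (new_term, True) if a rule applied, otherwise (term, False)."""
    for lhs, rhs in rules:               # try to rewrite the term itself
        if term == lhs:
            return rhs, True
    if isinstance(term, tuple):          # otherwise descend into the arguments
        head, *args = term
        for i, a in enumerate(args):
            new_a, changed = rewrite_once(a, rules)
            if changed:
                return (head, *args[:i], new_a, *args[i + 1:]), True
    return term, False

def normal_form(term, rules, max_steps=1000):
    """Rewrite until no rule applies.  The step bound guards against
    non-termination, which is undecidable in general."""
    for _ in range(max_steps):
        term, changed = rewrite_once(term, rules)
        if not changed:
            return term
    raise RuntimeError("no normal form reached within the step bound")

# Ground rules only (no variables), e.g. plus(0, s(0)) -> s(0).
rules = [(("plus", "0", ("s", "0")), ("s", "0"))]
print(normal_form(("s", ("plus", "0", ("s", "0"))), rules))   # ('s', ('s', '0'))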
The decision problem
First, we show undecidability of the termination problem for term rewriting systems, and then we consider the decidable subcase of right-ground term rewriting systems (which can be treated by a slight generalization of the well-known proof for ground systems).
The purpose of this chapter is twofold. On the one hand, it introduces basic notions from universal algebra (such as terms, substitutions, and identities) on a syntactic level that does not require (or give) much mathematical background. On the other hand, it presents the semantic counterparts of these syntactic notions (such as algebras, homomorphisms, and equational classes), and proves some elementary results on their connections. Most of the definitions and results presented in subsequent chapters can be understood knowing only the syntactic level introduced in Section 3.1. However, in order to obtain a deeper understanding of these results, and of the context in which they are of interest, studying the other sections of this chapter is recommended. For more information on universal algebra see, for example, [100, 55, 173].
Terms, substitutions, and identities
Terms will be built from function symbols and variables in the usual way. For example, if f is a binary function symbol, and x, y are variables, then f(x,y) is a term. To make clear which function symbols are available in a certain context, and which arity they have, one introduces signatures.
Definition 3.1.1 A signature Σ is a set of function symbols, where each f ∈ Σ is associated with a non-negative integer n, the arity of f. For n ≥ 0, we denote the set of all n-ary elements of Σ by Σ^(n). The elements of Σ^(0) are also called constant symbols.
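To illustrate the definition (the representation below is ours, not the book's), a signature can be modelled as a mapping from function symbols to arities, and a term can be checked for well-formedness by verifying that every function symbol occurs with its declared arity.

from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class App:                    # an application of a function symbol to arguments
    symbol: str
    args: tuple = ()

def well_formed(term, sig) -> bool:
    """True iff every function symbol in the term occurs with its declared arity."""
    if isinstance(term, Var):
        return True
    return (sig.get(term.symbol) == len(term.args)
            and all(well_formed(a, sig) for a in term.args))

sig = {"f": 2, "a": 0}                  # f is binary, a is a constant symbol
t = App("f", (Var("x"), App("a")))      # the term f(x, a)
print(well_formed(t, sig))                       # True
print(well_formed(App("f", (Var("x"),)), sig))   # False: f is binary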