This chapter describes the relations between rational series and languages.
We start with Kleene's theorem, presented as a consequence of Schützenberger's theorem. Then we describe the cases where the support of a rational series is a rational language. The most important result states that if a series has finite image, then its support is a rational language (Theorem 2.10).
The family of languages which are supports of rational series has closure properties given in Section 3.4. The iteration theorem for rational series is proved in Section 3.5. The last section is concerned with an extremal property of supports which forces their rationality; to prove it, we use a remarkable characterization of rational languages due to Ehrenfeucht, Parikh and Rozenberg.
Kleene's theorem
Definition A congruence in a monoid is an equivalence relation which is compatible with the operation in the monoid. A language L is recognizable if there exists a congruence with finite index in A* that saturates L (that is, L is a union of equivalence classes).
It is equivalent to say that L is recognizable if there exist a finite monoid M, a morphism of monoids ϕ: A* → M and a subset P of M such that L = ϕ⁻¹(P).
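For instance (an illustration of ours, not taken from the text), the language of words over {a, b} containing an even number of a's is recognizable: it is the preimage of {0} under the morphism from A* to ℤ/2ℤ that counts occurrences of the letter a modulo 2. A small Scala sketch of this membership test, with all names ours:

    // Recognizing L = { words over {a, b} with an even number of a's }
    // via a morphism into the finite monoid Z/2Z (addition modulo 2).
    // Illustrative sketch only; the names are not from the text.
    object Recognizable {
      // The morphism phi: A* -> Z/2Z is determined by its values on letters.
      def phiLetter(c: Char): Int = if (c == 'a') 1 else 0

      // Image of a word: sum of the letter images, modulo 2.
      def phi(word: String): Int = word.map(phiLetter).sum % 2

      // L = phi^{-1}({0}): membership is decided by evaluating the morphism.
      def inL(word: String): Boolean = phi(word) == 0

      def main(args: Array[String]): Unit = {
        println(inL("abab")) // true  (two a's)
        println(inL("ab"))   // false (one a)
      }
    }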
The product of two languages L₁ and L₂ is the language L₁L₂ = {xy ∣ x ∈ L₁, y ∈ L₂}. If L is a language, the submonoid generated by L is the union of the powers Lⁿ over all n ≥ 0.
Formal power series have long been used in all branches of mathematics. They are invaluable in algebra, analysis, combinatorics and in theoretical computer science.
Historically, the work of M.-P. Schützenberger in the algebraic theory of finite automata and the corresponding languages led him to introduce noncommutative formal power series. This appears in particular in his work with Chomsky on formal grammars. This last point of view is at the origin of this book.
The first part of the book, composed of Chapters 1–4, is especially devoted to this aspect: Formal power series may be viewed as formal languages with coefficients, and finite automata (and more generally weighted automata) may be considered as linear representations of the free monoid. In this sense, via formal power series, the algebraic theory of automata becomes a part of representation theory.
The first two chapters contain general results and discuss in particular the equality between rational and recognizable series (Theorem of Kleene–Schützenberger) and the construction of the minimal linear representation. The exposition illustrates the synthesis of linear algebra and syntactic methods inherited from automata theory.
The next two chapters are concerned with the comparison of some typical properties of rational (regular) languages, when they are transposed to rational series. First, Chapter 3 describes the relationship with the family of regular languages studied in theoretical computer science. Next, the chapter contains iteration properties for rational series, also known as pumping lemmas, which are much more involved than those for regular languages.
This chapter contains the definitions of the basic concepts, namely rational and recognizable series in several noncommuting variables.
We start with the definition of a semiring, followed by the notation for the usual objects in free monoids and formal series. The topology on formal series is briefly introduced.
Section 1.4 contains the definition of rational series, together with some elementary properties and the fact that certain morphisms preserve the rationality of series.
Recognizable series are introduced in Section 1.5. An algebraic characterization is given. We also prove (Theorem 5.1) that the Hadamard product preserves recognizability.
Weighted automata are presented in Section 1.6.
The fundamental theorem of Schützenberger (equivalence between rational and recognizable series, Theorem 7.1) is the concern of the last section. This theorem is the starting point for the developments given in the subsequent chapters.
Semirings
Recall that a semigroup is a set equipped with an associative binary operation, and a monoid is a semigroup having a neutral element for its law.
A semiring is, roughly speaking, a ring without subtraction. More precisely, it is a set K equipped with two operations + and · (sum and product) such that the following properties hold:
(i) (K, +) is a commutative monoid with neutral element denoted by 0.
(ii) (K, ·) is a monoid with neutral element denoted by 1.
(iii) The product is distributive with respect to the sum.
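As an illustration only (the trait and the instances below are ours, not the book's), these axioms translate naturally into a Scala type class, with the Boolean semiring and the natural numbers as two standard examples:

    // A sketch of the semiring axioms as a Scala type class; names are ours.
    trait Semiring[K] {
      def zero: K              // neutral element of +
      def one: K               // neutral element of ·
      def plus(x: K, y: K): K  // commutative and associative
      def times(x: K, y: K): K // associative, distributes over plus
    }

    object SemiringInstances {
      // The Boolean semiring: + is "or", · is "and".
      val booleanSemiring: Semiring[Boolean] = new Semiring[Boolean] {
        val zero = false
        val one  = true
        def plus(x: Boolean, y: Boolean): Boolean  = x || y
        def times(x: Boolean, y: Boolean): Boolean = x && y
      }

      // The semiring of natural numbers (modelled here with Int, ignoring overflow).
      val natSemiring: Semiring[Int] = new Semiring[Int] {
        val zero = 0
        val one  = 1
        def plus(x: Int, y: Int): Int  = x + y
        def times(x: Int, y: Int): Int = x * y
      }
    }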
If K is a subsemiring of a semiring L, each K-rational series is clearly L-rational. The main problem considered in this chapter is the converse: how to determine which of the L-rational series are rational over K. This leads to the study of semirings of a special type, and also shows the existence of remarkable families of rational series.
In the first section, we examine principal rings from this aspect. Fatou's Lemma is proved and the rings satisfying this lemma are characterized (Chabert's Theorem 1.5).
In the second section, Fatou extensions are introduced. We show in particular that ℚ+ is a Fatou extension of ℕ (Theorem 2.2 due to Fliess).
In the third section, we apply Shirshov's theorem on rings with polynomial identities to prove criteria for rationality of series and languages. This is then applied, in the last section, to Fatou ring extensions.
Rational series over a principal ring
Let K be a commutative principal ring and let F be its quotient field. Let S ∈ K⟪A⟫ be a formal series over A with coefficients in K. If S is a rational series over F, is it also rational over K? This question admits a positive answer, and there is even a stronger result, namely that S has a minimal linear representation with coefficients in K.
Theorem 1.1 (Fliess 1974a) Let S ∈ K⟪A⟫ be a series which is rational of rank n over F. Then S has a minimal linear representation, of dimension n, with coefficients in K.
We define rational expressions, their star height and rational identities. Section 4.1 studies the rational identity E* ≡ 1 + EE* ≡ 1 + E*E and its consequences, as well as the operators a⁻¹E. In Section 4.2, we show that, over a commutative ring, rational identities are all consequences of the previous identities. In Section 4.3, we show that, over a field, star height may be characterized through some minimal representation, and deduce that the star height of the star of a generic matrix of order n is n. In the last section, we see that the star height may decrease under field extension and show how to compute the absolute star height, which is the star height over the algebraic closure of the ground field.
Rational expressions
Let K be a commutative semiring and let A be an alphabet. We define below the semiring of rational expressions on A over K. This semiring, denoted ε, is defined as the union of an increasing sequence of subsemirings εₙ for n ≥ 0. Each such subsemiring is of the form εₙ = K⟨Aₙ⟩ for some (in general infinite) alphabet Aₙ; moreover, there will be a semiring morphism E ↦ (E, 1), εₙ → K. We call (E, 1) the constant term of the rational expression E.
Now A₀ = A, ε₀ = K⟨A⟩ and the constant term is the usual constant term.
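To fix ideas (a sketch of our own devising, not the book's construction), rational expressions can be modelled as an algebraic data type in Scala; the star height discussed in this chapter is then simply the maximal nesting depth of the star operator:

    // A sketch (ours) of rational expressions over an alphabet, with coefficients
    // in an abstract semiring K, together with their star height.
    sealed trait RatExp[K]
    final case class Scalar[K](k: K) extends RatExp[K]                    // an element of K
    final case class Letter[K](a: Char) extends RatExp[K]                 // a letter of the alphabet A
    final case class Sum[K](l: RatExp[K], r: RatExp[K]) extends RatExp[K]
    final case class Prod[K](l: RatExp[K], r: RatExp[K]) extends RatExp[K]
    final case class Star[K](e: RatExp[K]) extends RatExp[K]              // allowed only when the constant term of e is 0

    object RatExp {
      // Star height: the maximal number of nested stars in the expression.
      def starHeight[K](e: RatExp[K]): Int = e match {
        case Scalar(_) | Letter(_) => 0
        case Sum(l, r)             => starHeight(l) max starHeight(r)
        case Prod(l, r)            => starHeight(l) max starHeight(r)
        case Star(inner)           => 1 + starHeight(inner)
      }
    }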
In this appendix, we show how to use both the Scala compiler (scalac) and the Scala interpreter (scala) by experimenting with their command line arguments. Part of our presentation is based on the man pages coming with every Scala distribution. For our exposition, we assume a Unix terminal.
Scala is a scalable language. Marketing-wise, this is mentioned quite frequently. The good news is that Scala is indeed scalable in many ways and, after all, there is no harm in advertising features that already exist. One such dimension of scalability has to do with the provided tools and how they can be used to improve the overall experience of programming in Scala. We will see that the features provided give a pleasant feeling that the language “grows” to our needs.
For the following, we assume that Scala is installed under a folder denoted by the value of the environment variable SCALA_HOME. Under Unix, this value is obtained by $SCALA_HOME, while under Windows this is obtained by %SCALA_HOME%. It is good practice to set this variable, since other applications that use Scala may depend on it.
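For example (a tiny illustrative snippet of ours, not from the appendix), a Scala program can itself look this variable up through sys.env:

    // Print the value of SCALA_HOME, if it is set.
    object ShowScalaHome {
      def main(args: Array[String]): Unit =
        sys.env.get("SCALA_HOME") match {
          case Some(path) => println(s"SCALA_HOME = $path")
          case None       => println("SCALA_HOME is not set")
        }
    }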
The Scala compiler
The compiler is the workhorse of the whole platform. Even the interpreter uses it internally in order to give the impression of a scripting environment. As expected, it is packed with a wealth of command line options. Running scalac with no options and no parameters informs us of all the options. The outcome is given in Table C.1.
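As a minimal, purely illustrative example (the file and object names are ours), the basic compile-and-run cycle with scalac and scala looks like this:

    // HelloCompiler.scala: a minimal program used to exercise the compiler.
    object HelloCompiler {
      def main(args: Array[String]): Unit =
        println("compiled with scalac, run with scala")
    }

    // From a Unix terminal:
    //   $ scalac HelloCompiler.scala   (produces the .class files)
    //   $ scala HelloCompiler          (runs the compiled program)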
Scala is a scalable object-oriented programming language with features found in functional programming languages. Nowadays, the object-oriented approach to software construction is considered the most successful methodology for software design, mainly because it makes software reuse extremely easy. On the other hand, functional programming offers some very elegant tools which, when combined with an object-oriented program development philosophy, define a really powerful programming methodology. Generally, any programming language that can be extended seamlessly is called scalable. When all these ideas are combined in a single tool, the result is a particularly powerful programming language.
Object orientation
The first object-oriented programming language was SIMULA, which was designed and implemented by Ole-Johan Dahl and Kristen Nygaard. The SIMUlation LAnguage was designed “to facilitate formal description of the layout and rules of operation of systems with discrete events (changes of state).” In other words, SIMULA was designed as a simulation tool for discrete systems. Roughly, a simulation involves the representation of the functioning of one system or process by means of the functioning of another. In order to achieve this design goal, the designers equipped the language with structures that would make it easy to establish the correspondence between a software simulation and the physical system itself. The most important of these structures is the process. A process “is intended as an aid for decomposing a discrete event system into components, which are separately describable.”
Symbolic computation refers to the use of machines, such as computers, to manipulate mathematical content (i.e., equations, expressions, etc.) in symbolic form and deliver a result in this form. A classical example of such a system is one that can compute the derivative of a function expressed in some symbolic form (e.g., sin(x) + 1). In this chapter we will explain why symbolic computation is interesting and what it can achieve, and then we are going to present a simple system that can differentiate functions.
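To fix ideas (a toy sketch of ours, not the system developed later in this chapter), symbolic differentiation can be phrased as structural recursion over an expression tree:

    // A toy symbolic differentiator; illustrative only.
    // Expressions in one variable x, built from constants, x, sums, products, sin and cos.
    sealed trait Expr
    case object X extends Expr
    final case class Const(value: Double) extends Expr
    final case class Add(l: Expr, r: Expr) extends Expr
    final case class Mul(l: Expr, r: Expr) extends Expr
    final case class Sin(e: Expr) extends Expr
    final case class Cos(e: Expr) extends Expr

    object Deriv {
      // d/dx, computed purely on the symbolic form of the expression.
      def d(e: Expr): Expr = e match {
        case X          => Const(1.0)
        case Const(_)   => Const(0.0)
        case Add(l, r)  => Add(d(l), d(r))
        case Mul(l, r)  => Add(Mul(d(l), r), Mul(l, d(r)))           // product rule
        case Sin(inner) => Mul(Cos(inner), d(inner))                 // chain rule
        case Cos(inner) => Mul(Mul(Const(-1.0), Sin(inner)), d(inner))
      }

      def main(args: Array[String]): Unit =
        // d/dx (sin(x) + 1) = cos(x)·1 + 0
        println(d(Add(Sin(X), Const(1.0))))
    }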
Mechanical symbol manipulation
At the beginning of the twentieth century, the great German mathematician David Hilbert asked whether it would be possible to devise a method to solve mechanically any Diophantine equation, that is, any polynomial equation with integer coefficients. In other words, he asked whether it is possible to solve any Diophantine equation by dutifully following a clerical procedure. In fact, Hilbert was dreaming of a fully mechanized mathematical science. Unfortunately for Hilbert, it has been shown that one cannot construct a general algorithm to solve any Diophantine equation. A consequence of this result was that the dream of a fully mechanized mathematical science was not feasible. Nevertheless, certain problems of mathematics can be solved by purely mechanical methods. For example, it has been demonstrated that operations like differentiation and integration can be performed mechanically. Systems that perform such operations are known as symbolic computation or computer algebra systems.