
The Soviet scientific programme on AI: if a machine cannot ‘think’, can it ‘control’?

Published online by Cambridge University Press:  07 August 2023

Olessia Kirtchik*
Affiliation:
Centre internet et société (CIS-CNRS), Paris, France

Abstract

This article analyses the intellectual and institutional development of the artificial-intelligence (AI) research programme within the Soviet Academy of Sciences from the 1970s to the 1980s. Considering the places and ideas from which it borrowed, I contextualize its goals and projects as part of a larger technoscientific movement aimed at rationalizing Soviet governance, and unpack shared epistemological and cultural assumptions. By tracing their origins to debates accompanying the introduction of cybernetics into Soviet intellectual and political life in the 1950s and early 1960s, I show how Soviet conceptions of ‘thinking machines’ interacted with dialectical materialism and communist socio-technical imaginaries of governance and control. The programme of ‘situational management’ developed by Dmitry Pospelov helps explain the resulting conception of AI as control systems aimed at solving complex tasks that cannot be fully formalized and therefore require new modelling methods to represent real-world situations. This specific orientation can be understood, on the one hand, as a research programme competing with systems analysis and economic cybernetics to rationalize Soviet management, and, on the other hand, as a field trying to demarcate itself from a purely statistical or mathematical approach to modelling cognitive processes.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (https://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is unaltered and is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use or in order to create a derivative work.
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press on behalf of British Society for the History of Science

‘Artificial intelligence in the literal sense of the word does not exist and will not exist’, Germogen Pospelov told fellow members of the Soviet Academy of Sciences in 1986, even as he argued for its development.Footnote 1 Pospelov was the main driving force behind the institutionalization of research on AI in the Soviet Union, yet his statement reveals a paradoxical situation: it presents the term ‘AI’ as an empty signifier while promoting the field as vital to the Soviet Union's ability to compete in a rapidly globalizing and modernizing world. Describing the deplorable state of Soviet R & D in new information technologies in contrast to the leading capitalist countries, Pospelov argued for the establishment of a scientific Council on Artificial Intelligence. Eventually established within the Academy of Sciences in 1987, the council would allow the label ‘AI’ to be recognized and would help consolidate this field of research within Soviet academia, at a moment when the computerization and informatization of the nation had been declared a high priority by the government.

Yet rather than the beginning of Soviet AI, this moment is better understood as marking the end of an earlier period of negotiation, critique and nurturing of different ideas and approaches to the problems of modelling cognition and behaviour. De-Stalinization and the partial opening of Soviet society and academia in the second half of the 1950s had enabled a Marxist discussion of computation and cybernetics, stimulated by an intense international circulation of people and ideas between East and West. In particular, new interdisciplinary scientific fields seeking to mathematize the life sciences and humanities emerged under the institutional umbrella of ‘cybernetics’ from the late 1950s onwards.Footnote 2 These included machine translation and computational linguistics, pattern recognition and statistical learning, bionics and biocybernetics, semiotics, economic cybernetics and decision sciences, and the psychology of thinking and problem solving. Although all benefited from the transfer of Western ideas and tools, they also drew on earlier theoretical advances in physiology, mathematical logic and statistics, and optimization and control theory in the works of Bernstein, Gelfand, Kantorovich, Kolmogorov and Markov, as well as on national traditions in the social and human sciences established before the Second World War, such as Propp's formalist literary theory or Vygotsky's developmental psychology. Like their counterparts in the West, these different epistemic communities in the Soviet Union helped produce and promote specific versions of what historians have described as ‘algorithmic thinking’ or ‘algorithmic rationality’.Footnote 3

The common contexts and features of the Soviet algorithmic cultures that resulted from the intervention of cybernetics, systems analysis and information theory in the social and human sciences are yet to be reconstructed. However, recent groundbreaking literature has highlighted some aspects and prominent figures. Michael Gordin's recovery of forgotten episodes of Soviet machine translation before and after the war emphasizes the embeddedness of algorithms in humans and in their socio-historical contexts and narratives.Footnote 4 Ekaterina Babintseva has analysed the approach to programmed instruction developed by the psychologist Lev Landa, who drew on American ideas and creatively adapted them to the Soviet context.Footnote 5 A similar vision of an ‘algorithm’ as an amplifier of human creative capacities was promoted by Andrei Ershov, a leading figure in Soviet computer science, as Ksenia Tatarchenko shows in describing the emergence of an alternative Soviet version of ‘algorithmic thinking’ during the Cold War.Footnote 6 All these lines of research emphasize both the importance of international circulation and the distinctive flavour of Soviet ‘algorithmic’ cultures, which rejected purely mechanistic conceptions and endorsed a kind of holistic and humanistic approach.

Situating the Soviet AI community as part of this larger ‘algorithmic’ movement, this article reconstructs some of the intellectual and organizational contexts that produced quite specific modelling practices and views of intelligent machines. Drawing on the archival files of the Academy of Sciences, personal and institutional archives, and various scientific and literary works on AI published during the Soviet period, I focus on the scientists and engineers who explicitly claimed the label ‘AI’ at the Soviet Academy of Sciences. Here, the original Soviet approach to defining and modelling ‘intelligence’ was consolidated around a specific project defined as ‘situational management’ in large complex systems, which dominated the Soviet ‘AI’ landscape before its intellectual and institutional redefinition after 1987. This approach was developed by Dmitry Pospelov (not to be confused with his namesake Germogen Pospelov) from the mid-1960s, then redefined in the early 1970s as part of the AI research programme. In order to clarify what did and did not count as ‘AI’ within the Soviet academic mainstream, my socio-historical reconstruction highlights the different epistemological (and in a broad sense ideological) assumptions about human thinking, learning and social coordination that underlie this specific theory of AI, but shows that these developments cannot be explained by dominant cultural metaphors alone.Footnote 7

First, I situate the Soviet programme of ‘situational management’ in relation to the debate on ‘thinking machines’ which accompanied the legitimation and ideological accommodation of cybernetics in Soviet academia in the 1950s and early 1960s. I consider the influence of anti-individualist methodologies and dialectical materialism as a distinct philosophy of science on the ways Soviet scientists and engineers perceived and contested the American man–machine analogy. The article then outlines key elements of the institutionalization of the ‘AI’ research programme at the Soviet Academy of Sciences, as part of a cybernetic and systems-analytical movement. This was the source of ideas and tools for imagining new technologies of government and even, as recent scholarship has suggested, for the development of new socialist governmentalities in the 1960s and 1970s.Footnote 8 I show how this context shaped the Soviet AI community's deep engagement with problems of control and coordination in complex organizations and systems. Finally, I examine in more detail the development of D. Pospelov's theory of ‘situational management’ and ‘applied semiotics’ in dialogue with the work of the mathematician Mikhail Tsetlin and the psychologist Veniamin Pushkin. I show how these interdisciplinary encounters helped shape an understanding of complexity and its implications for management and control, on the one hand, and a vision of human thinking as situated, and of machine intelligence as simulating the ways in which humans learn and operate in real-world situations, on the other.

My findings explain the apparent paradox that, although grounded in cybernetics, the Soviet approach to AI was characterized by a radical critique of the American ‘mechanistic’ and ‘reductionist’ approach to human intelligence, and by a reluctance to embrace the cybernetic mind–machine analogy. As we shall see, this premise prevented Soviet AI theorists from regarding an intelligent machine, or computer, as a ‘thinking’ entity in its own right. For them, computers could only ever be tools to augment inherently human creative capacities. I argue that, rather than focusing on modelling individual reasoning (understood as logical inference or rational decision making) like their Western counterparts, Soviet ‘AI’ specialists concentrated on the problems of coordination and control – partly as a response to the shortcomings of the centrally managed and administratively controlled Soviet economy.Footnote 9 For both Pospelovs and their colleagues, the goal of ‘AI’ was thus to create technical means for managing large, complex systems using knowledge of the world that cannot be strictly formalized.Footnote 10

Soviet views of ‘thinking machines’ as tools to think with

The foundational debate about the possibility of developing thinking machines in the Soviet Union originated in the early reception of cybernetics. At the beginning of the 1950s, cybernetics (like other fields such as genetics or linguistics) fell victim to the anti-American campaigns of late Stalinism and was stigmatized as an ‘obscurantist’, ‘bourgeois pseudoscience’ in a series of publications in the main philosophy journals and popular media.Footnote 11 This ideological campaign did not, however, affect the early development of computer technology, which was given high priority by the state and the military. In the second half of the 1950s, Soviet academics sought to define the normative implications of cybernetics so as to ensure its compatibility with the tenets of Marxism–Leninism and the goals of socialist construction. An object of evolving and sometimes competing interpretations and constant negotiation, Soviet Marxism–Leninism should not be seen as perfectly coherent or fixed in time. In particular, the progress of science and the introduction of cybernetics required adjustments and reassessments of certain elements of official Soviet doctrine.Footnote 12 The ultimately victorious view of computers as ‘tools to think with’ rather than ‘thinking machines’ emerged through consensus building among scientists, philosophers and engineers, including, importantly, both proponents and critics of Western cybernetics and AI.

It is important to stress that the rejection of the basic ‘man–machine’ analogy in the Soviet sciences and humanities did not come exclusively from orthodox Marxist–Leninist philosophers. Contrary to the popular historical narrative, Soviet enthusiasts of this new fashionable science also actively participated in debates on the normative implications of cybernetics and the possibilities and limits of ‘intelligent’ machines. The convergence on this particular point is remarkable, suggesting that the refusal to see computers as thinking entities had deeper roots than the ideological anti-cybernetics campaign of the early 1950s. One factor was the epistemic and ideological influence of post-war Soviet Marxism–Leninism and of dialectical materialism, its philosophy of science (which cannot be reduced to a mere ideological constraint).Footnote 13 This world view was based on distinctions between objective (matter) and subjective (mental) reality, and between material and spiritual production, and affirmed a privileged gnoseological position of Man in relation to other living and non-living beings (a particular Soviet Marxist humanism). Arguably, post-war developments in AI also maintained a complex dialogue with the scientific innovations and debates of the 1920s and 1930s, and even with earlier Russian intellectual traditions. Although precise influences and intellectual lineages are not always easy to establish, the similarity of positions and types of argument regarding the nature of human thought and learning, and the relations between matter and consciousness, human and machine, in various disciplinary communities of the late Soviet Union points to common genealogies and precursors. Importantly, Lev Vygotsky's (1896–1934) theory of child development as an essentially social activity, formulated before the war, remained dominant in Soviet psychology well into the 1960s and 1970s.Footnote 14 Similarly, Ksenia Tatarchenko, Anya Yermakova and Liesbeth de Mol trace the holistic and human-centred outlook of the post-war Soviet school of mathematical logic back to the Russian scientific tradition of the fin de siècle.Footnote 15

Whatever its precise genealogy, the Soviet reluctance to embrace the man–machine analogy rested on two main arguments, repeated in countless philosophical, scientific and general writings. The first concerned the distinction between creative acts (including scientific and engineering creativity) and mechanical activities such as calculation.Footnote 16 According to this view, computers could not produce truly novel ideas, concepts or images, but only reproduce existing patterns or clichés. The second, more substantial, argument was that human thinking is social in nature, engendered not by chemical brain processes but by the collective activity of countless generations of people – a product of socialization, not of neurology. On this account, American cybernetics was judged to offer a mechanistic and reductionist approach to mind and consciousness that ignored fundamental qualitative differences between human thinking and machine performance.Footnote 17 This vision had far-reaching consequences for defining the nature and purpose of machine intelligence in the socialist context: computers are created by humans and operate strictly according to the rules of mathematics and logic.Footnote 18 They are capable of simulating some human intellectual capacities, but, most importantly, imitating thinking is not thinking itself.

This position was formulated early on by the Soviet science fiction writer Anatoly Dneprov, an engineer, administrator and popularizer of science. Dneprov's 1961 short story ‘Game’ presented an argument strikingly similar to the thought experiment now generally known as the ‘Chinese room’, first published by John Searle in 1980.Footnote 19 In the story, 1,400 delegates to a congress of young mathematicians are asked to take part in an experiment, or ‘game’. The group gathers at a stadium in a specific order to transmit information – coded as 0 or 1 – from one person to the next according to an algorithm given in advance. This goes on for a few hours. The participants thus form a giant living information processor, designed, as they are told once the game is over, to translate a short phrase from Portuguese into Russian without their having the slightest idea of what they were doing. According to the game's organizer, this definitively proves that computers are incapable of ‘thinking’. Dneprov wrote,

If you, as the thinking structural units of our logical scheme, had no idea what you were doing, is it possible to seriously talk about the thinking of electronic-mechanical devices made up of parts, whose ability to think is not defended even by the most ardent supporters of the electronic brain … I think that our game unambiguously solved the question: can a machine think? It clearly demonstrated that even the most subtle imitation of thinking by machines is not thinking itself – the highest form of movement of living matter.Footnote 20
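Dneprov's point is easiest to appreciate in executable form. The sketch below is not his stadium algorithm (the story's Portuguese-to-Russian procedure is not fully specified) but a toy analogue of the same premise, written in Python: each ‘participant’ knows only a single fixed rule for combining two bits, yet the ensemble computes something (here, four-bit addition) that no individual participant understands. The wiring and all names are my own illustration.

```python
# Each 'participant' executes one fixed rule on the bits handed to them;
# nobody is told the collective task (here, adding two 4-bit numbers).
def participant(a, b):
    return 1 - (a & b)  # the only rule any participant knows (NAND)

def full_adder(a, b, cin):
    # Nine 'participants' wired together as a standard NAND full adder.
    n1 = participant(a, b)
    n2 = participant(a, n1)
    n3 = participant(b, n1)
    n4 = participant(n2, n3)   # equals a XOR b
    n5 = participant(n4, cin)
    n6 = participant(n4, n5)
    n7 = participant(cin, n5)
    s = participant(n6, n7)    # sum bit
    c = participant(n1, n5)    # carry out
    return s, c

def add4(x, y):
    out, carry = 0, 0
    for i in range(4):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        out |= s << i
    return out | (carry << 4)

print(add4(9, 7))  # 16, computed by 'participants' who know only NAND
```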

Importantly, we find this kind of argument against the mechanistic approach to the human mind in a wide variety of Soviet thinkers at the time, from orthodox Marxist–Leninists to semi-marginal or semi-dissident yet culturally authoritative figures. In 1966, the heterodox Marxist philosopher Evald Ilyenkov (1924–79) produced one of the most insightful critiques of machine intelligence from the viewpoint of dialectical materialism:

The Western technical intelligentsia, including the cybernetic and mathematical intelligentsia, is therefore entangled in the problem of ‘man–machine’ because they don't know how to formulate it properly; that is, as a social problem, as a problem of the relationship between man and man, mediated by the material body of civilization, including the modern machine technology of production.Footnote 21

In his view, far from being individual, intelligence results from the social and material activity of generations of people. More counterintuitively, this vision was shared by scholars of the social and human sciences explicitly inspired by the new and fashionable systemic and cybernetic approaches, who made extensive use of their vocabulary, such as ‘homeostasis’ and ‘information’.Footnote 22 In his 1977 article ‘Brain – text – culture – artificial intelligence’, the main figure of the Tartu–Moscow Semiotic School, Yury Lotman (1922–93), suggested that in order to create a theory of artificial intelligence, one should not start from the facts of individual consciousness, but from the collective consciousness that is culture.Footnote 23 Both Lotman and Ilyenkov believed that intelligence – like culture – is dialogical (that is, intersubjective) and dialectical (driven by contradiction). In his literary parody, ‘The mystery of the black box’, Ilyenkov argued forcefully that the binary and internally non-contradictory logic of computers was genuinely incompatible with dialectical human thought.Footnote 24

Ilyenkov challenged, in particular, the prominent mathematician Andrey Kolmogorov, who aligned himself with the ‘radical cyberneticians’ in the general-public media and popular science.Footnote 25 In a 1961 public lecture entitled ‘Automata and life’, Kolmogorov argued that it was in principle possible to study and model the human mind scientifically.Footnote 26 Computers opened unprecedented possibilities for creating artificial life and intelligence, according to a purely functionalist understanding of both (i.e. one not reproducing the internal structure or substance of natural intelligence). ‘Systems consisting of a very large number of elements, each of which acts purely “arithmetically”, can acquire qualitatively new properties’, he argued. However, Kolmogorov was far from believing that present-day computers, or even supposed ‘self-learning automata’, could be represented as ‘thinking’ or as imitating human creative activities such as ‘composing music’ or ‘writing poetry’ – a misconception based on ‘an extremely simplified idea of the real nature of human higher nervous activity, and especially of creative activity’.Footnote 27 Considering whether or not machines can think, Kolmogorov wrote, ‘In practice, I am a big sceptic. But it is wrong to try to hide behind the fact that there is no dialectic in the machine’.Footnote 28 In other words, the dialectical conception of thinking should not prevent scientists from exploring these questions.

For Kolmogorov and other ‘cyberneticians’ (as opposed to ‘philosophers’), the legitimacy of ‘cybernetics’ and related research activities depended on demonstrating that the scientific study and modelling of living organisms and their cognitive processes and functions was based on truly materialistic premises. However, in their writings and public interventions it is difficult to find expressions of a purely mechanical view of ‘thinking’ or reasoning of the kind common among their Western colleagues. The view that eventually prevailed in the emerging Soviet AI mirrored the Marxist–Leninist formulation of a computer as a tool capable of assisting or augmenting human thinking, but not a thinking entity itself.

This conception of AI continued to echo before various audiences into the 1980s. Dmitry Pospelov, a recognized intellectual leader of the Soviet AI community, argued in a 1976 interview with the widely read general-public newspaper Literaturnaya gazeta that no existing AI programs could yet imitate natural human intelligence, which was irreducible to logic or calculation.Footnote 29 Addressing researchers in 1985, Germogen Pospelov wrote,

Artificial Intelligence doesn't ‘exist’, [but] there is a property of computers to produce the same results that are engendered in the process of human creative activity … All the properties of computers that simulate creative processes are the result of the fact that human knowledge and intellect are materialized (represented) in a computer and, of course, the machine has no intellect of its own; therefore, when they talk about a chess tournament with a computer, it is, in fact, a tournament between programmers who have invested their art in writing chess programs.Footnote 30

In sum, the debates surrounding the introduction of cybernetics in the 1950s did not pit supporters of computers and their usefulness for the Soviet national economy against opponents. Rather, they opposed ‘dialecticians’ (or ‘philosophers’), who did not believe it was possible to study and model thinking or creative activity using objective scientific methods, to those scientists, the ‘cyberneticians’, who believed that such a possibility existed at least in principle. For a group of theorists and practitioners to whom we now turn, ‘AI’ became an innovative tool that would help solve complex problems in numerous social and scientific fields such as biology, medicine, management and design, by teaching or programming automata to operate in the world on the basis of a thorough and rigorous scientific study of human thinking.

Institutional and international recognition of Soviet ‘AI’

The prevailing scepticism about ‘thinking machines’ in the Soviet Union did not prevent the adoption of the label ‘AI’, first in scientific publications and general-public venues and, from the 1970s onwards, in academic research. A growing number of scientists interested in modelling machine intelligence were scattered across many different institutions, most often academic computer centres and institutes specializing in control science and engineering, applied mathematics, cybernetics, information science, psychology and linguistics. The introduction of a new label and field required vigorous lobbying at the highest level of the academic administrative hierarchy, as had been the case with cybernetics in the late 1950s.Footnote 31

‘AI’ first appeared as a distinct label in official Soviet academic structures in 1973, when a Scientific Council on AI was established within the Committee on Systems Analysis under the Presidium of the USSR Academy of Sciences, the institution that coordinated most civil scientific research. A year later an AI section was established as part of the Council for ‘Cybernetics’ through the efforts of Germogen Pospelov, who headed both structures. A recent recipient of the prestigious USSR State Prize and later a full member of the academy, Pospelov was a heavyweight figure in the institutionalization of Soviet AI. A former air force general and specialist in automatic control, he became interested in applying cybernetics to the national economy after retiring from the military.

Establishing the council and section on AI allowed Soviet researchers, for the first time, to take stock of different lines of research and existing capacities, and to plan research projects and other scientific activities under this controversial label. The two institutional structures overlapped considerably and included prominent academic figures such as Viktor Glushkov, Andrei Ershov and Dmitry Okhotsimsky.Footnote 32 Under the auspices of the scientific section, a regular open seminar on AI was held in Moscow for many years, as well as numerous scientific meetings and other events that helped consolidate AI as a field of research, attracting funding and new researchers. In particular, in 1974 it made possible the first Soviet congress devoted entirely to AI, held in Tbilisi, Georgia. The Fourth International Joint Conference on Artificial Intelligence also took place in Tbilisi in September 1975 (the first was held in Washington in 1969). Its international committee was chaired by Patrick Winston (MIT) and included Pat Hayes (Essex), John McCarthy (Stanford), Marvin Minsky (MIT) and the dean of Soviet cybernetics, Aksel′ Berg (USSR Academy of Sciences). Attended by 220 foreign scientists (including over a hundred from the United States), this major event reflected the academic diplomacy of détente but also expressed a pragmatic interest on both sides of the political divide. The idea of establishing international cooperation in AI, and in particular an international laboratory for computational sciences, had been discussed between American and Soviet specialists since 1973.Footnote 33 Before and during the conference, a series of negotiations and visits took place to this effect. During a two-week trip to the United States in 1977, a group of Soviet scientists visited major American centres of AI and computer technology, such as MIT, Berkeley and Stanford, as well as research centres of private companies such as Xerox and IBM, and government organizations. The mixed Soviet–American working group sought to jointly explore the use of computers in economics and management.Footnote 34 Other scientific meetings followed, and international contacts intensified. The same year, on the joint initiative of Germogen Pospelov and the British scientist Donald Michie, an international meeting on AI was organized in Repino, near Leningrad, which brought together some of the most important Soviet and Western AI specialists.

This initial impetus for cooperation slowed in the late 1970s following the Soviet invasion of Afghanistan, the arrest and exile of Andrei Sakharov and, more generally, the repression of the human rights movement in the USSR. Even without this cooling, substantial differences in approaches and styles of scientific work were already evident at the 1975 IJCAI meeting in Tbilisi, which featured narrowly pragmatic and applied papers by foreign (mainly American) scientists alongside more theoretically motivated Soviet contributions. In a post-conference report, a Soviet participant criticized the American offerings as ‘very specific, utilitarian, [and] without theoretical justification and generalization, but effective’.Footnote 35 At the same time, the international exchanges of the mid-1970s allowed leading Soviet AI specialists to observe that, despite a comparable level of theoretical development (and even superiority in some areas), Soviet AI was losing ground in funding, training, technical equipment and applications, as well as in the effective organization of scientific communication that they saw in the United States.

In the following decade, G. Pospelov and members of the academic sections sought to further institutionalize Soviet AI research, which still lacked journals, research centres, university departments and dedicated curricula. Whereas in 1975 there were thirty to thirty-five specialists capable of leading AI projects in the Soviet Union, by 1986 G. Pospelov estimated the number engaged in the field at 250. By contrast, the United States probably had several thousand people working on AI in university and private research centres. In his view, the relative weakness of Soviet AI was of a piece with the ever-increasing lag in the development and production of computers and in software engineering compared to the advanced capitalist countries.Footnote 36

By the mid-1980s, Pospelov's determination had paid off; he had established a new vision of the role of AI in Soviet academic and administrative nomenclature. Following his 1986 report, the Presidium of the Academy of Sciences decided to transform the AI section of the Council for Cybernetics into a new Council for ‘Artificial Intelligence’ (while the former AI section of the Committee for Systems Analysis was renamed ‘Semiotics and Cognitology’).Footnote 37 The new council was now under the auspices of the Division of Informatics, Computer Engineering and Automation, which meant that AI research was finally detached from cybernetics and closer to the academic and ministerial bodies responsible for computer science, and to the electronics industry. In 1987, the Presidium set a high priority for AI research. It recommended the reopening of previously closed departments of computational linguistics in Moscow, Novosibirsk, Kiev and other cities, and establishing a department of cognitive psychology at Moscow State University. It also advised organizing major national and international conferences on AI, and strengthening international contacts in other ways.Footnote 38 Finally, in the turbulent year of 1989, the Soviet Association for AI was founded, just two years before the final collapse of the Soviet system.

AI understood as control systems

The research programme promoted through the activities of the academic sections on AI, led by G. Pospelov in the 1970s and 1980s, was motivated primarily by the belief that computers and information technologies would be indispensable for rationalizing the management of the socialist economy.Footnote 39 First advanced in the late 1950s by a pioneer of Soviet cybernetics, Aksel′ Berg, the idea of automated control of the socialist economy was most consistently developed and promoted by the leader of Ukrainian cybernetics, Viktor Glushkov, the author and main driving force behind the project of a nationwide automated system of economic management (OGAS).Footnote 40 Another charismatic academician and proponent of the systems approach, Nikita Moiseyev, also promoted the idea of the economy as an automated mechanism, a kind of ‘economic autopilot’.Footnote 41 Although the OGAS and other such projects were never implemented in the form their authors envisioned, automation and algorithmic management of the socialist economy remained an important part of the late Soviet techno-political imagination.

For example, the ‘Granite’ project piloted in the 1980s by G. Pospelov was conceived as a collective problem-solving system. This would help integrate the separate automated systems of management (ASUs in Russian), widely implemented in Soviet industry from the 1970s, into a single computer network, a kind of ‘distributed artificial intelligence’.Footnote 42 At a higher level, ‘Granite’ aimed to coordinate the actions of various planning and management bodies. AI was interpreted here as a set of theoretical and technical tools for the design and control of ‘large’ or ‘complex’ systems, such as oil field facilities, and ultimately the national economy, as opposed to ‘traditional’ objects and means of control. Of course, this definition of its utility was important for its political legitimacy. But G. Pospelov and colleagues also intended it to consolidate ‘AI’ as a specific research programme, distinct from competing projects aimed at optimizing and automating management and decision making (such as systems analysis, operations research and economic cybernetics) on the one hand, and from general computer science and software engineering on the other. Unlike projects based on quantitative techniques such as input–output modelling, optimization techniques and so on, ‘Granite’ was intended to provide an ‘intellectual interface’ using qualitative data and textual descriptions of a given domain of activity (a kind of ‘expert system’, to use a more familiar term).

At the same time, establishing ‘AI’ as a distinct research field required a clear demarcation from perception-based approaches to pattern recognition and statistical induction. In particular, Vladimir Vapnik and Alexey Chervonenkis, later internationally renowned for their contributions to statistical-learning theory, and a competing machine-learning group led by Mark Aizerman – all at the Institute of Control Sciences – were not members of these academic bodies on AI, and did not participate in the 1970s international congresses on AI in the Soviet Union. More generally, such approaches existed institutionally under the labels of ‘adaptive systems’ and ‘bionics’, which had their own scientific section within the Council for Cybernetics at the Academy of Sciences.

To illustrate this difference in approach, members of the academic programme on AI often described chess programs and music-generating algorithms as belonging instead to operations research.Footnote 43 This included, first and foremost, the world champion program ‘Kaissa’, developed from the 1970s at the Moscow Institute of Control Sciences by a group of researchers led by Mikhail Donskoy. As its algorithms did not attempt to mimic the way human chess players think and make decisions, they were seen as an application of combinatorial methods, not AI. In contrast, the legendary Soviet chess grandmaster Mikhail Botvinnik pursued a radically different (and long-term) project more in line with the mainstream Soviet vision of intelligent machines. An electrical engineer by training, Botvinnik sought to create a computer program based on a study of the thought process of a chess player. He described and formalized a chess master's game as a three-level control system and proposed a specific ‘algorithm’ for it in the 1960s.Footnote 44 With the help of a group of mathematicians and government support, Botvinnik attempted to build a fully functional and competitive computer chess program. Although he never achieved this goal, the project (abandoned only a few years before his death in 1995) had other, unexpected conceptual and practical offspring. As early as 1979, Botvinnik proposed using his ‘chess’ model ‘Pioneer’ to plan the repair of power station equipment (a nod to his first speciality as an electrical engineer), and later to plan the national economy over periods of fifteen and twenty-five years.Footnote 45

Although drawing a clear line between all these domains was sometimes a delicate task, AI was thus essentially defined as a control system for complex and weakly formalized domains and problems, beyond the reach of deterministic and numerical methods, and simulating the way humans think and operate.Footnote 46 Insofar as formal systems of logical deduction were not regarded as the gold standard of ‘intelligence’, and since a detailed knowledge of the domain of practice was required to create an adequate language for its description, the engineering of AI systems implied collaboration between engineers, mathematicians, psychologists and practitioners. Such a vision is best exemplified by the work of the mathematician and computer scientist Dmitry Pospelov on ‘situational management (control)’, which – along with other projects coordinated by the academic sections on AI, such as those focused on human–machine interaction in natural language, robotics, heuristic methods of problem solving and engineering, mathematical logic, psycholinguistics and the psychology of thinking – formed the core of the Soviet academic programme on AI.Footnote 47

‘Situational management’ in large (complex) systems

The concept of ‘situational management’ was inspired by an original vision of decentralized control developed in the 1960s by a group of scientists working at the intersection of mathematics, engineering, medicine and biology. According to some participants, the winter schools held in Komarovo, a resort near Leningrad, played a central role in fostering this interdisciplinary and inter-institutional dialogue.Footnote 48 The Komarovo meetings produced a common understanding that ‘large’ systems cannot be exhaustively and satisfactorily described, or fully formalized for centralized, deterministic control, owing to complexity, uncertainty or the cost of the computation required. To conceive of the means of control in such systems, this circle relied on the work of the mathematician Mikhail Tsetlin, who developed an original theory of ‘learning automata’ and formulated strategies for ‘automata games’.Footnote 49 Tsetlin and colleagues believed that local coordination was more effective than centralized command in getting agents to behave in a coordinated or synchronized way towards a common goal. It is easy to see that the normative interpretation of this argument, applied, for instance, to organizational management, ran counter to the official ideology of the centralized and vertically controlled Soviet state. Together with Tsetlin's former collaborator Viktor Varshavsky, D. Pospelov later wrote a popular exposition of the ideas of decentralized control through local interactions, collective behaviour and complex distributed systems.Footnote 50
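Tsetlin's learning automaton admits a compact executable paraphrase. The Python sketch below is an illustration on my own assumptions rather than a transcription of his theory: a two-action automaton with 2n memory states deepens its commitment to the current action when rewarded and drifts towards the opposite action when penalized, so that a favourable action is learned from purely local reinforcement, with no central controller and no explicit model of the environment. Tsetlin's own work went further, studying games played by collectives of such automata.

```python
import random

class TsetlinAutomaton:
    """Two-action automaton with 2n memory states.

    States 1..n select action 0; states n+1..2n select action 1.
    Reward deepens commitment to the current action; penalty pushes
    the automaton towards the boundary and, eventually, across it.
    """
    def __init__(self, n):
        self.n = n
        self.state = random.choice([n, n + 1])  # start at the boundary

    @property
    def action(self):
        return 0 if self.state <= self.n else 1

    def update(self, rewarded):
        if self.action == 0:
            self.state = max(1, self.state - 1) if rewarded else self.state + 1
        else:
            self.state = min(2 * self.n, self.state + 1) if rewarded else self.state - 1

# Stationary random environment: action 1 is rewarded more often.
reward_prob = [0.4, 0.8]
automaton = TsetlinAutomaton(n=5)
for _ in range(10_000):
    automaton.update(random.random() < reward_prob[automaton.action])
print('settled on action', automaton.action)  # almost always 1
```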

D. Pospelov also benefited from exchanges at the psychonics seminar at the Moscow Energy Institute, which he co-organized with the prominent Soviet psychologist Veniamin Pushkin from 1964 to 1970. Unlike bionics, which studied living organisms in order to better model technical control devices, psychonics held that the psychology of human thinking could be studied scientifically, and that this knowledge would be useful in modelling artificial intellectual agents.

Both lines of research merged in the book Myshleniye i Avtomaty (Thinking and Automata), co-authored by Pushkin and Pospelov.Footnote 51 It presented the model of the giromat, a problem-solving machine designed as an alternative to logic machines of the type Simon and Newell had invented with their General Problem Solver (GPS).Footnote 52 In particular, Pushkin and Pospelov were critical of the ‘maze model’ that such machines embodied, in which at each step of the decision-making or reasoning process the best (or a better) choice had to be made from a range of proposed options. They regarded the GPS model as too simplistic, because real-world problem solving usually takes place without a predefined logical scheme, and the sets of available options are rarely specified in advance. Instead, the giromat was supposed to create and store in its ‘memory’ a model of the environment in which it would operate: it had to build for itself the ‘labyrinth’ leading to the desired goal.Footnote 53 Finding appropriate means to create an adequate representation of the world, or of a real situation, inside a machine thus became central to D. Pospelov's work on AI. In 1969 he wrote, ‘The only way to extend a machine's ability to solve creative problems, to adapt it to an unexplored environment, to self-organize, and so on, is to create within it a semiotic system of its external world.’Footnote 54
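The contrast between the giromat and the ‘maze model’ can be suggested in code. The following sketch is a free paraphrase, not Pushkin and Pospelov's actual design: instead of searching a graph of options supplied in advance, the agent wanders an opaque environment, records the transitions it experiences into an internal model, and only then plans over that self-built ‘labyrinth’. The grid world and all names are invented for illustration.

```python
import random
from collections import deque

def explore(step, start, n_steps=2000, n_actions=4):
    """Wander randomly, recording every observed transition into a model."""
    model = {}  # (state, action) -> next state, learned from experience
    s = start
    for _ in range(n_steps):
        a = random.randrange(n_actions)
        s2 = step(s, a)          # the environment itself stays opaque
        model[(s, a)] = s2
        s = s2
    return model

def plan(model, start, goal):
    """Breadth-first search over the learned model only."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        s, path = queue.popleft()
        if s == goal:
            return path
        for (s1, a), s2 in model.items():
            if s1 == s and s2 not in seen:
                seen.add(s2)
                queue.append((s2, path + [a]))
    return None  # the goal is not yet part of the agent's world model

# Toy environment: a 5x5 grid; actions 0-3 move N/S/W/E, clipped at walls.
def grid_step(s, a):
    x, y = s
    dx, dy = [(0, 1), (0, -1), (-1, 0), (1, 0)][a]
    return (min(4, max(0, x + dx)), min(4, max(0, y + dy)))

model = explore(grid_step, start=(0, 0))
print(plan(model, (0, 0), (4, 4)))  # a sequence of actions reaching the goal
```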

In the 1970s and 1980s, D. Pospelov developed his research programme of ‘situational management (control)’, with which his AI school became closely associated.Footnote 55 Situational management was intended for a class of large (or complex) systems, such as a seaport, transnational corporation, city or ecosystem, where it was impossible or impractical to represent the control or decision-making processes as a formal system.Footnote 56 Such ‘non-traditional objects of control’ are often unique, the purpose of their existence cannot be formalized and therefore cannot be optimized, and some of their elements are endowed with free will.Footnote 57 Thus ‘situational’ essentially meant taking into account human behaviour as well as the specific structure, functioning and dynamics of a controlled system. As a basis for ‘situational management’, D. Pospelov introduced the class of ‘semiotic models’ (or logico-linguistic models) as opposed to formal models (both symbolic and numerical).Footnote 58 Such models should integrate semantic (sense-making, understanding) and pragmatic aspects (roles, scenarios), and would be more suitable for describing the changing environment, planning complex behaviours and supporting human–machine interaction.

The ‘semiotic models’, as we can now see, were not based on formal rules of deduction or on the ‘maze model’. D. Pospelov sought to propose ‘a new approach to modelling human reasoning and decision-making about objects acting in a real physical environment’.Footnote 59 In order to design a machine that could imitate human thinking – which can be logically inconsistent, approximate, conventional, context-dependent and even ‘absurd’ – he developed so-called ‘pseudo-physical logics’ representing, for instance, a person's spontaneous knowledge of space–time relations such as ‘distance’ or ‘size’, or of causal relations. These models contained linguistic variables with values such as ‘very far, quite far, neither far nor close, closer than, very close’, denoting subjective evaluations of distance. In effect, D. Pospelov and his colleagues were tackling a well-known problem in AI: how to reproduce in a machine human ‘common sense’, that is, an unarticulated, rule-of-thumb, intuitive and approximate knowledge of how the real world works, which is nonetheless efficient enough to adapt to different contexts of action.Footnote 60
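A plausible modern rendering of such linguistic variables, in the spirit of Lotfi Zadeh's fuzzy-set theory, with which Pospelov was in documented dialogue, treats each value as a fuzzy set over physical distance. In the Python sketch below, the triangular membership functions and all numeric breakpoints are invented for illustration; the actual pseudo-physical logics were far richer, covering temporal and causal relations as well.

```python
def triangular(a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# Hypothetical linguistic variable 'distance' (in metres) with fuzzy values.
DISTANCE_TERMS = {
    'very close': triangular(-1, 0, 10),
    'close': triangular(5, 15, 30),
    'neither far nor close': triangular(20, 50, 80),
    'far': triangular(60, 100, 150),
    'very far': triangular(120, 200, 10**9),
}

def describe(x):
    """Return the linguistic value that best fits a distance x."""
    return max(DISTANCE_TERMS, key=lambda term: DISTANCE_TERMS[term](x))

print(describe(12))   # 'close'
print(describe(300))  # 'very far'
```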

This version of AI, or at least of what it should be, also allowed D. Pospelov to distinguish this field from other forms of automation. At the meeting of the Presidium of the Academy of Sciences devoted to the question of establishing a new section on AI in 1987, the president of the academy, Gury Marchuk, a prominent specialist in computational mathematics and atmospheric physics, concluded that AI reflected ‘a new factor associated with automatic decision making … not rigid, but flexible; the decision must be consistent with common sense’.Footnote 61

Given the centrality of semiotic models to situational management, D. Pospelov eventually defined his entire research programme as ‘applied semiotics’, a hybrid form of knowledge bringing together control theory, computer science, psychology and even the structuralist analysis of literary texts and culture. Sharing the concept of culture as a metatext (developed especially in the works of Yury Lotman and other representatives of the Tartu–Moscow School of Semiotics), D. Pospelov's later books quoted extensively from the Russian literary classics and from cultural analysis. For example, in order to create a language for describing typical characters, situations or action scenarios (‘frames’), he referred to Vladimir Propp's formal analysis of fairy tales, Mikhail Bakhtin's interpretation of Italian masks and so on.Footnote 62 Artificial intelligence, if it were ever to be achieved, would have to assimilate the human world of culture and meaning. If Pospelov's project (like Botvinnik's failed one) was not exactly to humanize the machine, it was nonetheless very far from the idea of mechanizing the human mind or creating a ‘black box’ beyond human understanding.

Conclusion

The institutional and cultural form that AI took in the Soviet Union of the 1960s–80s was strongly shaped by a reluctance to grant electronic machines the capacity to ‘think’, and by a focus on organizational management (control) rather than individual rationality (choice). This genealogy is partly explained by the intellectual influence of Marxism, in both its dogmatic Marxist–Leninist and its heterodox versions, and by the centrality to the field of control theory, with its applications to biology, economics and computer science. In the Soviet AI community, the central problem of AI was hence commonly interpreted as control in ‘large’ systems. It is no coincidence that in Ilyenkov's 1964 story ‘The mystery of the black box’ the supreme AI entity is called ‘Control System’. Ilyenkov's imagery insightfully associated AI with power relations and with subordination to an all-embracing algorithmic logic of optimization and rationalization (efficiency), which he saw as inimical to the very essence of human experience.

In contrast to Ilyenkov's critique of cybernetics, Soviet research on ‘situational management’ in the 1970s and 1980s embodied a potentially subversive vision of society as a distributed system, self-regulated and self-organized through local interactions (‘the orchestra plays without a conductor’). However, Soviet AI undoubtedly participated in the algorithmic rationalization of society and human experience, designing control mechanisms in hybrid man–machine systems and emphasizing that the purpose of AI (and of ‘natural’ thought) was essentially to model and control behaviour. The AI specialists sought to provide managers with a new language and a new set of tools to help deal with problems on the ground. Dmitry Pospelov's conception of AI lies precisely at the blurred boundary where cybernetic control of machines becomes management of human societies. An ‘intelligent’ machine would eventually ‘control’ at the point where a human fails in the face of overwhelming complexity.

Although this engineering approach had some applications, or at least real-world experiments, in organizational management, industry and medical diagnostics, the Soviet academic programme on AI ultimately remained far more speculative and less driven by industrial applications than its American counterpart. For various reasons, the concept of control that emerged did not directly feed into the successive attempts to redefine the Soviet economic and political system. Ironically, at the very moment of the academic recognition and institutionalization of AI in the Soviet Union in the late 1980s, another powerful idea was gaining momentum: that of the ‘market mechanism’, which was henceforth to coordinate the socialist economy. In this context, the emancipation of Soviet AI from cybernetics after 1987 simultaneously implied a shift in the understanding of ‘AI as a control system’ towards the design of ‘intelligent systems’, seen as a kind of general ‘information technology’ better suited to post-ideological hybrid environments characterized by progressive technological uniformization.

Acknowledgements

This research was supported by the International Research and Collaboration Award for Histories of AI: A Genealogy of Power, Cambridge University (2020). I am indebted to the organizers and members of this project for their encouragement and insightful exchanges at the project's seminar and summer schools. I'm also grateful to the participants in the research seminar and project on Algorithmic Rationalities in the post-war Soviet Union at the Higher School of Economics, Moscow (2020–2). Both have greatly broadened my vision and enriched my understanding of the histories of AI on both sides of the Iron Curtain.

References

1 Archive of the Russian Academy of Sciences (hereafter ARAN), f. 2, op. 1, d. 1205, p. 60.

2 Loren Graham, Science, Philosophy, and Human Behavior in the Soviet Union, New York: Columbia University Press, 1987; Slava Gerovitch, From Newspeak to Cyberspeak: A History of Soviet Cybernetics, Cambridge, MA: MIT Press, 2002.

3 Western histories of ‘algorithmic thinking’ are considered in Paul Erickson, Judy L. Klein, Lorraine Daston, Rebecca Lemov, Thomas Sturm and Michael D. Gordin, How Reason Almost Lost Its Mind, Chicago: The University of Chicago Press, 2013; Ronald R. Kline, The Cybernetics Moment: Or Why We Call Our Age the Information Age, Baltimore: Johns Hopkins University Press, 2015; Philip Mirowski and Edward Nik-Khah, The Knowledge We Have Lost in Information, New York: Oxford University Press, 2017.

4 Michael Gordin, ‘The forgetting and rediscovery of Soviet machine translation’, Critical Inquiry (2020) 4, pp. 835–66.

5 Ekaterina Babintseva, ‘Engineering the lay mind: Lev Landa's algo-heuristic theory and artificial intelligence’, in Janet Abbate and Stephanie Dick (eds.), Abstractions and Embodiments: New Histories of Computing and Society, Baltimore: Johns Hopkins University Press, 2022, pp. 318–40.

6 Ksenia Tatarchenko, ‘Thinking algorithmically: from Cold War computer science to the socialist information culture’, Historical Studies in the Natural Sciences (2019) 2, pp. 194–225.

7 Slava Gerovitch attempted to explain striking differences in defining AI in the United States and the Soviet Union in terms of dominant cultural metaphors of ‘freedom’ as ‘choice’ in the former, and a presumably characteristic ‘contempt for freedom of choice’ in Russian culture, with its high regard for ‘creativity’. Slava Gerovitch, ‘Artificial intelligence with a national face: American and Soviet cultural metaphors for thought’, in Stefano Franchi and Francesco Bianchini (eds.), The Search for a Theory of Cognition, Amsterdam and New York: Rodopi, 2011, pp. 173–94. Although understandings of the nature of thinking, learning and human activity are culturally specific, such explanations can be misleading: as we will see, the concept of ‘creativity’, essential for defining human thinking in the Soviet context, is conceptually and discursively opposed not to ‘freedom of choice’ but to mechanistic approaches to human activity.

8 Egle Rindzevičiūtė, The Power of Systems: How Policy Sciences Opened Up the Cold War World, Ithaca, NY and London: Cornell University Press, 2016.

9 On rational-choice theory redefining thinking and rationality as choice making in American academic thought in the second half of the 1950s see Hunter Heyck, Age of System: Understanding the Development of Modern Social Science, Baltimore: Johns Hopkins University Press, 2015.

10 Dmitry Pospelov, Situatsionnoe Upravleniye: Teoriya i Praktika (Situational Management: Theory and Practice), Moscow: Nauka, 1986, p. 7.

11 Slava Gerovitch, ‘“Russian scandals”: Soviet readings of American cybernetics in the early years of the Cold War’, Russian Review (2001) 4, pp. 545–68.

12 On relations between Soviet dialectic materialism and cybernetics see Gotthard Gunther, ‘Cybernetics and the dialectic materialism of Marx and Lenin’, in Georg Trogemann, Alexander Nitussov and Wolfgang Ernst (eds.), Computing in Russia: The History of Computer Devices and Information Technology Revealed, Braunschweig and Wiesbaden: Vieweg Verlagsgesellschaft, 2001, pp. 317–32.

13 For the constructive influence of Marxism on Soviet science, including cybernetics and psychology, see Graham, op. cit. (2).

14 See, for instance, Anton Yasnitsky (ed.), Questioning Vygotsky's Legacy: Scientific Psychology or Heroic Cult, London and New York: Routledge, 2018. On the interest of Vygotsky's theory of child development for machine learning see Tyler Reigeluth and Michael Castelle, ‘What kind of learning is machine learning?’, in Jonathan Roberge and Michael Castelle (eds.), The Cultural Life of Machine Learning: An Incursion into Critical AI Studies, Cham: Palgrave Macmillan, 2021, pp. 79–115.

15 Ksenia Tatarchenko, Anya Yermakova and Liesbeth de Mol, ‘Russian logics and the culture of impossible: Part I – recovering intelligentsia logics’, IEEE Annals of the History of Computing (2022) 4, pp. 43–56; Tatarchenko, Yermakova and De Mol, ‘Russian logics and the culture of impossible: Part II – reinterpreting algorithmic rationality’, IEEE Annals of the History of Computing (2022) 4, pp. 57–69.

16 On ‘creativity’ and ‘creative thinking’ in Soviet science and engineering see Roman Abramov, ‘Engineering work in the late Soviet period: routine, creativity, and project discipline’, Sociology of Power (2020) 32(1), pp. 179–214; Babintseva, op. cit. (5).

17 On the critique of the ‘mechanistic materialism’ of the American behaviourists from the Marxist–Leninist perspective see also Ekaterina Babintseva, ‘“Overtake and surpass”: Soviet algorithmic thinking as a reinvention of Western theories during the Cold War’, in Mark Solovey and Christian Dayé (eds.), Cold War Social Science: Transnational Entanglements, Cham: Palgrave Macmillan, 2021, pp. 45–71.

18 Aksel′ Berg and Ernest Kol′man, Vozmozhnoye i Nevozmozhnoye v Kibernetike (Possible and Impossible in Cybernetics), Moscow: Nauka, 1963, p. 5.

19 John Searle, ‘Minds, brains, and programs’, Behavioral and Brain Sciences (1980) 3(3), pp. 417–24.

20 Translation my own. Anatoly Dneprov, ‘Igra’, Znaniye-Sila (1961) 5, p. 41.

21 Evald Ilyenkov, Aleksandr Arsen′ev and Vassily Davidov, ‘Mashina i chelovek: kibernetika i filosofiya’ (Machine and human: cybernetics and philosophy), in Fedor Konstantinov (ed.), Leninskaya Teoriya Otrazheniya i Sovremennaya Nauka, Moscow: Nauka, 1966, pp. 265–83. For more insights on this topic see Kety Chukhrov, ‘The philosophical disability of reason: Evald Ilyenkov's critique of machinic intelligence’, Radical Philosophy, Spring 2020, at www.radicalphilosophy.com/article/the-philosophical-disability-of-reason (accessed 25 March 2023).

22 For a contrasting case of the influence of cybernetics on French structuralism see Céline Lafontaine, L'empire cybernétique: Des machines à penser à la pensée machine, Paris: Seuil, 2004; Bernard D. Geoghegan, Code: From Information Theory to French Theory, Durham, NC and London: Duke University Press, 2023.

23 Yury Lotman, ‘Mozg – tekst – kul′tura – iskusstvennyy intellekt’ (Brain – text – culture – artificial intelligence), in Stat′i po Semiotike i Topologii Kul′tury, Tallinn: Aleksandra, 1992 (first published 1977), pp. 25–34.

24 Evald Ilyenkov, ‘Tayna chernogo yashchika: nauchno-fantasticheskaya prelyudiya’ (The mystery of the black box: sci-fi prelude), in Ilyenkov, Ob Idolakh i Idealakh, Moscow: Politizdat, 1968, pp. 11–28.

25 Andrey Kolmogorov, ‘Avtomaty i zhizn′’ (Automata and life), in Berg and Kol′man, op. cit. (18), pp. 10–29.

26 Kolmogorov, op. cit. (25), p. 11.

27 Kolmogorov, op. cit. (25), p. 11, added emphasis.

28 Kolmogorov, op. cit. (25), p. 11, added emphasis.

29 ‘Nauchnye sredy’, Literaturnaya gazeta (28 April 1976) 17, p. 13.

30 Germogen Pospelov, Valery Irikov and Andrey Kurilov, Protsedury i Algoritmy Formirovaniya Kompleksnykh Programm (Procedures and Algorithms for the Formation of Complex Programs), Moscow: Nauka, 1985, pp. 373–4.

31 The Scientific Council for the complex problem ‘Cybernetics’ was founded at the Soviet Academy of Sciences in 1959 by Aksel′ Berg, specialist in radio-frequency engineering and admiral of the Soviet Navy.

32 Viktor Glushkov was director of the Ukrainian Institute of Cybernetics and the main proponent of the National Automated System for Computation and Information Processing (OGAS); see Slava Gerovitch, ‘InterNyet: why the Soviet Union did not build a nationwide computer network’, History and Technology (2008) 4, pp. 335–50; Benjamin Peters, How Not to Network a Nation: The Uneasy History of the Soviet Internet, Cambridge, MA: MIT Press, 2016. On Andrei Ershov, who helped lead the Siberian school of computer science and was a distinguished fellow of the British Computer Society (1974) and a member of the USSR Academy of Sciences (1984), and his ideas about the use of computer algorithms in education, see Tatarchenko, op. cit. (6). Dmitry Okhotsimsky (1921–2005) was an aerospace engineer, organizer of the All-Union seminar on Theory of Systems with Elements of Artificial Intelligence at Moscow State University.

33 ARAN, f. 1807, op. 1 (313), d. 622, pp. 17–18.

34 A. Ershov's personal archive, f. 196, pp. 231–62, at http://ershov.iis.nsk.su/node/791054 (accessed 25 March 2023).

35 ARAN, f. 1807, op. 1 (314), d. 622, p. 14.

36 ARAN, f. 2, op. 1, d. 1205, pp. 33–111.

37 Presidium of the USSR Academy of Sciences, resolution of 10 September 1986, N 1122, ARAN, f. 2, op. 1, d. 1205, pp. 1–2.

38 Presidium of the USSR Academy of Sciences, resolution of 15 September 1987, N 847, ARAN, f. 2, op. 1, d. 1331, pp. 46–135.

39 Adam Leeds, ‘Dreams in cybernetic fugue: Cold War technoscience, the intelligentsia, and the birth of Soviet mathematical economics’, Historical Studies in the Natural Sciences (2016) 46, pp. 633–68.

40 Gerovitch, op. cit. (32); Peters, op. cit. (32).

41 Nikita Moiseyev, Lyudi i kibernetika (People and Cybernetics), Moscow: Molodaya Gvardiya, 1984. On Moiseyev see Rindzevičiūtė, op. cit. (8).

42 Pospelov, Irikov and Kurilov, op. cit. (30).

43 See, for example, report by D. Pospelov, ARAN, f. 2, op. 1, d. 1331, p. 54.

44 Mikhail Botvinnik, Algoritm Igry v Shakhmaty (Algorithm of the Chess Game), Moscow: Nauka, 1968.

45 On Botvinnik's project see Viacheslav Gubochkin, ‘Programma “Pioner”: zadacha, kotoruyu ne smog reshit′ Mikhail Botvinnik’ (The Pioneer program: the problem Mikhail Botvinnik could not solve), at https://vyacheslav-gubochkin.ru/programma-pioner-chast-1; and ‘Programma “Pioner”. Chast′ II: algoritm botvinnika. 1958–1972’, at https://vyacheslav-gubochkin.ru/programma-pioner-chast-2-algoritm-botvinnika (accessed 25 March 2023).

46 See, for instance, a discussion of G. Pospelov's report to the Presidium of the USSR Academy of Sciences ‘On the development of research on the problem of “artificial intelligence”’, 9 September 1982, ARAN, f. 2, op. 1, d. 764, pp. 127–211.

47 Dmitry Pospelov (1932–2019) was the first and permanent head of the Laboratory of Problems of Artificial Intelligence (formerly the Laboratory of Large Systems) at the Main Computer Centre of the Academy of Sciences (1968–98). From 1980 to 1990 he was on the ‘Scientific Problems of Computing’ Commission of the Council for Mutual Economic Assistance of the Socialist Countries, coordinating research on AI. In 1989 he became president of the Soviet (later Russian) Association for Artificial Intelligence. From 1998 to 2003 he chaired the Programme Committee of the international conference on Soft Computing and Measurements. The ‘Dialog’ project and others in computational linguistics related to machine translation and the formal model of the Russian language, coordinated by Andrei Ershov and Aleksandr Nariniani, were central activities of the Computer Centre of the Siberian Branch of the USSR Academy of Sciences in Novosibirsk, where the AI Laboratory was established in 1977.

48 A winter school on the theory of automata and pattern recognition which met for ten to fifteen days annually from 1961 to 1970 in Komarovo, near Leningrad, was organized by Mikhail Bongard at the Institute of Biophysics of the Academy of Sciences (and later at the Institute for Problems of Information Transmission, IPPI).

49 Mikhail Tsetlin, Issledovaniya po Teorii Avtomatov i Modelirovaniyu Biologicheskikh Sistem, Moscow: Nauka, 1969; published in English as Tsetlin, Automaton Theory and Modeling of Biological Systems, New York and London: Academic Press, 1973.

50 Viktor Varshavsky and Dmitry Pospelov, Orkestr Igrayet bez Dirizhera: Razmyshleniya ob Evolyutsii Nekotorykh Tekhnicheskikh Sistem i Upravlenii Imi (The Orchestra Plays without a Conductor: Reflections on the Evolution of Some Technical Systems and Their Control), Moscow: Nauka, 1984.

51 Veniamin Pushkin and Dmitry Pospelov, Myshleniye i Avtomaty (Thinking and Automata), Moscow: Sovetskoye Radio, 1972.

52 The term giromat first appeared in the novel The Magellanic Cloud (1955) by the Polish science fiction writer Stanisław Lem, which was widely read by the Soviet technical intelligentsia.

53 On the critique of a ‘maze model’ by Pospelov and Pushkin, and the idea of giromat, see also Gerovitch, op. cit. (7).

54 Dmitry Pospelov, ‘“Soznaniye”, “samosoznaniye” i vychislitel′nye machiny’ (Consciousness, self-awareness and computing machines), in Sistemnyye Issledovaniya: Yezhegodnik 1969, Moscow: Nauka, 1969, pp. 178–84, 178.

55 This reconstruction of the ideas of D. Pospelov about ‘situational management’ is based on his book, Pospelov, op. cit. (10).

56 D. Pospelov largely relied on V. Pushkin's experimental and theoretical work on the psychology of dispatching in railway transport.

57 Pospelov, op. cit. (10), pp. 11–17.

58 Pospelov, op. cit. (10), p. 32.

59 For a concise introduction to this topic in English see Dmitry Pospelov, ‘Fuzzy reasoning in pseudo-physical logics’, Fuzzy Sets and Systems (1987) 22, pp. 115–20, 115.

60 This work was not carried out in isolation. Most importantly, Pospelov worked in dialogue with Lotfi Zadeh (an American mathematician and computer scientist born in the Azerbaijan SSR), who in 1994 coined the term ‘soft computing’ to describe various methods and tools of approximate computation, such as fuzzy logic, genetic algorithms and so on.

61 ARAN, f. 2, op. 1, d. 1331, pp. 86–7.

62 See, for instance, M.G. Gaaze-Rapoport and D.A. Pospelov, Ot Ameby do Robota: Modeli Povedeniya (From Amoeba to Robot: Patterns of Behaviour), Moscow: Nauka, 1987.