What should we look for in a game? How well do existing games embody good instructional and gaming design principles? How do we put those design principles into practice? The following chapters will review several games of different genres and domains. For each game, there are two chapters. The first is an impartial review of the game and the second is a description of the methods and lessons learned from the game developer’s perspective. In this chapter, we introduce each game briefly, discuss the criteria used to review the games, and summarize some of the key points you should look for in these and other games.
Introduction
Now that we’ve introduced a number of key issues and approaches to consider when designing a learning game, let’s put it all in context. The following chapters review seven different games, each focused on a different approach to training or a different training use.
The Computer-based Corpsman Training System (CBCTS) and its forebear the Tactical Combat Casualty Care Simulation (TC3sim) are serious games designed to train military combat medical personnel. The designs of the two games do not differ significantly. TC3sim was built for the U.S. Army and involves Iraq scenarios. CBCTS has some upgraded visuals and is skinned for the Marine Corps. Its scenarios are geared toward Afghanistan. Their designs share the same learning objectives, the same medical interactions, the same assessment model, and the same physiological simulations. In their development, the complexity of simulating synthetic casualties and the combinations of user interactions were significantly underestimated. However, success came from two factors. The development of a simple user interface allowed users to quickly learn how to play the game and manage the large number of medical interactions. The employment of iterative releases allowed for constant feedback to be collected and integrated back into the game design.
Introduction
The Computer-based Corpsman Training System (CBCTS) is a first-person serious game designed to train U.S. Navy combat medical personnel who are assigned to the U.S. Marines (called corpsmen) how to respond to casualties on the battlefield. It is based on the U.S. Army’s Tactical Combat Casualty Care Simulation (TC3sim). CBCTS and TC3sim are essentially the same game, but CBCTS’s visuals were customized for the U.S. Marines. Instead of Iraq, CBCTS uses scenarios set in Afghanistan. The warfighters’ characters have also been reskinned to be appropriate for the Marine and Navy services.
We introduce the motivations, history, technical approach, and design choices behind DARWARS Ambush!, a game-based, convoy operations trainer that was heavily used by the U.S. Army and Marines for five years. We discuss a number of the practical deployment concerns we addressed and discuss how we cultivated relationships to build a community of committed users. As one of the first large-scale, successful serious games for learning, DARWARS Ambush! broke new ground and led to many lessons learned on how to best design, develop, and deploy a serious game. We discuss some of our experiences, decisions, and lessons learned, and conclude with some recommendations that may help new efforts attain success as well.
Introduction
In late 2004, DARPA Program Manager Dr. Ralph Chatham asked BBN Technologies, who was already under contract on his DARWARS Training Superiority Program, whether we could quickly – within six months – deploy a training system to help soldiers better respond to convoy ambushes then prevalent in Iraq. At that time, convoy ambushes involving small arms, rocket-propelled grenades (RPGs), or improvised explosive devices (IEDs) were a leading cause of casualties. The U.S. military had recognized the need for increased training for convoy operations and aggressively pursued a variety of training solutions, including live-fire training exercises, marksmanship trainers, and driver training systems (see Steele, 2004; Tiron, 2004 for examples). Dr. Chatham recognized the need for a squad-level team trainer that would focus on situational awareness, communication, and coordination.
The Virtual Dental Implant Trainer (VDIT) is a 3-D simulation environment for dental students to practice dental implant surgery procedures. It provides a highly authentic surgery experience for trainees looking to practice techniques learned elsewhere, or for experienced dentists looking to refresh their skills. Because of its focus on being a practice environment, VDIT does not contain many of the instructional design techniques often found in other training simulations. Furthermore, it makes limited use of the game elements found in many other serious games. However, given the tasks and the emphasis on practice, this is acceptable. With additional effort, VDIT could be transitioned into a more effective and engaging instructional environment.
Introduction
The Virtual Dental Implant Trainer (VDIT) is a highly accurate procedural training simulation environment for dentists. VDIT is not intended to be a stand-alone learning experience for those first learning how to perform dental implant surgery. Rather, it was specifically designed to be used in conjunction with other training, or by those seeking a practice environment. These decisions on use greatly affected the game’s design. The remaining sections of this chapter look at the effectiveness of these decisions on VDIT.
The Computer-based Corpsman Training System (CBCTS) was developed by ECS, Inc. for the U.S. Army Research, Development and Engineering Command. Game design elements complement the instructional design elements to produce an award-winning learning game. Notable design features include a well-designed tutorial, opportunities for decision making, time to reflect and replay a scenario, and implicit and explicit feedback. While game and instructional elements work very well together in CBCTS, suggestions are made in this chapter to increase instructional guidance to gain learning efficiencies without jeopardizing gameplay. These suggestions will benefit all learning game designers striving to improve their own games. Game designers are cautioned that additional elements may increase the design and development resource requirements, and instructional and gameplay trade-offs have to be considered. Some of these trade-offs are briefly addressed.
Introduction
The Computer-based Corpsman Training System (CBCTS) is a learning game that provides combat corpsmen realistic training to prepare them to apply their skills in a combat situation. CBCTS was developed by ECS, Inc. for the U.S. Army Research, Development and Engineering Command (RDECOM). The game supports training for Navy combat medics who are assigned to the U.S. Marine Corps. CBCTS is used at the Army Medical Department (AMEDD) Center and School as part of the curriculum to prepare combat medics.
The internet has altered how people engage with each other in myriad ways, including offering opportunities for people to act distrustfully. This fascinating set of essays explores the question of trust in computing from technical, socio-philosophical, and design perspectives. Why has the identity of the human user been taken for granted in the design of the internet? What difficulties ensue when it is understood that security systems can never be perfect? What role does trust have in society in general? How is trust to be understood when trying to describe activities as part of a user requirement program? What questions of trust arise in a time when data analytics are meant to offer new insights into user behavior and when users are confronted with different sorts of digital entities? These questions and their answers are of paramount interest to computer scientists, sociologists, philosophers and designers confronting the problem of trust.
As the contributions to the first and last sections of this volume indicate, trust is a problem for those who build Internet services and those who are tasked with policing them. If only they had good models and even better specifications of users, use, and usage, or so they seem to say, they could build systems that would ensure and enhance the privacy, security, and safety of online services. Understandably (but perhaps not wisely), they tend to be impatient with what appears to be overly precious concept mongering and theoretical hairsplitting by those disciplines to which they look to provide these models and specifications. But perhaps an understanding of the provenance and distinctiveness of the range of models being offered might give those who wish to deploy them deeper insight into their domains of application as well as their limitations. Each is shaped by the presuppositions on which it is based and the conceptual and other choices made in its development. No one model, no individual summary of requirements can serve for all uses.
Awareness of this “conceptual archaeology” is especially important when the model's presuppositions are orthogonal to those that are conventional in the field. In such cases, it is critical to understand both why different starting points are taken and the benefits that are felt to be derived thereby. Difference is rarely an expression of simple contrariness but usually reflects deliberate choice made in the hope that things might be brought to light which otherwise are left obscure.
Any glance at the contemporary intellectual landscape would make it clear that trust, society, and computing are often discussed together. And any glance would also make it clear that when this happens, the questions that are produced often seem, at first glance, straightforward. Yet, on closer examination, these questions unravel into a quagmire of concerns. What starts out as, say, a question of whether computers can be relied on to do a particular job often turns into something more than doubts about a division of labor. As Douglas Rushkoff argues in his brief and provocative book, Program or Be Programmed (2010), when people rely on computers to do some job, it is not like Miss Daisy trusting her chauffeur to take her car to the right destination. But it is not what computers are told to do that is the issue. At issue is what computers tell us, the humans, as they get on with whatever task is at hand. And this in turn implies things about who and what we are because of these dialogues we have with computers. I use the word dialogues purposefully here because it is suggestive of how interaction between person and machine somehow alters the sense a person has of themselves and of the machine they are interacting with, and how this in turn alters the relationship the two have – that is, the machine and the “user.” According to Rushkoff, it is not possible to know what the purpose of an interaction between a person and a machine might be; it is certainly not as simple as a question of a command and its response. In his metaphor about driving, what come into doubt are rarely questions about whether the computer has correctly heard and identified the destination the human wants – the place to which they have instructed the machine to navigate them. The interactions we have with computers lead us to doubt why a particular destination is chosen. This in turn leads to doubts about whether such choices should be in the hands of the human or the computer.
I approach the topic of trust from two converging directions. The first derives from work primarily in the domains of Information and Computing Ethics (ICE) –work that also includes perspectives from phenomenology and a range of applied ethical theories. The second draws from media and communication studies most broadly, beginning with Medium Theory or Media Ecology traditions affiliated with the likes of Marshall McLuhan, Harold Innis, Elizabeth Eisenstein, and Walter Ong. In these domains, attention to communication in online environments, including distinctively virtual environments, began within what was first demarcated as studies of Computer-Mediated Communication (CMC). The rise of the Internet and then the World Wide Web in the early 1990s inspired new kinds of research within CMC; by 2000 or so, it became possible to speak of Internet Studies (IS) as a distinctive field in its own right, as indexed, for example, by the founding of the Oxford Internet Institute.
Drawing on both of these sources to explore a range of issues at their intersections – most certainly including trust – is useful first of all as the more empirically oriented research constituting CMC and IS work thereby grounds the often more theoretical approaches of ICE in the fine-grained details of praxis. At the same time, the more theoretical approaches of ICE, as we will see, help us complement the primarily social scientific theories and methodologies that predominate in CMC and IS. By taking both together, I hope to provide an account of trust in online environments that is at once strongly rooted in empirical findings while also grounded in and illuminated by a very wide range of theoretical perspectives. This approach requires at least one important caveat, to which I return shortly.
The topics covered in this collection have been wide and varied. Some have been investigated in depth, others merely identified. As we move now to summarize what has been covered, it is important to remember that the goal has been to provide the reader with a sensibility for the various perspectives and points of view that can be brought to bear on the combined subject of trust, computing, and society. The book commenced with a call to arms: Chapter 2 by David Clark. Part of the sensibility in question demands one be alert, he argues, alert to the way issues of trust in society come in by the back door provided by technology and the Internet in particular. Other chapters made it clear that other capacities are required, too. A further sensibility is to be open to the diverse treatments that different perspectives (or disciplines) offer and to have the acuity not to allow those treatments to muddle each other. One has to be sensitive, too, to how the concept of “trust” is essentially a vernacular, used by ordinary people in everyday ways. Analysis of it must focus on that use and not be distracted by hypothesized uses, ones constructed through, say, theory or experiment – although these treatments might afford more nuanced understandings of the vernacular. Part of these vernacular practices entails inducing fear and worry. Such fear and worry can undermine some of the other aspects of the sensibility already mentioned, such as awareness of differences in points of view and, beyond this, simply the clarity and calmness of thought that might lead one to correctly resist the “crowding out” of other explanations that use of the word trust sometimes produces.