In recent years, the rapid convergence of artificial intelligence (AI) and low-altitude flight technology has driven significant transformations across industries, showing immense potential in areas such as logistics distribution, urban air mobility (UAM) and national defense. By adopting AI, low-altitude flight systems can achieve high levels of automation and operate in coordinated swarms, enhancing efficiency and precision. However, as these technologies become more pervasive, they raise pressing ethical concerns, particularly regarding privacy, public safety and the risks of militarisation and weaponisation, and these issues have sparked extensive debate. In short, while the integration of AI and low-altitude flight presents revolutionary opportunities, it also introduces complex ethical challenges. This article explores these opportunities and challenges in depth, focusing on privacy protection, public safety, military applications and legal regulation, and proposes strategies to keep technological advancement aligned with ethical principles.
Large Language Models (LLMs) could facilitate more efficient administrative decision-making on the one hand, and better access to legal explanations and remedies for individuals concerned by administrative decisions on the other. However, how performant such domain-specific models could be remains an open research question. Furthermore, these models pose legal challenges, touching especially upon administrative law, fundamental rights, data protection law, AI regulation, and copyright law. The article provides an introduction to LLMs, outlines potential use cases for such models in the context of administrative decisions, and presents a non-exhaustive overview of practical and legal challenges that require in-depth interdisciplinary research. A focus lies on open practical and legal challenges with respect to legal reasoning through LLMs. The article points out under which circumstances administrations can fulfil their duty to provide reasons with LLM-generated reasons. It highlights the importance of human oversight and the need to design LLM-based systems in a way that enables users such as administrative decision-makers to effectively oversee them. Furthermore, the article addresses the protection of training data and trade-offs with model performance, bias prevention and explainability to highlight the need for interdisciplinary research projects.
Large language models (LLMs) have significantly advanced artificial intelligence (AI) and natural language processing (NLP) by excelling in tasks like text generation, machine translation, question answering and sentiment analysis, often rivaling human performance. This paper reviews LLMs’ foundations, advancements and applications, beginning with the transformative transformer architecture, which improved on earlier models like recurrent neural networks and convolutional neural networks through self-attention mechanisms that capture long-range dependencies and contextual relationships. Key innovations such as masked language modeling and causal language modeling underpin leading models like Bidirectional Encoder Representations from Transformers (BERT) and the Generative Pre-trained Transformer (GPT) series. The paper highlights scaling laws, model size increases and advanced training techniques that have driven LLMs’ growth. It also explores methodologies to enhance their precision and adaptability, including parameter-efficient fine-tuning and prompt engineering. Challenges like high computational demands, biases and hallucinations are addressed, with solutions such as retrieval-augmented generation to improve factual accuracy. By discussing LLMs’ strengths, limitations and transformative potential, this paper provides researchers, practitioners and students with a comprehensive understanding. It underscores the importance of ongoing research to improve efficiency, manage ethical concerns and shape the future of AI and language technologies.
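The scaled dot-product self-attention mechanism named in this abstract can be sketched in a few lines. The following is a minimal NumPy illustration with toy random weights (not any particular model's parameters); each output row is a context-weighted mixture of all value vectors, which is how transformers capture long-range dependencies:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a token sequence.

    X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: learned projections.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # mix value vectors

# Toy example: 4 tokens, embedding dim 8, attention dim 4.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 4)
```

In real transformers this runs in multiple parallel heads and is followed by learned output projections, but the core computation is exactly this softmax-weighted mixing.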
This commentary explores MENA’s AI governance, addressing gaps, showcasing successful strategies, and comparing national approaches. It emphasizes current deficiencies, highlights regional contributions to global AI governance, and offers insights into effective frameworks. The study reveals distinctions and trends in MENA’s national AI strategies, serving as a concise resource for policymakers and industry stakeholders.
Digital transformation and “artificial intelligence (AI)”, which can more adequately be called “data-based systems (DS)”, present ethical opportunities and risks. The international community, and the UN in particular, urgently needs wise policies and a regulatory institution to put DS to positive use and to guard against their abuse. Helping humans and the planet to flourish sustainably in peace, and guaranteeing globally that human dignity is respected not only offline but also online, in the digital sphere and the domain of DS, requires two policy measures: (1) human rights-based data-based systems (HRBDS) and (2) an International Data-Based Systems Agency (IDA). The IDA should be established at the UN as a platform for cooperation in the field of digital transformation and DS, fostering human rights, security, and the peaceful use of DS.
The transformative journey of law librarianship has been marked by significant milestones, from the transition from hard copy to online access, to the current development and implementation of artificial intelligence (AI). This article is based on a presentation at the BIALL Conference in June 2024 by Melissa Mills, Knowledge Manager at William Roberts Lawyers, and explores this evolution, focusing on her experiences and insights gained over the years, particularly in the Australian context.
LexisNexis’ Matthew Leopold explains how his team conducted a wide range of interviews with those involved in the legal education system – including librarians, academics, heads of law schools and university leaders – in order to gauge their feelings and thoughts on the impact artificial intelligence (AI) will have, and is having, on the sector. Matthew then goes through the findings, which show a diverse set of views on AI.
The Japanese art of Kintsugi teaches that imperfections and failures are not flaws to hide but opportunities for growth and enrichment; it holds that true strength and beauty come from embracing imperfection and learning from the fractures along the way. In this article Hélène Russell draws on insights from three conference experiences to show how KM professionals can make use of Kintsugi and act as the ‘golden joiners’ within their firms when it comes to AI projects, drawing on their blend of resilience, organisational cultural awareness, communication skills, and adaptive knowledge-sharing practices.
Chapter 3 delves into the world of peer interactions. I present general patterns of children’s social networks, highlighting the importance of child-to-child ties. I illustrate the key features of this humorous, playful world and examine how peer play facilitates children’s moral learning. In peer play children are developing what I call “the spectrum of moral sensibilities”: they are learning about and engaging in cooperation and care, conflict and dominance, and creating gray areas in between. This poses a stark contrast to the imagery of “the innocent child” permeating historical and philosophical views of Chinese childhood that fixate on the brighter side of human nature in moral cultivation. Moreover, through deciphering children’s pretend play, I argue that these non-elite children, often relegated to history’s silent margins, have a much richer inner life than my predecessors assumed. Lastly, using a human–machine hybrid approach, I find that young learners’ sensibilities in discerning layered intentions and moral sentiments outperform AI algorithms. This sheds light on the mystery of human sensemaking and inspires reflections on ethnographic epistemology.
This Article is dedicated to what is arguably one of the most significant tests to which constitutionalism has been subject in recent times. It examines the theoretical and practical challenges to constitutionalism arising from the profound technological changes under the influence of artificial intelligence (AI) in our emerging algorithmic society. The unprecedented rapid development of AI technology has not only rendered conventional theories of modern constitutionalism obsolete, but it has also created an epistemic gap in constitutional theory. As a result, there is a clear need for a new, compelling constitutional theory that adequately accounts for the scale of technological change by accurately capturing it, engaging with it, and ultimately, responding to it in a conceptually and normatively convincing way.
Despite the recognized importance of datasets in data-driven design approaches, they remain understudied. We review the current landscape of design datasets and highlight the ongoing need for larger and more comprehensive datasets. Three categories of challenges in dataset development are identified. Analyses reveal critical dataset gaps in the design process towards which future studies can be directed. Synthetic and end-to-end datasets are suggested as two less explored avenues. The recent application of Generative Pretrained Transformers (GPT) shows their potential in addressing these needs.
With the swift entry of artificial intelligence (AI) into everyday life, human-product interactions are becoming increasingly complex. We suggest an ecosystem-minded, humanity-centered design approach to better understand this complexity. Alongside the development of new interaction types, advancing theories of mental models is crucial to understanding and improving the nature of these interactions. In this paper, we address the gap in mental model theories and extend Norman's conceptual model at three dialogue levels: dialogue in language, mind, and use.
Despite the rapid advancement of generative Large Language Models (LLMs), there is still limited understanding of their potential impacts on engineering design (ED). This study fills this gap by collecting the tasks LLMs can perform within ED, using a Natural Language Processing analysis of 15,355 ED research papers. The results lead to a framework of LLM tasks in design, classifying them for different functions of LLMs and ED phases. Our findings illuminate the opportunities and risks of using LLMs for design, offering a foundation for future research and application in this domain.
Engineering standards are an important source of knowledge in product development. Despite increasing digitalisation, the provision and usage of standards is characterised by numerous manual steps. This research paper aims to apply automatic knowledge graph creation in the domain of engineering standards to enable machine-actionable standards. For this, a formula knowledge graph ontology as well as suitable information extraction techniques are developed. The concept is validated using the example of DIN ISO 281, showing the overall capability of automatic knowledge graph creation.
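The abstract does not detail the extraction techniques used. As a purely hypothetical illustration, a rule-based extractor of the kind often used to bootstrap knowledge graphs from text maps sentence patterns to (subject, relation, object) triples; the patterns and example sentences below are invented, not taken from DIN ISO 281:

```python
import re

def extract_triples(text, patterns):
    """Rule-based sketch: map regex patterns to (subject, relation, object)
    triples that could populate a knowledge graph."""
    triples = []
    for pattern, relation in patterns:
        for m in re.finditer(pattern, text):
            triples.append((m.group(1), relation, m.group(2)))
    return triples

# Invented patterns and sentences, loosely modelled on standards prose.
patterns = [
    (r"(\w+) is defined as (.+?)\.", "definedAs"),
    (r"(\w+) depends on ([\w\s]+?)\.", "dependsOn"),
]
text = ("L10 is defined as the basic rating life. "
        "L10 depends on dynamic load rating.")
print(extract_triples(text, patterns))
# [('L10', 'definedAs', 'the basic rating life'),
#  ('L10', 'dependsOn', 'dynamic load rating')]
```

A production pipeline would replace the regexes with trained entity and relation extractors and ground the triples in a formal ontology, but the triple representation itself is the common denominator.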
The increased complexity of development projects surpasses the capabilities of existing methods. While Model Based Systems Engineering pursues technically holistic approaches to realize complex products, organizational aspects as well as risk management are still considered separately. The identification and management of risks are crucial in order to take suitable measures to minimize adverse effects on the project or the organization. To address this gap, a new graph-based method and tool using AI, tailored to the needs of complex development projects and organizations, is introduced here.
Natural Language Processing (NLP) has been extensively applied in design, particularly for analyzing technical documents like patents and scientific papers to identify entities such as functions, technical features, and problems. However, there has been less focus on understanding semantic relations within literature, and a comprehensive definition of what constitutes a relation is still lacking. In this paper, we define the notion of a relation in the context of design and the fundamental concepts linked to it. Subsequently, we introduce a framework for employing NLP to extract relations relevant to design.
In the realm of process engineering, the pursuit of sustainability is paramount. Traditional approaches can be time-consuming and often struggle to address modern environmental challenges effectively. This article explores the integration of generative AI as a powerful tool for generating solution ideas and solving problems in process engineering using a Solution-Driven Approach (SDA). SDA applies nature-inspired principles to tackle intricate engineering challenges. In this study, generative AI is trained to understand and use the SDA patterns to suggest solutions to complex engineering challenges.
Despite the potential to enhance efficiency and improve quality, AI methods are not widely adopted in the context of product development due to the need for specialized applications. The necessary identification of a suitable machine learning (ML) algorithm requires expert knowledge, often lacking in companies. Therefore, a concept based on a multi-criteria decision analysis is applied, enabling the identification of a suitable ML algorithm for tasks in the early phase of product development. The application and resulting advantages of the concept are presented through a practical example.
In the era of digitization and the growing flood of information, the automatic, role-specific identification of information is crucial. This research paper investigates whether the adaptation of LLMs is suitable for classifying information obtained from standards for corresponding role profiles. This research reveals that with systematic fine-tuning, prediction accuracy can be increased by almost 100%. The validation was carried out using a double-digit number of standards for three predefined roles and demonstrates the significant potential of LLMs for labelling content with regard to roles.
This study explores how large language models like ChatGPT comprehend language and assess information. Through two experiments, we compare ChatGPT's performance with humans', addressing two key questions: 1) How does ChatGPT compare with human raters in evaluating judgment-based tasks like speculative technology realization? 2) How well does ChatGPT extract technical knowledge from non-technical content, such as mining speculative technologies from text, compared to humans? Results suggest ChatGPT's promise in knowledge extraction but also reveal a disparity with humans in decision-making.
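Comparisons between model ratings and human ratings of the kind this study describes are commonly quantified with chance-corrected agreement. The following is a minimal sketch of Cohen's kappa with invented 0/1 judgments ("will this speculative technology be realised?"); the study's actual metric and data are not specified in the abstract:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for the
    agreement expected by chance alone. 1.0 = perfect, 0.0 = chance level."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    expected = sum(
        (rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels
    )
    return (observed - expected) / (1 - expected)

# Invented binary judgments for eight hypothetical items.
human = [1, 0, 1, 1, 0, 0, 1, 0]
model = [1, 0, 1, 0, 0, 1, 1, 0]
print(cohens_kappa(human, model))  # 0.5
```

Here the raters agree on 6 of 8 items (75%), but with balanced labels 50% agreement is expected by chance, so kappa is (0.75 − 0.5) / (1 − 0.5) = 0.5, a more honest summary of model-human alignment than raw accuracy.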