After a slow incubation period of nearly 15 years, a large and growing number of organizations now have one or more projects using the Semantic Web stack of technologies. The Web Ontology Language (OWL) is an essential ingredient in this stack, and the need for ontologists is increasing faster than the number and variety of available resources for learning OWL. This is especially true for the primary target audience for this book: modelers who want to build OWL ontologies for practical use in enterprise and government settings. The purpose of this book is to speed up the process of learning and mastering OWL. To that end, the focus is on the 30% of OWL that gets used 90% of the time. Others who may benefit from this book include technically oriented managers, semantic technology developers, undergraduate and post-graduate students, and finally, instructors looking for new ways to explain OWL. The book unfolds in a spiral manner, starting with the core ideas. Each subsequent cycle reinforces and expands on what has been learned in prior cycles and introduces new related ideas. Part 1 is a cook's tour of ontology and OWL, giving an informal overview of what things need to be said to build an ontology, followed by a detailed look at how to say them in OWL. This is illustrated using a healthcare example. Part 1 concludes with an explanation of some foundational ideas about meaning and semantics to prepare the reader for subsequent chapters. Part 2 goes into depth on properties and classes, which are the core of OWL. There are detailed descriptions of the main constructs that you are likely to need in everyday modeling, including what inferences are sanctioned. Each is illustrated with real-world examples. Part 3 explains and illustrates how to put OWL into practice, using examples in healthcare, collateral, and financial transactions. A small ontology is described for each, along with some key inferences. Key limitations of OWL are identified, along with possible workarounds. The final chapter gives a variety of practical tips and guidelines to send the reader on their way.
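To give a concrete taste of what such modeling looks like in practice, here is a minimal sketch that builds a two-class healthcare fragment with the Python rdflib library; the names (ex:Patient, ex:Diagnosis, ex:hasDiagnosis) are illustrative placeholders, not the book's ontology.

```python
# A minimal OWL fragment built with rdflib: two classes and an object
# property linking them. Names are hypothetical, for illustration only.
from rdflib import Graph, Namespace, RDF, RDFS
from rdflib.namespace import OWL

ex = Namespace("http://example.org/health#")
g = Graph()
g.bind("ex", ex)

g.add((ex.Patient, RDF.type, OWL.Class))
g.add((ex.Diagnosis, RDF.type, OWL.Class))
g.add((ex.hasDiagnosis, RDF.type, OWL.ObjectProperty))
g.add((ex.hasDiagnosis, RDFS.domain, ex.Patient))
g.add((ex.hasDiagnosis, RDFS.range, ex.Diagnosis))

# Serialize to Turtle, the compact syntax most OWL tutorials use.
print(g.serialize(format="turtle"))
```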
This book is aimed at those in engineering and scientific fields who have never learned programming but want to master the C language quickly so as to apply it immediately to problem solving in numerical analysis. The book skips unnecessary formality while explaining all the aspects of C essential for numerical analysis. The numerical analysis topics covered include single and simultaneous equations, differential equations, numerical integration, and simulation using random numbers. The appendices provide quick tutorials on gnuplot, Octave/MATLAB, and FORTRAN for C users.
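As a flavor of the numerical-integration material, here is a minimal sketch of the composite trapezoidal rule; it is written in Python rather than the book's C, and the integrand is an arbitrary stand-in.

```python
# Composite trapezoidal rule: approximate the integral of f over [a, b]
# by summing n trapezoids of equal width h.
import math

def trapezoid(f, a, b, n=1000):
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return h * total

# Example: the integral of sin(x) from 0 to pi is exactly 2.
print(trapezoid(math.sin, 0.0, math.pi))  # ~2.0 to about 6 decimal places
```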
Human decision-making often transcends our formal models of "rationality." Designing intelligent agents that interact proficiently with people requires modeling human behavior and predicting human decisions. In this book, we explore the task of automatically predicting human decision-making and its use in designing intelligent human-aware automated computer systems of varying natures, from purely conflicting interaction settings (e.g., security and games) to fully cooperative ones (e.g., autonomous driving and personal robotic assistants). We explore the techniques, algorithms, and empirical methodologies for meeting the challenges that these tasks raise, and we illustrate the major benefits of these computational solutions in real-world application domains such as security, negotiations, argumentative interactions, voting systems, autonomous driving, and games. The book presents both traditional and classical methods and the most recent cutting-edge advances, providing the reader with a panorama of the challenges and solutions in predicting human decision-making.
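One classical building block for such predictions is the logit quantal response model from behavioral game theory, which relaxes perfect rationality by making higher-utility actions more likely rather than certain. The sketch below is a generic illustration of that idea, not code from the book; the utilities and the rationality parameter lam are made up.

```python
import math

def logit_choice_probs(utilities, lam=1.0):
    """Logit quantal response: P(a) is proportional to exp(lam * u(a)).
    lam = 0 gives uniform random choice; as lam grows, the choice
    approaches perfectly rational utility maximization."""
    weights = [math.exp(lam * u) for u in utilities]
    z = sum(weights)
    return [w / z for w in weights]

# Hypothetical utilities for three available actions.
print(logit_choice_probs([1.0, 2.0, 0.5], lam=1.5))
```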
The field of multimedia is unique in offering a rich and dynamic forum for researchers from "traditional" fields to collaborate and develop new solutions and knowledge that transcend the boundaries of individual disciplines. Despite the prolific research activities and outcomes, however, few efforts have been made to develop books that serve as an introduction to the rich spectrum of topics covered by this broad field. The few books that are available either focus on specific subfields or provide basic background in multimedia; tutorial-style materials covering the active topics pursued by leading researchers at the frontiers of the field have been lacking. In 2015, ACM SIGMM, the special interest group on multimedia, launched an initiative to address this void by selecting and inviting 12 rising-star speakers from different subfields of multimedia research to deliver plenary tutorial-style talks at the 2015 ACM Multimedia conference. Each speaker discussed the challenges and state-of-the-art developments of their respective research areas for the broad community. The covered topics were comprehensive, including multimedia content understanding, multimodal human-human and human-computer interaction, multimedia social media, and multimedia system architecture and deployment. Following the very positive responses to these talks, the speakers were invited to expand the content of their talks into chapters that can be used as reference material for researchers, students, and practitioners. Each chapter discusses the problems, technical challenges, state-of-the-art approaches and performance, open issues, and promising directions for future work. Collectively, the chapters provide an excellent sampling of the major topics addressed by the community as a whole. This book, capturing some of the outcomes of these efforts, is well positioned to fill the aforementioned need for tutorial-style reference material on frontier topics in multimedia.

At the same time, the speed and sophistication required of data processing have grown. In addition to simple queries, complex algorithms like machine learning and graph analysis are becoming common. And in addition to batch processing, streaming analysis of real-time data is required to let organizations take timely action. Future computing platforms will need to not only scale out traditional workloads but also support these new applications. This book, a revised version of the 2014 ACM Dissertation Award winning dissertation, proposes an architecture for cluster computing systems that can tackle emerging data processing workloads at scale. Whereas early cluster computing systems, like MapReduce, handled batch processing, our architecture also enables streaming and interactive queries while keeping MapReduce's scalability and fault tolerance. And whereas most deployed systems support only simple one-pass computations (e.g., SQL queries), ours also extends to the multi-pass algorithms required for complex analytics like machine learning. Finally, unlike the specialized systems proposed for some of these workloads, our architecture allows these computations to be combined, enabling rich new applications that intermix, for example, streaming and batch processing. We achieve these results through a simple extension to MapReduce that adds primitives for data sharing, called Resilient Distributed Datasets (RDDs). We show that this is enough to capture a wide range of workloads.
We implement RDDs in the open source Spark system, which we evaluate using synthetic and real workloads. Spark matches or exceeds the performance of specialized systems in many domains, while offering stronger fault tolerance properties and allowing these workloads to be combined. Finally, we examine the generality of RDDs from both a theoretical modeling perspective and a systems perspective. This version of the dissertation makes corrections throughout the text and adds a new section on the evolution of Apache Spark in industry since 2014. In addition, editing, formatting, and links for the references have been added.
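To give a flavor of the RDD primitive, here is a minimal PySpark sketch in which a derived dataset is cached in memory and then reused by two separate actions without recomputation; the local master URL and toy data are placeholders, not an evaluation from the dissertation.

```python
# Minimal RDD sketch with PySpark. Requires a Spark installation;
# "local[*]" runs on all local cores rather than a real cluster.
from pyspark import SparkContext

sc = SparkContext("local[*]", "rdd-sketch")

# Build an RDD, derive another via a transformation, and cache it in
# memory so the two actions below reuse it instead of recomputing.
nums = sc.parallelize(range(1_000_000))
squares = nums.map(lambda x: x * x).cache()

print(squares.reduce(lambda a, b: a + b))            # first action materializes the RDD
print(squares.filter(lambda x: x % 2 == 0).count())  # reuses the cached partitions

sc.stop()
```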
This book is the first technical overview of autonomous vehicles written for a general computing and engineering audience. The authors share their practical experiences of creating autonomous vehicle systems. These systems are complex, consisting of three major subsystems: (1) algorithms for localization, perception, and planning and control; (2) client systems, such as the Robot Operating System (ROS) and the hardware platform; and (3) the cloud platform, which includes data storage, simulation, high-definition (HD) mapping, and deep learning model training. The algorithm subsystem extracts meaningful information from raw sensor data to understand the vehicle's environment and make decisions about its actions. The client subsystem integrates these algorithms to meet real-time and reliability requirements. The cloud platform provides offline computing and storage capabilities for autonomous vehicles: using it, we are able to test new algorithms, update the HD map, and train better recognition, tracking, and decision models. This book consists of nine chapters. Chapter 1 provides an overview of autonomous vehicle systems; Chapter 2 focuses on localization technologies; Chapter 3 discusses traditional techniques used for perception; Chapter 4 discusses deep learning based techniques for perception; Chapter 5 introduces the planning and control subsystem, especially prediction and routing technologies; Chapter 6 focuses on motion planning and feedback control within the planning and control subsystem; Chapter 7 introduces reinforcement learning-based planning and control; Chapter 8 delves into the details of client systems design; and Chapter 9 provides the details of cloud platforms for autonomous driving. This book should be useful to students, researchers, and practitioners alike. Whether you are an undergraduate or a graduate student interested in autonomous driving, you will find herein a comprehensive overview of the whole autonomous vehicle technology stack. If you are an autonomous driving practitioner, the many practical techniques introduced in this book will be of interest to you. Researchers will also find plenty of references for an effective, deeper exploration of the various technologies.
Intelligent systems often depend on data provided by information agents, for example, sensor data or crowdsourced human computation. Providing accurate and relevant data requires costly effort that agents may not always be willing to expend. Thus, it becomes important not only to verify the correctness of data, but also to provide incentives so that agents that provide high-quality data are rewarded while those that do not are discouraged by low rewards. We cover different settings and the assumptions they admit, including sensing, human computation, peer grading, reviews, and predictions. We survey different incentive mechanisms, including proper scoring rules, prediction markets and peer prediction, Bayesian Truth Serum, Peer Truth Serum, and Correlated Agreement, and the settings where each of them is suitable. As an alternative, we also consider reputation mechanisms. We complement the game-theoretic analysis with practical examples of applications in prediction platforms, community sensing, and peer grading.
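As a small taste of the mechanisms surveyed, the quadratic (Brier-style) scoring rule below is one classic proper scoring rule: an agent maximizes its expected score only by reporting its true belief. The sketch is a generic illustration, not code from the book, and the example beliefs are made up.

```python
def quadratic_score(report, outcome):
    """Quadratic (Brier-style) proper scoring rule.
    report: list of probabilities over outcomes, summing to 1.
    outcome: index of the outcome that actually occurred."""
    return 2 * report[outcome] - sum(p * p for p in report)

# A truthful report of a 70/30 belief beats an exaggerated 90/10 report
# in expectation, when expectations are taken under the true 70/30 belief.
truthful = [0.7, 0.3]
for report in (truthful, [0.9, 0.1]):
    expected = sum(truthful[o] * quadratic_score(report, o) for o in range(2))
    print(report, round(expected, 3))  # 0.58 for truthful vs. 0.50 for exaggerated
```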
Machine learning, and specifically deep learning, has been hugely disruptive in many fields of computer science. The success of deep learning techniques in solving notoriously difficult classification and regression problems has resulted in their rapid adoption for solving real-world problems. The emergence of deep learning is widely attributed to a virtuous cycle whereby fundamental advancements in training deeper models were enabled by the availability of massive datasets and high-performance computer hardware. This text serves as a primer for computer architects in a new and rapidly evolving field. We review how machine learning has evolved since its inception in the 1960s and track the key developments leading up to the powerful deep learning techniques that emerged in the last decade. Next we review representative workloads, including the most commonly used datasets and seminal networks across a variety of domains. In addition to discussing the workloads themselves, we also detail the most popular deep learning tools and show how aspiring practitioners can use the tools with the workloads to characterize and optimize DNNs. The remainder of the book is dedicated to the design and optimization of hardware and architectures for machine learning. As high-performance hardware was so instrumental in making machine learning a practical solution, these chapters recount a variety of optimizations proposed recently to further improve future designs. Finally, we present a review of recent research published in the area as well as a taxonomy to help readers understand how the various contributions fall in context.
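As a tiny example of the kind of workload characterization these tools support, the sketch below times the forward pass of a single convolution layer with PyTorch; the layer and input shapes are arbitrary ResNet-style placeholders, not a benchmark from the book.

```python
import time
import torch

# A single convolution layer and input feature map (shapes are placeholders).
conv = torch.nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, padding=1)
x = torch.randn(1, 64, 56, 56)

with torch.no_grad():
    conv(x)  # warm-up run so one-time setup costs are excluded
    start = time.perf_counter()
    for _ in range(100):
        conv(x)
    elapsed = time.perf_counter() - start

print(f"average forward latency: {elapsed / 100 * 1e3:.2f} ms")
```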
Neural networks are a family of powerful machine learning models. This book focuses on the application of neural network models to natural language data. The first half of the book (Parts I and II) covers the basics of supervised machine learning and feed-forward neural networks, the basics of working with machine learning over language data, and the use of vector-based rather than symbolic representations for words. It also covers the computation-graph abstraction, which makes it easy to define and train arbitrary neural networks and underlies the design of contemporary neural network software libraries. The second half of the book (Parts III and IV) introduces more specialized neural network architectures, including 1D convolutional neural networks, recurrent neural networks, conditioned-generation models, and attention-based models. These architectures and techniques are the driving force behind state-of-the-art algorithms for machine translation, syntactic parsing, and many other applications. Finally, we also discuss tree-shaped networks, structured prediction, and the prospects of multi-task learning.
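The computation-graph abstraction can be seen in miniature below using PyTorch, one such library: operations on tensors record a graph as the forward pass runs, and a single backward() call yields gradients for every parameter. The network shape and data are made up for illustration.

```python
import torch

# Tiny feed-forward scorer: the forward pass builds a computation graph,
# and backward() walks it to produce gradients for every parameter.
W1 = torch.randn(4, 3, requires_grad=True)
b1 = torch.zeros(4, requires_grad=True)
w2 = torch.randn(4, requires_grad=True)

x = torch.tensor([1.0, 0.5, -1.0])   # stand-in input vector
hidden = torch.tanh(W1 @ x + b1)
score = w2 @ hidden

score.backward()       # one call differentiates the whole graph
print(W1.grad.shape)   # torch.Size([4, 3])
```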
This book discusses how model-based approaches can improve the daily practice of software professionals. This is known as Model-Driven Software Engineering (MDSE) or, simply, Model-Driven Engineering (MDE). MDSE practices have been shown to increase efficiency and effectiveness in software development, as demonstrated by various quantitative and qualitative studies. MDSE adoption in the software industry is foreseen to grow exponentially in the near future, e.g., due to the convergence of software development and business analysis. The aim of this book is to provide you with an agile and flexible tool to introduce you to the MDSE world, allowing you to quickly understand its basic principles and techniques and to choose the right set of MDSE instruments for your needs so that you can start to benefit from MDSE right away. The book is organized into two main parts. The first part discusses the foundations of MDSE in terms of basic concepts (i.e., models and transformations), driving principles, application scenarios, and current standards, like the well-known MDA initiative proposed by OMG (Object Management Group), as well as the practices for integrating MDSE into existing development processes. The second part deals with the technical aspects of MDSE, ranging from the basics of when and how to build a domain-specific modeling language, to the description of Model-to-Text and Model-to-Model transformations, to the tools that support the management of MDSE projects. The second edition of the book features a set of completely new topics, including a full example of the creation of a new modeling language (IFML) and a discussion of modeling issues and approaches in specific domains such as business process modeling, user interaction modeling, and enterprise architecture; a complete revision of examples, figures, and text to improve readability, understandability, and coherence; better formulation of definitions and of the dependencies between concepts and ideas; and the addition of a complete index of the book's content. In addition to the contents of the book, more resources are provided on the book's website, http://www.mdse-book.com, including the examples presented in the book.
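To give a flavor of what a Model-to-Text transformation does, here is a toy sketch, not from the book: a model held as a plain data structure is rendered into source-code text by a small template. Real MDSE toolchains use dedicated template languages rather than hand-rolled string formatting, and the entity model here is hypothetical.

```python
# Toy model-to-text transformation: an "entity" model rendered as a Java class.
# The model contents and naming are made up, for illustration only.
model = {"name": "Patient", "attributes": [("name", "String"), ("age", "int")]}

def to_java(entity):
    fields = "\n".join(f"    private {t} {n};" for n, t in entity["attributes"])
    return f"public class {entity['name']} {{\n{fields}\n}}"

print(to_java(model))
```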
Computational Approaches in Physics reviews computational schemes used in simulations of physical systems, ranging from very accurate ab initio techniques to coarse-grained and mesoscopic schemes; the choice of method is based on the desired accuracy and computational efficiency. A bottom-up approach is used to present the various simulation methods used in physics, starting from the lowest-level, most accurate methods and moving up to particle-based ones. The book outlines the basic theory underlying each technique and its complexity, addresses the computational implications and implementation issues, and presents representative examples. Links to the most common computational codes, commercial or open source, are listed in each chapter. The strengths and deficiencies of the various techniques are discussed in detail, as are the visualization tools commonly used to make simulation data more comprehensible. Finally, specific techniques are used as bridges across different disciplines; to this end, examples of different systems tackled with the same methods are presented. The appendices include elements of physical theory that are prerequisites for understanding the simulation methods.
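As a minimal taste of the particle-based end of that spectrum, the sketch below advances a one-dimensional harmonic oscillator with the velocity Verlet integrator that underlies many molecular dynamics codes; the parameters are arbitrary stand-ins.

```python
# Velocity Verlet for a 1D harmonic oscillator (force F = -k x), the basic
# time-stepping scheme behind many molecular dynamics codes.
k, m, dt = 1.0, 1.0, 0.01
x, v = 1.0, 0.0

def force(x):
    return -k * x

f = force(x)
for step in range(1000):
    x += v * dt + 0.5 * (f / m) * dt * dt   # position update
    f_new = force(x)
    v += 0.5 * (f + f_new) / m * dt         # velocity update with averaged force
    f = f_new

# After 1000 steps (t = 10), x should track the exact solution cos(t) closely.
print(x)  # ~ -0.839, i.e. cos(10)
```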