There is an increasing need to rein in the cost of scientific study without sacrificing accuracy in statistical inference. Optimal design is the judicious allocation of resources to achieve the objectives of a study at minimal cost through careful statistical planning. Researchers and practitioners in various fields of applied science are now beginning to recognize the advantages and potential of optimal experimental design. Applied Optimal Designs is the first book to catalogue the application of optimal design to real problems, documenting its widespread use across disciplines as diverse as drug development, education and groundwater modelling. Includes contributions covering:
* Bayesian design for measuring cerebral blood flow
* Optimal designs for biological models
* Computer adaptive testing
* Groundwater modelling
* Epidemiological studies and pharmacological models
Applied Optimal Designs bridges the gap between theory and practice, drawing together a selection of incisive articles from reputed collaborators. Broad in scope and interdisciplinary in appeal, this book highlights the variety of opportunities available through the use of optimal design. The wide range of applications presented here should appeal to statisticians working with optimal designs, and to practitioners new to the theory and concepts involved.
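As a toy illustration of the kind of criterion optimal design optimizes (not an example drawn from the book): for simple linear regression on [-1, 1], a D-optimal design maximizes det(X'X), and a brute-force search shows that splitting the available runs between the two endpoints beats spreading them evenly.

    # Toy D-optimality sketch for simple linear regression on [-1, 1].
    # Hypothetical illustration only; the candidate grid and run budget
    # are arbitrary choices, not taken from the book.
    import numpy as np
    from itertools import combinations_with_replacement

    def d_criterion(points):
        # D-criterion: det(X'X) for the model matrix X = [1, x]
        X = np.column_stack([np.ones(len(points)), points])
        return np.linalg.det(X.T @ X)

    candidates = np.linspace(-1.0, 1.0, 5)   # candidate design points
    n_runs = 4                               # total runs available
    best = np.array(max(combinations_with_replacement(candidates, n_runs),
                        key=d_criterion))
    print(best, d_criterion(best))
    # half the runs at each endpoint, [-1, -1, 1, 1], maximize det(X'X)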
Statistical pattern recognition is a very active area of study and research, which has seen many advances in recent years. New and emerging applications – such as data mining, web searching, multimedia data retrieval, face recognition, and cursive handwriting recognition – require robust and efficient pattern recognition techniques. Statistical decision making and estimation are regarded as fundamental to the study of pattern recognition. Statistical Pattern Recognition, Second Edition has been fully updated with new methods, applications and references. It provides a comprehensive introduction to this vibrant area – with material drawn from engineering, statistics, computer science and the social sciences – and covers many application areas, such as database design, artificial neural networks, and decision support systems.
* Provides a self-contained introduction to statistical pattern recognition.
* Each technique described is illustrated by real examples.
* Covers Bayesian methods, neural networks, support vector machines, and unsupervised classification.
* Each section concludes with a description of the applications that have been addressed and with further developments of the theory.
* Includes background material on dissimilarity, parameter estimation, data, linear algebra and probability.
* Features a variety of exercises, from 'open-book' questions to more lengthy projects.
The book is aimed primarily at senior undergraduate and graduate students studying statistical pattern recognition, pattern processing, neural networks, and data mining, in both statistics and engineering departments. It is also an excellent source of reference for technical professionals working in advanced information development environments. For further information on the techniques and applications discussed in this book please visit www.statistical-pattern-recognition.net
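A minimal sketch of the statistical decision making the blurb calls fundamental: a two-class Gaussian classifier that assigns each point to the class with the larger posterior. All data here are simulated; this is a generic textbook construction, not an example from the book.

    # Two-class Gaussian (Bayes decision rule) classifier on simulated data.
    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(0)
    X0 = rng.normal([0, 0], 1.0, size=(100, 2))   # class 0 training sample
    X1 = rng.normal([2, 2], 1.0, size=(100, 2))   # class 1 training sample

    # Plug-in estimates of each class-conditional density (equal priors assumed).
    pdf0 = multivariate_normal(X0.mean(axis=0), np.cov(X0.T))
    pdf1 = multivariate_normal(X1.mean(axis=0), np.cov(X1.T))

    def classify(x):
        # Bayes rule with equal priors: pick the class with the higher density
        return int(pdf1.pdf(x) > pdf0.pdf(x))

    print(classify([0.2, -0.1]), classify([1.9, 2.3]))   # -> 0 1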
Multivariable regression models are of fundamental importance in all areas of science in which empirical data must be analyzed. This book proposes a systematic approach to building such models based on standard principles of statistical modeling. The main emphasis is on the fractional polynomial method for modeling the influence of continuous variables in a multivariable context, a topic for which there is no standard approach. Existing options range from very simple step functions to highly complex adaptive methods such as multivariate splines with many knots and penalisation. This new approach, developed in part by the authors over the last decade, is a compromise which promotes interpretable, comprehensible and transportable models.
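To make the fractional polynomial idea concrete, here is a minimal degree-1 sketch with simulated data (a simplification, not the authors' full multivariable procedure): each power in the conventional FP set is tried in turn, with power 0 read as log x, and the best least-squares fit is kept.

    # Degree-1 fractional polynomial fit by grid search over the FP power set.
    import numpy as np

    POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]   # conventional FP power set

    def fp_transform(x, p):
        # by FP convention, power 0 denotes log(x)
        return np.log(x) if p == 0 else x ** p

    def best_fp1(x, y):
        best = None
        for p in POWERS:
            X = np.column_stack([np.ones_like(x), fp_transform(x, p)])
            beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
            rss = float(rss[0])
            if best is None or rss < best[1]:
                best = (p, rss, beta)
        return best

    rng = np.random.default_rng(1)
    x = rng.uniform(0.5, 5.0, 200)             # positive covariate, as FPs require
    y = 2.0 + 1.5 * np.log(x) + rng.normal(0, 0.1, 200)
    p, rss, beta = best_fp1(x, y)
    print(p, beta)                             # power 0 (log) should win here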
This practical text is an essential source of information for those wanting to know how to deal with the variability that exists in every engineering situation. Using typical engineering data, it presents the basic statistical methods that are relevant, in simple numerical terms. In addition, statistical terminology is translated into basic English. In the past, a lack of communication between engineers and statisticians, coupled with poor practical skills in quality management and statistical engineering, was damaging to products and to the economy. The disastrous consequence of setting tight tolerances without regard to the statistical aspect of process data is demonstrated. This book offers a solution, bridging the gap between statistical science and engineering technology to ensure that the engineers of today are better equipped to serve the manufacturing industry. Inside, you will find coverage of:
* the nature of variability, describing the use of formulae to pin down sources of variation;
* engineering design, research and development, demonstrating the methods that help prevent costly mistakes in the early stages of a new product;
* production, discussing the use of control charts; and
* management and training, including directing and controlling the quality function.
The Engineering section of the index identifies the role of engineering technology in the service of industrial quality management. The Statistics section identifies points in the text where statistical terminology is used in an explanatory context. Engineers working on the design and manufacture of new products will find this book invaluable, as it develops a statistical method by which they can anticipate and resolve quality problems before launching into production. This book appeals to students in all areas of engineering and also to managers concerned with the quality of manufactured products. Academic engineers can use this text to teach their students basic practical skills in quality management and statistical engineering, without getting involved in the complex mathematical theory of probability on which statistical science depends.
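A quick sketch of the control-chart calculation mentioned above, on simulated process data (a standard Shewhart X-bar construction, not an example from the book): the centre line sits at the grand mean, with limits at plus or minus three standard errors, and sigma estimated from within-subgroup variation. The small-sample c4 bias correction is omitted here for simplicity.

    # Shewhart X-bar control-chart limits on simulated subgroup data.
    import numpy as np

    rng = np.random.default_rng(2)
    subgroups = rng.normal(10.0, 0.2, size=(25, 5))   # 25 subgroups of n = 5

    means = subgroups.mean(axis=1)
    n = subgroups.shape[1]
    # pooled within-subgroup standard deviation (c4 correction omitted)
    sigma_hat = np.sqrt(subgroups.var(axis=1, ddof=1).mean())
    centre = means.mean()
    ucl = centre + 3 * sigma_hat / np.sqrt(n)
    lcl = centre - 3 * sigma_hat / np.sqrt(n)

    print(f"CL={centre:.3f}  LCL={lcl:.3f}  UCL={ucl:.3f}")
    print("out-of-control subgroups:", np.where((means > ucl) | (means < lcl))[0])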
Statistical methodology plays a key role in ensuring that DNA evidence is collected, interpreted, analyzed and presented correctly. With recent advances in computer technology, this methodology is more complex than ever before. A growing number of books cover the area, but none is devoted to the computational analysis of evidence. This book presents the methodology of statistical DNA forensics with an emphasis on the use of computational techniques to analyze and interpret forensic evidence.
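One standard calculation in this area is the random match probability of a DNA profile, obtained under Hardy-Weinberg equilibrium by multiplying genotype frequencies across independent loci (the product rule). The sketch below uses purely hypothetical allele frequencies for illustration.

    # Random match probability via the product rule across independent loci.
    # All allele frequencies below are hypothetical.

    def locus_match_probability(p, q=None):
        """Genotype frequency: 2pq if heterozygous, p^2 if homozygous."""
        return p * p if q is None else 2 * p * q

    # (p, q) allele frequencies at each typed locus; q=None marks a homozygote
    profile = [(0.10, 0.05), (0.20, None), (0.15, 0.08)]

    rmp = 1.0
    for p, q in profile:
        rmp *= locus_match_probability(p, q)

    print(f"random match probability: {rmp:.2e}")   # roughly 1 in 1/rmp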
Complex mathematical and computational models are used in all areas of society and technology, and yet model-based science is increasingly contested or refuted, especially when models are applied to controversial themes in domains such as health, the environment or the economy. More stringent standards of proof are demanded from model-based numbers, especially when these numbers represent potential financial losses, threats to human health or the state of the environment. Quantitative sensitivity analysis is generally agreed to be one such standard. Mathematical models are good at mapping assumptions into inferences. A modeller makes assumptions about laws pertaining to the system, about its status and a plethora of other, often arcane, system variables and internal model settings. To what extent can we rely on the model-based inference when most of these assumptions are fraught with uncertainties? Global Sensitivity Analysis offers an accessible treatment of such problems via quantitative sensitivity analysis, beginning with first principles and guiding the reader through the full range of recommended practices with a rich set of solved exercises. The text explains the motivation for sensitivity analysis, reviews the required statistical concepts, and provides a guide to potential applications. The book:
* Provides a self-contained treatment of the subject, allowing readers to learn and practice global sensitivity analysis without further materials.
* Presents ways to frame the analysis, interpret its results, and avoid potential pitfalls.
* Features numerous exercises and solved problems to help illustrate the applications.
* Is authored by leading sensitivity analysis practitioners, combining a range of disciplinary backgrounds.
Postgraduate students and practitioners in a wide range of subjects, including statistics, mathematics, engineering, physics, chemistry, environmental sciences, biology, toxicology, actuarial sciences, and econometrics will find much of use here. This book will prove equally valuable to engineers working on risk analysis and to financial analysts concerned with pricing and hedging.
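A minimal Monte Carlo estimate of a first-order (variance-based) sensitivity index, in the spirit of the methods described above. The toy model and the choice of estimator (Saltelli's 2010 formulation) are a generic illustration, not one of the book's own exercises.

    # First-order Sobol indices for a toy model Y = 4*X1 + X2, Xi ~ U(0, 1).
    import numpy as np

    def model(x):
        return 4.0 * x[:, 0] + x[:, 1]

    rng = np.random.default_rng(3)
    N = 100_000
    A = rng.uniform(size=(N, 2))          # two independent input samples
    B = rng.uniform(size=(N, 2))

    fA, fB = model(A), model(B)
    var_y = np.var(np.concatenate([fA, fB]))

    for i in range(2):
        ABi = A.copy()
        ABi[:, i] = B[:, i]               # A with column i swapped in from B
        S_i = np.mean(fB * (model(ABi) - fA)) / var_y
        print(f"S_{i + 1} ~ {S_i:.3f}")
    # analytic values: S_1 = 16/17 ~ 0.941 and S_2 = 1/17 ~ 0.059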
High response rates have traditionally been considered one of the main indicators of survey quality. Obtaining high response rates is sometimes difficult and expensive, but clearly plays a beneficial role in terms of improving data quality. It is becoming increasingly clear, however, that simply boosting response to achieve a higher response rate will not in itself eradicate nonresponse bias. In this book the authors argue that high response rates should not be seen as a goal in themselves, but rather as part of an overall survey quality strategy based on random probability sampling and aimed at minimising nonresponse bias. Key features of Improving Survey Response:
* Detailed coverage of nonresponse issues, including a unique examination of cross-national survey nonresponse processes and outcomes.
* A discussion of the potential causes of nonresponse and practical strategies to combat it.
* A detailed examination of the impact of nonresponse and of techniques for adjusting for it once it has occurred.
* Examples of best practices and experiments drawn from 25 European countries.
* Supplementary material on the European Social Survey (ESS) websites for the measurement and analysis of nonresponse, based on detailed country-level response process datasets.
The book is designed to help survey researchers and those commissioning surveys by explaining how to prioritise the reduction of nonresponse bias rather than focusing on increasing the overall response rate. It shows substantive researchers how nonresponse can affect substantive outcomes.
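One standard post-hoc adjustment of the kind alluded to above is weighting-class reweighting: respondents in each class have their design weight inflated by the inverse of that class's response rate. A minimal sketch with hypothetical counts and class labels:

    # Weighting-class nonresponse adjustment (hypothetical counts and classes).
    sampled   = {"age<40": 500, "age40+": 500}    # units drawn per class
    responded = {"age<40": 200, "age40+": 400}    # units that responded

    # adjustment factor = sampled / responded within each weighting class
    adjustment = {cls: sampled[cls] / responded[cls] for cls in sampled}
    print(adjustment)   # {'age<40': 2.5, 'age40+': 1.25}

    # a respondent's final weight is design_weight * adjustment[class], so the
    # under-responding class (here the young) counts proportionally more
    design_weight = 10.0
    print({cls: design_weight * a for cls, a in adjustment.items()})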
The high-level language of R is recognized as one of the most powerful and flexible statistical software environments, and is rapidly becoming the standard setting for quantitative analysis, statistics and graphics. R provides free access to unrivalled coverage and cutting-edge applications, enabling the user to apply numerous statistical methods ranging from simple regression to time series or multivariate analysis. Building on the success of the author’s bestselling Statistics: An Introduction using R, The R Book is packed with worked examples, providing an all-inclusive guide to R, ideal for novice and more accomplished users alike. The book assumes no background in statistics or computing and introduces the advantages of the R environment, detailing its applications in a wide range of disciplines.
* Provides the first comprehensive reference manual for the R language, including practical guidance and full coverage of the graphics facilities.
* Introduces all the statistical models covered by R, beginning with simple classical tests such as chi-square and t-test.
* Proceeds to examine more advanced methods, from regression and analysis of variance, through to generalized linear models, generalized mixed models, time series, spatial statistics, multivariate statistics and much more.
The R Book is aimed at undergraduates, postgraduates and professionals in science, engineering and medicine. It is also ideal for students and professionals in statistics, economics, geography and the social sciences.
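For a flavour of the classical tests the book's coverage begins with, here is a sketch on simulated data (in R itself these are the built-ins t.test and chisq.test; scipy equivalents are used in this sketch).

    # Two classical tests on simulated data: two-sample t-test and chi-square.
    import numpy as np
    from scipy.stats import ttest_ind, chi2_contingency

    rng = np.random.default_rng(4)
    a = rng.normal(10.0, 1.0, 30)                 # two simulated samples
    b = rng.normal(10.5, 1.0, 30)
    t_stat, t_p = ttest_ind(a, b)
    print(f"t-test: t={t_stat:.2f}, p={t_p:.4f}")

    table = np.array([[30, 10],                   # hypothetical 2x2 table
                      [20, 25]])
    chi2, chi_p, dof, _ = chi2_contingency(table)
    print(f"chi-square: chi2={chi2:.2f}, df={dof}, p={chi_p:.4f}")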
A modern and comprehensive treatment of tolerance intervals and regions. The topic of tolerance intervals and tolerance regions has undergone significant growth during recent years, with applications arising in various areas such as quality control, industry, and environmental monitoring. Statistical Tolerance Regions presents the theoretical development of tolerance intervals and tolerance regions through computational algorithms and the illustration of numerous practical uses and examples. This is the first book of its kind to successfully balance theory and practice, providing a state-of-the-art treatment of tolerance intervals and tolerance regions. The book begins with the key definitions, concepts, and technical results that are essential for deriving tolerance intervals and tolerance regions. Subsequent chapters provide in-depth coverage of key topics including:
* Univariate normal distribution
* Non-normal distributions
* Univariate linear regression models
* Nonparametric tolerance intervals
* The one-way random model with balanced data
* The multivariate normal distribution
* The one-way random model with unbalanced data
* The multivariate linear regression model
* General mixed models
* Bayesian tolerance intervals
A final chapter covers miscellaneous topics including tolerance limits for a ratio of normal random variables, sample size determination, reference limits and coverage intervals, tolerance intervals for binomial and Poisson distributions, and tolerance intervals based on censored samples. Theoretical explanations are accompanied by computational algorithms that can be easily replicated by readers, and each chapter contains exercise sets for reinforcement of the presented material. Detailed appendices provide additional data sets and extensive tables of univariate and multivariate tolerance factors. Statistical Tolerance Regions is an ideal book for courses on tolerance intervals at the graduate level. It is also a valuable reference and resource for applied statisticians, researchers, and practitioners in industry and pharmaceutical companies.
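The univariate-normal case that opens the topic list can be computed directly. For a one-sided (p, 1 - alpha) upper tolerance limit, the limit x_bar + k*s covers at least a proportion p of the population with confidence 1 - alpha, where k is a noncentral t quantile: k = t'_{1-alpha, n-1}(z_p * sqrt(n)) / sqrt(n). This is the standard textbook formula; the data below are simulated.

    # One-sided (p, 1 - alpha) upper tolerance limit for a normal sample.
    import numpy as np
    from scipy.stats import norm, nct

    rng = np.random.default_rng(5)
    x = rng.normal(100.0, 5.0, 30)
    n, p, alpha = len(x), 0.95, 0.05

    # tolerance factor from the noncentral t distribution
    k = nct.ppf(1 - alpha, df=n - 1, nc=norm.ppf(p) * np.sqrt(n)) / np.sqrt(n)
    upper = x.mean() + k * x.std(ddof=1)
    print(f"k = {k:.3f}, upper tolerance limit = {upper:.2f}")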
Praise for Modeling for Insight: "Most books on modeling are either too theoretical or too focused on the mechanics of programming. Powell and Batt's emphasis on using simple spreadsheet models to gain business insight (which is, after all, the name of the game) is what makes this book stand head and shoulders above the rest. This clear and practical book deserves a place on the shelf of every business analyst." —Jonathan Koomey, PhD, Lawrence Berkeley National Laboratory and Stanford University, author of Turning Numbers into Knowledge: Mastering the Art of Problem Solving
Most business analysts are familiar with using spreadsheets to organize data and build routine models. However, analysts often struggle when faced with examining new and ill-structured problems. Modeling for Insight is a one-of-a-kind guide to building effective spreadsheet models and using them to generate insights. With its hands-on approach, this book provides readers with an effective modeling process and specific modeling tools to become a master modeler. The authors provide a structured approach to problem-solving using four main steps: frame the problem, diagram the problem, build a model, and generate insights. Extensive examples, graduated in difficulty, help readers to internalize this modeling process, while also demonstrating the application of important modeling tools, including:
* Influence diagrams
* Spreadsheet engineering
* Parameterization
* Sensitivity analysis
* Strategy analysis
* Iterative modeling
The real-world examples found in the book are drawn from a wide range of fields such as financial planning, insurance, pharmaceuticals, advertising, and manufacturing. Each chapter concludes with a discussion on how to use the insights drawn from these models to create an effective business presentation. Microsoft Office Excel and PowerPoint are used throughout the book, along with the add-ins Premium Solver, Crystal Ball, and Sensitivity Toolkit. Detailed appendices guide readers through the use of these software packages, and the spreadsheet models discussed in the book are available to download via the book's related Web site. Modeling for Insight is an ideal book for courses in engineering, operations research, and management science at the upper-undergraduate and graduate levels. It is also a valuable resource for consultants and business analysts who often use spreadsheets to better understand complex problems.
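A tiny illustration of the parameterization and sensitivity-analysis steps from the tool list above, written in code rather than a spreadsheet; the profit model and all numbers are hypothetical.

    # One-way sensitivity sweep over a parameterized profit model: each input
    # is swung +/- 10% and the profit range recorded -- the computation
    # behind a tornado chart. All figures are hypothetical.
    base = {"price": 20.0, "unit_cost": 12.0, "volume": 1000, "fixed_cost": 4000.0}

    def profit(p):
        return (p["price"] - p["unit_cost"]) * p["volume"] - p["fixed_cost"]

    for name in base:
        lo, hi = (profit({**base, name: base[name] * f}) for f in (0.9, 1.1))
        print(f"{name:>10}: {min(lo, hi):>8.0f} .. {max(lo, hi):>8.0f}")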