Mathematics

Various books in the Mathematics genre

An Introduction to Discrete-Valued Time Series

Group of authors

A much-needed introduction to the field of discrete-valued time series, with a focus on count-data time series. Time series analysis is an essential tool in a wide array of fields, including business, economics, computer science, epidemiology, finance, manufacturing and meteorology, to name just a few. Despite growing interest in discrete-valued time series, especially those arising from counting specific objects or events at specified times, most books on time series give short shrift to that increasingly important subject area. This book seeks to rectify that state of affairs by providing a much-needed introduction to discrete-valued time series, with particular focus on count-data time series. The main focus of the book is on modeling; throughout, numerous examples illustrate models currently used in discrete-valued time series applications. Statistical process control, including various control charts (such as cumulative sum control charts), and performance evaluation are treated at length. Classical approaches such as ARMA models and the Box-Jenkins program are also featured, with their basics summarized in an appendix. In addition, data examples, with all relevant R code, are available on a companion website. The book:

- Provides a balanced presentation of theory and practice, exploring both categorical and integer-valued series
- Covers common models for time series of counts as well as for categorical time series, and works out their most important stochastic properties
- Addresses statistical approaches for analyzing discrete-valued time series and illustrates their implementation with numerous data examples
- Covers classical approaches such as ARMA models, the Box-Jenkins program, and generating functions
- Includes dataset examples with all necessary R code provided on a companion website

An Introduction to Discrete-Valued Time Series is a valuable working resource for researchers and practitioners in a broad range of fields, including statistics, data science, machine learning, and engineering. It will also be of interest to postgraduate students in statistics, mathematics and economics.
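Since the blurb singles out models for time series of counts, here is a minimal sketch of one standard example, the Poisson INAR(1) model with binomial thinning. It is written in Python for illustration (the book's companion materials use R), and all parameter values are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_inar1(n, alpha, lam):
    """Simulate an INAR(1) count time series:
    X_t = alpha ∘ X_{t-1} + e_t, with binomial thinning ∘ and Poisson(lam) innovations."""
    x = np.zeros(n, dtype=int)
    x[0] = rng.poisson(lam / (1 - alpha))  # start near the stationary mean
    for t in range(1, n):
        survivors = rng.binomial(x[t - 1], alpha)  # binomial thinning alpha ∘ X_{t-1}
        x[t] = survivors + rng.poisson(lam)        # add new arrivals
    return x

series = simulate_inar1(500, alpha=0.5, lam=2.0)
print(series[:20])
print(series.mean())  # stationary mean is lam / (1 - alpha) = 4
```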

Evidence Synthesis for Decision Making in Healthcare

Nicola Cooper

In the evaluation of healthcare, rigorous methods of quantitative assessment are necessary to establish interventions that are both effective and cost-effective. Usually a single study will not fully address these issues, and it is desirable to synthesize evidence from multiple sources. This book aims to provide a practical guide to evidence synthesis for the purpose of decision making, starting with a simple single-parameter model, where all studies estimate the same quantity (pairwise meta-analysis), and progressing to more complex multi-parameter structures (including meta-regression, mixed treatment comparisons, Markov models of disease progression, and epidemiology models). A comprehensive, coherent framework is adopted and estimated using Bayesian methods. Key features:

- A coherent approach to evidence synthesis from multiple sources.
- Focus on Bayesian methods for evidence synthesis that can be integrated within cost-effectiveness analyses in a probabilistic framework using Markov chain Monte Carlo simulation.
- Methods to statistically combine evidence from a range of evidence structures.
- Emphasis on the importance of model critique and checking for evidence consistency.
- Numerous worked examples, exercises and solutions drawn from a variety of medical disciplines throughout the book.
- WinBUGS code provided for all examples.

Evidence Synthesis for Decision Making in Healthcare is intended for health economists, decision modelers, statisticians and others involved in evidence synthesis, health technology assessment, and economic evaluation of health technologies.
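To give a concrete flavour of the simplest evidence structure the book starts from, pairwise meta-analysis, here is a minimal fixed-effect inverse-variance pooling sketch in Python. The book itself develops Bayesian MCMC versions in WinBUGS; the study estimates below are entirely hypothetical.

```python
import math

# Hypothetical study results: (log odds ratio, standard error) per study.
studies = [(-0.32, 0.21), (-0.18, 0.15), (-0.45, 0.30), (-0.25, 0.12)]

# Fixed-effect (inverse-variance) pooling: weight each study by 1 / se^2.
weights = [1.0 / se**2 for _, se in studies]
pooled = sum(w * y for (y, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled log OR {pooled:.3f}  (95% CI {lo:.3f} to {hi:.3f})")
```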

The Subjectivity of Scientists and the Bayesian Approach

S. James Press

Comparing and contrasting the reality of subjectivity in the work of history's great scientists and the modern Bayesian approach to statistical analysis. Scientists and researchers are taught to analyze their data from an objective point of view, allowing the data to speak for themselves rather than assigning them meaning based on expectations or opinions. But scientists have never behaved fully objectively. Throughout history, some of our greatest scientific minds have relied on intuition, hunches, and personal beliefs to make sense of empirical data, and these subjective influences have often aided in humanity's greatest scientific achievements. The authors argue that subjectivity has not only played a significant role in the advancement of science, but that science will advance more rapidly if the modern methods of Bayesian statistical analysis replace some of the classical twentieth-century methods that have traditionally been taught. To accomplish this goal, the authors examine the lives and work of history's great scientists and show that even the most successful have sometimes misrepresented findings or been influenced by their own preconceived notions of religion, metaphysics, and the occult, or the personal beliefs of their mentors. Contrary to popular belief, our greatest scientific thinkers approached their data with a combination of subjectivity and empiricism, and thus informally achieved what is more formally accomplished by the modern Bayesian approach to data analysis. Yet we are still taught that science is purely objective. This innovative book dispels that myth using historical accounts and biographical sketches of more than a dozen great scientists, including Aristotle, Galileo Galilei, Johannes Kepler, William Harvey, Sir Isaac Newton, Antoine Lavoisier, Alexander von Humboldt, Michael Faraday, Charles Darwin, Louis Pasteur, Gregor Mendel, Sigmund Freud, Marie Curie, Robert Millikan, Albert Einstein, Sir Cyril Burt, and Margaret Mead. Also included is a detailed treatment of the modern Bayesian approach to data analysis. Up-to-date references to the Bayesian theoretical and applied literature, as well as reference lists of the primary sources of the principal works of all the scientists discussed, round out this comprehensive treatment of the subject. Readers will benefit from this cogent and enlightening view of the history of subjectivity in science and the authors' alternative vision of how the Bayesian approach should be used to further the cause of science and learning well into the twenty-first century.

Methods for Testing and Evaluating Survey Questionnaires

Elizabeth Martin

The definitive resource for survey questionnaire testing and evaluation. Over the past two decades, methods for the development, evaluation, and testing of survey questionnaires have undergone radical change. Research has now begun to identify the strengths and weaknesses of various testing and evaluation methods, as well as to estimate the methods' reliability and validity. Expanding and adding to the research presented at the International Conference on Questionnaire Development, Evaluation and Testing Methods, this title presents the most up-to-date knowledge in this burgeoning field. The only book dedicated to the evaluation and testing of survey questionnaires, this practical reference work brings together the expertise of over fifty leading international researchers from a broad range of fields. The volume is divided into seven sections:

- Cognitive interviews
- Mode of administration
- Supplements to conventional pretests
- Special populations
- Experiments
- Multi-method applications
- Statistical modeling

Comprehensive and carefully edited, this groundbreaking text offers researchers a solid foundation in the latest developments in testing and evaluating survey questionnaires, as well as a thorough introduction to emerging techniques and technologies.

Weight-of-Evidence for Forensic DNA Profiles

Group of authors

Assessing Weight-of-Evidence for DNA Profiles is an excellent introductory text to the use of statistical analysis for assessing DNA evidence. It offers practical guidance to forensic scientists and demands little mathematical ability, as the book includes background information on statistics (including likelihood ratios), population genetics, and courtroom issues. The author, who is highly experienced in this field, has illustrated the book throughout with his own experience, while also providing a theoretical underpinning to the subject. It is an ideal choice for forensic scientists and lawyers, as well as statisticians and population geneticists with an interest in forensic science and DNA.
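As a toy illustration of the likelihood-ratio reasoning such books cover, the sketch below combines single-locus likelihood ratios under the product-rule independence assumption. The locus names are real STR markers, but the genotype frequencies are made-up illustrative numbers, not population data.

```python
# Likelihood ratio for a single-contributor DNA profile match, assuming
# independence across loci (the product rule). Frequencies are hypothetical.
genotype_freqs = {
    "D3S1358": 0.089,  # frequency of the matching genotype at each locus
    "vWA": 0.062,
    "FGA": 0.031,
}

lr = 1.0
for locus, p_genotype in genotype_freqs.items():
    # H_p: suspect is the source (match probability 1)
    # H_d: an unrelated person is the source (match probability = genotype frequency)
    locus_lr = 1.0 / p_genotype
    print(f"{locus}: LR = {locus_lr:.1f}")
    lr *= locus_lr

print(f"Combined likelihood ratio: {lr:,.0f}")
```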

Uncertainty Analysis with High Dimensional Dependence Modelling

Dorota Kurowicka

Mathematical models are used to simulate complex real-world phenomena in many areas of science and technology. Large complex models typically require inputs whose values are not known with certainty. Uncertainty analysis aims to quantify the overall uncertainty within a model, in order to support problem owners in model-based decision-making. In recent years there has been an explosion of interest in uncertainty analysis. Uncertainty and dependence elicitation, dependence modelling, model inference, efficient sampling, screening and sensitivity analysis, and probabilistic inversion are among the active research areas. This text provides both the mathematical foundations and practical applications in this rapidly expanding area, including:

- An up-to-date, comprehensive overview of the foundations and applications of uncertainty analysis.
- All the key topics, including uncertainty elicitation, dependence modelling, sensitivity analysis and probabilistic inversion.
- Numerous worked examples and applications.
- Workbook problems, enabling use for teaching.
- Software support for the examples, using UNICORN, a Windows-based uncertainty modelling package developed by the authors.
- A website featuring a version of the UNICORN software tailored specifically for the book, as well as computer programs and data sets to support the examples.

Uncertainty Analysis with High Dimensional Dependence Modelling offers a comprehensive exploration of an emerging field. It will prove an invaluable text for researchers, practitioners and graduate students in areas ranging from statistics and engineering to reliability and environmetrics.
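A minimal sketch of the kind of dependence modelling the book treats: sampling two correlated uncertain inputs through a Gaussian copula and propagating them through a model. This uses Python with NumPy/SciPy rather than the book's UNICORN package, and the distributions, correlation and model are all illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Correlation of the underlying normals (induces a similar rank correlation).
rho = 0.7
cov = np.array([[1.0, rho], [rho, 1.0]])

# Draw correlated standard normals, map to uniforms via Phi (Gaussian copula)...
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=10_000)
u = stats.norm.cdf(z)

# ...then transform to the marginal distributions elicited for each input.
x1 = stats.lognorm.ppf(u[:, 0], s=0.5, scale=np.exp(1.0))
x2 = stats.gamma.ppf(u[:, 1], a=2.0, scale=3.0)

# Propagate through the model and summarize the output uncertainty.
y = x1 * x2  # stand-in for an arbitrary model f(x1, x2)
print(np.percentile(y, [5, 50, 95]))
```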

Rare Event Simulation using Monte Carlo Methods

Gerardo Rubino

In a probabilistic model, a rare event is an event with a very small probability of occurrence. Forecasting rare events is a formidable task, but it is important in many areas: consider, for instance, a catastrophic failure in a transport system or a nuclear power plant, or the failure of an information-processing system in a bank, or in the communication network of a group of banks, leading to financial losses. Being able to evaluate the probability of rare events is therefore a critical issue. Monte Carlo methods, that is, simulation of the corresponding models, are used to analyze rare events. This book sets out to present the mathematical tools available for the efficient simulation of rare events. Importance sampling and splitting are presented, along with an exposition of how to apply these tools to a variety of fields ranging from performance and dependability evaluation of complex systems, typically in computer science or in telecommunications, to chemical reaction analysis in biology or particle transport in physics. Graduate students, researchers and practitioners who wish to learn and apply rare event simulation techniques will find this book beneficial.
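As a small taste of the importance-sampling idea, the sketch below estimates the tail probability P(Z > 5) for a standard normal by shifting the sampling distribution and reweighting. This is a textbook illustration in Python, not an example taken from the book.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, a = 100_000, 5.0  # estimate p = P(Z > a) for Z ~ N(0, 1); p is about 2.9e-7

# Naive Monte Carlo: essentially no samples land in the rare region.
z = rng.standard_normal(n)
p_naive = np.mean(z > a)

# Importance sampling: sample from the shifted distribution N(a, 1),
# reweighting each sample by the likelihood ratio phi(x) / phi(x - a).
x = rng.standard_normal(n) + a
weights = np.exp(-a * x + a**2 / 2)  # N(0,1) density over N(a,1) density
p_is = np.mean((x > a) * weights)

print(f"true      {stats.norm.sf(a):.3e}")
print(f"naive MC  {p_naive:.3e}")
print(f"IS        {p_is:.3e}")
```

With the same budget of samples, the naive estimate is typically exactly zero, while the importance-sampling estimate is accurate to a few percent; that variance reduction is the point of the technique.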

Nonparametric Analysis of Univariate Heavy-Tailed Data

Group of authors

Heavy-tailed distributions are typical for phenomena in complex multi-component systems such as biometry, economics, ecological systems, sociology, web access statistics, internet traffic, bibliometrics, finance and business. The analysis of such distributions requires special methods of estimation due to their specific features: not only the slow decay of the tail to zero, but also the violation of Cramér's condition, the possible non-existence of some moments, and sparse observations in the tail of the distribution. The book focuses on methods for the statistical analysis of heavy-tailed independent identically distributed random variables from empirical samples of moderate size. It provides a detailed survey of classical results and recent developments in the theory of nonparametric estimation of the probability density function, the tail index, the hazard rate and the renewal function. Both asymptotic results, such as convergence rates of the estimates, and results for samples of moderate size, supported by Monte Carlo investigation, are considered. The text is illustrated by applying the methodologies considered to real web-traffic measurement data.
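One of the classical tools surveyed, tail-index estimation, can be illustrated with the Hill estimator. Below is a minimal Python sketch on simulated Pareto data; the choice of k is illustrative, and selecting it well is a delicate issue in practice.

```python
import numpy as np

def hill_estimator(data, k):
    """Hill estimator of the tail index from the k largest observations."""
    x = np.sort(np.asarray(data))
    n = len(x)
    top = x[n - k:]            # the k largest order statistics
    threshold = x[n - k - 1]   # the (k+1)-th largest observation
    gamma_hat = np.mean(np.log(top / threshold))  # extreme-value index estimate
    return 1.0 / gamma_hat     # tail index alpha = 1 / gamma

rng = np.random.default_rng(7)
sample = rng.pareto(2.5, size=2000) + 1.0  # Pareto sample with true tail index 2.5
print(hill_estimator(sample, k=200))       # should land near 2.5
```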

Symbolic Data Analysis and the SODAS Software

Edwin Diday

Symbolic data analysis is a relatively new field that provides a range of methods for analyzing complex datasets. Standard statistical methods do not have the power or flexibility to make sense of very large datasets, and symbolic data analysis techniques have been developed in order to extract knowledge from such data. Symbolic data methods differ from those of data mining, for example, because rather than identifying points of interest in the data, symbolic data methods allow the user to build models of the data and make predictions about future events. This book is the result of the work of a pan-European project team led by Edwin Diday, following three years' work sponsored by EUROSTAT. It includes a full explanation of the new SODAS software developed as a result of this project. The software and methods described highlight the crossover between statistics and computer science, with a particular emphasis on data mining.
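For a concrete sense of what a symbolic data point looks like, here is a minimal sketch of interval-valued observations, one common construct in symbolic data analysis, with an interval-valued sample mean. It is written in Python, is unrelated to the SODAS implementation, and uses made-up values.

```python
# Interval-valued observations: each "symbolic" data point is a range,
# e.g. a city's daily temperature span [min, max]. Values are illustrative.
intervals = [(12.0, 21.0), (10.5, 19.0), (14.0, 24.5), (11.0, 20.0)]

lows = [a for a, b in intervals]
highs = [b for a, b in intervals]

# One common summary keeps the interval structure: the mean of the lower
# bounds paired with the mean of the upper bounds.
mean_interval = (sum(lows) / len(intervals), sum(highs) / len(intervals))
print(f"interval-valued mean: [{mean_interval[0]:.2f}, {mean_interval[1]:.2f}]")

# Collapsing to midpoints recovers an ordinary scalar mean.
midpoint_mean = sum((a + b) / 2 for a, b in intervals) / len(intervals)
print(f"midpoint mean: {midpoint_mean:.2f}")
```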

Modeling and Simulation for Analyzing Global Events

John A. Sokolowski

A one-of-a-kind introduction to the theory and application of modeling and simulation techniques in the realm of international studies. Modeling and Simulation for Analyzing Global Events provides an orientation to the theory and application of modeling and simulation techniques in social science disciplines. This book guides readers in developing quantitative and numeric representations of real-world events based on qualitative analysis. With an emphasis on gathering and mapping empirical data, the authors detail the steps needed for accurately analyzing global events and outline the selection and construction of the best model for understanding the event's data. Providing a theoretical foundation while also illustrating modern examples, the book contains three parts:

- Advancing Global Studies: introduces the what, when, and why of modeling and simulation and also explores its brief history, various uses, and some of the advantages and disadvantages of modeling and simulation in problem solving. In addition, the differences between qualitative and quantitative research methods, mapping data, and conducting model validation are also discussed.
- Modeling Paradigms: examines various methods of modeling, including system dynamics, agent-based modeling, social network modeling, and game theory. This section also explores the theory and construction of these modeling paradigms, the fundamentals for their application, and various contexts for their use.
- Modeling Global Events: applies the modeling paradigms to four real-world events that are representative of several fundamental areas of social science studies: internal commotion within an anarchic state, a multi-layered study of the Solidarity movement in Poland, unilateral military intervention, and the issue of compellence and deterrence during a national security crisis.

Modeling and Simulation for Analyzing Global Events is an excellent book for statistics, engineering, computer science, economics, and social sciences courses on modeling and simulation at the upper-undergraduate and graduate levels. It is also an insightful reference for professionals who would like to develop modeling and simulation skills for analyzing and communicating human behavior observed in real-world events and complex global case studies.
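As a minimal illustration of the system-dynamics paradigm listed above, the sketch below integrates a single stock (active movement members) with recruitment and attrition flows using Euler steps. All names and parameter values are hypothetical, not drawn from the book's case studies.

```python
# A minimal system-dynamics sketch: one stock with two flows, integrated
# with fixed Euler steps. Parameters are illustrative only.
def simulate(members=1_000.0, population=1_000_000.0,
             contact_rate=1e-7, attrition=0.05, dt=1.0, steps=200):
    history = [members]
    for _ in range(steps):
        # Inflow: recruitment through contact between members and non-members.
        recruitment = contact_rate * members * (population - members)
        # Outflow: a constant fraction of members drops out each period.
        losses = attrition * members
        members += dt * (recruitment - losses)  # Euler integration of the stock
        history.append(members)
    return history

trajectory = simulate()
print(f"final stock: {trajectory[-1]:,.0f}")  # approaches the logistic equilibrium
```

The stock follows logistic-style growth toward the equilibrium where recruitment balances attrition; richer models add more stocks, feedback loops, and delays in the same style.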