The six plenary talks of the Stu Hunter Research Conference 2016


Select a talk title to see the corresponding draft paper that will be presented during the Hunter conference. Please do not share these papers with people who are not attending the conference.
 

Christine Anderson-Cook
Los Alamos National Lab

"Optimizing in a Complex World"

As applied statisticians increasingly participate as active members of problem-solving and decision-making teams, our role continues to evolve. Historically, we may have been seen as those who can help with data collection strategies or answer a specific question from a set of data. Nowadays, we are, or strive to be, more deeply involved throughout the entire problem-solving process. An emerging role is to provide a set of leading choices from which subject matter experts and managers can choose to make informed decisions. A key to success is to provide vehicles for understanding the trade-offs between candidates and for interpreting the merits of each choice in the context of the decision-makers' priorities. To achieve this objective, it is helpful to be able (a) to help subject matter experts identify quantitative criteria that match their priorities, (b) to eliminate non-competitive choices through the use of a Pareto front, and (c) to provide summary tools with which the trade-offs between alternatives can be quantitatively evaluated and discussed. A structured but flexible process for contributing to team decisions is described both for situations where all choices can easily be enumerated and for those where a search algorithm is required to explore a vast number of potential candidates. A collection of diverse examples, ranging from model selection through multiple response optimization to designing an experiment, illustrates the approach.
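As a rough illustration of step (b), the sketch below (not taken from the paper) filters a set of enumerated candidates down to the Pareto front for two criteria that are both to be maximized; the design labels and scores are hypothetical.

```python
# Minimal sketch (not from the paper): keep only the non-dominated candidates
# when every criterion is to be maximized. Names and values are hypothetical.

def pareto_front(candidates):
    """Return the non-dominated candidates.

    Each candidate is (label, criteria), where criteria is a tuple of scores
    to be maximized. A candidate is dominated if some other candidate is at
    least as good on every criterion and strictly better on at least one.
    """
    front = []
    for label, crit in candidates:
        dominated = any(
            all(o >= c for o, c in zip(other, crit))
            and any(o > c for o, c in zip(other, crit))
            for _, other in candidates
        )
        if not dominated:
            front.append((label, crit))
    return front

# Hypothetical designs scored on (power, precision) -- larger is better for both.
designs = [
    ("D1", (0.80, 0.60)),
    ("D2", (0.75, 0.75)),
    ("D3", (0.70, 0.55)),  # dominated by both D1 and D2, so it is eliminated
    ("D4", (0.85, 0.50)),
]
print(pareto_front(designs))  # D1, D2 and D4 remain on the Pareto front
```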

Richard Jarrett
Visiting Research Fellow, University of Adelaide

"Does theory work in practice? Two case studies"

This paper considers two different studies.  Each study explored the properties of a production process and each had a number of issues that needed to be resolved before experimental runs could be performed.  In the first case, the process was a continuous rubber extrusion line, producing windscreen wiper blades.  Planning involved people on three different continents, so issues of building trust were paramount.  Only a narrow window was available for experimentation, so flexibility and a quick response to problems as they arose were needed.  The second study was an off-line batch process aimed at producing polymers suitable for artificial corneas.  There were two competing variables of interest.  Previous attempts to improve the product had been piecemeal and unsuccessful, but a fractional factorial experiment provided guidance on a way forward.  Subsequent runs then aimed to optimise the primary variable whilst holding the second variable constant.  Comparing and contrasting these studies yields many valuable lessons.
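For readers unfamiliar with the screening design mentioned above, here is a minimal sketch (not from the case study) of how a 2^(4-1) fractional factorial with defining relation D = ABC can be written down in coded units; the factor labels are generic.

```python
# Minimal sketch (not from the case study): a 2^(4-1) fractional factorial
# in coded units, the kind of design that can screen four process factors
# in eight runs. The generator D = ABC is an illustrative choice.
from itertools import product

runs = []
for a, b, c in product((-1, 1), repeat=3):   # full factorial in A, B, C
    d = a * b * c                            # generator: D aliased with ABC
    runs.append((a, b, c, d))

for run in runs:
    print(run)   # eight runs covering four factors at two levels each
```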
 
Galit Shmueli
National Tsing Hua University, Taiwan

"Analyzing Behavioral Big Data: Methodological, Practical, Ethical, and Moral Issues""

The term “Big Data” evokes emotions ranging from excitement to exasperation in the statistics community. Looking beyond these emotions reveals several important changes that affect us as statisticians and as humans. I focus on Behavioral Big Data (BBD), or very large and rich multidimensional datasets on human behaviors, actions and interactions, which have become available to companies, governments, and researchers. The paper describes the BBD landscape and examines opportunities and critical issues that arise when applying statistical and data mining approaches to Behavioral Big Data, including the move from macro- to micro-decisioning and its implications.

Bill Woodall
Virginia Tech

"Basic Statistical Process Monitoring: Reassessing the Gap between Theory and Practice"

Some issues are discussed relative to the gap between theory and practice in the area of statistical process monitoring (SPM). It is argued that the collection and use of baseline data in Phase I need greater emphasis. Also, the use of sample ranges in practice to estimate process standard deviations deserves reconsideration. A discussion is given on the role of modeling in SPM. Then some work on profile monitoring and the effect of estimation error on Phase II chart performance is summarized. Finally, some ways that researchers could influence practice more effectively are discussed along with how SPM research could become more useful to practitioners.

KEYWORDS: average run length, control chart, false alarm rate, SPC, SPM, statistical process control.
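To make the point about range-based estimation concrete, the sketch below (not from the paper) computes Phase I X-bar chart limits with the process standard deviation estimated from average subgroup ranges (R-bar/d2) and, for comparison, from average subgroup standard deviations (S-bar/c4); the subgroup data are hypothetical, and the constants shown are the standard values for subgroups of size 5.

```python
# Minimal sketch (not from the paper): Phase I X-bar chart limits with sigma
# estimated two ways -- from the average subgroup range (R-bar / d2) and from
# the average subgroup standard deviation (S-bar / c4). Data are hypothetical.
import statistics

subgroups = [
    [10.2, 9.8, 10.1, 10.0, 9.9],
    [10.4, 10.1, 9.7, 10.0, 10.2],
    [9.9, 10.0, 10.3, 9.8, 10.1],
]  # in practice, far more Phase I subgroups would be collected

n, d2, c4 = 5, 2.326, 0.9400           # control-chart constants for n = 5
xbar_bar = statistics.mean(statistics.mean(s) for s in subgroups)
r_bar = statistics.mean(max(s) - min(s) for s in subgroups)
s_bar = statistics.mean(statistics.stdev(s) for s in subgroups)

sigma_from_ranges = r_bar / d2         # the traditional range-based estimate
sigma_from_sds = s_bar / c4            # generally more efficient alternative

for name, sigma in [("R-bar/d2", sigma_from_ranges), ("S-bar/c4", sigma_from_sds)]:
    ucl = xbar_bar + 3 * sigma / n ** 0.5
    lcl = xbar_bar - 3 * sigma / n ** 0.5
    print(f"{name}: sigma = {sigma:.3f}, LCL = {lcl:.3f}, UCL = {ucl:.3f}")
```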
 

Dave Woods
University of Southampton

"Bayesian design of experiments for industrial and scientific applications via Gaussian processes"

The design of an experiment can be considered to be at least implicitly Bayesian, with prior knowledge used informally to aid decisions such as the variables to be studied and the choice of a plausible relationship between the explanatory variables and measured responses. Bayesian methods allow uncertainty in these decisions to be incorporated into design selection through prior distributions that encapsulate information available from scientific knowledge or previous experimentation. Further, a design may be explicitly tailored to the aim of the experiment through a decision-theoretic approach using an appropriate loss function. We review the area of decision-theoretic Bayesian design, with particular emphasis on recent advances in computational methods.

For many problems arising in industry and science, Bayesian design is often seen as impractical, particularly for finding designs for nonlinear models that have intractable expected loss and for larger factorial experiments with many potential response models. We describe how Gaussian process emulation, commonly used in computer experiments, can play an important role in facilitating Bayesian design for realistic problems. A main focus is the combination of Gaussian process regression to approximate the expected loss with cyclic descent (coordinate exchange) optimisation algorithms to allow optimal designs to be found for previously infeasible problems. The methods are motivated and illustrated using applications from the pharmaceutical and biological sciences.
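As a rough illustration of the idea (not the authors' implementation), the sketch below runs a coordinate exchange in which each coordinate update is guided by a Gaussian process emulator fitted to noisy evaluations of a stand-in "expected loss"; the loss function, grid, and tuning values are all illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): coordinate exchange where
# each coordinate update is guided by a Gaussian-process emulator of a noisy
# loss evaluation. The loss below (a jittered negative D-criterion for a
# one-factor quadratic model) stands in for an intractable expected loss.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

def noisy_expected_loss(design):
    """Cheap stand-in for a Monte Carlo estimate of an expected loss."""
    X = np.column_stack([np.ones(len(design)), design, design ** 2])
    logdet = np.linalg.slogdet(X.T @ X)[1]
    return -logdet + rng.normal(scale=0.05)   # jitter mimics Monte Carlo error

def coordinate_exchange(n_runs=6, n_grid=21, n_passes=3):
    design = rng.uniform(-1, 1, size=(n_runs, 1))       # one-factor design
    grid = np.linspace(-1, 1, n_grid).reshape(-1, 1)
    for _ in range(n_passes):
        for i in range(n_runs):
            # Evaluate the noisy loss at a few trial values of coordinate i ...
            trial_x = rng.uniform(-1, 1, size=(8, 1))
            trial_y = []
            for x in trial_x:
                d = design.copy()
                d[i] = x
                trial_y.append(noisy_expected_loss(d))
            # ... emulate the loss surface with a GP, then exchange coordinate i
            # for the grid point minimising the GP predictive mean.
            gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-2)
            gp.fit(trial_x, trial_y)
            design[i] = grid[np.argmin(gp.predict(grid))]
    return design

print(coordinate_exchange().ravel())
```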

Alyson Wilson
North Carolina State University

"Bayesian Reliability: Combining Information"

One of the most powerful features of Bayesian analyses is the ability to combine multiple sources of information in a principled way to perform inference. This feature can be particularly valuable in assessing the reliability of systems where testing is limited for some reason (e.g., expense, treaty). At their most basic, Bayesian methods for reliability develop informative prior distributions using expert judgment or similar systems. Appropriate models allow the incorporation of many other sources of information, including historical data, information from similar systems, and computer models. I will introduce several of the models and approaches and then consider two extensions/open problems. The first examines how to combine multiple sources of information about components to assess the reliability of a system. The second considers how test planning changes when there is previous relevant information.  I will motivate the discussion using several examples from defense acquisition and lifecycle extension.
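As a minimal illustration of combining information (not from the paper), the sketch below builds an informative Beta prior from hypothetical data on a similar system, down-weights it to reflect partial relevance, and updates it with new pass/fail test results; all counts and the relevance weight are assumptions.

```python
# Minimal sketch (not from the paper): pass/fail reliability with a conjugate
# Beta-Binomial model. The prior is built from hypothetical legacy-system data,
# down-weighted because it is only partially relevant, then updated with new
# test results on the current system.
from scipy.stats import beta

# Hypothetical data from a similar legacy system: 45 successes in 48 trials.
legacy_successes, legacy_trials = 45, 48
relevance = 0.5                       # treat legacy data as half-strength

# Informative prior: Beta(a0, b0), anchored at a weak Beta(1, 1).
a0 = 1 + relevance * legacy_successes
b0 = 1 + relevance * (legacy_trials - legacy_successes)

# Hypothetical test campaign on the current system: 19 successes in 20 trials.
successes, trials = 19, 20
a_post = a0 + successes
b_post = b0 + (trials - successes)

print(f"Prior mean reliability:     {a0 / (a0 + b0):.3f}")
print(f"Posterior mean reliability: {a_post / (a_post + b_post):.3f}")
lo, hi = beta.ppf([0.05, 0.95], a_post, b_post)
print(f"90% credible interval:      ({lo:.3f}, {hi:.3f})")
```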
