Unless stated otherwise, colloquia are scheduled for **Thursdays 4:30-5:30 pm** in LN 2205, with refreshments served from 4:00-4:25 pm in the Anderson Memorial Reading Room.

Here you can find directions to Binghamton University and the Department of Mathematical Sciences.

**Thursday, February 6th, 2014**

**Speaker:** Atul Mallik (University of Michigan)
**Title:** Estimating thresholds and baseline sets using p-values
**Time:** 4:30 - 5:30 pm
**Room:** LN-2205

**Abstract:** We propose a novel approach that uses p-values to identify the threshold level at which a one-dimensional regression function leaves its baseline value, a problem motivated by applications in dose-response studies, environmental statistics and time series. The procedure relies on the dichotomous behavior of p-value type statistics that arise from testing, at each covariate value, the hypothesis that the regression function is at its baseline value. We study the large sample behavior of our estimate in two different sampling settings for constructing confidence intervals, which yields some elegant results. The rate of convergence depends on the smoothness of the regression function in the vicinity of the threshold, while the limit distribution changes from the minimizer of a generalized Poisson process to that of an integrated and transformed Gaussian process across the settings. Further, the multi-dimensional version of the threshold estimation problem has connections to fMRI studies, edge detection and image processing. Interest centers here on estimating a region (equivalently, its complement) where a function is at its baseline level. This region corresponds to the background of an image in image-reconstruction problems. We study this problem under a convex shape constraint in two dimensions and explore its extensions.

**Friday, February 7th, 2014**

**Speaker:** Cynthia Northrup
**Title:** Building a premier Calculus program at Binghamton
**Time:** 4:00 - 5:00 pm
**Room:** LN-2205

**Abstract:** We will first discuss what it means for our students to succeed in mathematics, what the desired student learning outcomes are, and how to centralize the decision making for the calculus track.
We must then assess the current state of the calculus program at Binghamton, determine instructor and teaching assistant preparedness, and establish a baseline for student performance from which to improve. We want to raise the level of instruction and enhance student learning. This begins with improving teacher training, the importance of which is highlighted in Bressoud and Rasmussen's "Seven Characteristics of Successful Calculus Programs". Incoming teaching assistants should have the opportunity to learn the basics before stepping in front of a class for the first time, but should also be inspired to use innovative methods and technology in their classrooms. The Center for Learning and Teaching and the Mathematics Department can work together to provide both TAs and instructors with continued training and support in order to create a classroom environment in which student exploration can thrive.

**Tuesday, February 11th, 2014**

**Speaker:** Dewei Wang (Clemson University)
**Title:** Semiparametric group testing regression models
**Time:** 4:30 - 5:30 pm
**Room:** LN-2205

**Abstract:** Group testing, through the use of pooling, has proven to be an efficient method of reducing the time and cost associated with screening for a binary characteristic of interest, such as infection status. The salient feature of group testing that provides these gains in efficiency is that testing is performed on pooled specimens rather than on specimens one-by-one. Typically, the statistical literature on group testing has investigated the implementation of pooled testing for the purposes of either case identification or estimation. A topic of key interest in the estimation problem is the development of regression models that relate individual-level covariates to testing responses observed from pooled specimens. The research in this area has primarily focused on parametric regression models that make use of data arising from master pool testing.
In particular, these regression techniques are not designed to make use of the additional information gained from decoding pools that initially test positive. In this talk, I will merge the goals of classification and estimation by proposing a general semiparametric framework which allows for the inclusion of multiple covariates, decoding information, and imperfect testing. The asymptotic properties of our estimators will be presented, and guidance on finite sample implementation will be provided. Further, I will illustrate the performance of our methods through simulation and by applying them to chlamydia and gonorrhea data collected by the Nebraska Public Health Laboratory as part of the Infertility Prevention Project.

~~Friday, February 14th, 2014~~ CANCELLED

**Speaker:** Lizhen Lin (Duke University)
**Title:** Shape constrained regression using Gaussian process projections
**Time:** 4:40 - 5:40 pm
**Room:** LN-2205

**Abstract:** Shape constrained regression analysis has applications in dose-response modeling, environmental risk assessment, disease screening and many other areas. Incorporating shape constraints can improve estimation efficiency and avoid implausible results. In this talk, I will discuss nonparametric methods for estimating shape constrained (mainly monotone constrained) regression functions. I will focus on a novel Bayesian method from our recent work for estimating monotone curves and surfaces using Gaussian process projections. Inference is based on projecting posterior samples from the Gaussian process. Theory is developed on the continuity of the projection and on rates of contraction. Our approach leads to simple computation with good performance in finite samples.
The projection approach can be applied to other constrained function estimation problems, including in multivariate settings.

**Thursday, February 20th, 2014**

**Speaker:** Zhongyang Li (University of Cambridge)
**Title:** Critical Parameters of Lattice Models
**Time:** 4:30 - 5:30 pm
**Room:** LN-2205

**Abstract:** A lattice model is a probability measure on configurations of a graph, parameterized by a continuous variable. The critical parameter is the parameter at which the phase transition occurs, i.e., where the macroscopic properties of the lattice model change sharply with respect to the parameter. I will talk about three different lattice models, namely percolation, the Ising model and the self-avoiding walk, as well as recent progress on identifying the exact values of their critical parameters or bounding them. Part of the talk is based on joint work with Geoffrey Grimmett.

**Friday, February 21st, 2014**

**Speaker:** William Kazmierczak
**Title:** Building a premier Calculus program at Binghamton
**Time:** 4:00 - 5:30 pm
**Room:** LN-2205

**Thursday, March 6th, 2014**

**Speaker:** Karl Liechty
**Title:** Determinantal processes and nonintersecting paths
**Time:** 4:30 - 5:30 pm
**Room:** LN-2205

**Abstract:** The first part of the talk will be an introduction to determinantal processes, which are point processes featuring a characteristic repulsion between particles. Determinantal point processes arise naturally in physics to describe configurations of fermions in thermal equilibrium, but their ubiquity outside of this physical setting is quite remarkable.
In the second part of the talk, I will discuss recent work (with Dong Wang) on a model of nonintersecting paths on the circle, in which quite a few universal processes emerge as scaling limits.

**Tuesday, March 11, 2014**

**Speaker:** Christina Dan Wang (Princeton University)
**Title:** The Estimation of Leverage Effect with High Frequency Data
**Time:** 4:30 - 5:30 pm
**Room:** LN-2205

**Abstract:** The leverage effect has become an extensively studied phenomenon that describes the (usually) negative relation between stock returns and their volatility. Although this characteristic of stock returns is well acknowledged, most studies of the phenomenon are based on cross-sectional calibration with parametric models. On the statistical side, most previous work has been conducted over daily or longer return horizons, and few studies have carefully examined its estimation, especially with high frequency data. However, estimation of the leverage effect is important because sensible inference is possible only when the leverage effect is estimated reliably. In this study, we are the first to provide nonparametric estimation for a class of stochastic measures of leverage effect. In order to construct estimators with good statistical properties, we introduce a new stochastic leverage effect parameter. The estimators and their statistical properties are provided in cases both with and without microstructure noise, under the stochastic volatility model. The consistency and limiting distributions of the estimators are derived and corroborated by simulation results. Applications of the estimators are also explored. The estimator provides the opportunity to study high frequency regression, which leads to the prediction of volatility using not only previous volatility but also the leverage effect. An empirical study shows the significant prediction power of the return scaled by the leverage effect.
The estimator also reveals a theoretical connection between skewness and the leverage effect, which further leads to the prediction of skewness. Furthermore, by adopting ideas similar to the estimation of the leverage effect, it is easy to extend the methods to study other important aspects of stock returns, such as the volatility of volatility.

**Thursday, March 13, 2014**

**Speaker:** Ganggang Xu
**Title:** A Bayesian Spatio-Temporal Geostatistical Model with an Auxiliary Lattice for Large Datasets
**Time:** 4:30 - 5:30 pm
**Room:** LN-2205

**Abstract:** When spatio-temporal datasets are large, the aggravated computational burden can often lead to failures in the implementation of traditional geostatistical tools. In this paper, we propose a computationally efficient Bayesian hierarchical spatio-temporal model in which the spatial dependence is approximated by a Gaussian Markov random field, while the temporal correlation is described using a vector autoregressive model. By introducing an auxiliary lattice on the spatial region of interest, the proposed method is not only able to handle irregularly spaced observations in the spatial domain, but is also able to bypass the missing data problem in a spatio-temporal process. Because the computational complexity of the proposed Markov chain Monte Carlo algorithm is of the order $O(n)$, with $n$ being the total number of observations in space and time, our method can handle very large spatio-temporal datasets with reasonable CPU times.

**Friday, March 14, 2014**

**Speaker:** Vladislav Kargin
**Title:** Ihara's graph zeta and random matrices
**Time:** 4:40 - 5:40 pm
**Room:** LN-2205

**Abstract:** In this talk, I will explain the definition of Ihara's graph zeta function and outline how this zeta function is related both to arithmetic zeta functions and to the spectra of random matrices.
Then, I will discuss the statistical properties of the singularities of Ihara's zeta function for a random regular graph.

**Tuesday, March 18, 2014**

**Speaker:** Daniel O'Malley
**Title:** Statistics applied to fracking, human-computer interaction, and groundwater remediation
**Time:** 4:30 - 5:30 pm
**Room:** LN-2205

**Abstract:** Millions of gallons of fracking fluid are pumped into the deep subsurface as part of hydraulic fracturing operations associated with natural gas production in shale formations. A small fraction of the fluid is recovered. What is the fate of the remaining water? A statistical model of fracture networks will be used to shed light on this question. The film Erin Brockovich concerned a plume of hexavalent chromium originating from the cooling towers of a power plant in Hinkley, CA during the 1950s and 1960s. Around the same time, a similar release of hexavalent chromium occurred in Los Alamos, NM. Cleaning up this subsurface contamination is necessary, but fraught with severe uncertainty. A decision framework that combines Bayes' theorem with non-probabilistic uncertainty quantification techniques will be presented. A progress bar is used to indicate the extent to which a computer has completed a task. It is frequently used to show progress while loading a web page, copying files, installing software, etc. Despite its ubiquity, surprisingly little research into its improvement has been carried out. The developments that have occurred are superficial, and some appear to be designed to deceive rather than inform the user. An alternative to the traditional progress bar that embraces uncertainty and employs robust statistical tools will be explored.
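The decision framework in the last abstract builds on Bayes' theorem. As a minimal sketch of the underlying update (not Dr. O'Malley's actual method; the prior, sensitivity, and false-positive rate below are invented for illustration), here is how a prior probability that a site is contaminated changes after a positive sensor reading:

```python
# Minimal Bayes'-theorem sketch: updating the probability that a site is
# contaminated after a positive sensor reading. All numbers are hypothetical.

def posterior(prior, sensitivity, false_positive_rate):
    """P(contaminated | positive reading) via Bayes' theorem."""
    evidence = sensitivity * prior + false_positive_rate * (1.0 - prior)
    return sensitivity * prior / evidence

p = 0.10                      # prior: 10% chance the site is contaminated
p = posterior(p, 0.95, 0.20)  # one positive reading updates the prior
print(round(p, 3))            # posterior rises well above the prior: 0.345
```

Repeating the update with each new reading propagates evidence sequentially; the non-probabilistic uncertainty quantification in the talk addresses the case where inputs such as the prior itself are not known precisely.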