Semiparametric dynamic time series modelling with applications to detecting neural dynamics
Rigat, Fabio and Smith, J. Q. (2007) Semiparametric dynamic time series modelling with applications to detecting neural dynamics. Working Paper. Coventry: University of Warwick. Centre for Research in Statistical Methodology. (Working papers).

PDF: WRAP_Rigat_0707v3.pdf (Published Version), Download (1272Kb)
Official URL: http://www2.warwick.ac.uk/fac/sci/statistics/crism...
Abstract
This paper illustrates the theory and applications of a methodology for nonstationary time series modelling which combines sequential parametric Bayesian estimation with nonparametric change-point testing. A novel Kullback-Leibler divergence between posterior distributions arising from different sets of data is proposed as a nonparametric test statistic. A closed-form expression of this test statistic is derived for exponential family models, whereas Markov chain Monte Carlo simulation is used in general to approximate its value and that of its critical region. The effects of detecting a change-point using our method are assessed analytically for the one-step-ahead predictive distribution of a linear dynamic Gaussian time series model. Conditions under which our approach reduces to fully parametric state-space modelling are illustrated. The method is applied to estimating the functional dynamics of a wide range of neural data, including multichannel electroencephalogram recordings, the learning performance in longitudinal behavioural experiments and in-vivo multiple spike trains. The estimated dynamics are related to the presentation of visual stimuli, to the generation of motor responses and to variations of the functional connections between neurons across different experiments.
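The closed-form Kullback-Leibler statistic described in the abstract can be illustrated for the simplest exponential family case. The sketch below is an assumption-laden toy, not the authors' estimator: it uses a conjugate normal model for an unknown mean with known observation variance, computes the posterior from an initial data window and from the extended data, and takes the closed-form KL divergence between the two Gaussian posteriors as a change-point statistic. All function names, priors, and thresholds are hypothetical choices for illustration.

```python
import math

def kl_gaussian(mu_p, var_p, mu_q, var_q):
    """Closed-form KL divergence KL(p || q) between univariate Gaussians."""
    return 0.5 * (math.log(var_q / var_p)
                  + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

def posterior_params(data, prior_mean=0.0, prior_var=100.0, obs_var=1.0):
    """Conjugate normal posterior for the mean, with known observation variance."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    post_mean = post_var * (prior_mean / prior_var + sum(data) / obs_var)
    return post_mean, post_var

# Two data windows; the level shift in the second suggests a change-point.
window1 = [0.1, -0.2, 0.05, 0.3, -0.1]
window2 = [2.1, 1.8, 2.3, 1.9, 2.2]

p = posterior_params(window1)                 # posterior from the first window
q = posterior_params(window1 + window2)       # posterior after both windows
stat = kl_gaussian(*p, *q)
print(f"KL change-point statistic: {stat:.3f}")
```

A large divergence between the two posteriors signals that the new data are inconsistent with the previously estimated regime; in the paper's general (non-conjugate) setting this statistic and its critical region would instead be approximated by MCMC.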
Item Type:  Working or Discussion Paper (Working Paper) 

Subjects:  Q Science > QA Mathematics 
Divisions:  Faculty of Science > Statistics 
Library of Congress Subject Headings (LCSH):  Time-series analysis, Change-point problems, Neurons -- Mathematical models 
Series Name:  Working papers 
Publisher:  University of Warwick. Centre for Research in Statistical Methodology 
Place of Publication:  Coventry 
Date:  2007 
Volume:  Vol. 2007 
Number:  No. 7 
Number of Pages:  32 
Status:  Not Peer Reviewed 
Access rights to Published version:  Open Access 
Funder:  University of Warwick. Centre for Research in Statistical Methodology 
URI:  http://wrap.warwick.ac.uk/id/eprint/35539 