MCMC Estimation for Random Effect Modelling: The MLwiN experience
Dr William J. Browne, School of Mathematical Sciences, University of Nottingham

Contents
- Random effect modelling, MCMC and MLwiN.
- Methods comparison: Guatemalan child health example.
- Extendibility of MCMC algorithms: cross-classified and multiple membership models.
- Artificial insemination and Danish chicken examples.
- Further extensions.

Random effect models
Models that account for the underlying structure in the dataset. Originally developed for nested structures (multilevel models), for example in education, pupils nested within schools. An extension of linear modelling with the inclusion of random effects. A typical 2-level model is

y_{ij} = \beta_0 + \beta_1 x_{ij} + u_j + e_{ij}, \quad u_j \sim N(0, \sigma^2_u), \quad e_{ij} \sim N(0, \sigma^2_e).

Here i indexes pupils and j indexes schools.

MLwiN
Software package designed specifically for fitting multilevel models. Developed by a team led by Harvey Goldstein and Jon Rasbash at the Institute of Education in London over the past 15 years or so. Earlier incarnations: ML2, ML3, MLN. Originally contained classical IGLS estimation methods for fitting models. MLwiN, launched in 1998, also included MCMC estimation. My role in the team was as developer of the MCMC functionality in MLwiN during 4.5 years at the IOE.

Estimation Methods for Multilevel Models
Due to the additional random effects, no simple matrix formulae exist for finding estimates in multilevel models. Two alternative approaches exist:
1. Iterative algorithms, e.g. IGLS, RIGLS, EM in HLM, that alternate between estimating fixed and random effects until convergence. Can produce ML and REML estimates.
2. Simulation-based Bayesian methods, e.g. MCMC, that attempt to draw samples from the posterior distribution of the model.
MCMC Algorithm
Consider the 2-level model

y_{ij} = \beta_0 + \beta_1 x_{ij} + u_j + e_{ij}, \quad u_j \sim N(0, \sigma^2_u), \quad e_{ij} \sim N(0, \sigma^2_e).

MCMC algorithms work in a Bayesian framework and so we need to add prior distributions for the unknown parameters. Here there are 4 sets of unknown parameters: \beta, u, \sigma^2_u and \sigma^2_e. We will add prior distributions p(\beta), p(\sigma^2_u) and p(\sigma^2_e).

MCMC Algorithm (2)
The algorithm for this model then involves simulating in turn from the 4 sets of conditional distributions. Such an algorithm is known as Gibbs sampling. MLwiN uses Gibbs sampling for all normal response models. Firstly we set starting values for each group of unknown parameters, \beta^{(0)}, u^{(0)}, \sigma^{2(0)}_u, \sigma^{2(0)}_e. Then we sample from the following conditional distributions, firstly

p(\beta \mid y, u^{(0)}, \sigma^{2(0)}_u, \sigma^{2(0)}_e)

to get \beta^{(1)}.

MCMC Algorithm (3)
We next sample from p(u \mid y, \beta^{(1)}, \sigma^{2(0)}_u, \sigma^{2(0)}_e) to get u^{(1)}, then from p(\sigma^2_u \mid y, \beta^{(1)}, u^{(1)}, \sigma^{2(0)}_e) to get \sigma^{2(1)}_u, then finally from p(\sigma^2_e \mid y, \beta^{(1)}, u^{(1)}, \sigma^{2(1)}_u) to get \sigma^{2(1)}_e. We have then updated all of the unknowns in the model. The process is then simply repeated many times, each time using the previously generated parameter values to generate the next set.
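As an illustration, the steps above can be written down concretely. The following is a minimal NumPy sketch of such a Gibbs sampler for the 2-level model, assuming a flat prior on the fixed effects and Gamma^{-1}(0.001, 0.001) priors on the two variances; it is in the spirit of, but not identical to, MLwiN's implementation, and the dataset is simulated purely for illustration.

    import numpy as np

    rng = np.random.default_rng(1)

    # Simulate a small 2-level dataset: pupils (i) within schools (j).
    J, n_per = 30, 20
    school = np.repeat(np.arange(J), n_per)
    n = J * n_per
    X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one covariate
    u_true = rng.normal(0, np.sqrt(0.3), J)
    y = X @ np.array([1.0, 0.5]) + u_true[school] + rng.normal(0, 1.0, n)

    # Gibbs sampler: flat prior on beta, Gamma^-1(a, b) priors on the variances.
    a = b = 0.001
    n_iter = 5000
    beta, u, sig2_u, sig2_e = np.zeros(2), np.zeros(J), 1.0, 1.0   # starting values
    chains = {"beta": np.empty((n_iter, 2)), "sig2_u": np.empty(n_iter),
              "sig2_e": np.empty(n_iter)}
    XtX_inv = np.linalg.inv(X.T @ X)

    for t in range(n_iter):
        # Step 1: beta | y, u, sig2_e ~ N((X'X)^-1 X'(y - u), sig2_e (X'X)^-1)
        beta = rng.multivariate_normal(XtX_inv @ X.T @ (y - u[school]), sig2_e * XtX_inv)
        # Step 2: u_j | y, beta, sig2_u, sig2_e (independent normals, one per school)
        r = y - X @ beta
        prec = np.bincount(school, minlength=J) / sig2_e + 1.0 / sig2_u
        mean = np.bincount(school, weights=r, minlength=J) / sig2_e / prec
        u = rng.normal(mean, np.sqrt(1.0 / prec))
        # Step 3: sig2_u | u ~ Gamma^-1(a + J/2, b + sum(u^2)/2)
        sig2_u = 1.0 / rng.gamma(a + J / 2, 1.0 / (b + 0.5 * u @ u))
        # Step 4: sig2_e | y, beta, u ~ Gamma^-1(a + n/2, b + sum(e^2)/2)
        e = y - X @ beta - u[school]
        sig2_e = 1.0 / rng.gamma(a + n / 2, 1.0 / (b + 0.5 * e @ e))
        chains["beta"][t], chains["sig2_u"][t], chains["sig2_e"][t] = beta, sig2_u, sig2_e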
10、1()1(2euuyp2)1(u),|(2)1()1()1(2ueuyp2)1(eBurn-in and estimatesBurn-in: It is general practice to throw away the first n values to allow the Markov chain to approach its equilibrium distribution namely the joint posterior distribution of interest. These iterations are known as the burn-in.Finding Est
11、imates: We continue generating values at the end of the burn-in for another m iterations. These m values are then average to give point estimates of the parameter of interest. Posterior standard deviations and other summary measures can also be obtained from the chains.Methods for non-normal respons
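As a small standalone illustration of the burn-in and summarising steps: the AR(1) "chain" below is just a stand-in for real sampler output (such as the chains collected in the sketch above), and the burn-in length of 500 is arbitrary, not an MLwiN default.

    import numpy as np

    rng = np.random.default_rng(2)

    # Toy chain standing in for one parameter's MCMC output: it starts far from
    # its target and drifts in, which is why an initial burn-in is discarded.
    n_iter = 5000
    chain = np.empty(n_iter)
    chain[0] = 10.0                           # deliberately poor starting value
    for t in range(1, n_iter):
        chain[t] = 0.95 * chain[t - 1] + rng.normal(scale=0.3)

    burn_in = 500                             # discard the first n values
    kept = chain[burn_in:]                    # remaining m values

    post_mean = kept.mean()                   # point estimate
    post_sd = kept.std(ddof=1)                # posterior standard deviation
    ci_95 = np.percentile(kept, [2.5, 97.5])  # central 95% credible interval
    print(f"mean={post_mean:.3f} sd={post_sd:.3f} 95% CI=({ci_95[0]:.3f}, {ci_95[1]:.3f})")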
Methods for non-normal responses
When the response variable is Binomial or Poisson then different algorithms are required. IGLS/RIGLS methods give quasi-likelihood estimates, e.g. MQL, PQL. MCMC algorithms including Metropolis-Hastings sampling and adaptive rejection sampling are possible. Numerical quadrature can give ML estimates but is not without problems.

So why use MCMC?
Often gives better estimates for non-normal responses. Gives the full posterior distribution, so interval estimates for derived quantities are easy to produce. Can easily be extended to more complex problems. Potential downside 1: prior distributions are required for all unknown parameters. Potential downside 2: MCMC estimation is much slower than the IGLS algorithm.
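To give a flavour of the Metropolis-Hastings option mentioned above, here is a minimal sketch of single-site random-walk Metropolis updates for the random effects of a random-intercept logistic model. It is a simplification, not MLwiN's hybrid Metropolis-Gibbs algorithm: the proposal standard deviation is fixed rather than adapted, the fixed effects and the variance are held at assumed values, and the data are simulated.

    import numpy as np

    rng = np.random.default_rng(3)

    def log_cond_u(u_j, y_j, eta_fixed_j, sig2_u):
        """Log full conditional of one random effect u_j in a random-intercept
        logistic model (up to an additive constant)."""
        eta = eta_fixed_j + u_j                         # linear predictor for group j
        loglik = np.sum(y_j * eta - np.log1p(np.exp(eta)))
        logprior = -0.5 * u_j ** 2 / sig2_u
        return loglik + logprior

    def mh_update_u(u, y, eta_fixed, group, sig2_u, prop_sd=0.5):
        """One sweep of random-walk Metropolis updates, one group at a time."""
        u_new = u.copy()
        for j in range(len(u)):
            idx = group == j
            prop = u_new[j] + rng.normal(scale=prop_sd)  # random-walk proposal
            log_acc = (log_cond_u(prop, y[idx], eta_fixed[idx], sig2_u)
                       - log_cond_u(u_new[j], y[idx], eta_fixed[idx], sig2_u))
            if np.log(rng.uniform()) < log_acc:          # accept or reject
                u_new[j] = prop
        return u_new

    # Tiny illustration: 10 groups of 25 Bernoulli responses, fixed part held at truth.
    J, n_per = 10, 25
    group = np.repeat(np.arange(J), n_per)
    x = rng.normal(size=J * n_per)
    beta0, beta1, sig2_u = -0.5, 1.0, 0.5
    u_true = rng.normal(0, np.sqrt(sig2_u), J)
    eta_fixed = beta0 + beta1 * x
    y = rng.binomial(1, 1 / (1 + np.exp(-(eta_fixed + u_true[group]))))
    u = np.zeros(J)
    for _ in range(1000):
        u = mh_update_u(u, y, eta_fixed, group, sig2_u)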
The Guatemalan Child Health dataset
This consists of a subsample of 2,449 respondents from the 1987 National Survey of Maternal and Child Health, with a 3-level structure of births within mothers within communities. The subsample consists of all women from the chosen communities who had some form of prenatal care during pregnancy. The response variable is whether this prenatal care was modern (physician or trained nurse) or not.

Rodriguez and Goldman (1995) use the structure of this dataset to consider how well quasi-likelihood methods compare with considering the dataset without the multilevel structure and fitting a standard logistic regression. They perform this comparison by constructing simulated datasets based on the original structure but with known true values for the fixed effects and variance parameters. They consider the MQL method and show that the estimates of the fixed effects produced by MQL are worse than the estimates produced by standard logistic regression disregarding the multilevel structure!
The Guatemalan Child Health dataset
Goldstein and Rasbash (1996) consider the same problem but use the PQL method. They show that the results produced by 2nd order PQL estimation are far better than for MQL but still biased. The model in this situation is

y_{ijk} \sim \text{Bernoulli}(p_{ijk}) \text{ with } \text{logit}(p_{ijk}) = \beta_0 + \beta_1 x_{1ijk} + \beta_2 x_{2jk} + \beta_3 x_{3k} + u_{jk} + v_k,
\text{where } u_{jk} \sim N(0, \sigma^2_u) \text{ and } v_k \sim N(0, \sigma^2_v).

In this formulation i, j and k index the level 1, 2 and 3 units respectively. The variables x1, x2 and x3 are composite scales at each level, because the original model contained many covariates at each level. Browne and Draper (2004) considered the hybrid Metropolis-Gibbs method in MLwiN and two possible variance priors: Gamma^{-1}(\epsilon, \epsilon) and Uniform.
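As a sketch of how such a simulation study proceeds, the code below draws one simulated dataset from the 3-level logistic model above using the true parameter values. The cluster sizes and composite covariates here are random stand-ins, whereas Rodriguez and Goldman reused the observed Guatemalan structure and covariates.

    import numpy as np

    rng = np.random.default_rng(4)

    # True values used in the simulation study.
    beta = np.array([0.65, 1.0, 1.0, 1.0])          # beta_0 .. beta_3
    sig2_u, sig2_v = 1.0, 1.0                       # mother and community variances

    # Stand-in cluster structure (the real study reused the observed Guatemalan
    # structure; these sizes are random placeholders).
    K = 161                                         # number of communities (illustrative)
    mothers_per_comm = rng.integers(5, 26, size=K)
    J = mothers_per_comm.sum()
    births_per_mother = rng.integers(1, 4, size=J)
    n = births_per_mother.sum()
    mother = np.repeat(np.arange(J), births_per_mother)
    community = np.repeat(np.repeat(np.arange(K), mothers_per_comm), births_per_mother)

    # Stand-in composite covariates at the birth, mother and community levels.
    x1 = rng.normal(size=n)
    x2 = rng.normal(size=J)[mother]
    x3 = rng.normal(size=K)[community]

    # Draw random effects and binary responses from the model with known truth.
    u = rng.normal(0, np.sqrt(sig2_u), J)
    v = rng.normal(0, np.sqrt(sig2_v), K)
    eta = beta[0] + beta[1] * x1 + beta[2] * x2 + beta[3] * x3 + u[mother] + v[community]
    y = rng.binomial(1, 1 / (1 + np.exp(-eta)))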
Simulation Results
The following table gives point estimates (MCSE) for the 4 methods over 500 simulated datasets.

Parameter (True)   | MQL1         | PQL2         | Gamma        | Uniform
\beta_0 (0.65)     | 0.474 (0.01) | 0.612 (0.01) | 0.638 (0.01) | 0.655 (0.01)
\beta_1 (1.00)     | 0.741 (0.01) | 0.945 (0.01) | 0.991 (0.01) | 1.015 (0.01)
\beta_2 (1.00)     | 0.753 (0.01) | 0.958 (0.01) | 1.006 (0.01) | 1.031 (0.01)
\beta_3 (1.00)     | 0.727 (0.01) | 0.942 (0.01) | 0.982 (0.01) | 1.007 (0.01)
\sigma^2_v (1.00)  | 0.550 (0.01) | 0.888 (0.01) | 1.023 (0.01) | 1.108 (0.01)
\sigma^2_u (1.00)  | 0.026 (0.01) | 0.568 (0.01) | 0.964 (0.02) | 1.130 (0.02)

Simulation Results
The following table gives interval coverage probabilities (nominal 90%/95%) for the 4 methods over 500 simulated datasets.

Parameter (True)   | MQL1      | PQL2      | Gamma     | Uniform
\beta_0 (0.65)     | 67.6/76.8 | 86.2/92.0 | 86.8/93.2 | 88.6/93.6
\beta_1 (1.00)     | 56.2/68.6 | 90.4/96.2 | 92.8/96.4 | 92.2/96.4
\beta_2 (1.00)     | 13.2/17.6 | 84.6/90.8 | 88.4/92.6 | 88.6/92.8
\beta_3 (1.00)     | 59.0/69.6 | 85.2/89.8 | 86.2/92.2 | 88.6/93.6
\sigma^2_v (1.00)  | 0.6/2.4   | 70.2/77.6 | 89.4/94.4 | 87.8/92.2
\sigma^2_u (1.00)  | 0.0/0.0   | 21.2/26.8 | 84.2/88.6 | 88.0/93.0

Summary of simulations
The Bayesian approach yields excellent bias and coverage results. For the fixed effects, MQL performs badly but the other 3 methods all do well. For the random effect variances, MQL and PQL both perform badly but MCMC with either prior does much better. Note that this is an extreme scenario, with few level 1 units per level 2 unit yet a high level 2 variance; in other examples MQL/PQL will not be so bad.
Extension 1: Cross-classified models
For example, schools by neighbourhoods. Schools will draw pupils from many different neighbourhoods and the pupils of a neighbourhood will go to several schools. No pure hierarchy can be found and pupils are said to be contained within a cross-classification of schools by neighbourhoods (one x per pupil):

         | nbhd 1 | nbhd 2 | nbhd 3
School 1 |  xx    |  x     |
School 2 |  x     |  xx    |
School 3 |        |  x     |  x
School 4 |        |  x     |  xxx

[Unit diagram: pupils P1-P12 grouped into schools S1-S4, with lines linking each pupil to one of neighbourhoods N1-N3.]

Notation
With hierarchical models we use a subscript notation that has one subscript per level, and nesting is implied reading from the left. For example, the subscript pattern ijk denotes the ith level 1 unit within the jth level 2 unit within the kth level 3 unit. If models become cross-classified we use the term classification instead of level. With notation that has one subscript per classification, so that it captures the relationships between classifications, the notation can become very cumbersome. We propose an alternative notation, introduced in Browne et al. (2001), that only has a single subscript no matter how many classifications are in the model.
Single subscript notation

  i   nbhd(i)  sch(i)
  1      1       1
  2      2       1
  3      1       1
  4      2       2
  5      1       2
  6      2       2
  7      2       3
  8      3       3
  9      3       4
 10      2       4
 11      3       4
 12      3       4

We write the model as

y_i = \beta_0 + u^{(2)}_{nbhd(i)} + u^{(3)}_{sch(i)} + e_i,   (1)

where classification 2 is neighbourhood and classification 3 is school. Classification 1 always corresponds to the classification at which the response measurements are made, in this case pupils. For pupils 1 and 11, equation (1) becomes:

y_1 = \beta_0 + u^{(2)}_1 + u^{(3)}_1 + e_1,
y_{11} = \beta_0 + u^{(2)}_3 + u^{(3)}_4 + e_{11}.
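In code, the single subscript notation maps directly onto index vectors, one per classification. A small sketch for the twelve pupils above (the random effect and residual values are arbitrary; only the indexing is taken from the table):

    import numpy as np

    # Classification index vectors for the 12 pupils (from the table above);
    # indices are shifted to 0-based, so pupil 1 has nbhd(1)=1, sch(1)=1, etc.
    nbhd = np.array([1, 2, 1, 2, 1, 2, 2, 3, 3, 2, 3, 3]) - 1
    sch  = np.array([1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 4, 4]) - 1

    beta0 = 0.0
    u2 = np.array([0.3, -0.1, 0.2])          # one effect per neighbourhood (classification 2)
    u3 = np.array([0.5, -0.4, 0.1, -0.2])    # one effect per school (classification 3)
    e  = np.zeros(12)                        # level 1 residuals (arbitrary here)

    # Equation (1): y_i = beta0 + u2[nbhd(i)] + u3[sch(i)] + e_i
    y = beta0 + u2[nbhd] + u3[sch] + e
    # e.g. pupil 1 picks up u2[0] and u3[0]; pupil 11 picks up u2[2] and u3[3].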
Classification diagrams
[Diagram: Neighbourhood -> School -> Pupil] Nested structure where schools are contained within neighbourhoods.
[Diagram: Neighbourhood -> Pupil <- School, with no link between School and Neighbourhood] Cross-classified structure where pupils from a school come from many neighbourhoods and pupils from a neighbourhood attend several schools.
In the single subscript notation we lose information about the relationship (crossed or nested) between classifications. A useful way of conveying this information is with the classification diagram, which has one node per classification; nodes linked by arrows have a nested relationship and unlinked nodes have a crossed relationship.

Example: Artificial insemination by donor
1901 women, 279 donors, 1328 donations, 12100 ovulatory cycles; the response is whether conception occurs in a given cycle.

In terms of a unit diagram:
Women      w1           w2           w3
Cycles     c1 c2 c3 c4  c1 c2 c3 c4  c1 c2 c3 c4
Donations  d1 d2        d1 d2 d3     d1 d2
Donors     m1           m2           m3

Or as a classification diagram: [Donor -> Donation -> Cycle <- Woman], i.e. donations are nested within donors, cycles are the response classification, and women are crossed with donations and donors.
Model for artificial insemination data
We can write the model as

y_i \sim \text{Binomial}(1, \pi_i),
\text{logit}(\pi_i) = X_i\beta + u^{(2)}_{woman(i)} + u^{(3)}_{donation(i)} + u^{(4)}_{donor(i)},
u^{(2)}_{woman(i)} \sim N(0, \sigma^2_{u(2)}), \quad u^{(3)}_{donation(i)} \sim N(0, \sigma^2_{u(3)}), \quad u^{(4)}_{donor(i)} \sim N(0, \sigma^2_{u(4)}).

Results:

Parameter        | Description             | Estimate (se)
\beta_0          | intercept               | -4.04 (2.30)
\beta_1          | azoospermia *           |  0.22 (0.11)
\beta_2          | semen quality           |  0.19 (0.03)
\beta_3          | women's age > 35        | -0.30 (0.14)
\beta_4          | sperm count             |  0.20 (0.07)
\beta_5          | sperm motility          |  0.02 (0.06)
\beta_6          | insemination too early  | -0.72 (0.19)
\beta_7          | insemination too late   | -0.27 (0.10)
\sigma^2_{u(2)}  | women variance          |  1.02 (0.21)
\sigma^2_{u(3)}  | donation variance       |  0.644 (0.21)
\sigma^2_{u(4)}  | donor variance          |  0.338 (0.07)

Note that cross-classified models can be fitted in IGLS but are far easier to fit using MCMC estimation.
Extension 2: Multiple membership models
When level 1 units are members of more than one higher level unit we describe a model for such data as a multiple membership model. For example, pupils change schools/classes and each school/class has an effect on pupil outcomes, or patients are seen by more than one nurse during the course of their treatment.

Notation

y_i = X_i\beta + \sum_{j \in nurse(i)} w^{(2)}_{i,j} u^{(2)}_j + e_i,   (2)
u^{(2)}_j \sim N(0, \sigma^2_{u(2)}), \quad e_i \sim N(0, \sigma^2_e).

Note that nurse(i) now indexes the set of nurses that treat patient i and w^{(2)}_{i,j} is a weighting factor relating patient i to nurse j. For example, with four patients and three nurses, we may have the following weights:

          | n1 (j=1) | n2 (j=2) | n3 (j=3)
p1 (i=1)  |   0.5    |   0      |   0.5
p2 (i=2)  |   1      |   0      |   0
p3 (i=3)  |   0      |   0.5    |   0.5
p4 (i=4)  |   0.5    |   0.5    |   0

Here patient 1 was seen by nurses 1 and 3 but not nurse 2, and so on. If we substitute the values of w^{(2)}_{i,j}, i and j from the table into (2) we get the series of equations:

y_1 = (X\beta)_1 + 0.5 u^{(2)}_1 + 0.5 u^{(2)}_3 + e_1
y_2 = (X\beta)_2 + u^{(2)}_1 + e_2
y_3 = (X\beta)_3 + 0.5 u^{(2)}_2 + 0.5 u^{(2)}_3 + e_3
y_4 = (X\beta)_4 + 0.5 u^{(2)}_1 + 0.5 u^{(2)}_2 + e_4
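The weighted sum over a patient's set of nurses is simply a matrix-vector product. A minimal sketch using the four-patient, three-nurse weight table above (the fixed part and the nurse effects are arbitrary illustrative numbers):

    import numpy as np

    # Weight matrix W: rows = patients, columns = nurses (values from the table above).
    W = np.array([[0.5, 0.0, 0.5],
                  [1.0, 0.0, 0.0],
                  [0.0, 0.5, 0.5],
                  [0.5, 0.5, 0.0]])

    Xbeta = np.array([0.2, -0.1, 0.0, 0.3])   # illustrative fixed part (X*beta)_i
    u2 = np.array([0.4, -0.3, 0.1])           # one random effect per nurse
    e  = np.zeros(4)                          # patient-level residuals (arbitrary)

    # Equation (2): y_i = (X*beta)_i + sum_j w_{i,j} u_j^{(2)} + e_i
    y = Xbeta + W @ u2 + e
    # Row 1 reproduces y_1 = (X*beta)_1 + 0.5*u_1 + 0.5*u_3 + e_1, and so on.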
Classification diagrams for multiple membership relationships
Double arrows indicate a multiple membership relationship between classifications.
[Diagram: nurse => patient]
We can mix multiple membership, crossed and hierarchical structures in a single model.
[Diagram: hospital -> nurse => patient; GP practice -> patient]
Here patients are multiple members of nurses, nurses are nested within hospitals, and GP practice is crossed with both nurse and hospital.

Example involving nesting, crossing and multiple membership: Danish chickens
Production hierarchy: 10,127 child flocks within 725 houses within 304 farms.
Breeding hierarchy: 10,127 child flocks from 200 parent flocks.

As a unit diagram:
Farm          f1                    f2
Houses        h1        h2          h1        h2
Child flocks  c1 c2 c3  c1 c2 c3    c1 c2 c3  c1 c2 c3
Parent flock  p1 p2 p3 p4 p5

As a classification diagram: [Farm -> House -> Child flock, with Parent flock => Child flock (multiple membership)].

Model and results

y_i \sim \text{Binomial}(1, \pi_i),
\text{logit}(\pi_i) = X_i\beta + \sum_{j \in pflock(i)} w^{(2)}_{i,j} u^{(2)}_j + u^{(3)}_{house(i)} + u^{(4)}_{farm(i)},
u^{(2)}_j \sim N(0, \sigma^2_{u(2)}), \quad u^{(3)}_{house(i)} \sim N(0, \sigma^2_{u(3)}), \quad u^{(4)}_{farm(i)} \sim N(0, \sigma^2_{u(4)}).

Results:

Parameter        | Description           | Estimate (se)
\beta_0          | intercept             | -2.322 (0.213)
\beta_1          | 1996                  | -1.239 (0.162)
\beta_2          | 1997                  | -1.165 (0.187)
\beta_3          | hatchery 2            | -1.733 (0.255)
\beta_4          | hatchery 3            | -0.211 (0.252)
\beta_5          | hatchery 4            | -1.062 (0.388)
\sigma^2_{u(2)}  | parent flock variance |  0.895 (0.179)
\sigma^2_{u(3)}  | house variance        |  0.208 (0.108)
\sigma^2_{u(4)}  | farm variance         |  0.927 (0.197)

Note that multiple membership models can be fitted in IGLS, and this model/dataset represents roughly the most complex model that the method can handle. Such models are far easier to fit using MCMC estimation.
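All three kinds of structure meet in this linear predictor: a multiple membership term for the parent flocks plus ordinary nested terms for house and farm. The following is a hedged sketch of how such a predictor could be assembled for a handful of child flocks; every index, weight and effect value below is invented for illustration and is not the Danish data.

    import numpy as np

    n_flocks = 5                                        # child flocks (illustrative)
    # Multiple membership weights: rows = child flocks, columns = parent flocks.
    W_parent = np.array([[0.5, 0.5, 0.0, 0.0],
                         [0.0, 1.0, 0.0, 0.0],
                         [0.0, 0.5, 0.5, 0.0],
                         [0.0, 0.0, 0.5, 0.5],
                         [1.0, 0.0, 0.0, 0.0]])         # each row sums to 1
    house = np.array([0, 0, 1, 1, 2])                   # house(i): nested classification
    farm  = np.array([0, 0, 0, 1, 1])                   # farm(i): houses nested in farms

    Xbeta    = np.full(n_flocks, -2.3)                  # illustrative fixed part
    u_parent = np.array([0.2, -0.1, 0.3, 0.0])          # parent flock effects (classification 2)
    u_house  = np.array([0.1, -0.2, 0.05])              # house effects (classification 3)
    u_farm   = np.array([-0.3, 0.4])                    # farm effects (classification 4)

    # logit(pi_i) = X_i*beta + sum_j w_{i,j} u_j^(2) + u_house(i)^(3) + u_farm(i)^(4)
    logit_pi = Xbeta + W_parent @ u_parent + u_house[house] + u_farm[farm]
    pi = 1 / (1 + np.exp(-logit_pi))                    # response probabilities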
Further Extensions / Work in progress
1. Multilevel factor models
2. Response variables at different levels
3. Missing data and multiple imputation
4. ESRC grant: sample size calculations, MCMC efficiency & model identifiability
5. Wellcome Fellowship grant for Martin Green

Multilevel factor analysis modelling
In sample surveys there are often many responses for each individual. Techniques like factor analysis are often used to identify underlying latent traits amongst these responses. Multilevel factor analysis allows factor analysis modelling to identify factors at various levels/classifications in the dataset, so we can identify shared latent traits as well as individual-level traits. Due to the nature of MCMC algorithms, by adding a step to allow for multilevel factor models in MLwiN, cross-classified models can also be fitted without any additional programming! See Goldstein and Browne (2002, 2005) for more detail.
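As a sketch of the kind of model meant here, one common two-level, one-factor-per-level formulation is (the exact specification used in Goldstein and Browne (2002, 2005) may differ):

y_{rij} = \beta_r + \lambda^{(2)}_r \eta^{(2)}_j + \lambda^{(1)}_r \eta^{(1)}_{ij} + e_{rij},
\eta^{(2)}_j \sim N(0, 1), \quad \eta^{(1)}_{ij} \sim N(0, 1), \quad e_{rij} \sim N(0, \sigma^2_{er}),

where r indexes the responses on individual i in group j, \eta^{(1)} and \eta^{(2)} are the individual-level and group-level latent factors, and the \lambda terms are the loadings.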
Responses at different levels
In a medical survey some responses may refer to patients in a hospital while others may refer to the hospital itself. Models that combine these responses can be fitted using the IGLS algorithm in MLwiN and shouldn't pose any problems to MCMC estimation. The Centre for Multilevel Modelling in Bristol is investigating such models as part of its LEMMA node in the ESRC research methods programme. I am a named collaborator for the LEMMA project. They are also looking at MCMC algorithms for latent growth models.
Missing data and multiple imputation
Missing data is prevalent in survey research. One popular approach to dealing with missing data is multiple imputation (Rubin, 1987), where several imputed datasets are created, the model of interest is fitted to each dataset, and the estimates are combined. Using a multivariate normal response multilevel model to generate the imputations using MCMC in MLwiN is described in chapter 17 of Browne (2003). James Carpenter (LSHTM) has begun work on macros in MLwiN that automate the multiple imputation procedure.
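The combination step follows Rubin's rules: average the point estimates and combine the within- and between-imputation variability. A minimal sketch (the numbers are placeholders for the estimates and squared standard errors obtained from m fitted models):

    import numpy as np

    # Estimates of one parameter and their squared standard errors from m = 5
    # imputed datasets (placeholder values).
    est = np.array([0.52, 0.47, 0.55, 0.49, 0.51])
    var = np.array([0.010, 0.011, 0.009, 0.012, 0.010])   # within-imputation variances
    m = len(est)

    q_bar = est.mean()                     # combined point estimate
    w_bar = var.mean()                     # average within-imputation variance
    b = est.var(ddof=1)                    # between-imputation variance
    total_var = w_bar + (1 + 1 / m) * b    # Rubin (1987) total variance
    print(f"estimate = {q_bar:.3f}, se = {np.sqrt(total_var):.3f}")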
Sample size calculations
Another issue in data collection is how big a sample we need to collect. Such sample size calculations have simple formulae if we can assume that an independent sample can be generated. If, however, we wish to account for the data structure in the calculation then things are more complex. One possibility is a simulation-based approach similar to that used in the mode