Part A Probability is recommended for this course, but is not essential. If you are not taking Part A Probability then you should make sure that you are familiar with the Prelims work on Probability, and you may also need to familiarise yourself with a couple of lectures' worth of material from Part A Probability. If you are interested in taking courses involving statistics in Parts B or C, it is strongly advisable to take Part A Probability.
Building on the first year course, this course develops statistics for mathematicians, emphasising both its underlying mathematical structure and its application to the logical interpretation of scientific data. Advances in theoretical statistics are generally driven by the need to analyse new and interesting data which come from all walks of life.
Neil Laws
At the end of the course students should have an understanding of: the use of probability plots to investigate plausible probability models for a set of data; maximum likelihood estimation and large sample properties of maximum likelihood estimators; hypothesis tests and confidence intervals (and the relationship between them). They should have a corresponding understanding of similar concepts in Bayesian inference.
Order statistics, probability plots.
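As an informal illustration (not part of the synopsis itself), a minimal Python sketch of a normal probability plot built from the order statistics of a simulated dataset; the simulated data and the plotting positions (i - 0.5)/n are assumptions made only for the example:

    import numpy as np
    from scipy import stats

    # Simulated data standing in for a real dataset (illustrative only).
    rng = np.random.default_rng(0)
    x = rng.normal(loc=10.0, scale=2.0, size=50)

    # Order statistics: sort the sample.
    x_sorted = np.sort(x)
    n = len(x_sorted)

    # Plotting positions (i - 0.5)/n and the corresponding normal quantiles.
    p = (np.arange(1, n + 1) - 0.5) / n
    q = stats.norm.ppf(p)

    # If the normal model is plausible, the points (q, x_sorted) should lie
    # close to a line with intercept near the mean and slope near the sd.
    slope, intercept = np.polyfit(q, x_sorted, 1)
    print(f"slope (sd) approx {slope:.2f}, intercept (mean) approx {intercept:.2f}")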
Estimation: observed and expected information, statement of large sample properties of maximum likelihood estimators in the regular case, methods for calculating maximum likelihood estimates, large sample distribution of sample estimators using the delta method.
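As an illustrative sketch of these ideas in a simple regular case, the following Python fragment computes the maximum likelihood estimate of an exponential rate, its large-sample standard error from the observed information, and a delta-method standard error for the mean 1/lambda; the data are simulated and the exponential model is an assumption made only for the example:

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.exponential(scale=2.0, size=100)   # simulated data, true rate 0.5
    n = len(x)

    # MLE of the rate: lambda_hat = 1 / sample mean (closed form in this model).
    lam_hat = 1.0 / x.mean()

    # Observed information at the MLE: -d^2 log L / d lambda^2 = n / lambda^2,
    # giving the usual large-sample standard error.
    obs_info = n / lam_hat**2
    se_lam = 1.0 / np.sqrt(obs_info)

    # Delta method for the mean mu = g(lambda) = 1/lambda:
    # se(mu_hat) approx |g'(lambda_hat)| * se(lambda_hat) = se_lam / lambda_hat^2.
    mu_hat = 1.0 / lam_hat
    se_mu = se_lam / lam_hat**2

    print(f"lambda_hat = {lam_hat:.3f} (s.e. {se_lam:.3f})")
    print(f"mu_hat     = {mu_hat:.3f} (s.e. {se_mu:.3f})")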
Hypothesis testing: simple and composite hypotheses, size, power and p-values, Neyman-Pearson lemma, distribution theory for testing means and variances in the normal model, generalized likelihood ratio, statement of its large sample distribution under the null hypothesis, analysis of count data.
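A minimal sketch, with hypothetical counts, of the generalized likelihood ratio statistic for count data and its large-sample chi-squared approximation under the null hypothesis of equal category probabilities:

    import numpy as np
    from scipy import stats

    # Hypothetical observed counts in 4 categories (illustrative only).
    observed = np.array([18, 25, 30, 27])
    n = observed.sum()
    expected = np.full(4, n / 4)               # expected counts under H0

    # GLR statistic 2 * sum O log(O/E); under H0 it is approximately
    # chi-squared with (4 - 1) degrees of freedom in large samples.
    glr = 2 * np.sum(observed * np.log(observed / expected))
    p_value = stats.chi2.sf(glr, df=3)
    print(f"GLR = {glr:.3f}, p-value = {p_value:.3f}")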
Confidence intervals: exact intervals, approximate intervals using large sample theory, relationship to hypothesis testing.
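A short illustrative sketch, using simulated and hypothetical data, of an exact 95% interval in the normal model and an approximate large-sample interval for a binomial proportion:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    x = rng.normal(5.0, 1.5, size=20)          # simulated normal sample
    n = len(x)
    xbar, s = x.mean(), x.std(ddof=1)

    # Exact 95% interval for the mean of a normal sample (t distribution).
    t = stats.t.ppf(0.975, df=n - 1)
    exact_ci = (xbar - t * s / np.sqrt(n), xbar + t * s / np.sqrt(n))

    # Approximate 95% interval for a proportion, using the normal approximation.
    successes, m = 42, 100                     # hypothetical counts
    p_hat = successes / m
    z = stats.norm.ppf(0.975)
    half_width = z * np.sqrt(p_hat * (1 - p_hat) / m)
    approx_ci = (p_hat - half_width, p_hat + half_width)

    print(exact_ci, approx_ci)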
Probability and Bayesian Inference. Posterior and prior probability densities. Constructing priors including conjugate priors, subjective priors, Jeffreys priors. Bayes estimators and credible intervals. Statement of asymptotic normality of the posterior. Model choice via posterior probabilities and Bayes factors.
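A minimal sketch of conjugate Bayesian updating, assuming a Beta prior for a binomial proportion; the prior parameters and the counts are hypothetical, chosen only for illustration:

    import numpy as np
    from scipy import stats

    # Beta(a, b) prior + Binomial(m, theta) data -> Beta(a + y, b + m - y) posterior.
    a, b = 1.0, 1.0                   # uniform conjugate prior (illustrative choice)
    y, m = 42, 100                    # hypothetical successes out of m trials

    post = stats.beta(a + y, b + m - y)

    # Posterior mean as a Bayes estimator under squared error loss, and a 95%
    # equal-tailed credible interval from the posterior quantiles.
    post_mean = post.mean()
    credible = post.ppf([0.025, 0.975])
    print(f"posterior mean = {post_mean:.3f}, 95% credible interval = {credible.round(3)}")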
Examples: statistical techniques will be illustrated with relevant datasets in the lectures.