## Content

### Review of the theory in inference for means

The theory from Inference for means that is required for this module is now briefly reviewed.

Throughout, we are concerned with inference about a population mean. This is only one of many contexts in which inferences are obtained, but it is a very important one.

We consider a random variable \(X\) with unknown mean \(\mu\); this characterises a population of interest, so that \(\mu\) is the mean of the population.

A random sample "on \(X\)" of size \(n\) is defined to be \(n\) random variables \(X_1, X_2, \ldots, X_n\), which are mutually independent, and have the same distribution as \(X\).

The distribution of \(X\) is the underlying or "parent" distribution that produces the random sample.

Recall the important features of such a random sample:

- Any single element of the random sample, \(X_i\), has a distribution that is the same as the distribution of \(X\). So the chance that \(X_i\) takes any particular value is determined by the shape and pattern of the distribution of \(X\).
- There is variation between different random samples of size \(n\) from the same underlying population distribution.
- If we take a very large random sample from \(X\), and draw a histogram of the sample, the shape of the histogram will tend to resemble the shape of the distribution of \(X\).
- If \(n\) is small, the sample may appear to be consistent with a number of different parent distributions.
- Independence between the \(X_i\)s is a crucial feature: if the \(X_i\)s are not independent then the features we discuss here may not apply, and often will not apply.
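The first and third features above can be illustrated with a short simulation. The following sketch uses only the Python standard library; the exponential parent and the sample size are arbitrary choices made for illustration. It draws a large random sample and checks that the sample's summaries track the parent distribution:

```python
import random
import statistics

random.seed(1)

# Draw a large random sample from an exponential parent distribution
# with rate 1, so the population mean is 1.
sample = [random.expovariate(1.0) for _ in range(100_000)]

# With a large sample, summaries of the sample track the parent closely.
print(statistics.mean(sample))  # close to the population mean, 1

# The proportion of the sample at or below 1 approximates
# P(X <= 1) = 1 - e^(-1), roughly 0.632.
print(sum(x <= 1 for x in sample) / len(sample))
```

Repeating this with a small sample size in place of 100,000 shows the opposite point: small samples vary considerably, and their histograms need not resemble the parent distribution at all.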

We define the sample mean \(\bar{X} = \frac{\sum_{i=1}^{n}{X_i}}{n}\).

From an actual random sample of size \(n\) on the random variable \(X\), we have an actual observation, \(\bar{x} = \frac{\sum_{i=1}^{n}{x_i}}{n}\), of the sample mean, called a point estimate of \(\mu\).

We distinguish notationally and conceptually between the random variable \(\bar{X}\) and its corresponding observed value \(\bar{x}\): we refer to the first of these as the estimator \(\bar{X}\), and to the observed value as the estimate \(\bar{x}\).

In summary: the sample mean \(\bar{X}\) is a random variable, with its own distribution.

Two general results for the distribution of \(\bar{X}\) were proved in the Inference for means module. For a sample mean \(\bar{X}\) based on a random sample of size \(n\) on \(X\),

- \(\mathbb{E}(\bar{X}) = \mu\);
- \(\mbox{var}(\bar{X}) = \frac{\sigma^2}{n}\), and \(\mbox{sd}(\bar{X}) = \tfrac{\sigma}{\sqrt{n}}\).

This means that the distribution of \(\bar{X}\) is centred around the unknown \(\mu\), and that the spread of the distribution gets smaller as the sample size \(n\) increases. Both of these are desirable features, and both results are true regardless of the shape of the parent distribution (of \(X\)), and for any sample size \(n\).
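These two results can be checked empirically. The sketch below is an illustration, not a proof: the uniform parent, the sample size and the number of replications are arbitrary choices. It simulates many sample means and compares their mean and standard deviation with \(\mu\) and \(\sigma/\sqrt{n}\):

```python
import random
import statistics

random.seed(2)

# Parent: uniform on (0, 10), so mu = 5 and sigma^2 = 100/12.
mu = 5.0
sigma = (100 / 12) ** 0.5
n = 25

# Simulate many sample means, each from a random sample of size n.
means = [statistics.mean(random.uniform(0, 10) for _ in range(n))
         for _ in range(20_000)]

print(statistics.mean(means))   # close to mu = 5
print(statistics.stdev(means))  # close to sigma / sqrt(n), roughly 0.577
```

The parent here is not Normal, yet the simulated mean and standard deviation of \(\bar{X}\) still agree with \(\mu\) and \(\sigma/\sqrt{n}\), consistent with the claim that the two results hold for any parent distribution.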

What about the shape of the distribution of \(\bar{X}\)?

Firstly, there is a special case. When the parent distribution of \(X\) is itself Normal, the distribution of \(\bar{X}\) is exactly Normal; specifically, \(\bar{X}\stackrel{\mathrm{d}}{=} \mbox{N}(\mu,\tfrac{\sigma^2}{n})\).

**Exercise 1**

This exercise revises material in Inference for means.

Suppose that when adults go on a particular weight loss treatment, the amount of weight they lose, \(X\), has a Normal distribution with mean \(\mu\) kg and standard deviation 4 kg. That is, \(X\stackrel{\mathrm{d}}{=} \mbox{N}(\mu,4^2)\); remember that the second figure in the brackets is the variance, not the standard deviation, so we sometimes express it in this way as a reminder.

A random sample of \(n\) people receiving this treatment is obtained. Find the probability that the sample mean, \(\bar{X}\), is within 1 kg of the true mean, \(\mu\), for samples of the following sizes:

- \(n = 5\);
- \(n = 20\);
- \(n = 50\).
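One way to check your answers numerically uses the standardisation \(P(|\bar{X} - \mu| \le 1) = P(|Z| \le \sqrt{n}/4)\), which follows from \(\bar{X}\stackrel{\mathrm{d}}{=} \mbox{N}(\mu, 16/n)\). A minimal sketch using only the Python standard library:

```python
from math import erf, sqrt

# For X ~ N(mu, 4^2), the sample mean satisfies Xbar ~ N(mu, 16/n), so
# P(|Xbar - mu| <= 1) = P(|Z| <= sqrt(n)/4) = erf(sqrt(n) / (4 * sqrt(2))),
# using the identity P(|Z| <= z) = erf(z / sqrt(2)) for Z ~ N(0, 1).
def prob_within_1kg(n):
    return erf(sqrt(n) / (4 * sqrt(2)))

for n in (5, 20, 50):
    print(n, round(prob_within_1kg(n), 4))
# roughly 0.424, 0.736 and 0.923 respectively
```

Note how the probability of the sample mean falling within 1 kg of \(\mu\) increases with \(n\), reflecting the shrinking spread \(\sigma/\sqrt{n}\) of the distribution of \(\bar{X}\).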

When the parent distribution of \(X\) is not a Normal distribution, there is no general result for the distribution of the sample mean.

However, due to the remarkable Central Limit Theorem, the distribution of the sample mean from any parent distribution becomes closer and closer to a Normal distribution, as the sample size increases. Readers of this module who are not familiar with this should read the Inference for means module, where this is dealt with extensively, considering samples from uniform distributions, exponential distributions and a strange, non-standard distribution. Because of its importance, we restate the Central Limit Theorem here:

For large samples, the distribution of the sample mean is approximately Normal. If we have a random sample of size \(n\) from a parent distribution with mean \(\mu\) and variance \(\sigma^2\), then as \(n\) grows large the distribution of \(\bar{X}\), the sample mean, tends to a Normal distribution with mean \(\mu\) and variance \(\tfrac{\sigma^2}{n}\).
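The theorem can be seen in action with a small simulation. In the sketch below, the exponential parent, the sample size and the number of replications are arbitrary illustrative choices; sample means from this strongly skewed parent are standardised and an empirical proportion is compared with the corresponding \(\mbox{N}(0,1)\) probability:

```python
import random
import statistics

random.seed(3)

# Parent: exponential with rate 1 (mu = 1, sigma = 1) -- strongly
# skewed, and far from Normal in shape.
n = 40
means = [statistics.mean(random.expovariate(1.0) for _ in range(n))
         for _ in range(20_000)]

# Standardise each sample mean; if the Central Limit Theorem
# approximation is good, the standardised means behave like N(0, 1).
z = [(m - 1.0) / (1.0 / n ** 0.5) for m in means]

# Compare the empirical proportion within one standard deviation with
# the N(0, 1) value P(|Z| <= 1), roughly 0.683.
print(sum(abs(v) <= 1 for v in z) / len(z))
```

Even with this skewed parent and a moderate sample size, the empirical proportion is close to the Normal value, which is the practical content of the theorem.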

Since sample means from a parent distribution of any shape tend to have a Normal distribution, provided the sample size is large enough, we do not need information about the parent distribution of the data to describe the properties of the distribution of sample means.

Finally, we note the useful standardisation that occurs and can be used to find probabilities for sample means.

For a random sample of size \(n\) from a Normal distribution,

$$\frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \stackrel{\mathrm{d}}{=} \mbox{N}(0,1).$$

Under the specific conditions of sampling from a Normal distribution, and only then, this result holds for any value of \(n\).

The Central Limit Theorem gives the result when the distribution of \(X\) is not a Normal distribution: for a random sample of size \(n\) from any distribution with a finite mean \(\mu\) and finite variance \(\sigma^2\), for large \(n\),

$$\frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \stackrel{\mathrm{d}}{\approx} \mbox{N}(0,1).$$

In fact, as we saw in the module Inference for means, we can even go one step further. For large \(n\), the sample standard deviation can be substituted for \(\sigma\), and

$$\frac{\bar{X} - \mu}{S/\sqrt{n}} \stackrel{\mathrm{d}}{\approx} \mbox{N}(0,1),$$

where \(S\) is the sample standard deviation.
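This final substitution can be illustrated as follows. In the sketch below, the uniform parent and the sample size are arbitrary choices for illustration; a single sample mean is standardised using the sample standard deviation \(S\) in place of the unknown \(\sigma\):

```python
import random
import statistics
from math import sqrt

random.seed(4)

# A single random sample of size n = 100 from a uniform(0, 10) parent,
# so the true mean is mu = 5 (pretend sigma is unknown to us).
n = 100
sample = [random.uniform(0, 10) for _ in range(n)]

xbar = statistics.mean(sample)
s = statistics.stdev(sample)  # sample standard deviation S

# Standardise using S in place of the unknown sigma.
z = (xbar - 5.0) / (s / sqrt(n))
print(z)  # approximately an N(0, 1) draw: usually within about +/- 2
```

Because \(z\) is computed entirely from the sample (apart from the hypothesised \(\mu\)), this is the form of standardisation used in practice, where \(\sigma\) is rarely known.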