Having Confidence: Sampling, Testing and Statistical Significance


Sampling

Having confidence in a scientist’s data depends on the size of the sample the data was collected from and on any margin of error reported. For instance, the number of oil slicks observed in the North Sea per hour of monitoring by overflying fell from 0.082 in 1998 to 0.0775 in 1999. This could be advertised as a dramatic reduction of 5.5% in a year, but we also need to know the total number of hours flown to judge whether the sample used in the calculation is large enough. And if the measurements are only accurate to, say, ±3%, the observed decrease may not exist at all.
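To see this in rough figures, here is a minimal sketch in Python (the ±3% margin is the illustrative one quoted above, not a real measurement specification):

rate_1998 = 0.082    # oil slicks observed per hour of overflying, 1998
rate_1999 = 0.0775   # oil slicks observed per hour of overflying, 1999

fall = (rate_1998 - rate_1999) / rate_1998
print(round(fall * 100, 1))                  # 5.5 (per cent fall in a year)

# If each rate is only accurate to +/- 3%, the plausible ranges overlap:
print(rate_1998 * 0.97, rate_1999 * 1.03)    # 0.07954 vs 0.079825
# so the apparent fall could be entirely due to measurement error.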

Sampling can take place over time (repeatedly checking algae levels in a river), within a set area (checking a sample of tissue on a microscope slide for cell changes associated with cancer) or amongst a group of people (for example, everyone who has had a particular type of vaccination).

The bigger the sample used, the more likely it is that the results collected are representative of the population as a whole, and the smaller the sampling error.
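A minimal simulation sketch illustrates this (the population values below are invented for illustration): the spread of sample averages shrinks as the sample size grows.

import random

# Hypothetical population of 100,000 measurements, average 50, spread 10.
population = [random.gauss(50, 10) for _ in range(100_000)]

def spread_of_sample_means(sample_size, repeats=1000):
    # Take many samples of the given size and measure how much their means vary.
    means = [sum(random.sample(population, sample_size)) / sample_size
             for _ in range(repeats)]
    avg = sum(means) / len(means)
    return (sum((m - avg) ** 2 for m in means) / len(means)) ** 0.5

for n in (10, 100, 1000):
    print(n, round(spread_of_sample_means(n), 2))
# The spread falls roughly as 1/sqrt(n): about 3.2, 1.0 and 0.3 respectively.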

Systematic or random?

Systematic errors occur when your apparatus or method of observation is faulty or inadequate, so that you consistently fail to measure part of the population or events being observed. They can be due either to poor sample design or to a measurement error such as a poorly calibrated sensor. Random errors are part and parcel of sampling: they are due to chance variation in the population or events being measured and cannot be eliminated, only reduced by taking a larger sample.
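The difference shows up in a short sketch (the true value and the sensor bias are invented numbers): averaging more readings shrinks the random noise but leaves the calibration error untouched.

import random

true_value = 20.0          # hypothetical true temperature
calibration_bias = 1.5     # a poorly calibrated sensor reads 1.5 too high

def measure():
    # Each reading carries the systematic bias plus some random noise.
    return true_value + calibration_bias + random.gauss(0, 0.5)

for n in (10, 10_000):
    mean = sum(measure() for _ in range(n)) / n
    print(n, round(mean, 2))
# With 10,000 readings the random error has almost vanished,
# but the answer still sits about 1.5 above the true value.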

Testing

We become more confident about scientists’ explanations if they lead to predictions (especially novel or unexpected ones) which can be tested. An explanation is generally accepted if it agrees with the observations resulting from such tests, or if a device or procedure based upon it works as expected. But to what degree? Statisticians have developed confidence intervals that scientists can use to indicate the level of confidence others can have in their results.

True / False ?


A confidence level of 95% indicates that there is a 5% chance that the explanation has been wrongly accepted.

Suppose a scientist announced that her new technique for in-vitro fertilisation was successful 60% of the time, and qualified this figure with a 95% confidence interval of ±3 percentage points. This indicates that we can be 95% confident the true success rate lies between 57% and 63%. The width of the interval depends on how many treatment cycles she observed, not on the 60% figure itself.
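A minimal sketch, assuming a normal approximation and a hypothetical sample of 1,000 treatment cycles (neither is stated in the example above), shows where a ±3 percentage point interval like this could come from:

import math

successes, n = 600, 1000            # hypothetical: 600 successes in 1000 cycles
p = successes / n                   # observed success rate, 0.60
margin = 1.96 * math.sqrt(p * (1 - p) / n)   # about 0.03 for this sample size
print(round(p * 100), "% +/-", round(margin * 100, 1), "%")
# 60 % +/- 3.0 %, i.e. roughly 57% to 63% at 95% confidence.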

Another way of expressing confidence mathematically is to quote the probability of obtaining the experimental result purely by chance. In this case the probability is shown as a proportion of one rather than as a percentage.

e.g. p < 0.05 – the probability of getting results at least as extreme as these purely by chance is less than 5%.
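What this means can be shown with a made-up example, estimated by simulation rather than calculated exactly: 60 heads in 100 tosses of a coin assumed to be fair.

import random

observed, tosses, trials = 60, 100, 20_000
extreme = 0
for _ in range(trials):
    heads = sum(random.random() < 0.5 for _ in range(tosses))
    # Count runs of 100 tosses at least as far from 50 heads as the observed 60.
    if abs(heads - 50) >= abs(observed - 50):
        extreme += 1

print(extreme / trials)   # around 0.057, so not quite below the p < 0.05 threshold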

 

Significance

 
