Julia Simkus is a graduate of Princeton University with a Bachelor of Arts in Psychology. She is currently pursuing a Master's degree in Counseling for Mental Health and Wellness, which she began in September 2023. Julia's research has been published in peer-reviewed journals.
Reviewed by
Editor-in-Chief for Simply Psychology
BSc (Hons) Psychology, MRes, PhD, University of Manchester
Saul McLeod, PhD, is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.
When you take samples from a population and calculate the means of the samples, these means will be arranged into a distribution around the true population mean.
The standard deviation of this distribution of sampling means is known as the standard error.
Standard error estimates how accurately the mean of any given sample represents the true mean of the population.
A larger standard error indicates that the means are more spread out, and thus it is more likely that your sample mean is an inaccurate representation of the true population mean.
On the other hand, a smaller standard error indicates that the means are clustered closer together, and thus it is more likely that your sample mean is an accurate representation of the true population mean.
The standard error increases when the standard deviation increases. Standard error decreases when sample size increases because having more data yields less variation in your results.
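As a quick numerical sketch of this second relationship (the standard deviation of 10 and the sample sizes below are arbitrary illustrative values), dividing a fixed standard deviation by the square root of a growing sample size shows the standard error shrinking:

```python
import math

def standard_error(sd, n):
    """Standard error of the mean: sd / sqrt(n)."""
    return sd / math.sqrt(n)

sd = 10.0  # illustrative sample standard deviation
for n in (4, 25, 100, 400):
    print(f"n = {n:3d}  SE = {standard_error(sd, n):.2f}")
# n =   4  SE = 5.00
# n =  25  SE = 2.00
# n = 100  SE = 1.00
# n = 400  SE = 0.50
```

Quadrupling the sample size halves the standard error, because the sample size enters the formula under a square root.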
Standard error is calculated by dividing the standard deviation of the sample by the square root of the sample size:

SE = σ / √n

where:

SE = standard error of the mean

σ = sample standard deviation

n = sample size (the number of observations in the sample)
For example, suppose the values in your sample are 52, 60, 55, and 65. The sample mean is 58, and the sample standard deviation is approximately 5.72, so the standard error is 5.72 ÷ √4 ≈ 2.86.
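As a sketch, the standard error for this sample can be computed in a few lines of Python (note that `statistics.stdev` uses the sample standard deviation, with an n − 1 denominator):

```python
import math
import statistics

values = [52, 60, 55, 65]

sd = statistics.stdev(values)      # sample standard deviation (n - 1 denominator), ~5.72
se = sd / math.sqrt(len(values))   # standard error of the mean, ~2.86

print(f"mean = {statistics.mean(values)}, sd = {sd:.2f}, se = {se:.2f}")
```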
The standard error is a statistical term that measures the accuracy with which a sample distribution represents a population by using the standard deviation of the sample mean.
Determining a “good” standard error can be context-dependent. As a general rule, a smaller standard error is better because it suggests your sample mean is a reliable estimate of the population mean. However, what counts as “small” can depend on the scale of your data and the size of your sample.
The standard error measures how spread out the means of different samples would be if you were to perform your study or experiment many times. A lower SE would indicate that most sample means cluster tightly around the population mean, while a higher SE indicates that the sample means are spread out over a wider range.
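This repeated-sampling idea can be illustrated with a small simulation. The population parameters here (a normal distribution with mean 100 and standard deviation 15) and the number of repetitions are arbitrary choices for illustration; the spread of the simulated sample means should come out close to σ / √n:

```python
import math
import random
import statistics

random.seed(0)  # fixed seed so the sketch is reproducible

pop_mean, pop_sd, n = 100, 15, 25  # assumed population and sample size

# Draw many samples of size n and record each sample's mean.
sample_means = [
    statistics.mean(random.gauss(pop_mean, pop_sd) for _ in range(n))
    for _ in range(2000)
]

observed_spread = statistics.stdev(sample_means)  # SD of the sample means
theoretical_se = pop_sd / math.sqrt(n)            # sigma / sqrt(n) = 3.0

print(f"observed spread of sample means: {observed_spread:.2f}")
print(f"theoretical standard error:      {theoretical_se:.2f}")
```

The standard deviation of the simulated sample means is what the standard error formula estimates from a single sample.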
It’s used to construct confidence intervals for the mean and in hypothesis testing.
We use the standard error to indicate the uncertainty around the estimate of the mean measurement. It tells us how well our sample data represents the whole population. This is useful when we want to calculate a confidence interval.
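As an illustration (the data values here are made up), a rough 95% confidence interval can be built as mean ± 1.96 × SE under a normal approximation; for small samples, a t critical value would be more appropriate than 1.96:

```python
import math
import statistics

data = [52, 60, 55, 65, 58, 62, 49, 57]  # illustrative sample

mean = statistics.mean(data)
se = statistics.stdev(data) / math.sqrt(len(data))

# Approximate 95% CI: mean +/- 1.96 * SE (normal approximation)
ci_low, ci_high = mean - 1.96 * se, mean + 1.96 * se
print(f"mean = {mean:.2f}, 95% CI ~ ({ci_low:.2f}, {ci_high:.2f})")
```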
Standard error and standard deviation are both measures of variability, but the standard deviation is a descriptive statistic that can be calculated from sample data, while standard error is an inferential statistic that can only be estimated.
Standard deviation tells us how concentrated the data is around the mean. It describes variability within a single sample. On the other hand, the standard error tells us how the mean itself is distributed.
It estimates the variability across multiple samples of a population. The formula for standard error divides the sample standard deviation by the square root of the sample size.