IQ standard deviation

Intelligence Quotient (IQ) is defined by the Oxford English Dictionary as: a number representing a person’s reasoning ability (measured using problem-solving tests) as compared to the statistical norm or average for their age, taken as 100.

This definition is simplistic in the sense that it ignores the origins of the IQ test. In the late 1890s, the Frenchman Alfred Binet devised a test based largely on verbal reasoning. The Paris ministry of education later asked for such a test to help schools ‘weed out’ children of lower intellectual ability, the thinking at the time being that these children slowed down ‘normal’ or even ‘bright’ children during the academic year.

The goals of the ministry of education were not noble: there were no special plans for the children of lesser intellectual ability. These poor children were simply removed from the classroom.

In spite of the ugliness of the situation, the Paris ministry’s request for a test of intellectual ability represented Binet’s window of opportunity to publish a test he had developed several years earlier but which had never received any public recognition. Binet recognized that children of different ages would have different intellectual abilities. His thinking was that the mental age (MA) of any given chronological age (CA) group could be ascertained by having the children of that age perform a series of mental tasks and problems and determining an ‘average performance’ for the group.

His clever system led to an intelligence quotient calculated by dividing the test taker’s MA by his or her CA (hence the term ‘intelligence quotient’). At its beginning, the formula for calculating IQ was simply IQ = MA/CA. As can be seen from this equation, if a person’s MA was equal to his CA, the ratio equalled 1. Binet then multiplied the result by 100 to obtain a whole number, which is how the average IQ score came to be 100. If a child’s MA exceeded his CA (i.e. the child was advanced for his age group and could perform tasks at the same level as older children), that child would have an IQ greater than 100 (the opposite also holding true). So if a 10 year old (CA = 10) was able to perform the same tasks as an average 12 year old (MA = 12), that child’s IQ would be calculated as follows: IQ = MA/CA = 12/10 = 1.2 × 100 = 120.

Although this calculation method was clever and simple to comprehend, it breaks down under plenty of circumstances. In particular, what happens if a 75 year old (CA = 75) has the same MA as a 15 year old? According to this rudimentary IQ calculation, that person’s IQ would be just 20 (15/75 × 100). In fact, it was determined that adult mental age is essentially reached by about 16 years of age, even though fluid intelligence is known to decline after our late twenties.
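To make the ratio arithmetic concrete, here is a minimal Python sketch of the Binet-style ratio IQ (the function name ratio_iq is illustrative, not part of any published test):

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Binet-style ratio IQ: mental age divided by chronological age, times 100."""
    return mental_age / chronological_age * 100

# The worked example from the text: a 10-year-old performing at the level
# of an average 12-year-old.
print(ratio_iq(mental_age=12, chronological_age=10))   # 120.0

# The breakdown case: a 75-year-old with the mental age of an average
# 15-year-old receives an implausibly low ratio IQ.
print(ratio_iq(mental_age=15, chronological_age=75))   # 20.0
```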

IQ standard scores and IQ standard deviation

In the 1930s, the calculation of IQ was revolutionized by the American David Wechsler, who, along with a mathematician colleague, found that scores could be standardized more robustly by adjusting the raw scores achieved on a particular IQ test using the mean and standard deviation of the test. It would eventually be established that IQ scores are approximately normally distributed, and the properties of a normal distribution are such that the entire population distribution of IQ scores can be described with just two parameters: the mean (or average) and the IQ standard deviation of the test. The introduction of the IQ standard deviation eliminated the problems highlighted above in the MA/CA example.
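As a rough illustration of the deviation-IQ idea, the sketch below converts a raw score into a standard score using the mean and standard deviation of a norming sample. The sample data and function name are invented for the example and do not reflect Wechsler’s actual norming procedure:

```python
from statistics import mean, stdev

def deviation_iq(raw_score: float, norm_raw_scores: list[float], iq_sd: float = 15.0) -> float:
    """Convert a raw test score into a deviation IQ.

    The raw score is expressed as a z-score relative to the norming sample,
    then rescaled to a distribution with mean 100 and the test's chosen
    IQ standard deviation (15 here, as on the WAIS).
    """
    z = (raw_score - mean(norm_raw_scores)) / stdev(norm_raw_scores)
    return 100.0 + iq_sd * z

# Hypothetical norming sample of raw scores (e.g. number of items correct).
norms = [18, 22, 25, 25, 27, 28, 30, 31, 33, 36]
print(round(deviation_iq(31, norms), 1))   # ~109.9: roughly 0.66 SD above the norm mean
```

Unlike the MA/CA ratio, this approach works identically for a 10 year old and a 75 year old, because each test taker is compared only to the norms for his or her own age group.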

The purest way of determining the standard deviation of the population of IQ scores would be to construct an IQ test, administer it to the entire population, and mathematically calculate the standard deviation of the test results. This is in fact what test publishers look for when they publish a test: the author should have normed the test by administering it to a truly random sample of test takers. There are plenty of different IQ tests, and many have a different standard deviation. For instance, the Cattell Culture Fair IIIa has a standard deviation of 16, while the Wechsler Adult Intelligence Scale (WAIS) tests have an IQ standard deviation of 15 points. The Cattell verbal scale has a standard deviation of 24 points.
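Because these tests share a mean of 100 but report scores with different standard deviations, the same percentile corresponds to different numbers on different scales. A short sketch of the conversion, assuming a common mean of 100:

```python
def convert_iq(score: float, sd_from: float, sd_to: float, mean: float = 100.0) -> float:
    """Re-express an IQ score on a scale with a different standard deviation.

    The score is turned into a z-score on the source scale and then mapped
    onto the target scale; both scales are assumed to share the same mean.
    """
    z = (score - mean) / sd_from
    return mean + z * sd_to

# A score of 132 on an SD-16 scale (e.g. Cattell Culture Fair) sits two
# standard deviations above the mean, the same percentile as 130 on the
# SD-15 WAIS scale or 148 on the SD-24 Cattell verbal scale.
print(convert_iq(132, sd_from=16, sd_to=15))   # 130.0
print(convert_iq(132, sd_from=16, sd_to=24))   # 148.0
```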

iq-brain.com fluid intelligence tests have an IQ standard deviation of 16 points.