IQ standard deviation

IQ standard deviation ranges from 15 to 24 on the most popular tests

Intelligence Quotient (IQ) is defined by the Oxford English Dictionary as: a number representing a person's reasoning ability (measured using problem-solving tests) as compared to the statistical norm or average for their age, taken as 100.

This definition is simplistic in the sense that it ignores the origins of the IQ test. In the late 1890s, the Frenchman Alfred Binet devised a test based largely on verbal reasoning. The Paris ministry of education later asked for such a test to help schools 'weed out' children of lower intellectual ability, since the thinking at the time was that these children slowed down 'normal' or even 'bright' children during the academic year.

The goals of the ministry of education were not noble, as there were no special plans for the children of lesser intellectual ability: these children were simply removed from the classroom.

In spite of the ugliness of the situation, the request by the Paris ministry for a test of intellectual ability represented Binet's window of opportunity to publish a test that he had developed several years earlier but which had never received any public recognition. Binet recognized that children of different ages would have different intellectual abilities. His thinking was that the mental ability (MA) of any given chronological age group (CA) could be ascertained by having the children of that age group perform a series of mental tasks and problems and determining an 'average performance' for the group.

This led to an intelligence quotient calculated by dividing the test taker's MA by his or her CA: IQ = MA/CA. As can be seen from this equation, if a person's MA was equal to that person's CA, the ratio would equal 1. Binet then multiplied the result by 100, which is how the average IQ score came to be 100. If a child's MA was in excess of his CA (i.e. the child was intelligent for his age group and was able to perform tasks at the same level as older children), then that child would have an IQ greater than 100 (the opposite also holding true). So if a 10 year old (CA of 10) was able to perform the same tasks as an average 12 year old (MA = 12), then that child's IQ would be calculated as follows: IQ = (MA/CA) x 100 = (12/10) x 100 = 120.

Although this calculation method was clever and simple to comprehend, it breaks down in plenty of circumstances. In particular, what happens if a 75 year old (CA = 75) has the same MA as a 15 year old? According to this rudimentary calculation, that person's IQ would be just 20, an absurd result for a perfectly normal adult. In fact, it was later determined that adult levels of mental ability are reached by around 16 years of age, although fluid intelligence is known to decline after our late 20s.
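
For readers who like to see the arithmetic spelled out, here is a minimal sketch in Python (purely illustrative; the function name ratio_iq is mine, not Binet's) showing both the textbook case and the way the ratio formula misbehaves for adults:

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Binet-style ratio IQ: (MA / CA) x 100."""
    return mental_age / chronological_age * 100

# The example from above: a 10-year-old performing like an average 12-year-old.
print(ratio_iq(12, 10))   # 120.0

# The breakdown case: a 75-year-old with the mental age of an average 15-year-old
# gets an absurdly low score, because mental age stops rising in adulthood.
print(ratio_iq(15, 75))   # 20.0
```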

IQ standard scores and IQ standard deviation

In the 1930s, the calculation of IQ was revolutionized by the American David Wechsler, who, along with a mathematician friend, found that scores could be more robustly standardized by adjusting the raw scores achieved on a particular IQ test by the IQ standard deviation of the test. It would eventually be established that IQ scores are approximately normally distributed, and the properties of a normal distribution are such that the entire population distribution of IQ scores can be described with just two parameters: the mean (or average) and the IQ standard deviation of the test. The introduction of IQ standard deviation eliminated the problems highlighted above in the MA/CA example.
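
As a rough sketch of the deviation-score idea (my own illustration, not Wechsler's actual norming procedure, and the norming numbers below are invented), a raw score is first converted into a z-score against the norming sample and then mapped onto a scale with mean 100 and the test's chosen standard deviation:

```python
def deviation_iq(raw_score: float, norm_mean: float, norm_sd: float,
                 test_sd: float = 15) -> float:
    """Map a raw test score onto an IQ scale with mean 100 and the given SD.

    norm_mean and norm_sd describe the raw-score distribution in the norming
    sample; test_sd is the IQ standard deviation of the published scale.
    """
    z = (raw_score - norm_mean) / norm_sd
    return 100 + test_sd * z

# Hypothetical norming sample: raw scores with mean 40 and standard deviation 8.
print(deviation_iq(48, norm_mean=40, norm_sd=8))               # 115.0 on an SD-15 scale
print(deviation_iq(48, norm_mean=40, norm_sd=8, test_sd=24))   # 124.0 on an SD-24 scale
```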

The purest way of determining the standard deviation of the population of IQ scores would be to construct an IQ test, administer it to an entire population, and mathematically calculate the standard deviation of the test results. This is in fact what test publishers look for when they publish a test (i.e. for the author of the test to have normed it by administering it to a truly random sample of test takers). There are plenty of different IQ tests and many have a different standard deviation. For instance, the Cattell Culture Fair IIIa has a standard deviation of 16, while the Wechsler Adult Intelligence Scale (WAIS) tests have an IQ standard deviation of 15 points. The Cattell verbal scale has a standard deviation of 24 points.
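
Because these tests share a mean of 100 but differ in standard deviation, the same underlying ability maps to different numbers on each scale. A small illustrative sketch of that conversion (the helper convert_iq is hypothetical, not part of any test manual):

```python
def convert_iq(score: float, sd_from: float, sd_to: float) -> float:
    """Re-express an IQ score on another scale's SD, keeping the z-score fixed."""
    z = (score - 100) / sd_from
    return 100 + z * sd_to

# The same ability level, two standard deviations above the mean, on three scales:
print(convert_iq(130, sd_from=15, sd_to=16))   # 132.0 (SD-15 scale -> SD-16 scale)
print(convert_iq(130, sd_from=15, sd_to=24))   # 148.0 (SD-15 scale -> SD-24 scale)
```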

IQ-Brain.com’s fluid intelligence tests (click here) have an IQ standard deviation of 16 points.

Highest IQ – the measurement challenges

This article highlights some of the challenges associated with measuring human intelligence, and in particular those associated with measuring the highest IQ. First, unlike measuring height, weight, or even human speed over a 100 meter dash, measuring IQ (and particularly the highest IQ) is a difficult endeavor. Alfred Binet, one of the pioneering forefathers of modern IQ tests, pointed out that measuring human intelligence was not as clear cut as measuring other human traits and characteristics, and that it was therefore necessary to accept a degree of error in the measurement of IQ. This assertion was one of Binet's finest.

Another important concept in this debate is the diversity of IQ tests and their associated scales. There are several different and well-respected IQ tests: examples include the Cattell Culture Fair IIIb, the Wechsler Adult Intelligence Scale (WAIS), the Stanford-Binet, the Woodcock-Johnson, and Raven's Progressive Matrices, to name a few. These tests have different constructs and are grounded in different, although perhaps related, theories of human cognition, and might therefore measure different things (e.g. verbal or crystallized intelligence vs. performance or fluid intelligence). Some tests are better at measuring certain types of human intelligence than others, while others do not test certain types of intelligence at all. For instance, the Stanford-Binet IV is one of the only well-known tests to have a quantitative reasoning scale, and subsequent revisions of the same test may change its measurement focus. Individuals may score more highly on some sections of a given test than on others, and may also score more highly on one test than on another.

To make matters more complicated, several of the above-mentioned tests use the same mean score of 100 but employ different standard deviations, which makes the comparison of raw test scores meaningless unless each score is adjusted for the standard deviation of the test in question, typically by converting it into a percentile. For instance, the standard deviation of the Woodcock-Johnson Tests of Cognitive Abilities is 16, so a score of 132 (i.e. two standard deviations above the mean of 100) would place a test taker in roughly the top 2% of the population. The Cattell verbal tests, on the other hand, have a standard deviation of 24, which means that a score of 148 (again two standard deviations above the mean of 100) corresponds to the same top 2% of test takers.
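
A hedged sketch of that score-to-percentile conversion, using only the Python standard library and assuming scores are perfectly normally distributed (real tests publish their own norm tables):

```python
from statistics import NormalDist

def iq_percentile(score: float, sd: float, mean: float = 100) -> float:
    """Fraction of the population expected to score below `score` on this scale."""
    return NormalDist().cdf((score - mean) / sd)

# Two standard deviations above the mean, expressed on two different scales:
print(f"{iq_percentile(132, sd=16):.1%}")   # ~97.7%, i.e. roughly the top 2% of the population
print(f"{iq_percentile(148, sd=24):.1%}")   # same z-score, so the same percentile
```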

Finally, IQ tests will normally have a ceiling. The Cattell verbal scale, for example, has a ceiling of 161 for adults, which corresponds to a result in the 99.48th percentile of the population, or roughly one in 192 people, yet someone scoring 141 on the Woodcock-Johnson test would be in essentially the same percentile. This latter point makes estimating the highest IQ in the world very difficult, as further estimates must be layered on top of the estimation and error that are already inherent in IQ testing.
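
As a quick illustrative check of the ceiling comparison above (again assuming a perfect normal distribution; small differences from the quoted 99.48% and one-in-192 figures come down to rounding):

```python
from statistics import NormalDist

ceiling = 161                      # Cattell verbal ceiling, on the SD-24 scale
z = (ceiling - 100) / 24           # z-score of the ceiling
below = NormalDist().cdf(z)        # fraction of the population expected to score below it

print(f"percentile ~ {below:.2%}, about 1 in {1 / (1 - below):.0f} people")
print(f"equivalent score on an SD-16 scale ~ {100 + z * 16:.0f}")   # ~141
```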

Highest IQ among geniuses of history

The interest in the highest IQ is really an interest in human genius and achievement. Would anyone care about someone with an alleged IQ of 300 (assuming this were even possible to measure) if that individual were incapable of doing anything other than scoring highly on every possible intelligence test? Probably not. So looking for the highest IQ is really about identifying human genius. You don't need a psychologist to establish that great historical figures such as Da Vinci, Mozart, Beethoven or Alexander Graham Bell were geniuses. But finely tuned IQ tests did not exist at the time to measure their IQs in the same way that Al Gore is said to have tested at 134 (but was the standard deviation 15, 16 or 24?). So again, we are within the realm of estimates on top of estimates.

I will re-visit this topic in future posts. Meanwhile, you can have your fluid intelligence tested here.

IQ testing is nothing new

Mozart was subjected to IQ testing at age 8.

Some critics have attempted to dismiss IQ testing as baseless or as lacking scientific rigor. What these people often forget is that IQ testing is not a new concept and has been around for millennia.

Although the direct ancestors of the modern-day IQ test came to life in the late 1890s, cognitive ability testing can be traced back to around 4000 B.C., when the Chinese emperor is believed to have given proficiency tests to his officials every third year.

This Chinese tradition continued: roughly 1,000 years later, the Chan Dynasty is reported to have administered proficiency tests to would-be officials, which on the face of it seems like a sensible practice.

Much later, in Europe, the young Mozart is said to have been tested at the order of King George III, with the examination administered by the naturalist Daines Barrington in the mid-1760s. Mozart had composed his first symphony by age 8.

IQ testing has always been around in one form or another

So human beings have always been interested in the measurement of intelligence, but it was not until the Frenchman Alfred Binet came along that cognitive ability tests received a major scientific upgrade. Grandchildren of Binet's tests are still in use today, over a century later. But Binet's greatest contribution to the science of IQ testing is not his questions per se, but rather the introduction of the concept of error in testing. That is, unlike height, body weight, or even strength, human intelligence cannot be measured with pinpoint accuracy, which is why an individual's global IQ score must not be viewed as immutable or absolute, but rather as a snapshot of someone's intellectual ability at a particular point in time.

To get a snapshot of your IQ, click here.