James Thompson Blogview


I have no idea what you will be thinking or doing on 12th December, but efforts are being made to determine how UK citizens will vote on that day. Why the fuss? A rational approach to elections is to read the party manifestos, judge the personal and societal impact of the proposals, calculate the probability of the promises being kept, and then vote in advance by postal ballot. That done, the good citizen has no need of reading the news, watching debates, or even waiting in a queue on election day. Certainly, the rational citizen has no need of opinion polls: they are not relevant to informed decision-making.

In a further show of rationality, the wise elector will not stay up on election night, since the event cannot be influenced by watching it. Wiser still, the elector should not check the results for a few days, to avoid wasting time with dangling chads, recounts and other frivolities.

Sadly, the current culture of instant gratification requires a daily feed of speculation about how the election is going, that is, how other people are thinking about which way they might vote on election day, and whether they are likely to vote at all. That provides hours of comment. Then, in a meta-analytic frenzy, commentators discuss how much the opinion polls themselves are causing citizens to change their intended behaviours, as they realize that they are in a minority or, conversely, in such a majority that they might be giving too much power to the most popular party. Others, feeling exposed at being revealed to be in a minority, switch to the most popular party, just as others are abandoning it.

Opinion polls took off in the US in the 1920s. Gradually pollsters learned about the effects of selection bias. If you only poll the readers of your newspaper, you leave out the majority who don't read it, and certainly those who don't read any newspapers. Sampling is crucial. However good your sample, you can never include those who don't want to be sampled. Different avenues of approach lead to different types of voter. Telephones (remember those?) mostly catch older people at home, and those may be only 20% of the home numbers dialled.

To keep calm and avoid election fever, I have been reading a book on statistics. The best statistics books are those one does not admit reading. They explain complex matters in simple terms, and one is forever grateful, without admitting it.

David Spiegelhalter. The Art of Statistics: Learning from data. Random House, 2019.

This is a good book, with helpful explanations which concentrate on key concepts, not specific formulae. Brian Everitt, one of the developers of cluster analysis, always said to me that it was a deficiency of statistics that when you asked a statistical question you got a number instead of an answer. “Yes” or “no” are generally the answers one is searching for.

Far from bringing solace, the book discusses the problem of using opinion polls to predict election results, using the UK election of 2017 as a prime example of the shortcomings of conventional techniques, which failed to spot a late surge for the Labour party, leading to a hung Parliament and a precarious working majority cobbled together by the Conservatives.

Can statisticians do better in 2019?

Spiegelhalter’s book is worth reading because he poses interesting questions, and answers them without numbers (or at least, without too many complicated numbers in the first instance). His aim is to get you to think straight about problems, and to solve them in a systematic way, leaving the number and calculation issues till later. Think hard, plan carefully, and then you can let the (properly selected and presented) numbers do the talking. OK, numbers don’t talk, but if you have thought things through, then you can explain the findings in ordinary language.

For example, has a nice family doctor been murdering his patients? How does one estimate normal death rates in medical practices? What counts as an excess? Is anything else worth measuring? Is the time of day at which deaths occur worth recording?

Spiegelhalter shows in a simple figure (page 5) that Dr Harold Shipman's victims died disproportionately in the afternoons, the time when he made his home visits and administered lethal opiate overdoses; he killed at least 215 elderly patients. As Spiegelhalter dryly observes: "The pattern does not require sophisticated statistical analysis."

Spiegelhalter's approach is immensely sensible. He shows that statistics require careful thinking, then some number crunching, followed by an honest depiction of the findings. He is a good guide to statistics, particularly for those who panic at the sight of mathematical notation. He is good at explaining (yet again) the difference between relative and absolute risk, and the distorting effects of question framing: in the UK 57% supported "giving 16 and 17 year olds the right to vote", but only 37% agreed with the logically identical proposal to "reduce the voting age from 18 to 16". He notes the distorting effect of telephone polls which do not declare what proportion of telephone numbers dialled never answered. He explains that a causal link does not mean that every single person shows an effect: many people smoke and don't get cancer, but smoking causes more smokers than non-smokers to get cancer, and some non-smokers get cancer. Don't rely on a single study; review all studies systematically. Some potential causes can be called "lurking factors", but actual causes follow the Bradford Hill criteria (page 115): an effect size so large it cannot be explained by plausible confounding; appropriate temporal or spatial proximity, in that cause precedes effect and effect occurs after a plausible interval, or at the same site; the effect increases as exposure increases, and reduces when the dose is reduced; there is a plausible mechanism of action; the effect fits in with what is known already; the effect replicates; and the effect is found in similar but not identical studies.
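To make the relative versus absolute risk distinction concrete, here is a minimal sketch in Python. The rates are invented for illustration; they are not Spiegelhalter's figures.

```python
# Relative vs absolute risk: the same data, two very different-sounding numbers.
# The rates below are invented for illustration, not taken from the book.

baseline_rate = 1 / 1000      # risk of the outcome without the exposure
exposed_rate = 2 / 1000       # risk of the outcome with the exposure

relative_risk = exposed_rate / baseline_rate        # "doubles your risk!"
absolute_increase = exposed_rate - baseline_rate    # one extra case per 1,000 people

print(f"Relative risk: {relative_risk:.1f}x")
print(f"Absolute increase: {absolute_increase:.4f} "
      f"({absolute_increase * 1000:.0f} extra case per 1,000 people)")
```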

He is excellent at explaining regression to the mean (two thirds of the apparently beneficial effect of speed cameras is due to regression to the mean); at discussing the bias/variance trade-off (over-fitting predictors to correct "bias" and reflect local circumstances, at the cost of lower reliability); and at showing that when algorithms are accused of prejudice in predicting whether criminals will re-offend, notions of justice are being favoured over predictive accuracy.
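Regression to the mean is easy to see in a toy simulation (my own sketch, not from the book): select the worst sites after one noisy year and they improve the next year even though nothing was done to them.

```python
# Regression to the mean: pick accident blackspots on the basis of one bad year,
# and they tend to improve the next year even with no intervention at all.
import numpy as np

rng = np.random.default_rng(0)
n_sites = 10_000
true_rate = rng.gamma(shape=5.0, scale=2.0, size=n_sites)  # each site's underlying accident rate

year1 = rng.poisson(true_rate)   # observed accidents, year 1
year2 = rng.poisson(true_rate)   # observed accidents, year 2 (no cameras installed)

worst = year1 >= np.quantile(year1, 0.9)   # "blackspots": worst 10% of sites in year 1
print("Blackspots, year 1 mean accidents:", year1[worst].mean())
print("Blackspots, year 2 mean accidents:", year2[worst].mean())
# Year 2 is lower purely because of selection on a noisy measurement.
```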

There is much to recommend in this book. He covers a wide range of statistical issues with clarity, particularly on probability; Chapters 8 and 9 alone are worth the price of the book. I will probably refer to it again in subsequent posts.

What does this mean for the UK election on 12 December? I will try to explain. The current YouGov snapshot shows the following voting intentions, and what they are likely to mean in terms of parliamentary seats.

UK elections are carried out in 650 constituencies, and use the "first past the post" system, in which the candidate with the most votes wins and becomes the Member of Parliament, while all the other votes contribute nothing to the result. Harsh but effective, like the race between spermatozoa.
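For readers who want to see how unforgiving the seat arithmetic is, here is a minimal sketch of first-past-the-post counting. The constituency tallies are invented; they are not YouGov figures.

```python
# First-past-the-post: in each constituency only the largest single tally wins the seat;
# every other vote contributes nothing. Counts below are invented for illustration.
from collections import Counter

constituencies = {
    "Seat A": {"Con": 18_000, "Lab": 17_500, "LD": 9_000},
    "Seat B": {"Con": 14_000, "Lab": 21_000, "LD": 6_000},
    "Seat C": {"Con": 16_000, "Lab": 15_900, "LD": 15_800},
}

seats = Counter()
for seat, votes in constituencies.items():
    winner = max(votes, key=votes.get)
    seats[winner] += 1
    print(f"{seat}: {winner} wins with {votes[winner]:,} of {sum(votes.values()):,} votes")

print("Seat totals:", dict(seats))
```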

 

For some years now I have made occasional mention of a survey conducted from May 2013 to March 2014 to find out what intelligence researchers thought about racial differences in intelligence. Now the paper has been published, so in academic terms the work actually exists, and can be quoted and commented upon. I can remember looking at the draft of this survey and suggesting it be made shorter, which is what I always say: long surveys lead to low return rates. When it finally came out, I'm not sure exactly how I answered it, but will give my best recollections in this post. Academia makes a tradition out of slowness: the last survey on this matter was in 1988. By this reckoning the next survey will be in 2052. No need to rush things.

Survey of expert opinion on intelligence: Intelligence research, experts’ background, controversial issues, and the media. Heiner Rindermann, David Becker, Thomas R. Coyle. Intelligence 78 (2020) 101406
https://doi.org/10.1016/j.intell.2019.101406

Abstract
Experts (N max = 102 answering) on intelligence completed a survey about IQ research, controversies, and the media. The survey was conducted in 2013 and 2014 using the Internet-based Expert Questionnaire on Cognitive Ability (EQCA). In the current study, we examined the background of the experts (e.g., nationality, gender, religion, and political orientation) and their positions on intelligence research, controversial issues, and the media. Most experts were male (83%) and from Western countries (90%). Political affiliations ranged from the left (liberal, 54%) to the right (conservative, 24%), with more extreme responses within the left-liberal spectrum. Experts rated the media and public debates as far below adequate. Experts with a left (liberal, progressive) political orientation were more likely to have positive views of the media (around r= |.30|). In contrast, compared to female and left (liberal) experts, male and right (conservative) experts were more likely to endorse the validity of IQ testing (correlations with gender, politics: r= .55, .41), the g factor theory of intelligence (r= .18, .34), and the impact of genes on US Black-White differences (r= .50, .48). The paper compares the results to those of prior expert surveys and discusses the role of experts’ backgrounds, with a focus on political orientation and gender. An underrepresentation of viewpoints associated with experts’ background characteristics (i.e., political views, gender) may distort research findings and should be addressed in higher education policy.

As you can see, the paper confronts the politics/attitudes nexus head on. The popular view is that political orientations determine attitudes to scientific findings: find an author's politics and you can predict, and also discount, their opinions. On the contrary, authors' observations of life may determine their politics, and in fairness you could equally argue that once you know an author's observations and experiences you should discount their politics.

Who were the experts?

The survey was sent to authors who published at least one article after 2010 in journals covering cognitive ability. The journals included Intelligence, Cognitive Psychology, Contemporary Educational Psychology, New Ideas in Psychology, and Learning and Individual Differences. In addition, members of the International Society for Intelligence Research (ISIR) were invited (from December 2013 to January 2014) to complete the EQCA, and an announcement was published on the website of the International Society for the Study of Individual Differences (ISSID).

A total of 265 responses were received, a response rate of 19.71% of those approached for an opinion. This is not very good. The survey was long, which may have put people off, and since it asked about contentious matters, experts may have felt it was best avoided. As you will see below, fewer than half of those who replied answered the item on racial differences in intelligence. They may have worried that their personal responses would leak out in some way.

The respondents have a claim to expertise. Their academic work was better than the scholarly average, so they probably know their subject. They were somewhat Left inclined, and this had an impact on questions like the contribution genetics makes to black-white differences. 16% of experts reported a 100% environmental explanation, whereas 6% reported a 100% genetic explanation. This group leans left in general, and is more extremely left on this particular issue. (For the record, I find it hard to argue for either of these extreme positions. My recollection is that I was in the 50:50 camp). Psychologists are generally very Left inclined, and intelligence researchers somewhat Left inclined.

According to Duarte et al. (2015, their Fig. 1), the leftward tilt in psychology emerged over the last three decades, leading to a 14:1 ratio of left (progressive, democratic) to right (conservative, republican) psychology faculty. More recent data show an even larger disparity (16.8:1, Langbert, 2018). The leftward drift is reinforced by a liberal bias among journalists (e.g., Groseclose & Milyo, 2005; Kuypers, 2002; Lichter, Rothman, & Lichter, 1986) and in Wikipedia (e.g., Greenstein & Zhu, 2012, 2018). In addition, there have been increasing disruptions and attacks against scientists with a perceived right orientation at university talks (e.g., Duarte et al., 2015; HXA Executive Team et al., 2018; Inbar & Lammers, 2012; Jussim, 2018). Student groups have interrupted lectures, courses, and invited talks, and in some cases violently attacked scientists and scholars with a perceived right orientation (e.g., Charles Murray; Arm, 2016; Beinart, 2017). Finally, these events parallel a growing political divide between progressive and conservative factions in the US and other countries (Pew Research Center, 2017, p. 7f.). In the Pew survey, the gap between Democrats and Republicans in the US grew (in 10 political domains) from an average of 14.9% in 1994 to 35.8% in 2017, an increase of 20.9%. 20.8% of this increase (or 99.5% of the growth) was due to a shift to the left by Democrats, whereas 0.1% was due to a shift to the right by Republicans.

Of course, if our science is worth anything, none of this should matter. Leftists should follow the facts, and where the weight of evidence supports a conclusion, they should back that conclusion. Rightists should likewise follow the evidence. As James Flynn says, science should be allowed to do its work.

Respondents are not very religious, but are socially liberal, and lean Left.


Rindermann et al. worry that

 
See the patterns

The concept of general intelligence does not always gain general acceptance. It seems too general, and thus unable to explain the myriad sparkles of individual minds. Multiple intelligences, some people aver, are a better thing to have: a disparate tool set, not merely a single tool which has to be deployed whatever the circumstances.

Not so. The great utility of general intelligence is its generality. It can solve, or at least partially solve, the very large range of existential problems which beset our ancestors. It can help solve the current problems which beset us now, none of which were pressing on us in that particular form over the many generations in which our brains developed.

Indeed, having only specialized skills would be a risky strategy for any life form: whatever triumphs a specialist mind achieved in a very specific niche, it would be helpless if the niche disappeared and it had to survive in an unfamiliar environment. For many specialized brains that would be a death sentence. Seen from an evolutionary perspective, a case can be made for prioritizing general problem-solving ability above all else. Specialized skills are limiting: they are too refined for the rough and tumble of ordinary existence. Better a Jeep (General Purpose vehicle) than a low-slung racing car if you want to travel across the rough roads of the world. The latest developments in artificial intelligence are based on making that intelligence general, not specific.

I have talked about artificial intelligence before, in the distant old age of 2016. That was when the best game in town was AlphaGo. How ancient that seems now. It was programmed to do things, using the game-winning strategies developed by the programmers.

http://www.unz.com/jthompson/artificial-general-intelligence-von/

Now we have AlphaZero. It has been given an improved but still very simple brain, a few dozen layers deep, rather than the mere 3 layers of former years. Of course, there are no actual neurones or axons. These are concepts which serve to organize the way the programs run, and this is done on whole sets of servers, the way that most complex big-data problems are handled. It is this form of quasi-neural organisation which allows deep learning networks to operate. I see them as correlation accumulators, conditioned by the reward of winning into deriving the strategies which promote winning, and learning their craft by perpetual competition. Call it speeded intellectual evolution: thousands and thousands of games being won or lost (generations flourishing or perishing), leading to a well-conditioned, super-smart survivor, ready to take on the world.
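As a toy illustration of what "learning from nothing but the rules and the reward of winning" means, here is a sketch of self-play learning on a trivially small game. It is emphatically not AlphaZero (no neural network, no tree search); it is plain tabular Q-learning on the game of Nim, offered only to show the self-play loop.

```python
# Learning a game from nothing but its rules and the reward of winning, via self-play.
# Tabular Q-learning on Nim: 10 stones, take 1-3 per turn, whoever takes the last wins.
import random
from collections import defaultdict

Q = defaultdict(float)          # Q[(stones_left, take)] = value from the mover's viewpoint
alpha, epsilon, episodes = 0.5, 0.2, 50_000

def legal(stones):
    return [a for a in (1, 2, 3) if a <= stones]

def choose(stones):
    if random.random() < epsilon:                               # explore
        return random.choice(legal(stones))
    return max(legal(stones), key=lambda a: Q[(stones, a)])     # exploit

for _ in range(episodes):
    stones = 10
    while stones > 0:
        action = choose(stones)
        nxt = stones - action
        if nxt == 0:
            target = 1.0                                        # mover took the last stone: win
        else:
            # the next position belongs to the opponent, so its value is negated
            target = -max(Q[(nxt, a)] for a in legal(nxt))
        Q[(stones, action)] += alpha * (target - Q[(stones, action)])
        stones = nxt

# The learned policy leaves the opponent a multiple of 4 wherever that is possible,
# which is the known optimal strategy -- discovered here purely by self-play.
for stones in range(1, 11):
    best = max(legal(stones), key=lambda a: Q[(stones, a)])
    print(f"{stones} stones left -> take {best}")
```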

These changes in the depth of learning make a big difference. Just give AlphaZero the rules of a game, and it dominates that game, even though (or perhaps because) it has zero domain knowledge about it. It has been stripped of human wisdom. It is an ignorant but fast-learning student. And it dominates all games, once it has been given the rules. Zero knowledge, but an ability to learn. Finally, a blank slate.

What will all this mean for us? Some citizens are waiting for the Singularity, also known as the Second Coming. Forget it, says Hassabis. Artificial intelligence will be a tool we use: fine in some settings, not so good in others. For example, artificial intelligence is very good at looking at retinal scans and detecting anomalies which require further investigation. The artificial intelligence programs of old would have flagged up the particular scan for further investigation, and left it at that. Now the program flags up the scan, and also identifies the features which led to it being selected for investigation. The best human expert can look at the suggestion, and decide whether the artificial intelligence program has got it right. The expert now has an even better tool than before.

Can artificial intelligence be misused? Yes. So can a filing cabinet. Do you remember those? Code breakers with filing cabinets helped win the Second World War for the Allies. That code-breaking required very crude artificial intelligence tasked with doing just one job: calculating whether an encoded message could have been created by a particular rotor setting of an enemy Enigma machine. By rejecting, say, 240 impossible settings, a few possible ones could be studied in more detail. After breaking the code, real intelligence took over, and the knowledge stored in filing cabinets made sense of the messages.
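The logic of that rejection step can be sketched in a few lines. The toy "machine" below is a position-dependent shift cipher, nothing like a real Enigma, and the crib procedure is drastically simplified; the point is only to show brute-force elimination of impossible settings.

```python
# A drastically simplified stand-in for Bombe-style elimination: guess a fragment of
# plaintext (a "crib"), then brute-force every possible machine setting and reject
# those that could not have produced the intercepted ciphertext.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def toy_machine(text, setting):
    # Shift each letter by (setting + its position), mimicking a stepping rotor.
    return "".join(ALPHABET[(ALPHABET.index(c) + setting + i) % 26]
                   for i, c in enumerate(text))

secret_setting = 17
ciphertext = toy_machine("WEATHERREPORTFORTODAY", secret_setting)

crib = "WEATHER"          # the guessed opening of the message
candidates = [s for s in range(26)
              if toy_machine(crib, s) == ciphertext[:len(crib)]]

print("Settings consistent with the crib:", candidates)
# Impossible settings are rejected cheaply; the few survivors are then examined
# "by real intelligence", as the post puts it.
```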

Demis Hassabis was interviewed by Jim Al-Khalili on The Life Scientific, a BBC radio program.
I hope this link works for you, though it may be UK only.

https://www.bbc.co.uk/programmes/m0009zbj

 
• Category: Science • Tags: Artificial Intelligence 
Response to Birney, Raff, Rutherford, & Scally

It is good to have an essay which sets out a point of view clearly, so the 24th October blogpost by Ewan Birney, Jennifer Raff, Adam Rutherford, and Aylwyn Scally is welcome. A summary of this sort gives discussions of racial differences a focal point.

Race, genetics and pseudoscience: an explainer
http://ewanbirney.com/2019/10/race-genetics-and-pseudoscience-an-explainer.html

It is not up to me, but I wonder if a more balanced title would have been: Race, genetics and science? Birney et al. give a general introduction to genetic discoveries and then refer to “darker currents” affecting current research:

A small number of researchers, mostly well outside of the scientific mainstream, have seized upon some of the new findings and methods in human genetics, and are part of a social-media cottage-industry that disseminates and amplifies low-quality or distorted science, sometimes in the form of scientific papers, sometimes as internet memes – under the guise of euphemisms such as ‘race realism’ or ‘human biodiversity’. Their arguments, which focus on racial groupings and often on the alleged genetically-based intelligence differences between them, have the semblance of science, with technical-seeming tables, graphs, and charts. But they’re misleading in several important ways. The aim of this article is to provide an accessible guide for scientists, journalists, and the general public for understanding, criticising and pushing back against these arguments.

Strong stuff. These misleading cottage dwellers are low grade people, it seems! So, the Birney et al. paper is not so much an “explainer” as an attack on a position with which they do not agree. No problem with that, but these authors seek to clothe themselves in the robes of righteousness, rarely a good stance in scientific debate.

Birney et al. argue that the standard descriptions of races:

(the) crude categorisations used colloquially (black, white, East Asian etc.) were not reflected in actual patterns of genetic variation, meaning that differences and similarities in DNA between people did not perfectly match the traditional racial terms. The conclusion drawn from this observation is that race is therefore a socially constructed system, where we effectively agree on these terms, rather than their existing as essential or objective biological categories.

The key phrase is "did not perfectly match". Requiring a perfect match is a high bar. In fact, many people find that the new genetics is a pretty good match with the continental racial groupings in general use. Indeed, the authors concede as much, and also use these terms when making various points in their arguments. There are genetic groupings in different continents. It seems a great leap to say that because the match with genetic research is good, but not perfect, we can reject objective biological categories based on DNA. Many researchers regard DNA classifications as an improvement on the older ones: there is plenty of overlap, but DNA provides finer detail, which increases our understanding of group differences. The authors will have none of this. They argue:

Even though geography has been an important influence on human evolution, and geographical landmasses broadly align with the folk taxonomies of race, patterns of human genetic variation are much more complex, and reflect the long demographic history of humankind.

Forgive me if I give a cheer here: this seems a most welcome concession: if DNA provides descriptions which “broadly align” with the old ones, why not accept this major point of agreement, shake hands, and move on? Apparently, we cannot do so because there is more genetic diversity within Africa than anywhere else. Africans are “just as different from each other as Africans are to non-Africans”.

I probably misunderstood Cavalli-Sforza and his genetic trees, and imagined that these divergent African groups were all at a significant genetic distance from those Africans who went walkabout out of Africa and eventually became non-Africans. Can we ignore the finding that some African groups are genetically close and have a common ancestry, and that groups who later splinter away and leave Africa go on to develop differently on other continents? Furthermore, to understand what this diversity implies we should have population totals for each of the genetic African groups, and some evidence that they differ in intelligence. Currently, African country-level IQ estimates are pretty similar. Tribal differences would be interesting.

the real history of Homo sapiens is more like an overgrown thicket than a stately branching tree. Much of the population structure that we see today in ancestry testing results dates back only to a few thousand years or less. For example, the majority of European genomes are a mixture of at least three major groups within the last 10,000 years: the early hunter-gatherers who first populated the continent, a second wave of ancestry from the Near East associated with the spread of farming; and a third contribution from north Eurasia during the Bronze Age (2000–500 BCE).

So, genetic trees aren't acceptable. Furthermore, I don't see why a few thousand years is such a problem. Two thousand years, at one generation per 28 years, is 71 generations. If the time base is the 10,000 years of agriculture, that provides 357 generations for the breeder's equation to play out (a rough sketch of the arithmetic follows the reference below). Even 16 generations of very hard selection can bring about big changes in the relative proportions of different alleles. In fact, as luck would have it, a very recent publication shows that if brighter people move to London from coal-mining areas and the less bright stay mining coal, then even in the short space of the British Industrial Revolution those who go walkabout start to differ from the local populations they leave behind.

Genetic correlates of social stratification in Great Britain. Published: 21 October 2019
https://www.nature.com/articles/s41562-019-0757-5
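As a back-of-envelope illustration of how the breeder's equation compounds over those generations, here is a minimal sketch. The heritability and selection differential below are arbitrary placeholders, chosen only to show the mechanics; the cumulative shift scales linearly with both.

```python
# Breeder's equation: response per generation R = h2 * S, where h2 is narrow-sense
# heritability and S is the selection differential (mean of the selected parents
# minus the population mean). The values below are arbitrary placeholders.

h2 = 0.4                 # narrow-sense heritability of the trait
S = 0.05                 # selection differential per generation, in SD units
generations = 71         # roughly 2,000 years at ~28 years per generation

response_per_generation = h2 * S
total_shift = response_per_generation * generations   # assumes h2 and S stay constant

print(f"Shift per generation: {response_per_generation:.3f} SD")
print(f"Cumulative shift after {generations} generations: {total_shift:.2f} SD")
# 357 generations (10,000 years of agriculture) would scale the figure accordingly.
```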

Anyway, can’t we just display racial results using a Principal Components Analysis? Apparently not. The authors caution that data collection might be biased by:

existing cultural, anthropological or political groupings. If samples are collected based on pre-defined groupings, it’s entirely unsurprising that the analyses of these samples will return results that identify such groupings. This does not tell us that such taxonomies are inherent in human biology.

I can see that if geneticists only collect data from, say, Protestants and not Catholics, or Sunni and not Shia, or even from Republicans and not Democrats, this could be a problem. But everyone lives in cultural and political groupings, researchers included, so why not take broad samples (like UK Biobank) across the world and then see whether religious and political groupings show any genetic discriminators?
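As a sketch of the point about sampling and principal components, the following simulation (entirely synthetic data) draws genotypes for two populations whose allele frequencies have drifted apart a little, and checks whether the first principal component recovers the grouping.

```python
# Simulate genotypes (0/1/2 allele counts) for two populations with modestly different
# allele frequencies, then see whether the first principal component recovers the
# grouping. All frequencies are simulated; no real data are used.
import numpy as np

rng = np.random.default_rng(1)
n_per_group, n_snps = 100, 2000

freq_a = rng.uniform(0.1, 0.9, n_snps)                                # population A
freq_b = np.clip(freq_a + rng.normal(0, 0.1, n_snps), 0.01, 0.99)     # drifted population B

geno_a = rng.binomial(2, freq_a, size=(n_per_group, n_snps))
geno_b = rng.binomial(2, freq_b, size=(n_per_group, n_snps))
genotypes = np.vstack([geno_a, geno_b]).astype(float)

centred = genotypes - genotypes.mean(axis=0)                 # centre each SNP
_, _, vt = np.linalg.svd(centred, full_matrices=False)
pc1 = centred @ vt[0]                                        # scores on the first principal component

print("Population A, mean PC1 score:", pc1[:n_per_group].mean().round(2))
print("Population B, mean PC1 score:", pc1[n_per_group:].mean().round(2))
# Whether such clusters look "natural" depends heavily on which populations were
# sampled in the first place -- which is exactly the sampling point at issue.
```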

The authors are also against refining the concept of “race” to include new findings, and recommend it be dropped altogether, in favour of “populations”. No problem. They can use “populations”. Although different people may prefer different descriptors, if a name-change eases tensions, adopt it.

The authors then go on to make a general point about science:

It is often suggested that geneticists who emphasise the biological invalidity of race are under the thumb of political correctness, forced to suppress their real opinions in order to maintain their positions in the academy. Such accusations are unfounded and betray a lack of understanding of what motivates science.

 

Last Friday I asked this question of Andrew Roberts, whose one-volume biography "Churchill: Walking with Destiny" has been described as the best single-volume life of Churchill ever written. It is marvellous to be able to question someone who has actually read all Churchill's school reports. On a broader front, Roberts had the benefit of recently released documents, and was the first Churchill biographer to be given unfettered access to the whole of the Queen's late father King George VI's wartime diaries. What did the King know, you may wonder? The whole lot, it seems. Churchill had an audience with the King every week during the entire war, and told him everything that was on his mind that week, including every single secret: the plans for D-Day, secret actions abroad, everything. The King wrote it all down in his diaries. Why was Churchill willing to unburden himself thus, when they were not exactly soul mates on many matters, including the Abdication? Churchill said he spoke to the King because he was the only man in Britain who was not after his job. He also judged, correctly, that the King would keep his trap shut.

I asked Andrew Roberts the question because so many people, possibly to attack the notion of intelligence or scholastic ability being predictive of later life success, revel in the notion that Churchill was a dunce at school. The moral of that story, it would seem, is that dunces rise as far as swots, so damn the swots, and exams count for nothing.

Was Churchill really no good at school?

Answer: he was in the top third of his class in all subjects, and towards the top in History and English. He was also a rebel, which caused him trouble, but was to stand him in good stead in his later political life.

Roberts writes (pages 16 following):

It is rare for anyone to depict themselves as less intelligent than they genuinely are, but Churchill did so in his autobiography "My Early Life" in 1930, which needs to be read in the context of his colourful self-mythologizing rather than as strictly accurate history. His school reports utterly belie his claims to have been an academic dunce. Those for St George's Preparatory School in Ascot, which he entered just before his eighth birthday in 1882, record him in six successive terms as having come in the top half or top third of the class.

Churchill was regularly beaten at St George's, but this was not because of his work – his History results were always "good", "very good" or "exceedingly good" – but because his headmaster was a sadist, described by one alumnus as "an unconscious sodomite", who enjoyed beating young boys on their bare bottoms until they bled. Ostensibly the reason for these fortnightly beatings was Churchill's bad conduct, which was described as "very naughty", "still troublesome", "exceedingly bad", "very disgraceful" and so on. "He cannot be trusted to behave himself anywhere", wrote the headmaster, but "He has very good abilities."

Churchill's stay at St George's was one long feud with authority. His very good abilities included an excellent memory: he learned his Latin first declensions by heart.

His capacity for memorizing huge amounts of prose and verse stayed with him for life, and would continue to astonish contemporaries well into old age. Many were the occasions that he would quote reams of poetry or songs or speeches half a century after having learned them.

He was drawn to long Shakespeare soliloquies, but also to much of the repertoires of popular music hall performers. At his next school, in Hove, Churchill read voraciously, especially epic tales of heroic, often imperial, adventures. He came first in Classics, third in French, fourth in English, and near or at the bottom of the entire school for conduct. He remained unpunctual all his life.

Churchill claimed to have not learned any Latin or Greek at Harrow, but his school reports show that that was untrue. Furthermore, at fourteen he got a prize for reciting without error 1,200 lines of Macaulay’s Lays of Ancient Rome. He could quote whole scenes of Shakespeare’s plays and had no hesitation in correcting his masters if they misquoted.

Why did Churchill underplay his abilities? The answer is simple: if you boast about your abilities people will hate you; if you claim to be a fool they will be charmed by your modesty and by the abilities they detect in you. Always help the voter believe himself to be brighter than he is.

Does this false modesty explain why palpably clever people often claim to be not much different from the average? Probably. Whenever some famous figure in science tries to cheer us all up by confessing that they failed at school, or developed very late, I wonder if they are simply showing that they are clever enough to realize that the clever thing to do is to avoid being judged “too clever by half”.

In the spirit of empirical enquiry, we should request their entire series of school reports, and any further test results and higher education achievements. Without those we have no need to believe stories simply designed to make us feel good.

 
• Category: Science 

It is generally agreed that the Wechsler tests are one of the best measures of intelligence, and can be considered the gold standard. That is hardly surprising, because they cover 10 subtests and take over an hour, sometimes an hour and a half, for a clinical psychologist to administer. This gives the examiner plenty of opportunity to see the fine grain of individual responses, to probe within the limits allowed by the manuals to make sure that the person has every chance to reveal what they know, and to observe the way in which the person handles objects on non-verbal tasks. Watching block design is a window into how a person thinks. The examiner can also notice when an explanation has been misunderstood and when attention is wandering, and can stop the test and continue after a break or on another occasion. The results are presented together with a written evaluation of how the person approached the individual tests, identifying strengths and weaknesses, and often suggesting areas where subsequent testing might show higher scores.

Wechsler put together this decathlon of tests on the pragmatic basis of having examined how particular tests functioned, paying attention to the verbal versus non-verbal dichotomy, as well as complex versus simple, speeded versus untimed, thinking on the hoof versus acquired mental skills. It does a pretty good job. More pragmatically, having 10 tests (up to 15 if supplementary tests are included) gives both examiner and person something to ponder. Originally the 5 verbal tests were added together to give a Verbal IQ, and the other 5 a Performance IQ estimate. Later that moved to 4 factors based on 2 or sometimes 3 tests each, which was less reliable, but allowed more discussion about different skills, and the supposed discrepancies between those skills. I think it is over-factored at the moment, and attempted new subtests often get dropped at the next revision.

Given that there is a lot of debate about the appropriateness of intelligence testing of Africans, it is particularly interesting to look at Wechsler results to see if their finer detail about different skills can cast light on the general pattern of African mental abilities.

A cross-cultural comparison between South African and British students on the Wechsler Adult Intelligence Scales Third Edition (WAIS-III). Kate Cockcroft, Tracy Alloway, Evan Copello and Robyn Milligan. Front. Psychol., 13 March 2015 | https://doi.org/10.3389/fpsyg.2015.00297

https://www.frontiersin.org/articles/10.3389/fpsyg.2015.00297/full

There is debate regarding the appropriate use of Western cognitive measures with individuals from very diverse backgrounds to that of the norm population. Given the dated research in this area and the considerable socio-economic changes that South Africa has witnessed over the past 20 years, this paper reports on the use of the Wechsler Adult Intelligence Scale Third Edition (WAIS-III), the most commonly used measure of intelligence, with an English second language, multilingual, low socio-economic group of black, South African university students. Their performance on the WAIS-III was compared to that of a predominantly white, British, monolingual, higher socio-economic group. A multi-group confirmatory factor analysis showed that the WAIS-III lacks measurement invariance between the two groups, suggesting that it may be tapping different constructs in each group. The UK group significantly outperformed the SA group on the knowledge-based verbal, and some non-verbal subtests, while the SA group performed significantly better on measures of Processing Speed (PS). The groups did not differ significantly on the Matrix Reasoning subtest and on those working memory subtests with minimal reliance on language, which appear to be the least culturally biased. Group differences were investigated further in a set of principal components analyses, which revealed that the WAIS-III scores loaded differently for the UK and SA groups. While the SA group appeared to treat the Processing Speed subtests differently to those measuring perceptual organization and non-verbal reasoning, the UK group seemed to approach all of these subtests similarly. These results have important implications for the cognitive assessment of individuals from culturally, linguistically, and socio-economically diverse circumstances.

The first thing to note is that the authors found that the factor structure of the WAIS results was different in the black South Africans compared to the white British. The caveat is that the whites were not only white but relatively rich, and the black South Africans not only black but poor. The sample sizes are rather small for factor-analytic studies, but on a very strict interpretation of measurement invariance these two genetic groups should be seen, on the authors' account, as having an underlying difference in the structure of intelligence. I think the "measurement invariance" requirement is too harsh for all but large groups of subjects, and if we really applied it universally we would end up unable to discuss any group differences at all.

Also of importance is that there were few South Africans in the sample, only 107 as opposed to 349 for the British sample. Against that, far more data have been obtained on each person than is the case in group tests. There should be some bonus points for that, and for collecting Wechsler results in Africa, which are in short supply. Indeed, the authors gave 13 subtests, which is very good. However, factor studies on 107 people are not very likely to produce stable results.
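For readers who want a concrete handle on "similar factor structure", one simple check (much cruder than the paper's multi-group confirmatory factor analysis) is Tucker's congruence coefficient between the two groups' factor loadings. The loadings below are invented for illustration only.

```python
# Tucker's congruence coefficient: an index of how similar two groups' factor loadings
# are (values above ~.95 are conventionally read as "essentially identical").
# This is a cheaper check than the full multi-group CFA used in the paper, and the
# loadings below are invented for illustration only.
import numpy as np

def congruence(x, y):
    return float(np.dot(x, y) / np.sqrt(np.dot(x, x) * np.dot(y, y)))

# Hypothetical loadings of six subtests on a verbal factor in two groups.
loadings_uk = np.array([0.78, 0.74, 0.70, 0.65, 0.60, 0.55])
loadings_sa = np.array([0.30, 0.75, 0.68, 0.20, 0.66, 0.15])

print("Congruence coefficient:", round(congruence(loadings_uk, loadings_sa), 3))
# A value well below .95, as here, would suggest the factor is not behaving the same
# way in both groups.
```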

The Africans were not rich:

All of the SA participants came from low socio-economic circumstances. The majority (82%) resided in rural areas, in a basic brick house with running water and electricity. Hardly any had washing machines, microwave ovens, or tumble-dryers (98% did not). Less than 1% of families owned a motor vehicle or personal computer.

However, those conditions were similar to British life in the 1950s, at which time intelligence test scores were roughly IQ 100. On the other hand, educational provision then was probably much better than in present-day South Africa.

There is a 0.44 effect size on Performance IQ and a massive 1.53 effect size on Verbal IQ. These are big differences, given that both samples are university students. Another approach is to look at the results and list the South African subtest scores from strong to weak:

Matrix Reasoning 10.68
Information 10.19
Digit Symbol Coding 9.75
Symbol Search 9.71
Digit Span 9.35
Letter-Number Sequencing 9.17
Comprehension 8.99
Vocabulary 8.92
Block Design 8.67
Arithmetic 8.66
Similarities 8.12
Picture Completion 8.05

This is an interesting hierarchy, in that the very culture-loaded Information subtest (composed of general knowledge questions) is a relative strength, not a weakness. Amusingly, in the US context it used to be considered too culture-loaded to be included in measures of group differences.
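Since Wechsler subtest scaled scores are normed to a mean of 10 and a standard deviation of 3, subtest differences can be turned into rough effect sizes directly. The UK mean below is a placeholder, because the post lists only the South African means.

```python
# Converting a difference in Wechsler scaled scores into a rough effect size.
# Scaled scores are normed to mean 10, SD 3, so dividing a mean difference by 3
# gives an approximate Cohen's d against the norm SD (ignoring the samples' own SDs,
# which are not reported in the post).

sa_information = 10.19      # South African mean, from the table above
uk_information = 11.50      # placeholder UK mean, for illustration only

d = (uk_information - sa_information) / 3.0
print(f"Approximate effect size on Information: d = {d:.2f}")
```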

Continuing with the discussion about the results the authors say:

(1) There was no evidence of cultural biases in the Matrix Reasoning subtest or in the WM subtests that had minimal reliance on language. (2) All of the verbal and most non-verbal subtests, as well as the PS subtests, showed evidence of cross-cultural differences. (3) The SA and UK samples' scores revealed different factor structures.

 
• Category: Science • Tags: Africans, IQ, South Africa 

It takes a certain courage to title a paper: Genetic “General Intelligence,” Objectively Determined and Measured.

Javier de la Fuente, Gail Davies, Andrew D. Grotzinger, Elliot M. Tucker-Drob, Ian J. Deary

doi: https://doi.org/10.1101/766600

Objectively? Is such language permissible in contemporary science? Should we not instead be cautiously shuffling towards seven types of ambiguity, hedged in with eight layers of limitations? Who are these wild types willing to risk all in search of glory? In fact, a look at the names shows that this group have established an excellent track record, so they have in all probability chosen their words carefully, with the facts on their side.

First, a digression. Since 1969 there have been factions eager to denigrate intelligence, saying that its measures are based on arbitrary mental tasks, gathered together in the statistical artefact of a make-believe common factor, which is based on precisely nothing. How can one possibly counter this all-encompassing dismissal of the psychometric project?

One approach would be to link a common factor to a genetic substrate, and anchor it in the genome.

It has been known for 115 years that, in humans, diverse cognitive traits are positively intercorrelated; this forms the basis for the general factor of intelligence (g). We directly test for a genetic basis for g using data from seven different cognitive tests (N = 11,263 to N = 331,679) and genome-wide autosomal single nucleotide polymorphisms. A genetic g factor accounts for 58.4% (SE = 4.8%) of the genetic variance in the cognitive traits, with trait-specific genetic factors accounting for the remaining 41.6%. We distill genetic loci broadly relevant for many cognitive traits (g) from loci associated with only individual cognitive traits. These results elucidate the etiological basis for a long-known yet poorly-understood phenomenon, revealing a fundamental dimension of genetic sharing across diverse cognitive traits.

The authors go on to explain that tests vary in the amount of general or specific factors required for their successful completion. Some variance in each test is shared with all other tests (g) and some is specific to each test (s). Hundreds of studies show that the g factor replicates, and accounts for 40% of test variance. Twin studies show that general intelligence is strongly heritable, suggesting an overlapping genetic architecture. However, the GWAS approach does not distinguish between “g” and “s”. The authors try to search for “g” directly, using a multivariate molecular genetics approach to the hierarchy of intelligence, g at the top, cognitive domains second, and individual tests at the bottom.

They used UK Biobank, blessed be its name, and seven cognitive tests:

Reaction Time (n = 330,024; perceptual motor speed), Matrix Pattern Recognition (n = 11,356; nonverbal reasoning), Verbal Numerical Reasoning (VNR; n = 171,304; verbal and numeric problem solving; the test is called ‘Fluid intelligence’ in UK Biobank), Symbol Digit Substitution (n = 87,741; information processing speed), Pairs Matching Test (n = 331,679; episodic memory), Tower Rearranging (n = 11,263; executive functioning), and Trail Making Test – B (Trails-B; n = 78,547; executive functioning). A positive manifold of phenotypic correlations was observed across the seven cognitive traits.

The authors then investigate the genetic contribution of g to variation in each of the cognitive tests. Genetic correlation is simply the correlation between the genetic contributors to each of the measured abilities: it is correlation at the level of genes, not test scores. If the brain were made up of modules, one would expect such genetic correlations to be low. On the other hand, a brain largely based on general ability would show strong correlations. In fact, the genetic correlations range from .14 to .87, with a mean of .53, and the first principal component accounts for 62.17% of the genetic variance. The genetics of intelligence is largely g-based, it would seem.
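The "62.17% of the genetic variance" figure is the kind of number one can read off a genetic correlation matrix with a few lines of code: take the first eigenvalue as a share of the trace. The matrix below is invented for illustration; it is not the one estimated by de la Fuente et al.

```python
# How much variance does a single common dimension capture? Take the first eigenvalue
# of the genetic correlation matrix as a share of the trace. The matrix below is
# invented for illustration; it is not the one estimated in the paper.
import numpy as np

r_g = np.array([
    [1.00, 0.60, 0.55, 0.50],
    [0.60, 1.00, 0.65, 0.45],
    [0.55, 0.65, 1.00, 0.40],
    [0.50, 0.45, 0.40, 1.00],
])

eigenvalues = np.linalg.eigvalsh(r_g)           # ascending order for symmetric matrices
share = eigenvalues[-1] / eigenvalues.sum()     # trace of a correlation matrix = number of traits
print(f"First principal component: {share:.1%} of the (genetic) variance")
```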

Further work identifies the tests that are most g loaded:

Trails-B (95.30% genetic g; 4.70% genetic s), Tower (72.80% genetic g; 27.20% genetic s), Symbol Digit (69.10% genetic g; 30.90% genetic s), and Matrices (68.20% genetic g; 31.80% genetic s). Verbal Numerical Reasoning (51.40% genetic g; 48.60% genetic s) and Memory (42.40% genetic g; 57.60% genetic s) are more evenly split. Reaction Time has the majority of its genetic influence from a genetic s (9.50% genetic g; 90.50% genetic s). We emphasize one important implication of these results, i.e. that genetic analyses of some of these individual traits will largely reveal results relevant to g rather than to the specific abilities thought to be required to perform the test.

Reaction time is somewhat of an outlier from the genetic point of view, as might be expected by the very simple, knee-jerk nature of the task.

Anyway, which locations on the genome are contributors to g? Getting an answer is important, since a GWAS hit could be generalizable to a broad universe of cognitive traits, or specific to a particular task, and knowing which makes a difference. In the explanation below, Q is a measure of heterogeneity, opposite to g.

Miami plot of unique, independent hits for genetic g (top) and Q (bottom). Q is a heterogeneity statistic that indexes whether a SNP evinces patterns of associations with the cognitive traits that depart from the pattern that would be expected if it were to act on the traits via genetic g. The solid grey horizontal lines are the genome-wide significance threshold (p < 5×10−8) and the dotted grey horizontal lines are the suggestive threshold (p < 1×10−5). The following genome-wide significant loci are highlighted: red triangles: g loci unique of univariate loci; blue triangles: g loci in common with univariate loci; green circles: univariate loci not in common with g loci; yellow triangles: g loci in common with Q loci; yellow diamonds: Q loci unique of g loci.

Overall, we identified 30 genome-wide significant (p < 5×10−8) loci for genetic g, 23 of which were common with the univariate GWAS of the individual cognitive traits that served as the basis for our multivariate analysis. We identified, in total, 24 genome-wide significant loci for Q, 3 of which were significantly associated with genetic g (and therefore likely to be relevant to more specific cognitive traits, and false discoveries on g) and 15 of which were significantly associated with at least one individual cognitive trait in the test-specific GWASs.

Although it was not intended to be part of the study, seven new locations for memory were found, and some of those locations have been associated with schizophrenia, anti-saccade response, linguistic errors, hand-grip strength and bone mineral density.

 
• Category: Science • Tags: Behavior Genetics, General Intelligence, IQ 


It cannot have been easy to be the first reporter on the tragic scene at Aberfan. In 1966 a vast slagheap of colliery spoil above a small Welsh mining village gave way, roaring down the valley, flattening the school and killing 116 children, and destroying houses, killing 28 adults as well. When John Humphrys arrived, the miners, having heard the dreadful news, had come up from the pits to dig frantically in the ruins in search of their children. The slagheap was a known danger which had been left unresolved, and it was natural that for ever afterwards John had no time for evasive, self-excusing authority figures. Far from being disordered, he became a post-traumatic interrogator, clearly standing up for those whose fears and pleas had been ignored.

This morning he finished his 32 years as the chief interviewer in a morning radio program. Just think of that: the best show in town is on “steam radio”. What was the attraction? In British life, important people watch TV sparingly, and not at all in the mornings. Radio is the acceptable morning companion for them, because they can have breakfast, read the papers and even be driven into work while the program is on. The Today program is compulsory listening and a treasured mouthpiece for the movers and shakers: Prime Ministers, Government Ministers, Members of Parliament, Archbishops, Rabbis, Government Chief scientists, Nobel Laureates, novelists, playwrights, and assorted talkative worthies. They even, on occasion, invited in the odd psychologist.

Why talk about the program to a world-wide audience many of whom know nothing about him? Because truth matters. Many journalists have neither the energy nor the talent to hunt for the story behind the official story. Others claim to have that very thing, but don’t have enough evidence to back it up. What Today achieved was a forensic examination of the high and mighty, who were often revealed as low and crafty, their smarm disarmed, their schemes exposed. If there was blood on the floor, it was in a good cause. The best interviews kept people in their kitchens, the morning commute postponed, or stuck in their cars outside the office, waiting for the final explosion as a great edifice of conceit toppled.

John developed a reputation as a fearsome, adversarial and relentless questioner, quick to pounce on chinks in the well-practiced political operator’s armour so as to land the killer question which impaled them forever: unable to respond truthfully to a fundamental enquiry. Politicians wanted to avoid him, and at the same time wanted most of all to get the better of him, which they rarely did.
His trick was basically simple: with the help of the all night Today team he came in at 4 am to read all the day’s newspapers, many of the relevant government papers and statements, and to collect the clippings of the previous promises, clarifications, evasions and protestations uttered by the great and good so as to be able to skewer any of them who tried to re-write the past.

Sometimes he overdid it, and became an interrupter rather than a questioner, but that was mostly excusable because he had heard the evasive excuses thousands of times before. The Today program became his platform for investigative journalism, and he moulded it to his character. It became his program. He was as big as it. It became the premier program in Britain, and could pull in the top leaders and thinkers from all over the world. To be invited was mostly a blessing. To be evaluated beforehand was sometimes more demanding than the interview itself. The researcher, inevitably a woman, would begin her telephone call in a warm and mildly seductive manner, making you feel important and worth listening to. Then, under the guise of getting a few notes for John, she would switch into an "are you worth speaking to" interview. The cull was brutal: you had to be quick on your feet, able to speak more than coherently, and had to expose your preferred positions, your quips, quotes and treasured answers in the hope of getting on the short list. Sometimes they would come back with a gentle excuse: the story had moved on. In disappointment one might find the next morning that the story had indeed moved on, but to another interviewee.

Waiting in the green room was almost as much fun as being interviewed. I began conversations with notable figures I would otherwise never have had access to, and continued the conversations after we had each been grilled. Fun to find that an ex-Chancellor had the same opinions on quantitative easing, or that a scientist was able to quickly settle a number of issues first hand.

I considered John Humphrys to be a mate of mine, on the very slender basis that, over my 12 to 18 or so Today program interviews, he was often the interviewer, and since my appearances were usually after 8.30 am I would stay on to talk with him. We had both lost a big chunk of our pensions when the Equitable Life Assurance Society (the oldest in the world) went bust, so we discussed the various bits of advice we had received as to whether we should cash in the slender remains or hand over what was left to some other company. At other times I would jokingly brief him on what the PR agents of famous actors had advised them to say, as they waited in the green room before seeing him. He hated PR agents above all else. I met actors and actresses, ex-Chancellors of the Exchequer and the more oratorical Members of Parliament, such as Tony Benn.

The program showed the power of the spoken word, with the advantage of being able to focus on the content and tone of what was being said, without the distractions of the speaker’s appearance or facial expression. It required concentration, but repaid attention.

Of course, the BBC has a house style, and some preferences. Try as it might to be fair, that is always in question. The Today program is the ultimate meeting place of the chattering classes, and sometimes no more than studiously correct with the whispering classes, who are prone to lamentable populism and blunt opinions. There are rules about these things.

I was almost always called in on non-political stories, often associated with trauma and its treatment, and plays and books on those themes, and occasionally on broader issues. Little controversy there. Nonetheless, I did get interviewed on the Rotherham grooming gangs a few years ago. That side of life, in which those without a voice were cruelly mistreated, was one which John investigated with compassion. He was able to get the painful testimony of people who did not have PR agents and had little facility with public speaking. He managed to help them speak their private thoughts.

In my view he was wary and critical of the Movers and Shakers, and more on the side of the moved and shaken.

 
• Category: Culture/Society • Tags: Britain 

There is a popular genre of commentary which wishes to show that bright people make as many errors as less bright people, perhaps as a consequence of divine retribution. “Einstein made an error in maths which was spotted by a bus conductor” lifts the hearts of some readers. Of course, bright people make errors. Do they do so at a higher, lower, or the same rate as everyone else?

Can we test this in a way that favours the average citizen? How about ignoring high finance and stock picking and just concentrating on basic aspects of household financial decision-making, the nitty gritty of so many of our lives? Can we also avoid the charge that some have levelled against the correlation of intelligence and earnings and wealth, that some people just don’t want to be rich? Let us restrict the focus to seeing whether people, whatever their income, can avoid costly financial mistakes. I assume that even those who don’t wish to pursue inordinate wealth, but instead want to live happily on an average income, still wish to handle their meagre emoluments in a sensible fashion, avoiding hare-brained incompetence and fiscal irresponsibility.

Agarwal and Mazumder think this matter is worth exploring.

Sumit Agarwal and Bhashkar Mazumder. Cognitive Abilities and Household Financial Decision Making. American Economic Journal: Applied Economics 2013, 5(1): 193–207. http://dx.doi.org/10.1257/app.5.1.193

https://drive.google.com/file/d/1tYKD-6nZnIJd9H2Pt2-bXWFCcIkEthWQ/view?usp=sharing

We analyze the effects of cognitive abilities on two examples of consumer financial decisions where suboptimal behavior is well defined. The first example features the optimal use of credit cards for convenience transactions after a balance transfer and the second involves a financial mistake on a home equity loan application. We find that consumers with higher overall test scores, and specifically those with higher math scores, are substantially less likely to make a financial mistake. These mistakes are generally not associated with non-math test scores.

A 1 standard deviation increase in the composite AFQT score is associated with a 24 percentage point increase in the probability that a consumer will discover the optimal balance transfer strategy and an 11 percentage point decrease in the likelihood of making a rate-changing mistake in the home loan application process. Interestingly, we find that verbal scores are not at all associated with balance transfer mistakes and are much less strongly associated with rate-changing mistakes.

This was based on military personnel for whom results were available on the Armed Forces Qualification Test (AFQT), linked to data from a credit card company, so these are real data, able to pick up sub-optimal choices, more commonly known as mistakes.

The "balance-transfer mistake" is to be fooled by a low "teaser" APR on a new card into using the new card rather than the old one during the transfer period. While the old balance is transferred at the low rate, new purchases on the new card accrue a much higher rate. Sneaky. Some will learn from their mistake as the monthly bills come in with surprisingly high interest charges, and have a "eureka" moment. Others will take more time to learn their mistake.
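A rough sketch of why the mistake is costly, with all balances and rates invented for illustration (and the interest arithmetic simplified to one charge per month, with no compounding):

```python
# Why the balance-transfer mistake costs money: new purchases on the new card accrue
# the high purchase APR, and payments are applied to the cheap transferred balance
# first, so the expensive purchase balance sits there. All figures below are invented,
# and the interest arithmetic is deliberately simplified (no compounding).

monthly_spend = 1_000.0
purchase_apr = 0.20          # 20% APR on new purchases made on the new card
months = 6

interest_if_mistaken = 0.0   # new purchases put on the NEW card, stuck behind the transfer
carried_purchases = 0.0
for _ in range(months):
    carried_purchases += monthly_spend
    interest_if_mistaken += carried_purchases * purchase_apr / 12

interest_if_optimal = 0.0    # new purchases stay on the OLD card and are paid in full

print(f"Extra interest from the mistake over {months} months: "
      f"£{interest_if_mistaken - interest_if_optimal:.2f}")
```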

Lower-ability people get fooled more often, brighter people learn from their mistake more often, and the brightest (those above the 70th percentile) never make the mistake in the first place, so there is a hierarchy of “eureka” moments.

The next issue the authors consider is the mistake of picking the wrong rate when requesting a loan based on a home valuation. This is another opportunity for the lender to crank up the interest rate, but in this case the rates are visible before the loan is taken out, so it is possible to turn down a poor offer, and try another lender.

Again, the mistake (which increases the cost of the loan by 2.7%) is made at lower ability levels, and not by those above the 70th percentile.
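
The post gives the cost of the mistake as 2.7 per cent. If that figure is read as extra percentage points on the interest rate (one plausible reading, flagged here as an assumption), a back-of-envelope sketch shows what it means in cash terms; the loan size and baseline rate below are hypothetical.

```python
# A back-of-envelope sketch. It reads the 2.7 figure as extra percentage
# points on the loan's interest rate (an assumption, not a claim about the
# paper); the loan size and baseline rate are hypothetical.

principal = 100_000                     # hypothetical home equity loan
baseline_rate = 0.06                    # hypothetical rate on a well-chosen offer
mistaken_rate = baseline_rate + 0.027   # rate after the rate-changing mistake

extra_interest_per_year = principal * (mistaken_rate - baseline_rate)
print(f"Extra interest per year on a ${principal:,} loan: "
      f"${extra_interest_per_year:,.0f}")
```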

Interestingly, given that the 2008 financial crisis was sometimes portrayed as being particularly unfair to black borrowers, the race difference in this study is in the opposite direction, so long as one controls for intelligence.

Interestingly, we find that the effect of being black is actually positive conditional on AFQT scores and education. This finding is interesting in light of the theoretical model developed by Lang and Manove (2011) who argue that blacks of similar ability to whites may need to signal their productivity to employers by acquiring more education. They cite studies suggesting that blacks are not rewarded the same as whites in the labor market for equivalent AFQT scores. It is possible that the increased likelihood of discovering the optimal balance transfer strategy among blacks who have the same measured ability as whites, reflects their greater investments along other dimensions of human capital.

Being intelligent implies an ability to learn quickly, so it is interesting to plot out how long it takes people to recognize their financial errors and move to an optimal strategy. High scorers avoid the errors altogether (they can see the error in advance, and avoid it), while those of lower ability take about 5 months to correct their mistakes.

To illustrate the effects of AFQT scores on the speed at which individuals learn, we plot in Figure 3 the unadjusted mean AFQT scores for borrowers based on how many months it took them to discover the optimal strategy. The chart shows that AFQT is monotonically decreasing in the number of months it takes borrowers to learn. We estimate that a 1 standard deviation increase in AFQT scores is associated with a 1.5 month reduction in the time it takes to achieve optimal behavior. This analysis suggests that cognitive skills also affect the “intensive” margin of optimal financial decision-making behavior.
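
Putting the quoted numbers together, a minimal sketch of that relationship looks like this. The slope of 1.5 months per standard deviation is the paper's estimate; the 3.5-month baseline at the mean score is my own assumption, chosen so that a borrower one standard deviation below the mean comes out at roughly the five months mentioned above.

```python
# Sketch of the reported learning-speed gradient. The slope (1.5 fewer
# months per standard deviation of AFQT) is the paper's estimate; the
# 3.5-month baseline at the mean score is an assumption, set so that a
# borrower one SD below the mean takes roughly the five months mentioned
# in the post.

SLOPE_MONTHS_PER_SD = -1.5   # from the quoted estimate
BASELINE_MONTHS = 3.5        # assumed months-to-learn at the mean AFQT score

def predicted_months_to_learn(afqt_z: float) -> float:
    """Linear prediction of months taken to adopt the optimal strategy."""
    return max(0.0, BASELINE_MONTHS + SLOPE_MONTHS_PER_SD * afqt_z)

for z in (-2, -1, 0, 1, 2):
    print(f"AFQT z-score {z:+d}: about {predicted_months_to_learn(z):.1f} months")
```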

These two real-life financial errors show the importance of cognitive ability, which may also explain a wider range of bad financial choices.

In any case, we think that our analysis likely only touches the tip of the iceberg in terms of the effects of poor financial decision making, due to low cognitive ability, on individual and social welfare. It is highly plausible that similar types of financial mistakes have played a role in explaining loan default, foreclosures, and bankruptcies. In a highly complementary paper to ours, Gerardi, Goette, and Meier (2010) find a strong association between numerical ability and mortgage delinquency and default during the recent financial crisis. Future research may shed more light on the quantitative importance of cognitive ability.

In summary, I think that there are plentiful studies showing that intelligence tests produce scores which are predictive of a wide variety of important real-life measures: earnings, savings and managing loans.

 
• Category: Science • Tags: IQ 

“All happy families are alike” declaimed Tolstoy, so as to then add the equally unsubstantiated coda: “each unhappy family is unhappy in its own way”.

Readers may say: “So true, so very true”, but that would be in the literary sense, in that if it sounds profound it is judged to be so. Like all novelists, Tolstoy was not upon oath. It was enough that his observations be thought profound for them to be valued as such. Empirical support was not required. The truth about families may be different: unhappy families might be made alike by their troubles, while happy families might be free to divert themselves and become unalike in their own disparate individual ways.

Sociologists often regard families as powerhouses of social privilege, able to provide children unmerited advantages in the form of money, experiences, tuition and social connections. In this theory rich families are like powerful artillery guns, shooting their children further forward than the families of equally meritorious poor children, giving them fame, fortune and a headstart in the race for social advancement.

What emerges if we take an empirical approach to family success? Charles Murray (1998) looked at the NLSY79 data set, seeing to what extent intelligence test results explained later earnings levels.

https://analyseeconomique.wordpress.com/2012/11/10/income-inequality-and-iq-by-charles-murray/

The most recent calendar year with income data is 1993. All dollar figures are stated in 1993 dollars. The measure of IQ is the Armed Forces Qualification Test, 1989 scoring version, normalized for each year’s birth cohort to an IQ metric with a mean of 100 and a standard deviation of 15 (NLSY subjects were born from 1957 through 1964).
Children were put into 5 groups for analytical purposes. In the IQ metric, this means break points at scores of approximately 80, 90, 110, and 120.
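
For concreteness, here is a small Python sketch of those five groups using the break points just quoted. “Very Dull”, “Normal” and “Very Bright” are the labels used later in the post; “Dull” and “Bright” are assumed labels for the two in-between bands.

```python
# The five analytical groups described above: break points at IQ 80, 90,
# 110 and 120 on a mean-100, SD-15 metric. "Very Dull", "Normal" and
# "Very Bright" are labels used in the post; "Dull" and "Bright" are
# assumed labels for the in-between bands.

def cognitive_class(iq: float) -> str:
    """Assign an IQ score to one of the five analytical groups."""
    if iq < 80:
        return "Very Dull"
    elif iq < 90:
        return "Dull"
    elif iq < 110:
        return "Normal"
    elif iq < 120:
        return "Bright"
    return "Very Bright"

for score in (75, 85, 100, 115, 125):
    print(score, cognitive_class(score))
```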

The Very Bright start slowly, most probably because they are at college gaining degrees which will help them get higher incomes in the long term, as shown by their rising incomes after 1982. Everyone gets age-related salary increases, but the two lowest groups reach a plateau very quickly.

This is the pattern for total family income, which includes welfare payments and spouse’s earnings.

The effect of including welfare payments and spouse’s income (the two most common types of income added to total family income) is to narrow the proportional gaps among cognitive classes while tending to widen the raw dollar gaps. The regularity of the statistical relationship is similar for both measures. The bivariate correlation of IQ to income in this population of adults in their late twenties to mid-thirties was .37 for earned income and .38 for total family income.

Those who take a largely sociological perspective might still want to argue that social forces determine both earnings and intelligence, such that social class is the hidden but fundamental factor. In fact, putting socio-economic status in the regression equation (Beta .10) does not make it more powerful than the effects of IQ (Beta .31).

An extra IQ point is associated with an extra $462 in wages independently of parental SES. However, it is still possible to argue that there are some unmeasured aspects of growing up in that particular family that ensure that family life (social class) is the main driver, and that the socio-economic status measures do not capture those unspecified factors.
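
To make that coefficient concrete, here is a minimal sketch using the roughly $462 of extra annual wages per IQ point, independent of parental SES; the comparison of IQ 100 with IQ 115 (one standard deviation) is my own choice of illustration.

```python
# What the quoted coefficient implies in cash terms: roughly $462 of extra
# annual wages per IQ point, independent of parental SES (1993 dollars).
# The comparison points below are chosen only for illustration.

DOLLARS_PER_IQ_POINT = 462

def implied_wage_gap(iq_low: float, iq_high: float) -> float:
    """Wage difference implied by the quoted per-point coefficient."""
    return (iq_high - iq_low) * DOLLARS_PER_IQ_POINT

print(f"IQ 100 vs. IQ 115 (one SD): about ${implied_wage_gap(100, 115):,.0f} a year")
```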

Charles Murray took a look at this by using the simple technique of comparing one sibling in a family with another sibling. That is, he compares siblings who had grown up in the same home, with the same parents, but who had different IQs. If families are the engines of privilege that sociologists assume, each sibling will have an equal chance of being propelled forwards into further privilege and higher earnings.

Murray’s method was to pick a sibling in the average range (the “normals” IQs 90-109), and then find the IQ results for another sibling in that same family. By the way, these are biological siblings living with both biological parents. “Families”, they used to be called.

What does this method reveal? If families really are the engines of privilege, siblings will be pretty much alike: in their intelligence scores (which some aver are no more than measures of social class); in their educational achievements (which some aver are heavily manipulated by the social class of parents); and in their higher degrees (which some aver are very heavily manipulated by the social class and wealth of parents). All of these translate into the ability to command higher wages.

To start with intelligence, just look at the wide range of intellectual levels to be found in normal families. Yes, most of the siblings are in the normal range of intelligence, but there is evidently considerable regression to the mean. 199 out of 2148 (9.3%) of these much loved, pampered children, despite being read to every evening and exposed to the uplifting parental level of discourse, are in the very dull range. Another 421 of these children (19.6%) are below average, something which never happens in Lake Wobegon. On the brighter side, 15.2% are brighter than average, and 6% are very bright.
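
Tabulating those figures (the counts for the two brighter bands are backed out from the quoted percentages, since the post gives only percentages for them):

```python
# Tabulating the sibling figures quoted above. Counts for the two brighter
# bands are not given in the post, so they are backed out from the quoted
# percentages and rounded (an approximation); "Normal" is the remainder.

TOTAL_SIBLINGS = 2148

counts = {
    "Very Dull": 199,                             # 9.3% per the post
    "Dull (below average)": 421,                  # 19.6% per the post
    "Bright": round(0.152 * TOTAL_SIBLINGS),      # ~15.2% per the post
    "Very Bright": round(0.06 * TOTAL_SIBLINGS),  # ~6% per the post
}
counts["Normal"] = TOTAL_SIBLINGS - sum(counts.values())

for band, n in counts.items():
    print(f"{band:22s} {n:5d}  ({n / TOTAL_SIBLINGS:5.1%})")
```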

In summary, the family is not a very efficient engine of social manipulation as regards intelligence. The siblings of these average children have drifted down somewhat, which on this reading could be because of measurement error or a genetic regression effect, but they have not all been propelled forwards by social advantage. Try as they might, parents cannot pass on all of their normalcy to all their children. Something has caused these siblings to vary, and it is unlikely to be something which is being manipulated within the family.

Will years of education show a strong family effect?

Not really. The picture is very much like that for intelligence. There are more siblings (475) with below average years of education than with above average years of education (375).

Murray observes:

Same household, same parents, different IQs – and markedly different educational careers. The typical Normal had 1.6 years more education than his Very Dull sibling and 1.9 years less education than his Very Bright sibling. These differences in mean years of education translate into wide differences in the probability of getting a college degree.

Murray looks at the effects of degrees and of occupational privilege, and eventually at what intelligence differences mean for earned income, the subject of our current interest.

In a very telling passage, Murray reflects on these results:

 
• Category: Science • Tags: IQ, Iq and Wealth, Wealth inequality 
James Thompson
About James Thompson

James Thompson has lectured in Psychology at the University of London all his working life. His first publication and conference presentation was a critique of Jensen’s 1969 paper, with Arthur Jensen in the audience. He also taught Arthur how to use an English public telephone. Many topics have taken up his attention since then, but mostly he comments on intelligence research.