Predicting the future of your marriage, algorithmically

Not in fact an algorithm, but never mind

Humans like to think that they know what’s coming, but algorithms usually come out on top

Some things you can’t describe in numbers, we are told. You can’t reduce the beauty of romantic love to a list on a spreadsheet; you can’t express the pleasure of a good glass of wine with a formula; you can’t replace the knowledge gained through a lifetime of experience with some reductionist algorithm.

What rubbish. Of course you can. And as we reported yesterday, people have tried: the creators of the dating website OK Cupid claim that an algorithm based on the answers to three apparently daft questions (“Do you like horror movies?” “Have you ever travelled around another country alone?” and “Wouldn’t it be fun to chuck it all and go live on a sailboat?”) is predictive of whether a couple will stay together.

Humanity loves to guess the future. Unfortunately, we’re not very good at it. That doesn’t stop a whole industry – several industries – of people trying to do it anyway, but on the whole, we’re nowhere near as good as we think. Pundits tell us what will happen next in complicated geopolitical crises; stock pickers tell us which companies’ shares will go up and which will go down; Westminster insiders tell us who will win the next election. It’s usually based on ineffables such as gut feeling. And it’s usually wrong, or at least not much better than chance; memorably, Philip Tetlock, a psychologist at the University of Pennsylvania, asked a series of experts to predict various events in world politics, followed the results in a major study for a quarter of a century, and found that, on average, the experts were (in his words) less accurate than “a dart-throwing chimpanzee”.

So people have tried to do better. And one way that they’ve done that is to take forecasts out of the hands of experts, and indeed humans, altogether. We’re silly creatures, humans, given to bias and self-delusion and post-hoc justifications for our mistakes – but an algorithm is not.

One well-known example is predicting the value of a wine vintage. Bordeaux wines tend not to taste very good for quite a long time after they’re bottled; they’re acidic and astringent, and it takes some years for them to become drinkable. But they’re also not equal: some years the wine will develop into a great vintage, other years into mediocre plonk. Being able to tell, at the time of bottling, which years will be best would be worth millions to vintners, who could stock up cheaply and then resell when the price goes up. Experts tried to predict this by tasting from the barrel and judging from the year’s weather.

But, then, in the Eighties, an economist and wine enthusiast called Orley Ashenfelter thought he could do better, with an algorithm. If you’re interested, the algorithm was: Δ price = −12.15 + (β₁ × winter rainfall) + (β₂ × average summer temperature) + (β₃ × harvest rainfall) + (β₄ × age of vintage). Pundits were furious and appalled: Robert M Parker, America’s best-known wine expert, called it “ludicrous and absurd”. But it wildly outperformed even the best expert predictions.
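For the curious, Ashenfelter’s model is just a linear equation, so it fits in a few lines of code. The sketch below uses the intercept quoted above; the β coefficients are hypothetical placeholders, since the article doesn’t give the fitted values.

```python
def predicted_price_change(winter_rainfall_mm, avg_summer_temp_c,
                           harvest_rainfall_mm, vintage_age_years,
                           betas=(0.0012, 0.62, -0.0039, 0.024)):
    """Ashenfelter-style linear vintage model.

    The -12.15 intercept is from the article; the beta values here are
    illustrative guesses, not Ashenfelter's actual fitted coefficients.
    """
    b1, b2, b3, b4 = betas
    return (-12.15
            + b1 * winter_rainfall_mm    # wet winters help the vines
            + b2 * avg_summer_temp_c     # hot summers ripen the grapes
            + b3 * harvest_rainfall_mm   # rain at harvest dilutes the fruit
            + b4 * vintage_age_years)    # older vintages command more
```

With any positive β₂ and negative β₃, the model encodes the folk wisdom that a hot, dry growing season makes for a better year – which is precisely the sort of judgment the tasters thought only a palate could make.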

In all sorts of complex areas, algorithms do better than humans at telling the future. In his book Thinking, Fast and Slow, the psychologist Daniel Kahneman discusses a study of counsellors at a university, who predicted the grades of their students at the end of the year. They had access to the students’ high-school grades, to several aptitude tests and to a four-page personal statement, as well as a 45-minute interview. They were compared with an algorithm that had access to one aptitude test and the school grades. The algorithm did better than the counsellors nearly every time.

The same result was found for all manner of complex, uncertain things. Whether a criminal will reoffend, how long cancer patients will live, who will win a football match, how much of a credit risk someone is, the value of stock. “In every case, the accuracy of experts was matched or exceeded by a simple algorithm,” says Kahneman; so, “whenever we can replace human judgment by a formula, we should at least consider it”.

That’s not to say all algorithms are equal, of course; the world is full of nonsense equations and formulas for the perfect pancake or the most depressing day of the year, written by charlatans for PR purposes. And if I were to guess, I would imagine that the sailboat-based algorithm is probably not all that reliable.

But romantic love is certainly susceptible to the cold analysis of data. Robyn Dawes, a psychologist, put together an even simpler model and showed that it was an extremely good predictor of whether a marriage would work. It is: “frequency of lovemaking, minus frequency of quarrels”. If you put in the numbers and come up with a negative figure, you’re in trouble.

We don’t like this, in general. We like to think that we’re better than some robotic input-output machine at predicting the future of the field we’re experts in, let alone the future of our marriage. But we’re not, and it would be best if we just admitted it.
