How monopoly was invented to demonstrate the evils of capitalism

'Buy land – they aren't making it any more,' quipped Mark Twain. It's a maxim that would certainly serve you well in a game of Monopoly, the bestselling board game that has taught generations of children to buy up property, stack it with hotels, and charge fellow players sky-high rents for the privilege of accidentally landing there.

The game's little-known inventor, Elizabeth Magie, would no doubt have made herself go directly to jail if she'd lived to know just how influential today's twisted version of her game has turned out to be. Why? Because it encourages its players to celebrate exactly the opposite values to those she intended to champion.

Born in 1866, Magie was an outspoken rebel against the norms and politics of her times. She was unmarried into her 40s, independent and proud of it, and made her point with a publicity stunt. Taking out a newspaper advertisement, she offered herself as a 'young woman American slave' for sale to the highest bidder. Her aim, she told shocked readers, was to highlight the subordinate position of women in society. 'We are not machines,' she said. 'Girls have minds, desires, hopes and ambition.'

In addition to confronting gender politics, Magie decided to take on the capitalist system of property ownership – this time not through a publicity stunt but in the form of a board game. The inspiration began with a book that her father, the anti-monopolist politician James Magie, had handed to her. In the pages of Henry George's classic, Progress and Poverty (1879), she encountered his conviction that 'the equal right of all men to use the land is as clear as their equal right to breathe the air – it is a right proclaimed by the fact of their existence'.

Travelling around America in the 1870s, George had witnessed persistent destitution amid growing wealth, and he believed it was largely the inequity of land ownership that bound these two forces – poverty and progress – together. So instead of following Twain by encouraging his fellow citizens to buy land, he called on the state to tax it. On what grounds? Because much of land's value comes not from what is built on the plot but from nature's gift of water or minerals that might lie beneath its surface, or from the communally created value of its surroundings: nearby roads and railways; a thriving economy; a safe neighbourhood; good local schools and hospitals. And he argued that the tax receipts should be invested on behalf of all.

Determined to prove the merit of George's proposal, Magie invented and in 1904 patented what she called the Landlord's Game. Laid out on the board as a circuit (which was a novelty at the time), it was populated with streets and landmarks for sale. The key innovation of her game, however, lay in the two sets of rules that she wrote for playing it.

Under the 'Prosperity' set of rules, every player gained each time someone acquired a new property (designed to reflect George's policy of taxing the value of land), and the game was won (by all!) when the player who had started out with the least money had doubled it. Under the 'Monopolist' set of rules, in contrast, players got ahead by acquiring properties and collecting rent from all those who were unfortunate enough to land there – and whoever managed to bankrupt the rest emerged as the sole winner (sound a little familiar?).

The purpose of the dual sets of rules, said Magie, was for players to experience a 'practical demonstration of the present system of land grabbing with all its usual outcomes and consequences' and hence to understand how different approaches to property ownership can lead to vastly different social outcomes. 'It might well have been called "The Game of Life",' remarked Magie, 'as it contains all the elements of success and failure in the real world, and the object is the same as the human race in general seems to have, ie, the accumulation of wealth.'

The game was soon a hit among Left-wing intellectuals, on college campuses including the Wharton School, Harvard and Columbia, and also among Quaker communities, some of which modified the rules and redrew the board with street names from Atlantic City. Among the players of this Quaker adaptation was an unemployed man called Charles Darrow, who later sold such a modified version to the games company Parker Brothers as his own.

Once the game's true origins came to light, Parker Brothers bought up Magie's patent, but then re-launched the board game simply as Monopoly, and provided the eager public with just one set of rules: those that celebrate the triumph of one over all. Worse, they marketed it along with the claim that the game's inventor was Darrow, who they said had dreamed it up in the 1930s, sold it to Parker Brothers, and become a millionaire. It was a rags-to-riches fabrication that ironically exemplified Monopoly's implicit values: chase wealth and crush your opponents if you want to come out on top.

So next time someone invites you to join a game of Monopoly, here's a thought. As you set out piles for the Chance and Community Chest cards, establish a third pile for Land-Value Tax, to which every property owner must contribute each time they charge rent to a fellow player. How high should that land tax be? And how should the resulting tax receipts be distributed? Such questions will no doubt lead to fiery debate around the Monopoly board – but then that is exactly what Magie had always hoped for.

Kate Raworth is a senior visiting research associate at Oxford University's Environmental Change Institute and a senior associate at the Cambridge Institute for Sustainability Leadership. She is the author of Doughnut Economics: Seven Ways to Think Like a 21st-Century Economist (2017). She lives in Oxford.

Algorithms associating appearance and criminality have a dark past

'Phrenology' has an old-fashioned ring to it. It sounds like it belongs in a history book, filed somewhere between bloodletting and velocipedes. We'd like to think that judging people's worth based on the size and shape of their skull is a practice that's well behind us. However, phrenology is once again rearing its lumpy head.

In recent years, machine-learning algorithms have promised governments and private companies the power to glean all sorts of information from people's appearance. Several startups now claim to be able to use artificial intelligence (AI) to help employers detect the personality traits of job candidates based on their facial expressions. In China, the government has pioneered the use of surveillance cameras that identify and track ethnic minorities. Meanwhile, reports have emerged of schools installing camera systems that automatically sanction children for not paying attention, based on facial movements and microexpressions such as eyebrow twitches.

Perhaps most notoriously, a few years ago, AI researchers Xiaolin Wu and Xi Zhang claimed to have trained an algorithm to identify criminals based on the shape of their faces, with an accuracy of 89.5 per cent. They didn't go so far as to endorse some of the ideas about physiognomy and character that circulated in the 19th century, notably from the work of the Italian criminologist Cesare Lombroso: that criminals are underevolved, subhuman beasts, recognisable from their sloping foreheads and hawk-like noses. However, the recent study's seemingly high-tech attempt to pick out facial features associated with criminality borrows directly from the 'photographic composite method' developed by the Victorian jack-of-all-trades Francis Galton – which involved overlaying the faces of multiple people in a certain category to find the features indicative of qualities like health, disease, beauty and criminality.
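
In computational terms, Galton's composite method is essentially pixel-wise averaging of aligned portraits. Here is a minimal sketch of that idea, assuming a set of pre-aligned, equally sized grayscale photos; the file names are hypothetical:

```python
# Minimal sketch of Galton-style photographic compositing: average a stack of
# pre-aligned, equally sized grayscale face images. File names are hypothetical.
import numpy as np
from PIL import Image

def composite(paths):
    """Return the pixel-wise mean of the images at `paths` as a new image."""
    stack = np.stack([np.asarray(Image.open(p).convert("L"), dtype=float)
                      for p in paths])
    return Image.fromarray(stack.mean(axis=0).astype(np.uint8))

# Example (hypothetical files):
# composite(["face_01.png", "face_02.png", "face_03.png"]).save("composite.png")
```

Whatever the averaged face appears to 'reveal' depends entirely on who was sorted into the category in the first place – an objection the article returns to below.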

Technology commentators have panned these facial-recognition technologies as 'literal phrenology'; they've also linked them to eugenics, the pseudoscience of improving the human race by encouraging people deemed the fittest to reproduce. (Galton himself coined the term 'eugenics', describing it in 1883 as 'all influences that tend in however remote a degree to give to the more suitable races or strains of blood a better chance of prevailing speedily over the less suitable than they otherwise would have had'.)

In some cases, the explicit goal of these technologies is to deny opportunities to those deemed unfit; in others, it might not be the goal, but it's a predictable result. Yet when we dismiss algorithms by labelling them as phrenology, what exactly is the problem we're trying to point out? Are we saying that these methods are scientifically flawed and that they don't really work – or are we saying that it's morally wrong to use them regardless?

There is a long and tangled history to the way 'phrenology' has been used as a withering insult. Philosophical and scientific criticisms of the endeavour have always been intertwined, though their entanglement has changed over time. In the 19th century, phrenology's detractors objected to the fact that phrenology attempted to pinpoint the location of different mental functions in different parts of the brain – a move that was seen as heretical, since it called into question Christian ideas about the unity of the soul. Interestingly, though, trying to discover a person's character and intellect based on the size and shape of their head wasn't perceived as a serious moral issue. Today, by contrast, the idea of localising mental functions is fairly uncontroversial. Scientists might no longer think that destructiveness is seated above the right ear, but the notion that cognitive functions can be localised in particular brain circuits is a standard assumption in mainstream neuroscience.

Phrenology had its share of empirical criticism in the 19th century, too. Debates raged about which functions resided where, and whether skull measurements were a reliable way of determining what's going on in the brain. The most influential empirical criticism of old phrenology, though, came from the French physician Jean Pierre Flourens's studies based on damaging the brains of rabbits and pigeons – from which he concluded that mental functions are distributed, rather than localised. (These results were later discredited.) The fact that phrenology was rejected for reasons that most contemporary observers would no longer accept makes it only more difficult to figure out what we're targeting when we use 'phrenology' as a slur today.

Both 'old' and 'new' phrenology have been critiqued for their sloppy methods. In the recent AI study of criminality, the data were taken from two very different sources: mugshots of convicts, versus pictures from work websites for nonconvicts. That fact alone could account for the algorithm's ability to detect a difference between the groups. In a new preface to the paper, the researchers also admitted that taking court convictions as synonymous with criminality was a 'serious oversight'. Yet equating convictions with criminality seems to register with the authors mainly as an empirical flaw: using mugshots of convicted criminals, but not of the ones who got away, introduces a statistical bias. They said they were 'deeply baffled' at the public outrage in reaction to a paper that was intended 'for pure academic discussions'.

[Image: from Wu and Zhang (2016)]
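
To see how a source artifact alone can masquerade as signal, consider a small synthetic sketch (invented data, not the study's): if mugshots and work-website photos differ systematically in something as mundane as average brightness, a classifier trained on that single feature will separate the two groups quite well while learning nothing about the people in them.

```python
# Synthetic illustration of a source confound (invented data, not Wu and
# Zhang's): the model separates 'convict' from 'nonconvict' photos using
# only an assumed difference in average image brightness between the sources.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Hypothetical mean brightness per photo: mugshots lit slightly differently
# from professional website portraits.
mugshot_brightness = rng.normal(loc=0.45, scale=0.08, size=n)
website_brightness = rng.normal(loc=0.60, scale=0.08, size=n)

X = np.concatenate([mugshot_brightness, website_brightness]).reshape(-1, 1)
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = mugshot source, 0 = website source

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

print(f"test accuracy from brightness alone: {model.score(X_test, y_test):.2f}")
# Around 0.8: a seemingly impressive score that reflects the photo source,
# not anything about the faces, let alone 'criminality'.
```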

Notably, the researchers don't comment on the fact that conviction itself depends on the impressions that police, judges and juries form of the suspect – making a person's 'criminal' appearance a confounding variable. They also fail to mention how the intense policing of particular communities, and inequality of access to legal representation, skews the dataset. In their response to criticism, the authors don't back down on the assumption that 'being a criminal requires a host of abnormal (outlier) personal traits'. Indeed, their framing suggests that criminality is an innate characteristic, rather than a response to social conditions such as poverty or abuse. Part of what makes their dataset questionable on empirical grounds is that who gets labelled 'criminal' is hardly value-neutral.

One of the strongest moral objections to using facial recognition to detect criminality is that it stigmatises people who are already overpoliced. The authors say that their tool should not be used in law enforcement, but cite only statistical arguments about why it ought not to be deployed. They note that the false-positive rate (50 per cent) would be very high, but take no notice of what that means in human terms. Those false positives would be individuals whose faces resemble people who have been convicted in the past. Given the racial and other biases that exist in the criminal justice system, such algorithms would end up overestimating criminality among marginalised communities.
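
A back-of-the-envelope calculation spells out the human cost. Only the 50 per cent false-positive rate comes from the article; the base rate and true-positive rate below are invented purely for illustration:

```python
# What a 50 per cent false-positive rate means if such a tool were ever used
# for screening. Only the false-positive rate is taken from the article; the
# other figures are assumptions for illustration.
false_positive_rate = 0.50   # reported by the authors
true_positive_rate = 0.90    # assumed, roughly in line with the claimed accuracy
base_rate = 0.01             # assumed share of 'criminals' among those screened

flagged_guilty = base_rate * true_positive_rate
flagged_innocent = (1 - base_rate) * false_positive_rate
precision = flagged_guilty / (flagged_guilty + flagged_innocent)

print(f"share of people flagged: {flagged_guilty + flagged_innocent:.1%}")
print(f"flagged people who are actually 'criminal': {precision:.1%}")
# Roughly half of everyone screened gets flagged, and about 98 per cent of
# those flagged are innocent, before even asking whose faces dominate the
# conviction data that the false positives resemble.
```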

The most contentious question seems to be whether reinventing physiognomy is fair game for the purposes of 'pure academic discussion'. One could object on empirical grounds: eugenicists of the past such as Galton and Lombroso ultimately failed to find facial features that predisposed a person to criminality. That's because there are no such connections to be found. Likewise, psychologists studying the heritability of intelligence, such as Cyril Burt and Philippe Rushton, had to play fast and loose with their data to manufacture correlations between skull size, race and IQ. If there were anything to discover, presumably the many people who have tried over the years wouldn't have come up dry.

The problem with reinventing physiognomy is not merely that it has been tried without success before. Researchers who persist in looking for cold fusion after the scientific consensus has moved on also face criticism for chasing unicorns – but disapproval of cold fusion falls far short of opprobrium. At worst, they are seen as wasting their time. The difference is that the potential harms of cold fusion research are much more limited. In contrast, some commentators argue that facial recognition should be regulated as tightly as plutonium, because it has so few nonharmful uses. When the dead-end project you want to resurrect was invented for the purpose of propping up colonial and class structures – and when the only thing it's capable of measuring is the racism inherent in those structures – it's hard to justify trying it one more time, just for curiosity's sake.

However, calling facial-recognition research 'phrenology' without explaining what is at stake probably isn't the most effective strategy for communicating the force of the complaint. For scientists to take their moral responsibilities seriously, they need to be aware of the harms that might result from their research. Spelling out more clearly what's wrong with the work labelled 'phrenology' will hopefully have more of an impact than simply throwing the name around as an insult.

Catherine Stinson

This article was originally published at Aeon and has been republished under Creative Commons.

Private gain must no longer be allowed to elbow out the public good

Adam Smith had an elegant idea when addressing the notorious difficulty that humans face in trying to be smart, efficient and moral. In The Wealth of Nations (1776), he maintained that the baker bakes bread not out of benevolence, but out of self-interest. No doubt, public benefits can result when people pursue what comes easiest: self-interest.

Sooner or later we all face death. Will a sense of meaning help us?

‘Despite all our medical advances,’ my friend Jason used to quip, ‘the mortality rate has remained constant – one per person.’

How should we react to climate change pessimism?

‘We’re doomed’: a common refrain in casual conversation about climate change. It signals an awareness that we cannot, strictly speaking, avert climate change. It is already here. All we can hope for is to minimise climate change by keeping global average temperature changes to less than 1.5°C above pre-industrial levels in order to avoid rending consequences to global civilisation. It is still physically possible, says the Intergovernmental Panel on Climate Change in a 2018 special report – but ‘realising 1.5°C-consistent pathways would require rapid and systemic changes on unprecedented scales’.

What Viktor Frankl’s logotherapy can offer in the Anthropocene

With our collapsing democracies and imploding biosphere, it’s no wonder that people despair. The Austrian psychoanalyst and Holocaust survivor Viktor Frankl presciently described such sentiments in his book Man’s Search for Meaning (1946). He wrote of something that ‘so many patients complain [about] today, namely, the feeling of the total and ultimate meaninglessness of their lives’. A nihilistic wisdom emerges when staring down the apocalypse. There’s something predictable in our current pandemics, from addiction to belief in pseudoscientific theories, for in Frankl’s analysis, ‘An abnormal reaction to an abnormal situation is normal behaviour.’ When scientists worry that humanity might have just one generation left, we can agree that ours is an abnormal situation. Which is why Man’s Search for Meaning is the work to return to in these humid days of the Anthropocene.

Why we need even stronger evidence for our best psychotherapies

by Alexander Williams & John Sakaluk

Find something morally sickening? Take a ginger pill

If I were to say that I’m thinking about having sex with my stepbrother, I guess you’d tell me to think again: sex with a sibling or even a stepsibling is just plain wrong – it’s not a morally acceptable action. The reason I’m posing this hypothetical proposition is because it’s worth considering why we find this kind of behaviour so wrong. Is this judgment based on a rationally derived principle about maximising good and minimising harm? Surely sex with my sibling would harm our relationship, not to mention the rest of our family’s relationship with each of us. Or is the moral judgment here based simply on the fact that sibling sex makes us more than a little queasy? In other words, are our moral beliefs merely gut feelings – quite literally stemming from our body’s tendency to become repulsed by certain human behaviours?

Why are pop songs getting sadder than they used to be?

Are popular songs today happier or sadder than they were 50 years ago? In recent years, the availability of large digital datasets online and the relative ease of processing them means that we can now give precise and informed answers to questions such as this. A straightforward way to measure the emotional content of a text is just to count how many emotion words are present. How many times are negative-emotion words – ‘pain’, ‘hate’ or ‘sorrow’ – used? How many times are words associated with positive emotions – ‘love’, ‘joy’ or ‘happy’ – used? As simple as it sounds, this method works pretty well, given certain conditions (eg, the longer the available text is, the better the estimate of mood). This is a possible technique for what is called ‘sentiment analysis’. Sentiment analysis is often applied to social media posts, or contemporary political messages, but it can also be applied to longer timescales, such as decades of newspaper articles or centuries of literary works.
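
As a toy illustration of the counting approach (with a tiny invented lexicon and made-up lyric fragments, not the datasets such studies actually use):

```python
# Toy word-count sentiment analysis. The lexicon and lyric snippets are
# invented; real studies use large published lexicons and decades of lyrics.
positive_words = {"love", "joy", "happy", "sunshine", "smile"}
negative_words = {"pain", "hate", "sorrow", "cry", "lonely"}

def sentiment_score(text: str) -> float:
    """(positive - negative) emotion-word count, per word of text."""
    words = text.lower().split()
    pos = sum(w.strip(".,!?") in positive_words for w in words)
    neg = sum(w.strip(".,!?") in negative_words for w in words)
    return (pos - neg) / len(words) if words else 0.0

lyric_a = "love love me do, you know I love you, so happy together"
lyric_b = "I cry alone in the dark, this pain and sorrow never leaves me lonely"

print(f"lyric A: {sentiment_score(lyric_a):+.3f}")  # positive score
print(f"lyric B: {sentiment_score(lyric_b):+.3f}")  # negative score
# Longer texts give more stable estimates, which is why the method works
# better on whole decades of lyrics than on a single song.
```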

Given how little effect you can have, is it rational to vote?

For far too long, the accepted wisdom among scholars of politics has been that the interests of the individual and the interests of society are not in harmony when it comes to voting. The American economist Anthony Downs, in his foundational book An Economic Theory of Democracy (1957), argued that a truly rational individual, who knows that her vote is highly unlikely to tip the outcome in favour of her preferred candidate, should not bother to cast a ballot. On this view of human rationality, an independent action that carries no instrumental value for the person who acts is essentially foolish, justifiable only by the sense of pride or communion with others that it creates in her.
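
Downs's point is often formalised as the 'calculus of voting': cast a ballot only if the probability p of being pivotal, multiplied by the benefit B of your preferred outcome, exceeds the cost C of voting. A toy calculation, with purely illustrative numbers, shows why p swamps everything else:

```python
# Toy version of the 'calculus of voting' used to formalise Downs's argument:
# vote on instrumental grounds only if p * B > C. All numbers are illustrative.
p = 1e-7      # assumed probability that one vote decides a large election
B = 10_000    # assumed personal benefit (in dollars) if the preferred candidate wins
C = 20        # assumed cost of voting: time, travel, queueing

expected_benefit = p * B
print(f"expected instrumental benefit: ${expected_benefit:.4f}")
print(f"cost of voting: ${C}")
print("worth voting on these grounds alone?", expected_benefit > C)
# Even with a generous benefit, the expected payoff is a tenth of a cent,
# which is why Downs concluded the purely instrumental case for voting fails.
```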

How William James encourages us to believe in the possible

In college, I developed a mysterious illness. I experienced myself as happy, yet in the afternoons I would cry for two hours. Although the obvious interpretation was depression, to me it was all about lunch. Food exhausted me and made me sad. I tried skipping breakfast and lunch, and snacking on cottage cheese and milk chocolate bars. Then carrots.

How Confucius loses face in China’s new surveillance regime

While conceived of and functioning differently in diverse contexts, ‘face’ describes a phenomenon that exists in every human society. Its most basic sense concerns the public presentation and perception of the self. Someone who has face possesses something of positive social value that arises from social approval of a person’s status, action or state of being; someone who loses face has suffered a loss in social value concerning her status, behaviour or state of being. In addition to public perception, ‘face’ has an internal psychological aspect as well: it captures one’s self-image and evaluation of oneself in regard to shared ethical standards and social hierarchies, expectations and norms.

Your happy emotions are not necessarily what they appear to be

‘Be happy!’ Mary Wollstonecraft exhorted her estranged lover and tormentor, Gilbert Imlay, in late 1795. What did she mean? It had been only days since she had been fished from the Thames, having failed in a bid to drown herself. Scorned, shamed and diminished in her view of herself in the world, Wollstonecraft had chosen death. Here too she was thwarted, ‘inhumanly brought back to life and misery’. Imlay’s philandering was the source of her ills, and she told him as much. Why, then, wish him to be happy? Was this forgiveness? Hardly. Wollstonecraft knew Imlay’s new mistress was ‘the only thing sacred’ in his eyes, and that her death would not quell his ‘enjoyment’.

Professor of history of philosophy explains why adversarial criticism is antithetical to truth

Philosophical discussions, whether in a professional setting or at the bar, frequently consist of calling out mistakes in whatever has been proposed: ‘This is all very well, but …’ This adversarial style is often celebrated as truth-conducive. Eliminating false assumptions seems to leave us with truth in the marketplace of ideas. Although this is a fairly pervasive practice (even I am practising it right now), I doubt that it is a particularly good approach to philosophical discussions. The lack of progress in adversarial philosophical exchange might rest on a simple but problematic division of labour: in professional settings such as talks, seminars and papers, we standardly criticise others’, rather than our own, views. At the same time, we clearly risk our reputation much more when proposing an idea rather than criticising it. This systematically disadvantages proponents of (new) ideas.

Why the robots of the future won't look anything like we assumed

by Jamie Paik

The jokes always saved us: Humor in the time of Stalin

Stalinism. The word conjures dozens of associations, and ‘funny’ isn’t usually one of them. The ‘S-word’ is now synonymous with brutal and all-encompassing state control that left no room for laughter or any form of dissent. And yet, countless diaries, memoirs and even the state’s own archives reveal that people continued to crack jokes about the often terrible lives they were forced to live in the shadow of the Gulag.

Personality is not only about who you are. It's also where you are

In the field of psychology, the image is canon: a child sitting in front of a marshmallow, resisting the temptation to eat it. If she musters up the willpower to resist long enough, she’ll be rewarded when the experimenter returns with a second marshmallow. Using this ‘marshmallow test’, the Austrian-born psychologist Walter Mischel demonstrated that children who could resist immediate gratification and wait for a second marshmallow went on to greater achievements in life. They did better in school, had better SAT scores, and even managed their stress more skilfully.

A philosopher explains why philosophy is worth doing

by David Egan

A cognitive neuroscientist explains the science behind 'trigger warnings' — and why they don't actually work

Imagine you’re a lecturer teaching a celebrated novel that features violent scenes – say, F Scott Fitzgerald’s The Great Gatsby (1925). It transpires that one of your students has themselves been a victim of violence and now, thanks to your words, they are reliving their trauma. Could you, should you, have done more to protect this person?

The meaning of life? A Darwinian existentialist has his answers

I was raised as a Quaker, but around the age of 20 my faith faded. It would be easiest to say that this was because I took up philosophy – my lifelong occupation as a teacher and scholar. This is not true. More accurately, I joke that having had one headmaster in this life, I’ll be damned if I want another in the next. I was convinced back then that, by the age of 70, I would be getting back onside with the Powers That Be. But faith did not then return and, as I approach 80, is nowhere on the horizon. I feel more at peace with myself than ever before. It’s not that I don’t care about the meaning or purpose of life – I am a philosopher! Nor does my sense of peace mean that I am complacent or that I have delusions about my achievements and successes. Rather, I feel that deep contentment that religious people tell us is the gift or reward for proper living.

Living with ADHD: How I learned to make distraction work for me

Even today, 20 years after my childhood diagnosis of attention deficit hyperactivity disorder (ADHD), I am still keenly aware of how my attention wavers, lapses or holds differently from that of most people. I’m prone to experiencing ‘blank’ patches in conversation, when I suddenly realise I have no recollection of the past 30 or so seconds of what’s been said, as if someone has skipped forward through the video feed of my life (occasionally, I resort to ‘masking’, or feigning comprehension – which is embarrassing). When watching television, I struggle not to move, often rising to pace and fidget, and I dread being the ‘owner’ of complicated documents and spreadsheets, as I’m very likely to miss some crucial detail.

To make laziness work for you, put some effort into it

We are being lazy if there’s something that we ought to do but are reluctant to do because of the effort involved. We do it badly, or do something less strenuous or less boring, or just remain idle. In other words, we are being lazy if our motivation to spare ourselves effort trumps our motivation to do the right or best or expected thing – assuming, of course, we know what that is.

A bioethicist explains why older people should be allowed to change their legal age

Let’s say that on average you are in better shape than other people of your age. You are more able than them: quicker, sprightlier, livelier. You feel and identify as younger than your official age. However, despite all your youthful energy, you are also discriminated against because of your greater age. You cannot get a job – or, if you do, you might earn less than some of your younger coworkers simply due to your advanced years. The question is, should you be allowed to change your ‘official’ age in order to avoid this discrimination and to better match how you identify and feel?

The fast track to a life well lived is feeling grateful

For the Ancient Greeks, virtue wasn’t a goal in and of itself, but rather a route to a life well lived. By being honest and generous, embodying diligence and fortitude, showing restraint and kindness, a person would flourish – coming to live a life filled with meaning and finding an enduring, as opposed to ephemeral, happiness. Today, that view hasn’t much changed. While we hear plenty of stories of celebrities, politicians and even our neighbours finding fleeting pleasure through self-gratification, dishonesty or hubris, we can also see the ‘other shoe’ eventually drop, leading to despair, social rejection or worse.

Holy relics and celebrity mementos put heaven within reach

In 2006, a tiny brown pebble about the size of a raisin sold at auction for $25,000. This inconsequential artifact was, in fact, William Shatner’s kidney stone. The US actor had persuaded doctors to return the grisly relic to him following surgery, so he could auction it for charity. It was bought by an online casino that added it to its collection of oddities, which include a grilled cheese sandwich graced with an image of the Virgin Mary. Stranger still, this extraordinary interest in curiosa is not uncommon, either today or throughout history.

Simone de Beauvoir didn’t believe in being ‘a strong woman’. Here’s why

In The Second Sex (1949), Simone de Beauvoir argued that women were at a disadvantage in a society where they grew up under ‘a multiplicity of incompatible myths’ about women. Instead of being encouraged to dream their own dreams and pursue meaningful projects for their lives, Beauvoir argued that the ‘myths’ proposed to women, whether in literature or history, science or psychoanalysis, encouraged them to believe that to be a woman was to be for others – and especially for men. Throughout childhood, girls were fed a steady diet of stories that led them to believe that to succeed as a woman was to succeed at love – and that to succeed at other things would make them less lovable.
