Wellbeing, humanity and being shouty

November 23, 2011 by · 1 Comment
Filed under: Social Brain 

This afternoon I dropped in to one session of an interesting two-day conference called Play’s the Thing, which was aiming to give space to the discussion of creative approaches to wellbeing. My impression was that the event had been really rich and diverse, and the whole programme looked fascinating.

The session I went to featured an excellent, energetic and intellectually dense talk from Steve Fuller, who recently spoke here at the RSA, followed by a panel discussion in which Buddhist psychotherapist and writer Gay Watson, new media pioneer Bill Thompson and the RSA’s Jonathan Rowson responded to what Steve had to say. In the twenty-minute whirlwind of his talk, Steve illustrated that we are at a crossroads in the way we define what it is to be human.

He presented three sketches of what humanity might look like in the future – one in which we enhance ourselves to the hilt with brain-boosting drugs and technological advances, another in which we minimise the gap between ourselves and the natural world, and a third in which we abandon our carbon-based bodies in favour of a virtual existence.

In drawing attention to some of the challenges that come as virtual life becomes a feature of embodied life, Steve recounted the tale of a couple who ‘met’ in the cybernetic world of Second Life. They went on to meet and marry in the ‘real world’. When the Second Life husband went on to have a Second Life affair, and the real world wife got wind of this, the virtual infidelity was cited and accepted as grounds for a real world divorce.

This precipitated a really interesting discussion about whether this simulated infidelity was any different in nature from the example of a Victorian couple, who had never met in person, exchanging love letters. For some reason I’m not quite sure of, this discussion became rather frenzied, with a member of the audience shouting ‘hey!’ in protest at being interrupted/misunderstood by Steve Fuller, and the whole exchange being more than a little fiery.

Very interesting stuff, but I have to say, I found it slightly disquieting; it didn’t do much for my wellbeing.

In praise of swearing

November 22, 2011 by · 3 Comments
Filed under: Social Brain 

Swearing has been all over the headlines in the last few days, what with someone saying the word ‘sod’ on Strictly Come Dancing, a judge overturning the conviction of a bloke who told police to ‘F off’ while being searched, and Rihanna wearing shocking shoes on prime time TV at the weekend.

In his excellent comment piece in the Guardian, Mark Lawson examines the shifting cultural role of swearing, and draws attention to the importance of intent and the power to cause offence. The point of swearing is to deliberately employ a taboo term in order to add dramatic, emphatic or insulting heft to whatever is being said. If swearing is a regular component of one’s vernacular, and therefore not truly taboo, then surely it loses its force and isn’t really swearing any more. Lawson notes that there is now a “class of cursers who literally don’t know they’re doing it”.

There’s something about his turn of phrase which gives the impression of this “class of cursers” being unsavoury and probably generally objectionable. That’ll be me then…

When I was at school, I got myself into trouble a number of times through not really understanding the rules of swearing. At the age of eleven, after I had managed to complete a cross-country run without having an asthma attack, my PE teacher encouragingly asked me how I was. Feeling pretty pleased with myself and grinning, I replied, “I’m knackered!” I was genuinely stunned that this led to an explosion of furious admonishment from my teacher, plus the deep humiliation of an after-school detention. I genuinely had no idea that the word I’d used was an expletive – as far as I knew it was as good a synonym as any to describe exhaustion. To be honest, I’m still not entirely convinced that the word ‘knackered’ really has an offensive etymology…

It seemed like a real injustice to me at the time, and I remember trying to explain to the teacher that I couldn’t have intended any offence because I didn’t even realise that the word I’d used was an obscenity. It didn’t help my case, and probably just made me come across as irritatingly precocious as well as foulmouthed. However, I do have a great deal of sympathy for Denzel Harvey, the young man who was fined £50 for exclaiming, “what the f*ck?” while being searched by police. I think the judge was absolutely right in his conclusion that the police officers who were being sworn at were unlikely to have been the victims of harassment, alarm, or distress as a result.

Quoted in the Telegraph, the chairman of the Metropolitan Police Federation, Peter Smyth, said, “I’m not saying that police officers are going to go and hide in the corner and cry if someone tells them to F off, but verbal abuse is not acceptable and this is the wrong message to be sending out”.

I think he’s missing the point slightly, and what’s going on here is rather more nuanced. Reflecting on this case earlier today, Ellie Bloggs argues that it’s not the words that matter, but the tone and the degree of malice with which they are uttered. She points out that a bit of nonchalant swearing is nowhere near as offensive and threatening as a torrent of swearword-free vitriol. Unfortunately, the judge in this case wasn’t able to capture this nuance explicitly in his statement, although his judgement indicates an implicit understanding of it.

Personally, I’m not easily offended by swearing, but I do think it’s a shame that so many swearwords have become subsumed into everyday speech, not because I think they’re filthy and offensive, but rather because they lose their power. And what’s the f*cking point in swearing if it’s not going to get a reaction?

Thinking, fast and slow

November 15, 2011 by · Leave a Comment
Filed under: Social Brain 

This evening, a lucky audience will have the privilege of listening to Nobel laureate Daniel Kahneman in conversation with Richard Layard at an event hosted by LSE. They will be discussing Kahneman’s new book, Thinking, Fast and Slow, which distils the author’s lifetime of work on the triumphs and pitfalls of conscious and unconscious thinking.

Kahneman is widely regarded as one of the world’s most influential psychologists, and his ideas have shaped the work of many other important thinkers, including experimental psychologist Steven Pinker and behavioural economist Dan Ariely.  In his new book, Kahneman explains the two systems that drive the way we think and make decisions – on the one hand what he calls System One, the fast, intuitive and emotional system, and on the other System Two, a slower, more deliberative and logical system. I’m looking forward to reading it, but until I have, I can’t offer my own appraisal.

There’s been a flurry of recent reviews, all of which suggest that I’m in for a treat. William Easterly’s review in the Financial Times pronounces the book a masterpiece. Easterly is ebullient about Kahneman’s choice to be upfront about the fact that ‘experts’ are as prone to making mistakes as anyone else, Kahneman himself included. Knowing that we are irrational in our decision making doesn’t in itself free us from falling into the same traps as everyone else. Easterly describes having to fight off the preying hands of friends and family members in order to get the book read, and says that it is ‘compulsively readable’.

Oliver Burkeman, in the Guardian, is also clearly impressed. In his interview with Burkeman, Kahneman is keen to make clear that this is not a self-help book; reading it will not change the way you think. However, having a deeper awareness of how our minds work can only be a good thing, and with attention, it seems we may be able to learn when to trust our intuition and how to harness the benefits of slow thinking.

So, which system of thinking will drive my decision as to whether to buy it now, or wait for the paperback?

The medical model of mental illness: we’re not convinced

November 11, 2011 by · 4 Comments
Filed under: Social Brain 

A great systematic review has been published in this month’s British Journal of Psychiatry. It has the slightly less than tabloid-friendly title Biogenetic Explanations and Public Acceptance of Mental Illness: Systematic Review of Population Studies, but behind the dense title is a really useful and important piece of work.

Matthias Angermeyer and his colleagues examined 33 studies of the public’s beliefs about the causes of mental illness, in order to find out whether there is a relationship between those beliefs and the degree of tolerance shown towards people with experience of mental illness. This is important, not least because the shape of anti-stigma education and campaigning is determined by the causal model on which it is based.

Historically, the dominant model for public anti-stigma campaigning has been built on the foundations of the biogenetic model of mental illness, in which it is assumed that mental illness comes about primarily as a result of biochemical or genetic deviations.

Anti-stigma efforts have led to simple messages being devised, which are designed to get people to leave their prejudices behind. Under the biogenetic model, the types of messages you end up with are ‘mental illness is an illness just like any other,’ and ‘mental illness is treated with medication’.

Angermeyer’s systematic review concludes that biogenetic explanations for mental illness are correlated with less tolerance of people with mental illness amongst the general public, and that basing anti-stigma work on biogenetic causal models is therefore an inappropriate means of countering stigma.

This is not at all surprising to me, and if you stop and think about it, it’s no wonder that the public are unconvinced by messages like ‘mental illness is just like any other illness’. The reality is that mental illness(es) are not very much like physical illness(es). We need only to think about the way mental and physical illnesses are diagnosed to realise this.

In general medicine, diagnosis typically proceeds through the identification of signs which indicate the presence of disease. In the case of diabetes, for example, it is possible to determine whether the patient has the condition by measuring their blood glucose level. The patient may have been experiencing symptoms such as feeling thirsty and tired. These symptoms, although they do indicate the possible presence of the illness, are not sufficient for a diagnosis of diabetes – the physician relies upon the results of a blood test (a sign) to make a confident diagnosis.

Psychiatric diagnosis does not work like this. Although it is assumed that there is a biological dimension to mental illness, there are no definitive physical indicators of mental illnesses which categorically and objectively confirm the presence or absence of a mental disorder. It isn’t possible to determine, say through measuring their serotonin level, whether a person is suffering from depression; nor is it possible to diagnose psychosis through carrying out a blood test or x-ray. Instead, psychiatric diagnoses are made by way of observation or reporting of ‘symptoms,’ which are nearly always subjective judgements about what people say and do.

The truth is that, whilst it seems there may be some biological and genetic factors in mental illness, the science is not sufficiently advanced to be clear about what they are and how they act. Not only that, but, to a much greater extent than with physical illness, the social and political dimensions in the construction of mental illness are controversial.

Therefore, oversimplified, biogenetically based anti-stigma initiatives are destined to fail because they don’t acknowledge or attend to the true complexity of mental illness. They do little to engage with people’s genuine uncertainty about why mental illnesses come about, and their legitimate fears about the sometimes worrying ways in which mental illnesses affect people’s behaviour.

One of the reasons why anti-stigma work has so far tended to insist on keeping biogenetic explanations at its heart may be to do with psychiatry’s need to assert its scientific credentials in line with other medical specialisms. I particularly applaud Matthias Angermeyer and his colleagues for drawing attention to this possibility in their concluding remarks. In asking whether the insistence on neuroscientific emphases in public education about mental illness is really in the interests of patients, they show a refreshing humility, which should be welcomed by psychiatrists, scientists, and patients.

Pinker: “The Moral Sense has done more harm than good”

November 1, 2011 by · 17 Comments
Filed under: Social Brain 

A quick reflection on the Steven Pinker event that just finished.

He looked great. Sharp pinstripe suit, impressive mane of curly silver hair, and a poppy, as if his message that the world has become more peaceful wasn’t enough.

I was glad to see he struggled ever so slightly with his PowerPoint slides, which tempered the ambient envy in the room.

A highlight for me was being reminded of the great Voltaire quote: “Those who can be made to believe absurdities can be made to commit atrocities.”

I also enjoyed the idea that “violence is now a problem to be solved, not a conquest to be won.”

And I liked the reference to Kant’s essay on Perpetual Peace, where he argued that three things would reduce violence: trade, democracy and international community.

Perhaps the best point was his claim – in response to a question about morality not being the cause of reduced violence – that the moral sense has done more harm than good. He backed this by saying that most homicides are justified on moral grounds, and that most aggressors think of their cause as morally justified.

I asked a question, which amounted to: If you define violence as human-on-human activity, then the argument flows beautifully and your data seems to back it. But if you give a broader definition of violence, including forms of ‘structural violence’ in social and economic systems, violence against other species in the form of factory farming and violence against nature in the form of environmental degradation, it is not so clear that we have become less violent.

His answer was basically that these things are not really violence as such, and he slightly ridiculed the environmental point by comparing killing somebody to polluting a stream, which is rather different from entire islands disappearing and their populations being displaced, or Darfur being the first of many climate change wars.

Had Matthew not asked for questions to be brief, I would have linked my question back to Kant. If you reframe violence not as direct human-on-human contact, but in terms of the way our exploitative instincts manifest in the economy, towards other species and towards the planet, is it not the case that democracy, trade and international community may be responsible for the increase in violence, of a form that threatens our way of life? This idea of the world as a ‘resource to be used’ rather than something we stand in reciprocal relation to resonates with McGilchrist’s argument about the increasing dominance of a left-hemisphere perspective on the world.

But then I listen to myself, and wonder if I am one of those people Pinker was talking about when he said that, for social critics, good news is bad news.

Maybe I am, but if the decline of violence is to be a measure of the success of modernity, as Pinker wants, then surely we need to give it its broadest possible definition?

Is it even possible that our violent impulses are being projected away from each other, and towards impersonal systems and structures that cannot retaliate?

Transforming Behaviour Change

October 17, 2011 by · 2 Comments
Filed under: Social Brain 

An astrologer once told me that I have an exalted sun in my house of enemies. This positive feature of my chart meant that bad feelings towards me are ‘burnt up’ by the heat of the sun, for which I am grateful. I have never really understood the basis on which astrology might conceivably make sense on even the most charitable interpretation, so I had to take his word for the rationale, but I appreciated the thought, which resonated with my experience.

The lack of enemies in my life means that I lack expertise in confrontation, and I am never quite sure what to do with hostility when it is directed towards me. Sometimes I feel disproportionately shocked and wounded, but mostly I am just bemused.

While chairing a recent RSA event in our Great Room, documented by Carole Jahme in the Guardian (where I was surprised to see my first name given as ‘Donald’), I had to endure some intemperate language from a distinguished guest after I tried unsuccessfully to involve the audience with just fifteen minutes of the lunch hour remaining. This exchange was regrettable, and while I could perhaps have timed and phrased my comments in ways less likely to provoke a hostile response, the sourness of the reaction was bizarre, and there wasn’t much to be learned from the experience.

A more significant case of hostility came a few months ago, when I wrote a Guardian CIF piece in response to the House of Lords report on Behaviour Change (which, alas, received very little coverage because it coincided with Rupert Murdoch giving evidence in Parliament). If you look at the final comment, by Cornelius Lysergic, you will see ‘the old familiar suggestion’:

“F*** off with your change our behaviour s***. Just f*** off.”

That was my favourite comment by far. As I have written before with respect to those who challenge the very basis for the RSA’s work on Social Brain, it is always interesting to read the ‘enemy’, because they sharpen your sense of purpose. In this case the challenge could not be clearer: Why bother with behaviour change at all?

The simple answer is that when you consider the planet’s most pressing challenges, on debt, on energy, on population, on ageing, on stress, on obesity, on terrorism, you find that most are either at root, or in part, behavioural. Governments have known this for a long time, but they have only recently realised that traditional policy levers relating to tax and regulation are not always enough to change behaviour in the requisite ways, and that some ‘behavioural insight’ – a pleasing phrase that probably has a limited shelf life – is required.

The Institute for Government addressed this challenge, and their Mindspace report appears to be very popular at every level of Government. For instance, I recently attended a meeting of the DCLG Behavioural Research Network where all major government departments gave a short presentation on how they are applying some of the principles in Mindspace, and more.

But the Government being interested will not allay the concerns of people like Cornelius Lysergic; indeed it will positively reinforce them. So given that it seems we need to change our behaviour, is there a way to make the idea of ‘behaviour change’ less top-down, less about elites manipulating the masses, less behaviourist and more human, less like something done to people (or pigeons) and more about doing things with them?

We think so, and in a few days we will be publishing a report called Transforming Behaviour Change: Beyond Nudge and Neuromania, which will explain how and why. The challenge is to turn behaviour change into something people are encouraged to do themselves, based on knowledge of their own cognitive resources and frailties. A further challenge is to move away from what Aditya Chakrabortty called ‘cute technocratic solutions to mostly minor problems’ and focus on what we call ‘adaptive challenges’. We need a richer conception of behaviour that is neither reductionist nor exclusively behaviourist, and recognises the need for individuals and groups to have more understanding of their own behaviour, including how it relates to values and attitudes.

Watch this space for details of that, and an ongoing account of the ideas emerging from our work.

Sympathy for the psychiatrist

September 26, 2011 by · 1 Comment
Filed under: Social Brain 

Robert Whitaker writes in the current RSA journal that psychotropic medication is less safe and effective than is commonly believed. The use of antidepressant and antipsychotic medication is widespread, and Whitaker presents troubling evidence that taking these drugs can increase the frequency of relapse and reduce the chances of getting well and staying well. He presents evidence that patients who never take medication fare much better in the long term than those who accept medication early on. Whitaker is not alone in drawing attention to the unstable foundations of psychiatric prescribing.

The psychologists Richard Bentall and Joanna Moncrieff have both urged us to recognise that the notion that psychiatric drugs correct imbalanced brain chemistry is a myth. It is a myth that has taken hold in the public consciousness to a very great extent. Moncrieff, like Whitaker, explains that these drugs in fact do no such thing. The drugs act on our brains in such a way as to make them function differently and in doing so change the way we feel. Moncrieff explains that these medications produce drug-induced states which mask or suppress emotional problems. She does not say that we should stop using them, but suggests instead that patients should consider whether they want to use them with greater awareness of how they work.

This is tricky territory for non-experts like me to negotiate. The public misconception that mental illness is caused by brain chemistry being out of kilter is one problem. The poor long-term outcomes for patients prescribed psychiatric medication are another. The fact that some patients’ severe suffering is ameliorated in the short term by the use of such drugs cannot be denied. It is clear that the issues raised by conflicting evidence in this field are very concerning, and the voices drawing attention to them are becoming louder and more numerous.

The overall effect is of an attack on the orthodox methods of contemporary Western psychiatry. The principal labour of psychiatrists has become to diagnose and prescribe, and medication is the first-line treatment for the majority of patients who come under their care. As evidence builds that such medication is neither a cure, nor always a benign intervention, there is a danger that psychiatrists get demonised as unthinking peddlers of poison. A couple of weeks ago I wrote about the problem of the stigma of mental illness. It might not make me popular, but I do have some sympathy for the psychiatrists and wonder whether they are also becoming stigmatised.

It is important to remember that psychiatry is a relatively young discipline. There remains an awful lot that is yet to be discovered about what really causes mental illness, and some of the exciting action in the psychiatric field is in neurobiological and genetic research. But it is also in the psychosocial arena, as the importance of social connectivity, mindfulness and physical exercise become ever more apparent. Although it is clear that the drugs being routinely prescribed are in some ways rather clumsy, don’t work for everyone, and bring with them unwanted side effects, it is not the case that they are exclusively bad, or that they are dished out in bad faith.

The evidence that the drugs might worsen long-term outcomes is worrying, but we must remember that this is relatively new evidence and that it takes time to acquire and properly analyse. However, there is a danger that psychiatry has already painted itself into a corner whereby it is only capable of regarding mental illness as a set of neurobiological components with the driving aim being to separate and identify them, and then develop the correct psychopharmacological intervention. So while it is an exciting time for psychiatry, it is imperative that psychiatrists take this opportunity to extricate themselves from the clutches of Big Pharma, and open themselves up to the possibility that drug-based treatment should no longer be the first port of call. If they do, then I can envisage a future in which patients are sufficiently informed and reflective to confidently demand to be supported through episodes of mental illness without medication, and psychiatrists become more holistic, discerning and flexible in their approach to treating their patients.

Mental illness: the last taboo?

September 13, 2011 by · 28 Comments
Filed under: Social Brain 

People with mental health problems are the last minority group against whom it’s socially acceptable to discriminate. Sometimes this discrimination comes about accidentally or covertly, Lisa Appignanesi’s recent piece in the Guardian being a case in point. Appignanesi writes that the mental illness ‘industry’ is medicalising normality to a greater extent than ever before. She raises the question of whether the apparent increased prevalence of mental illness is genuinely down to a rising toll of suffering, or whether we have collectively learned to complain more. Appignanesi suggests that the more evidence there is about the increase in mental disorder in the public domain, the more likely we are to label our own problems of living as requiring the attention of a doctor. She goes on to suggest that attending reading groups or going running might do more for sufferers of depression than taking medication and questions the usefulness of psychiatric classification in helping people deal with the problems of their lives.

While I’m sure Appignanesi does not intend to cause offence to people with serious mental health problems, there is a dangerously stigmatising undercurrent to her argument. A distillation of the points she makes might roughly translate as “There’s nothing much wrong with you, you don’t need any pills, pull yourself together.” This might be a useful message for someone who’s struggling slightly with a mild case of the blues, and has the wherewithal and capacity to make a few positive changes in their life. But, for someone with a seriously debilitating mental illness, it is a potentially very damaging message.

A serious problem which Appignanesi does not attend to is that the category of ‘mental illness’ is extremely dense. ‘Mental illness’ is no more a discrete entity than is ‘physical illness’, and no physician would deign to lump diabetes in with cancer when trying to understand patients’ ways of dealing with their illness. So, when we talk about mental illness, we might be referring to depression, anorexia, schizophrenia, or any of the other 300 or so disorders in the DSM. Within any one of those diagnostic categories lies a huge variation of patient experience, and no two cases of any one of these conditions are ever the same. Just as we all have different pain thresholds, we all have differing levels of resilience to mental distress. But, whatever your threshold, there is a level of serious mental suffering which is as intolerable as the most excruciating physical pain. Within the classification of depression, there exists a whole spectrum of experience ranging from unpleasant but bearable gloom which allows one to continue functioning, right down to crippling despair which makes it impossible to get dressed in the morning or go to sleep at night. For those at the dark end of the spectrum, attending a reading group or going for a run are utterly inconceivable activities, and no substitute for proper medical intervention.

Appignanesi is caustic about the use of antidepressants, and it seems to me that this might be because she has in her mind people who are just a bit down in the dumps rather than those who have a serious mental health difficulty. The ‘definite lift’ Appignanesi tells us participating in a reading group provides would certainly not have helped Sandra, a woman I met some years ago, who at that moment was desperately waiting for her annual ECT treatment. She told me that ECT was her lifeline, the only thing that lifted her depression sufficiently to make her life liveable, and that without it she would have killed herself ‘several times over’.

I think the point Appignanesi is really trying to make is that it has become very easy for pretty much anyone to walk into the doctor’s, have a bit of a moan, and leave with a diagnosis of depression and a prescription for Prozac. Cultural factors have made it possible for mental illness to be a lifestyle choice. If you can’t be bothered to exercise, eat well, or engage in wholesome activities like reading groups, you don’t have to take responsibility anymore because you can just opt for the convenient excuse that you’re ill. Once your GP has agreed that you’re ill, you can slip into the role of patient, and passively wait for treatments to work and experts to make you better. The overlapping agendas of pharmaceutical companies, the health service, and government have come together to feed this situation.

Although Appignanesi’s attack on the usefulness of psychiatric classification is understandable, what we need to understand is that there is a difference between everyday, normal suffering and serious mental illness which requires specialist intervention. It is true that the cut-off point at which normal suffering becomes mental illness can only be determined by subjective means, and that the boundary is inevitably arbitrary. I agree with Appignanesi that there is something crazy about a world in which literally any kind of idiosyncrasy can be identified as a symptom of mental illness, and that there is a complex range of reasons which explain the apparent increase in the prevalence of mental disorder. But we need to exercise caution when drawing attention to these problems, because there are real dangers associated with arguing against the medicalisation of ‘normality’: firstly, that people who are really suffering and genuinely need help are not taken seriously; and secondly, that the advantages that come with understanding that mental health is on a spectrum which we all occupy are lost. Or, in other words, that the stigma of mental illness is encouraged. People with mental health problems are routinely discriminated against at all levels, and normalising mental illness is a far more urgent priority for social progress than is preventing the medicalisation of normality.

Job! RSA seeks social brain to join Social Brain.

Senior Researcher, Social Brain

Salary: £29,000 pa

Contract Type: Permanent; applications for both full- and part-time hours are being considered for this post

Location: London, WC2N 6EZ

The Social Brain project is a core part of our research identity at the RSA, underpinning our view of human capability, and informing our approach to behaviour change.

An opportunity has arisen for a creative and skilled researcher to join our team. Working closely with the Associate Director of Social Brain in this newly created role, you will undertake and manage research, analysis and reporting on major strands of Social Brain work. You will also assist with fundraising and be responsible for horizon scanning and maintaining an engaging online presence for the project.

You will have the opportunity to contribute to the future scope of this innovative project by assisting with its development into a wider programme of work. 

The ideal candidate will have an active interest in brains and behaviour, an analytical mind, and experience of successful fundraising. You will have a confident approach to your work and strong interpersonal skills, enabling you to communicate and engage effectively with a range of different people and audiences.

For over 250 years the RSA has been a cradle of enlightenment thinking and a force for social progress.  Our approach is multi-disciplinary, politically independent and combines cutting edge research and policy development with practical action.  This work is supported by our 27,000 Fellows around the world.

Download full job description

To apply for this role please submit the following to recruitment@rsa.org.uk:

  • Your CV
  • Covering letter explaining how you fit the requirements of this role and the RSA’s broader mission
  • Your preference regarding working full or part time. If part-time, please state how many days or hours you would ideally like to work

Shhh – don’t mention the nudges

January 7, 2011 by · 12 Comments
Filed under: Social Brain, Social Economy 

Nudging, as Jonathan Rowson points out in a recent post on this blog, is already the flavour of the month and looks like being at the top of the menu for the rest of 2011. The government has recently announced that in the coming year we will be ‘nudged’ towards paying our taxes, quitting smoking, insulating our houses and signing up to be an organ donor. The media is lavishing attention on the idea. And the term is gaining such traction that it’s being misapplied to behaviour change measures which are rather more ‘shove’ than ‘nudge’, such as the decision to increase tax on high-strength beer and reduce it on low-alcohol brews.

At the moment, all this publicity and attention seems a bit ironic, given that nudges are meant to be minor interventions which operate unnoticed in the background. It’s perhaps unsurprising, given this is a new idea – in UK policy terms, at least. But for a number of reasons, it risks causing problems in the long run.

First, there’s the point I’ve just made: if nudges are meant to go unnoticed, will they work if we are looking out for them? One of the arguments made in favour of nudges is that they are the antithesis of public approaches to behaviour change, like didactic communication, education and regulation. Apparently, in the past we have ignored, misinterpreted or reacted against these measures. We seem to have an innate antipathy to being told what to do, but because we are not very good at making behavioural choices that are in our own best interests, we have been making poor decisions in contexts ranging from healthy eating to financial planning.

Nudges are designed to circumvent this active rejection of good advice, and overcome our inability to choose well, by changing the environments in which we make subconscious decisions and thereby influencing our actions. Essentially, they work by making us passive reactors to suggestion rather than active decision makers responding to stimulus.

If nudges are to succeed, then, it’s surely better that we don’t recognise them for what they are and what they are trying to do. Otherwise we might be tempted to ignore or react against them, just as we have with direct communication. HMRC’s plan to nudge people into paying their tax by rewording its tax letters might be more effective if we respond to the suggestive wording without thinking about it than if we are looking out for it when we open the letter. So perhaps they should just go ahead and do it without telling us all about it.

Second, the current focus on nudges attracts the vocal attention of cynics and sceptics, many of whom are arguing that there is something underhand about nudging, that it is just another form of the ‘nanny state’, and/or that it involves ‘playing with people’s brains’. (There’s a wonderful example here, which includes a total misunderstanding of the RSA’s Social Brain project.) It seems to me that much of this criticism stems from a lack of understanding of the idea of ‘choice architecture’ which should underpin nudges – a sensible theory that is not exactly Big Brother and the Thought Police. Still, the negative commentary sounds good, and it can’t help.

Third, all this attention risks giving the impression that nudges are the government’s sole response to the problems facing society today. There’s certainly a place for them, but there’s no way they can address deep-seated issues such as obesity, social isolation and binge drinking on their own. They’re more paracetamol than radiotherapy – they might have an impact on the surface and around the edges, but they won’t address the causes of more serious and long-term problems.

I can see why nudges are attractive at the moment – they’re cheap and light-touch, which is just what the government wants. But while they’re useful, they’re clearly not a panacea, and giving the impression that they are risks undermining support for them.

Nudging seems to me to be a good idea, and certainly worth a try. So perhaps the government should stay quiet about what it is planning, and just get on with nudging. If it works, they can tell us all about it afterwards.

Oh, and if I come across another blog post titled ‘Nudge, nudge, wink, wink’ I think I’ll scream!
