A quick reflection on the Steven Pinker event that just finished.
He looked great. Sharp pinstripe suit, impressive mane of curly silver hair, and a poppy, as if his message that the world has become more peaceful wasn’t enough.
I was glad to see he struggled ever so slightly with his PowerPoint slides, which tempered the ambient envy in the room.
Highlights for me were being reminded of the great Voltaire quote: “Those who can be made to believe absurdities can be made to commit atrocities.”
I also enjoyed the idea that “violence is now a problem to be solved, not a conquest to be won.”
And I liked the reference to Kant’s essay on Perpetual Peace, where he argued that three things would reduce violence: trade, democracy and international community.
Perhaps the best point was his claim – in response to a question about morality not being the cause of reduced violence – that the moral sense has done more harm than good. He backed this by saying that most homicides are justified on moral grounds, and that most aggressors think of their cause as morally justified.
I asked a question, which amounted to: If you define violence as human-on-human activity, then the argument flows beautifully and your data seem to back it. But if you give a broader definition of violence, including forms of ‘structural violence’ in social and economic systems, violence against other species in the form of factory farming, and violence against nature in the form of environmental degradation, it is not so clear that we have become less violent.
His answer was basically that these things are not really violence as such, and he slightly ridiculed the environmental point by comparing killing somebody to polluting a stream, which is rather different from entire islands disappearing and their population being displaced, or Darfur being the first of many climate change wars.
Had Matthew not asked for questions to be brief, I would have linked my question back to Kant. If you reframe violence not as direct human-on-human contact, but as the way our exploitative instincts manifest in the economy, towards other species and towards the planet, is it not the case that democracy, trade and international community may be responsible for the increase in violence, of a form that threatens our way of life? This idea of the world as a ‘resource to be used’ rather than something to stand in reciprocal relation to resonates with McGilchrist’s argument about the increasing dominance of a left hemisphere perspective on the world.
But then I listen to myself, and wonder if I am one of those people Pinker was talking about when he said that, for social critics, good news is bad news.
Maybe I am, but if the decline of violence is to be a measure of the success of modernity, as Pinker wants, then surely we need to give it its broadest possible definition?
Is it even possible that our violent impulses are being projected away from each other, and towards impersonal systems and structures that cannot retaliate?
An astrologer once told me that I have an exalted sun in my house of enemies. This positive feature of my chart meant that bad feelings towards me are ‘burnt up’ by the heat of the sun, for which I am grateful. I have never really understood the basis on which astrology might conceivably make sense on even the most charitable interpretation, so I had to take his word for the rationale, but I appreciated the thought, which resonated with my experience.
The lack of enemies in my life means that I lack expertise in confrontation, and I am never quite sure what to do with hostility when it is directed towards me. Sometimes I feel disproportionately shocked and wounded, but mostly I am just bemused.
While chairing a recent RSA event in our Great Room, documented in the Guardian by Carole Jahme (where I was surprised to see my first name given as ‘Donald’), I had to endure some intemperate language from a distinguished guest after I tried unsuccessfully to involve the audience with just fifteen minutes of the lunch hour remaining. This exchange was regrettable, and while I could perhaps have timed and phrased my comments in ways less likely to provoke a hostile response, the sourness of the reaction was bizarre, and there wasn’t much to be learned from the experience.
A more significant case of hostility came a few months ago, when I wrote a Guardian CIF piece in response to The House of Lords report on Behaviour Change (which, alas, received very little coverage because it coincided with Rupert Murdoch giving evidence in Parliament). If you look at the final comment, by Cornelius Lysergic, you will see ‘the old familiar suggestion’:
“F*** off with your change our behaviour s***. Just f*** off.”
That was my favourite comment by far. As I have written before with respect to those who challenge the very basis for the RSA’s work on Social Brain, it is always interesting to read the ‘enemy’, because they sharpen your sense of purpose. In this case the challenge could not be clearer: Why bother with behaviour change at all?
The simple answer is that when you consider the planet’s most pressing challenges, on debt, on energy, on population, on ageing, on stress, on obesity, on terrorism, you find that most are either at root, or in part, behavioural. Governments have known this for a long time, but they have only recently realised that traditional policy levers relating to tax and regulation are not always enough to change behaviour in the requisite ways, and that some ‘behavioural insight’ – a pleasing phrase that probably has a limited shelf life – is required.
The Institute for Government addressed this challenge, and their Mindspace report appears to be very popular at every level of Government. For instance, I recently attended a meeting of the DCLG Behavioural Research Network where all major government departments gave a short presentation on how they are applying some of the principles in Mindspace, and more.
But the Government being interested will not allay the concerns of people like Cornelius Lysergic, indeed it will positively reinforce them. So given that it seems we need to change our behaviour, is there a way to make the idea of ‘behaviour change’ less top-down, less about elites manipulating the masses, less behaviourist and more human, less like something done to people (or pigeons) and more about doing things with them?
We think so, and in a few days we will be publishing a report called Transforming Behaviour Change: Beyond Nudge and Neuromania, which will explain how and why. The challenge is to turn behaviour change into something people are encouraged to do themselves, based on knowledge of their own cognitive resources and frailties. A further challenge is to move away from what Aditya Chakrabortty called ‘cute technocratic solutions to mostly minor problems’ and focus on what we call ‘adaptive challenges’. We need a richer conception of behaviour that is neither reductionist nor exclusively behaviourist, and recognises the need for individuals and groups to have more understanding of their own behaviour, including how it relates to values and attitudes.
Watch this space for details of that, and an ongoing account of the ideas emerging from our work.
Robert Whitaker writes in the current RSA journal that psychotropic medication is less safe and effective than is commonly believed. The use of antidepressant and antipsychotic medication is widespread, and Whitaker presents troubling evidence that taking these drugs can increase the frequency of relapse and reduce the chances of getting well and staying well. He presents evidence that patients who never take medication fare much better in the long term than those who accept medication early on. Whitaker is not alone in drawing attention to the unstable foundations of psychiatric prescribing.
The psychologists Richard Bentall and Joanna Moncrieff have both urged us to recognise that the notion that psychiatric drugs correct imbalanced brain chemistry is a myth. It is a myth that has taken hold in the public consciousness to a very great extent. Moncrieff, like Whitaker, explains that these drugs in fact do no such thing. The drugs act on our brains in such a way as to make them function differently and in doing so change the way we feel. Moncrieff explains that these medications produce drug-induced states which mask or suppress emotional problems. She does not say that we should stop using them, but suggests instead that patients should consider whether they want to use them with greater awareness of how they work.
This is tricky territory for non-experts like me to negotiate. The public misconception that mental illness is caused by brain chemistry being out of kilter is one problem. The poor long-term outcomes for patients prescribed psychiatric medication are another. The fact that some patients’ severe suffering is ameliorated in the short term by the use of such drugs cannot be denied. It is clear that the issues raised by conflicting evidence in this field are very concerning, and the voices drawing attention to them are becoming louder and more numerous.
The overall effect is of an attack on the orthodox methods of contemporary Western psychiatry. The principal labour of psychiatrists has become to diagnose and prescribe, and medication is the first-line treatment for the majority of patients who come under their care. As evidence builds that such medication is neither a cure, nor always a benign intervention, there is a danger that psychiatrists get demonised as unthinking peddlers of poison. A couple of weeks ago I wrote about the problem of the stigma of mental illness. It might not make me popular, but I do have some sympathy for the psychiatrists and wonder whether they are also becoming stigmatised.
It is important to remember that psychiatry is a relatively young discipline. There remains an awful lot that is yet to be discovered about what really causes mental illness, and some of the exciting action in the psychiatric field is in neurobiological and genetic research. But it is also in the psychosocial arena, as the importance of social connectivity, mindfulness and physical exercise becomes ever more apparent. Although it is clear that the drugs being routinely prescribed are in some ways rather clumsy, don’t work for everyone, and bring with them unwanted side effects, it is not the case that they are exclusively bad, or that they are dished out in bad faith.
The evidence that the drugs might worsen long term outcomes is worrying, but we must remember that this is relatively new evidence and that it takes time to acquire and properly analyse. However, there is a danger that psychiatry has already painted itself into a corner whereby it is only capable of regarding mental illness as a set of neurobiological components with the driving aim being to separate and identify them, and then develop the correct psychopharmacological intervention. So while it is an exciting time for psychiatry, it is imperative that psychiatrists take this opportunity to extricate themselves from the clutches of Big Pharma, and open themselves up to the possibility that drug-based treatment should no longer be the first port of call. If they do, then I can envisage a future in which patients are sufficiently informed and reflective to confidently demand to be supported through episodes of mental illness without medication, and psychiatrists become more holistic, discerning and flexible in their approach to treating their patients.
People with mental health problems are the last minority group against whom it’s socially acceptable to discriminate. Sometimes this discrimination comes about accidentally or covertly, Lisa Appignanesi’s recent piece in the Guardian being a case in point. Appignanesi writes that the mental illness ‘industry’ is medicalising normality to a greater extent than ever before. She raises the question of whether the apparent increased prevalence of mental illness is genuinely down to a rising toll of suffering, or whether we have collectively learned to complain more. Appignanesi suggests that the more evidence there is about the increase in mental disorder in the public domain, the more likely we are to label our own problems of living as requiring the attention of a doctor. She goes on to suggest that attending reading groups or going running might do more for sufferers of depression than taking medication and questions the usefulness of psychiatric classification in helping people deal with the problems of their lives.
While I’m sure Appignanesi does not intend to cause offence to people with serious mental health problems, there is a dangerously stigmatising undercurrent to her argument. A distillation of the points she makes might roughly translate as “There’s nothing much wrong with you, you don’t need any pills, pull yourself together.” This might be a useful message for someone who’s struggling slightly with a mild case of the blues, and has the wherewithal and capacity to make a few positive changes in their life. But, for someone with a seriously debilitating mental illness, it is a potentially very damaging message.
A serious problem which Appignanesi does not attend to is that the category of ‘mental illness’ is extremely broad. ‘Mental illness’ is no more a discrete entity than is ‘physical illness’, and no physician would deign to lump diabetes in with cancer when trying to understand patients’ ways of dealing with their illness. So, when we talk about mental illness, we might be referring to depression, anorexia, schizophrenia, or any of the other 300 or so disorders in the DSM. Within any one of those diagnostic categories lies a huge variation in patient experience, and no two cases of any one of these conditions are ever the same. Just as we all have different pain thresholds, we all have differing levels of resilience to mental distress. But, whatever your threshold, there is a level of serious mental suffering which is as intolerable as the most excruciating physical pain. Within the classification of depression, there exists a whole spectrum of experience ranging from unpleasant but bearable gloom which allows one to continue functioning, right down to crippling despair which makes it impossible to get dressed in the morning or go to sleep at night. For those at the dark end of the spectrum, attending a reading group or going for a run are utterly inconceivable activities, and no substitute for proper medical intervention.
Appignanesi is caustic about the use of antidepressants, and it seems to me that this might be because she has in her mind people who are just a bit down in the dumps rather than those who have a serious mental health difficulty. The ‘definite lift’ Appignanesi tells us participating in a reading group provides would certainly not have helped Sandra, a woman I met some years ago, who at that moment was desperately waiting for her annual ECT treatment. She told me that ECT was her lifeline, the only thing that lifted her depression sufficiently to make her life liveable, and that without it she would have killed herself ‘several times over’.
I think the point Appignanesi is really trying to make is that it has become very easy for pretty much anyone to walk into the doctors, have a bit of a moan, and leave with a diagnosis of depression and prescription for Prozac. Cultural factors have made it possible for mental illness to be a lifestyle choice. If you can’t be bothered to exercise, eat well, engage in wholesome activities like reading groups, you don’t have to take responsibility anymore because you can just opt for the convenient excuse that you’re ill. Once your GP has agreed that you’re ill, you can slip into the role of patient, and passively wait for treatments to work and experts to make you better. The overlapping agendas of pharmaceutical companies, the health service, and government have come together to feed this situation.
Although Appignanesi’s attack on the usefulness of psychiatric classification is understandable, what we need to understand is that there is a difference between everyday, normal suffering and serious mental illness which requires specialist intervention. It is true that deciding on the cut-off point at which normal suffering becomes mental illness can only be determined using subjective means, and that the boundary is inevitably arbitrary. I agree with Appignanesi that there is something crazy about a world in which literally any kind of idiosyncrasy can be identified as a symptom of mental illness, and that there is a complex range of reasons which explain the apparent increase in prevalence of mental disorder. But we need to exercise caution when drawing attention to these problems, because there are real dangers associated with arguing against the medicalisation of ‘normality’: firstly, that people who are really suffering and genuinely need help are not taken seriously; and secondly, that the advantages that come with understanding that mental health is on a spectrum which we all occupy are lost. Or in other words, that the stigma of mental illness is encouraged. People with mental health problems are routinely discriminated against at all levels, and normalising mental illness is a far more urgent priority for social progress than is preventing the medicalisation of normality.
Senior Researcher, Social Brain
Salary: £29,000 pa
Contract Type: Permanent, applications for both full and part-time hours are being considered for this post
Location: London, WC2N 6EZ
The Social Brain project is a core part of our research identity at the RSA, underpinning our view of human capability, and informing our approach to behaviour change.
An opportunity has arisen for a creative and skilled researcher to join our team. Working closely with the Associate Director of Social Brain in this newly created role, you will undertake and manage research, analysis and reporting on major strands of Social Brain work. You will also assist with fundraising and be responsible for horizon scanning and maintaining an engaging online presence for the project.
You will have the opportunity to contribute to the future scope of this innovative project by assisting with its development into a wider programme of work.
The ideal candidate will have an active interest in brains and behaviour, an analytical mind, and experience of successful fund raising. You will have a confident approach to your work and strong interpersonal skills, enabling you to communicate and engage effectively with a range of different people and audiences.
For over 250 years the RSA has been a cradle of enlightenment thinking and a force for social progress. Our approach is multi-disciplinary, politically independent and combines cutting edge research and policy development with practical action. This work is supported by our 27,000 Fellows around the world.
To apply for this role please submit the following to email@example.com:
- Your CV
- Covering letter explaining how you fit the requirements of this role and the RSA’s broader mission
- Your preference regarding working full or part time. If part-time, please state how many days or hours you would ideally like to work
Nudging, as Jonathan Rowson points out in a recent post on this blog, is already the flavour of the month and looks like being at the top of the menu for the rest of 2011. The government has recently announced that in the coming year we will be ‘nudged’ towards paying our taxes, quitting smoking, insulating our houses and signing up to be an organ donor. The media is lavishing attention on the idea. And the term is gaining such traction that it’s being misapplied to behaviour change measures which are rather more ‘shove’ than ‘nudge’, such as the decision to increase tax on high-strength beer and reduce it on low-alcohol brews.
At the moment, all this publicity and attention seems a bit ironic, given that nudges are meant to be minor interventions which operate unnoticed in the background. It’s perhaps unsurprising, given this is a new idea – in UK policy terms, at least. But for a number of reasons, it risks causing problems in the long run.
First, there’s the point I’ve just made: if nudges are meant to go unnoticed, will they work if we are looking out for them? One of the arguments made in favour of nudges is that they are the antithesis of public approaches to behaviour change, like didactic communication, education and regulation. Apparently, in the past we have ignored, misinterpreted or reacted against these measures. We seem to have an innate antipathy to being told what to do, but because we are not very good at making behavioural choices that are in our best interests for ourselves, we have been making poor decisions in contexts ranging from healthy eating to financial planning.
Nudges are designed to circumvent this active rejection of good advice, and overcome our inability to choose well, by changing the environments in which we make subconscious decisions and thereby influencing our actions. Essentially, they work by making us passive reactors to suggestion rather than active decision makers responding to stimulus.
If nudges are to succeed, then, it’s surely better that we don’t recognise them for what they are and what they are trying to do. Otherwise we might be tempted to ignore or react against them, just as we have with direct communication. HMRC’s plan to nudge people into paying their tax by rewording its tax letters might be more effective if we respond to the suggestive wording without thinking about it than if we are looking out for it when we open the letter. So perhaps they should just go ahead and do it without telling us all about it.
Second, the current focus on nudges attracts the vocal attention of cynics and sceptics, many of whom are arguing that there is something underhand about nudging, that it is just another form of the ‘nanny state’, and/or that it involves ‘playing with people’s brains’. (There’s a wonderful example here, which includes a total misunderstanding of the RSA’s Social Brain project.) It seems to me that much of this criticism stems from a lack of understanding of the idea of ‘choice architecture’ which should underpin nudges – a sensible theory that is not exactly Big Brother and the Thought Police. Still, the negative commentary sounds good, and can’t help.
Third, all this attention risks giving the impression that nudges are the government’s sole response to the problems facing society today. There’s certainly a place for them, but there’s no way they can address deep-seated issues such as obesity, social isolation and binge drinking on their own. They’re more paracetamol than radiotherapy – they might have an impact on the surface and around the edges, but they won’t address the causes of more serious and long-term problems.
I can see why nudges are attractive at the moment – they’re cheap and light-touch, which is just what the government wants. But while they’re useful, they’re clearly not a panacea, and giving the impression that they are risks undermining support for them.
Nudging seems to me to be a good idea, and certainly worth a try. So perhaps the government should stay quiet about what it is planning, and just get on with nudging. If it works, they can tell us all about it afterwards.
Oh, and if I come across another blog post titled ‘Nudge, nudge, wink, wink’ I think I’ll scream!
Nudge is fast becoming to behaviour change what Google is to search engines. We have started to use the term as a catch-all shorthand for a patchy understanding of something like ‘all those fancy psycho-social tricks that alter how people think and act’. In doing so we are in danger of squeezing out all the other approaches we have to changing behaviour that might be more powerful or appropriate, including Think, Steer, and Mindfulness.
Tim Harford is one of many who argue that ‘nudge’ is being overused, and suggests that we should be wary of applying a concept designed for markets to inform the way Government changes the behaviour of citizens.
‘Nudge’ was the title of the book by Thaler and Sunstein that created the excitement around behaviour change, but as Richard Thaler indicated while speaking at the RSA, Nudge is really just a catchy term for the much more complex notion of ‘Libertarian Paternalism’ that is supposed to underpin nudge interventions. The idea is that you don’t undermine people’s freedom by choosing for them, but merely encourage them to make certain choices by altering the environment in certain ways, changing feedback mechanisms and shifting defaults.
An intelligent use of choice architecture makes good sense to me, but the paternalism is not unproblematic. For instance, speaking at the RSA, Anna Coote referred to “The whole ghastly nudge business which is actually about encouraging conformity”.
Moreover, it is not really transformative, which is perhaps, darkly, why people like it. Nudge changes the environment in such a way that people change their behaviour, but it doesn’t change people at any deeper level in terms of attitudes, values, motivations etc. And, as Clive Gross argued on RSA Comment, we risk oversimplifying why we change our behaviour.
In any case, you can hardly open a page these days without being ‘nudged’ and Downing Street’s ‘Behavioural Insight Team’ is called ‘the nudge unit’.
I was prompted to share these thoughts after reading quite a detailed piece on the work of this team in The Independent. However, what I really wanted to say, is that if I am absolutely honest, I think one of the main reasons ‘nudge’ has become so popular is that it sounds like ‘fudge’.
I am not being entirely facetious. For instance, social psychology has taught us that we prefer people with names similar to our own, and although I can’t prove that we are thinking of fudge when we say nudge, my gut feeling is that this tacit association needs to be part of the story…
Over the last decade, I have read a lot of non-fiction books, most of them broadly related to human development, from the technical end of popular science to the facile end of self-help. Highlights have been Hare Brain, Tortoise Mind by Claxton, The Happiness Hypothesis by Haidt, Ethical Know-how by Varela, Connected by Christakis and Fowler, Into the Silent Land by Broks, and Immunity to Change by Kegan and Lahey. All of these books (alas, mostly written by middle-aged white men) marshalled evidence to elegantly describe and develop a core thesis about human nature, and all of them answered the ‘so what?’ question about practical implications very powerfully.
However, with respect to all of these immensely impressive contributions, none of them compare with The Master and his Emissary, the book I was blessed to read to prepare for chairing last week’s RSA Keynote Event.
The best books are usually those that could only have been written by a particular person. In this case, Iain McGilchrist has a distinguished pedigree in both arts and sciences, having been an All Souls prize fellow in literature before training in medicine and becoming an accomplished psychiatrist. He therefore writes with authority in both the natural sciences and the humanities, and on the abundant links that lie between them, visible to those few who know how to look. In addition to this polymathic erudition, one can also sense, between the lines, an old soul with a dry wit who is immensely generous in spirit.
The book is about the profound significance of the fact that the left and right hemispheres of our brains have radically different ‘world views’ (described in the book). The hidden story of western culture, told here, is about how the abstract, instrumental, articulate and assured left hemisphere has gradually usurped the more contextual, humane, systemic, holistic but relatively tentative and inarticulate right hemisphere. The thesis is as strong on science as it is on narrative, replete with nuances, caveats, and references.
If you have ever had the feeling that the world is deeply screwed up in a way that you can’t quite articulate, this book will help you to make your case. If you want some insight into why we might be stupid enough to destroy our own planet, or why the slashing of funds for arts and humanities is even more tragic than you might think, read the book.
I might come to regret being quite so effusive, and there are certainly challenges to the core thesis and its implications that need to be entertained, hopefully in future blogs. Nonetheless, I would currently say it is one of the most important books of the 21st century. It is a grand theory for our times. If properly understood and acted upon, it has the potential to transform our view of our selves and our cultures, and prevent us from making a huge number of mistakes that might otherwise seem like sensible decisions.
For those who can’t wait to hear more, go to the end to watch or listen, but for those who prefer to read, my understanding of the argument goes as follows:
1) The left and right brain hemispheres are both involved in almost everything we do, such that crude dichotomies like the left being the logical side and the right being the creative side are a great disservice to public understanding of the brain.
2) However, if we cease to ask what the hemispheres do (language, reasoning, creativity, forecasting) and instead ask how they do it (contextualise or decontextualise, focus on lived experience or abstract models, instrumental or affective feedback, receptivity to counter-evidence, preference for old or new), we find very significant differences in the two hemispheres. The evidence for these differences is meticulously unpacked in the book in a compelling inductive argument: there is no killer fact, but a gradual unfolding of evidence, carefully tied together with an eye for counter-evidence.
3) The hemispheres are divided for good reason, because they perform different functions. The left is broadly about focussing, and the right is broadly about contextualising. These are compatible but occasionally competing aspects of our cognition and they are both essential. McGilchrist uses the example of a bird that can only focus on finding grain with its beak if it ignores surrounding context, but still needs some background awareness of surrounding context, and a capacity to respond to it, to avoid attacks by predators. The genius of the brain is its ability to switch between these modes in response to the environment.
4) The right hemisphere should be the dominant hemisphere, ‘The Master’, because it shapes the context, meaning and purpose of our experience of the world. The left hemisphere, ‘The Emissary’, should help us to achieve within this contextual, meaningful, purposeful perspective. The right hemisphere keeps us in touch with lived experience, keeping us deeply aware and responsive, while the left hemisphere is more like a very powerful computer that makes use of familiar schemas to achieve familiar ends. Cognition at its best is slightly different from army marching orders in that it should go ‘right-left-right’, i.e. context-focus-context, when in fact it often goes left-left-left, focus-focus-focus, with insufficient attention to the basis for the focussing, what is at stake, what might be different, and what is trying to be achieved.
5) There is insufficient evolutionary time for these changes to take place at a structural level of the brain. It is not that the left hemisphere is getting bigger or denser or better connected than the right. The point is that slowly but surely the left hemisphere shapes our culture in such a way that it makes its own perspective the dominant one, until we reach what McGilchrist calls ‘a hall of mirrors’ in which the explicit, instrumental, defined, abstract voice is the only one we believe in, and the implicit, intrinsic, fluid, visceral perspective sounds diminished and foreign. This perspective speaks to, inter alia, the Art, Drama and Music therapists currently struggling to make the case for their immense social value against cruel and blinkered market logics that want to measure their impact in numerical terms.
6) The mechanism for increased left hemisphere dominance is imitation, a subject close to our heart at the RSA. Crudely, the cultural ‘stuff’ of the left hemisphere is more contagious than the cultural ‘stuff’ of the right hemisphere. Have you heard the expression: “What gets measured gets done”? Or “If you can’t say it, you don’t really understand it?” Both are examples of the ‘emissary’ overstepping his mark, but doing so in a compelling way that is hard to fight back against.
7) Through epigenetic cultural evolution, the left hemisphere gradually colonises our experience. The good news is that the left hemisphere tends to be optimistic, giving us a feel-good factor, but the bad news is that it is remarkably unaware of how partial and/or deluded its view of the world can be, and scarily unreceptive to unfamiliar perspectives. In one of the best lines of the book McGilchrist writes:
“If I am right, that the story of the Western World is one of increasing left-hemisphere domination, we would not expect insight to be the key note. Instead, we would expect a sort of insouciant optimism, the sleepwalker whistling a happy tune as he ambles towards the abyss.”
I think this is a truly wonderful book, and it has certainly given me a new set of tools to think about the direction the world is taking and what we might do about it.
Thanks for reading this far, and before we amble into the abyss together, please listen to the audio of the event, which includes the avalanche of questions expertly fielded by Iain McGilchrist, or watch the video below, with just one question posed by me at the end, namely: If this colossal idea is true, which I now believe it to be, how do we guard it against widespread simplification or distortion?
Since our event earlier this month, I’ve been banging on about mindfulness to anybody who will listen. Nobody has yet asked me for a definition, but I can see in their eyes that they want to.
So what is mindfulness? My quick answer is that it’s the surprisingly difficult and strangely liberating experience of paying attention to what is going on inside us.
And yet, that’s a bit of a trap, as any definition would be. There is a place for verbal definitions, because unless you pin an idea down, there is a meaning vacuum, and it is difficult to say anything useful without fear of talking at cross purposes. However, verbal definitions are not entirely benign. If I ask you: What is water? You could give me its chemical composition, H2O, but I won’t really know what it is until I have taken a drink. Some things have to be experienced to be understood, as is the case with mindfulness.
A verbal description prior to a taste of the experience may not be harmful, but it is probably not helpful either. The best definition is to find a good teacher and start practising, but if that seems daunting, just try to sit still, perhaps with eyes closed, and continue to breathe, but now with an ongoing awareness of your breath. You will quickly discover, if you didn’t know already, that our everyday minds are rather chaotic, and that we struggle to hold our attention on any single thing, including something as simple as our breath, the basis of our existence, for more than a few seconds.
I will later come back to why such an experience should be important, or useful, but for now I want to stick with the definition. If the injunction to ‘just do it’ sounds evasive, consider the following definition of ‘definition’, which suggests why the experience is necessary:
Definition: The vivisection tray upon which a word is splayed; while the gist may be clearly labelled with coloured pins, resuscitation becomes problematic. (Ambrose Bierce, The Devil’s Dictionary)
The Devil’s Dictionary is deliberately subversive, but the point is powerful. Definitions can bring ideas to life, but if the conditions of that life are strictly demarcated, one’s idea of what the word means is needlessly limited.
Yes, yes, but what is it? What is it? Well it’s partly an attempt to free ourselves from the narcissistic verbal chattering that goes on in our heads for about 16 hours a day. So in a way, words and thoughts are part of that challenge.
Above all, I would say mindfulness is honest. It’s about facing up to our own minds, in all their complexity, and everything that follows from that honesty. The acclaimed author, Tim Parks, describes one aspect of this experience beautifully towards the end of his disarming book, Teach us to Sit Still.
“But as words and thought are eased out of the mind, so the self weakens. There is no narrative to feed it…. ‘Self’, it turns out, is an idea we invented, a story we tell ourselves. It needs language to survive. The words create meaning, the meaning purpose, the purpose narrative. But here, for a little while, there is no story, no rhetoric, no deceit. Here is silence and acceptance; the pleasure of a space that need not be imbued with meaning. Intensely aware, of the flesh, the breath, the blood, consciousness allows the ‘I’ to slip away.”
This idea may sound terrifying to some, and suggests quite an advanced level of practice, but while the ‘I’ may ultimately slip away, our lives, and our personal roles and identities very much go on, as Tim Parks suggests with reference to his wife, his daughter and his dog:
“So if I can recount the first minutes, I can’t tell the rest. There are deepenings. There is a liquefaction of some kind, the things flowing into the calves, the head into the breast. And there are resistances: stones, obstructions, pains. The mind goes back and back to them. An ankle. A shoulder. Maybe they will shift, and maybe not. I am absolutely awake. I hear Rita pad downstairs with the dog behind her. I hear a scooter straining up the hill. And I am not there. I am in the stream.”
It still sounds intense, but his responsibilities are unaffected:
“Then the alarm sounds and I must move. I’m up, dressed and getting Lucy into the car in just a few minutes. By ten past seven we are speeding down the hill, trying to beat the traffic light at San Felice.”
(Tim Parks, Teach us to Sit Still, p331)
According to a story in today’s Guardian, a serious migraine can cause a permanent French accent.
I have to confess that I found this idea very funny. Then it occurred to me that for the person concerned it was probably fairly devastating. So much of our identity is tied up in what we say, and how we sound. Such a condition could make you unrecognisable to yourself and to others, relationships might change, depression could set in, and your life could spiral out of control. It is serious.
And yet, does it cease to be funny? As George Bernard Shaw once put it: “Life doesn’t cease to be funny when people die any more than it ceases to be serious when people laugh.” And while foreign accent syndrome is serious, it is surely not as serious as death.
In this respect, like many others, I experience some degree of polyphasia. I recognise that the condition is serious, and yet I still find it funny. Is that ok? Or am I being unforgivably insensitive to the woman concerned?