Life is full of problems. Consequently, it’s also full of problem solvers. But how many problem solvers is too many? And how do you know when you’ve reached that critical point? When it comes to the support available for young entrepreneurs, it appears these questions are still waiting to be grappled with.
I’ve just been reading a paper on enterprise support that was put together by the businessman Doug Richard for the former Conservative Shadow Cabinet. The central message of the Richard Report was that the ecosystem of enterprise advice and guidance had ballooned under the Labour government into a sprawling, confusing and expensive morass. To take a few figures cited in the report: some 3,000 government small-business support schemes were in operation in the mid-2000s; only 4.4 per cent of FSB survey respondents said they had used government business support; and the total amount spent by the state on such schemes ran to nearly £2.5 billion (at least, this was the case in 2003/4).
Informed in part by the paper’s findings, the Coalition government took a number of steps on coming to power to reduce the cost and complexity of the support available to existing and would-be entrepreneurs. Chief among these was the closure of face-to-face Business Link centres across the country, replaced by one central hub for information and advice. In a similar move, the Regional Development Agencies (RDAs) were shut down and replaced with new Local Enterprise Partnerships (LEPs), designed to be led and run by businesses rather than government. In sum, the government moved from offering direct support to the enterprise community to creating a more favourable framework in which others, for instance universities and libraries, would act as the main providers.
While these changes were arguably necessary, if not popular with everyone, they failed to get to grips with the duplication and confusion that afflict the wider support ecosystem of third- and private-sector providers.
Throughout the recent workshops we held with young entrepreneurs across the country, we heard the same message time and again: the help provided to those who want to start and grow a business is often, somewhat paradoxically, both disjointed and duplicated. For example, a young person may find that both the local library and the university offer a mentorship scheme, while only the library provides desk space and only the university provides grants or loans. This can be problematic for the young person, who is left bewildered by the options available, and it may also be inefficient and wasteful, since different groups are effectively providing the same service.
Choice is of course a good thing. Young entrepreneurs, and all entrepreneurs for that matter, should be able to pick and choose between different services to suit their needs. But as it stands, the support sector doesn’t operate like a free market. Current provision is dictated by funders, not by the demand of young people seeking services. In other words, the choice that young entrepreneurs face in picking services is to an extent fabricated. In theory, big funders, whether national banks or philanthropic foundations, should be able to make informed decisions about which support services to back based on the results of evaluations. Yet as the Richard Report pointed out, not enough support services conduct rigorous assessments of their activities.
So what to do about it? How can we create an ecosystem of young enterprise support that offers choice and high quality services to end users, and which sees organisations working together to minimise duplication and make the journey of enterprise support as seamless as possible?
A few ideas are already floating about in this space, but one particularly attractive option that has cropped up in a few of my conversations with people in the sector is a new kitemark accreditation scheme for support services. This could be similar to the new Project Oracle programme in London, whereby youth services in the city are taught how to undertake basic evaluation and are judged against an agreed grading system (running from 1 to 5). The idea is that this would help funders make more informed decisions about where to channel their money, and would also give young people some indication of the quality of the services they might receive.
It’s early days but it will be interesting to see if this has any legs. In the meantime, I’d be keen to hear of other people’s ideas.
I’m in the middle of evaluating the Arts and Social Change strand of Citizen Power Peterborough. I don’t want to get into the details of the programme itself – read here if you’d like a primer – but rather, to talk about a few interesting problems that the evaluation has thrown up.
Evaluating something like Arts and Social Change isn’t about measuring ‘success vs. failure’ – if everything in the project had been a ‘success’ in that narrow sense, there would have been no learning and the project as a whole would have failed. Citizen Power Peterborough has above all been an experiment – and nowhere more so than in Arts and Social Change. The goal is to find out what impact, if any, the arts can have on positive social change, and this has been pursued through a number of targeted arts-based interventions in Peterborough. Some projects have been hugely successful in terms of impact, others partly so (with important findings), and all have been able to adapt as they progressed, reflecting on-the-ground realities, new ideas and preliminary results.
The Arts and Social Change programme has run according to a set of principles; one of those principles is emergence. To paraphrase broadly, this is the idea that interventions in complex structures (like the communities of Peterborough) will lead to multiple, complex outcomes – the kind that can’t easily be predicted at the outset. These kinds of findings are extremely valuable, because they can only be brought to light through hands-on experimentation.
So, to recap: a huge experiment in a complex structure, where accurate prediction is all but impossible, where there are high levels of reflexivity, and where only the broadest of goals (increasing attachment, participation and innovation) were known at the start. How do you evaluate an experiment like the above?
One tactic is to do what many people would do when faced with a big problem: break it down into a series of smaller, more manageable problems. Arts and Social Change ran as a series of interconnected strands, linking with other parts of the Citizen Power programme: these strands were much smaller and more responsive, with fewer participants on all sides. They had more specific goals (such as ‘increasing community cohesion’) and tentative measures of their individual success or failure. Evaluating the strands in this way will certainly be part of the final evaluation, and it’s incredibly exciting to be able to delve into the programme at that level.
It would be missing a trick, though, to evaluate the whole programme by the success of its parts. Talking to people involved, one of the programme’s real (and, if we’re not careful, hidden) successes has been its impact upon the ‘bigger picture’. To give an example: one of the first documents I came across whilst researching was a letter to the Evening Telegraph (Peterborough’s local paper) from a resident, describing an intervention that had been quite strongly criticised by the paper: “…I found it one of the most enlightening and thought-provoking activities that I have ever taken part in. I still find it hard to believe that the city council had the courage to help fund this, but I am very glad that they did.” Read her words carefully once more, and try to recall the last time a council-funded programme made you feel that way. How do you measure enlightenment? Was it ‘good value for money’? The author measures the cost favourably against some other council spending (and she makes a convincing case), but could you price the “most enlightening and thought-provoking” events in your life? I know I couldn’t. Impacts like this, if they can be nailed down and cogently articulated, give the lie to those who see the arts as an ‘optional extra’ – a luxury to be cut when money’s tight.
Consider this: I like knowing my neighbours, but I have enough social capital that I don’t rely on them – if I have personal or professional difficulties, I have plenty of places to turn to. I like where I live, but if I had to move, I’m pretty certain I’d be fine. It’s not like that for everyone. We’re talking about real interventions in places where community ties, family bonds and professional networks are all under incredible strain, and where without support, a space for dialogue and the ability to explore together, things are unlikely to improve. Art can make that happen, in a way that little else can, and Arts and Social Change is in a unique position to show how. I’ve heard neighbourhood managers talk about how an intervention has fundamentally altered how they see their work, civil society leaders tell of a re-invigorated sense of collective self-belief, and residents describe moving from isolation to feeling that they are involved in a shared project – a shared life – with those around them.
But how to capture all that? We’re all going to face some extraordinary pressures over the next few months and years, and Peterborough will face as many of them as anywhere. If we can articulate the many things that have been learned by Peterborough’s residents, then we can share them, and play a part in handing powerful tools (for free!) to communities who need them most.
How often do we ask ourselves: is what I’m doing truly working? It’s a simple question, and one that makes intuitive sense to ask if we ever intend to learn from our mistakes and improve the impact of our work. And yet it is something we tend to ignore time and again. At both an individual and an institutional level, many of us are resistant to the idea of evaluating the policies and practices that shape the effectiveness of our public services and determine their value for money. Indeed, these days it is quite rare to hear or see the term ‘evidence-based policymaking’.
For youth services at least, Project Oracle is attempting to change all of this. The initiative was set up by the Mayor of London’s office to help smaller organisations that are working with young people evaluate their programmes using ‘rigorous and internationally recognised’ standards. The way it works is that once organisations have signed up they are provided with free advice and support on how to assess their work, and are guided through different ‘levels’ of evaluation that gradually become more sophisticated. The Project has the added benefit of creating a sound structure for collecting and disseminating cross-comparable data that everybody in the sector will find meaningful – at least in the capital.
While attending a recent seminar at NESTA to learn more about the project, I heard a number of interesting points raised about the obstacles to undertaking evaluation and the subsequent difficulties of making use of the data once it has been collected. Many of those attending, for instance, felt there are cultural differences between people working in the voluntary sector and those in the academic/policy world: academics may insist on gathering quantitative data, but service practitioners find anecdotal evidence far more useful. Another key issue raised was that although many funders are willing to pay for the evaluation of an organisation’s operations, only a handful ever commit to providing the resources to implement the recommendations that emerge from the research.
While all of this is no doubt interesting and useful, it felt as though the conversation side-stepped one of the biggest impediments to the initiation, quality and utility of evaluation schemes: the simple fact that many of us have difficulty accepting defeat and apportioning fault. Whether for a frontline practitioner or a senior manager, taking part in an evaluation process may open up a Pandora’s box of knock-on effects, which at best may lead to radical restructuring of the organisation and at worst to the termination of projects and ultimately job losses. Vested interests aside, there is also the challenge for service users and colleagues who may find themselves in the uncomfortable position of saying, albeit honestly, that someone’s efforts and practices are ineffective. It is one thing to acknowledge failings in our own work, but to highlight the shortcomings in someone else’s takes some courage.
This is doubly important because there has rarely been a more pressing time to identify failure in our work and be open to new approaches. Whereas in previous years public service innovation was characterised by the sharing and adoption of universally recognised ‘best practice’ at home and abroad, the next stage is arguably going to be an era of localised, radical experimentation. In other words, organisations providing public services are likely to be encouraged to become their own ‘innovation labs’, testing different methods and practices until they land on what works best for them. In practice, this could mean a school experimenting with different ways of teaching maths, or a GP consortium trying out innovative new health treatments with its patients.
Wherever this new wave of experimentation and rapid evaluation takes place, it will demand that service users, practitioners and those in senior management have a mindset that is comfortable with ambiguity and not afraid of failure.
It could be said that in the future there will be two sides to the coin when it comes to public service transformation. The first is that success depends on learning what works and adopting those approaches; the second is that we learn what doesn’t work and ensure those approaches gracefully bow out. To date, it seems we have focused too much on the former at the expense of the latter.