Life is full of problems. Consequently, it’s also full of problem solvers. But how many problem solvers is too many? And how do you know when you’ve reached that critical point? When it comes to the support available for young entrepreneurs, it appears these questions are still waiting to be grappled with.
I’ve just been reading a paper on enterprise support put together by the businessman Doug Richard for the former Conservative Shadow Cabinet. The central message of the Richard Report was that the ecosystem of enterprise advice and guidance had ballooned under the Labour government into a sprawling, confusing and expensive morass. To take a few figures cited in the report: some 3,000 government small business support schemes were in operation in the mid-2000s; only 4.4 per cent of respondents to an FSB (Federation of Small Businesses) survey said they had used government business support; and the total amount spent by the state on such schemes ran to nearly £2.5 billion (at least, this was the case in 2003/4).
Informed in part by the paper’s findings, when the Coalition government came to power it took a number of steps to reduce the cost and complexity of the support available to existing and would-be entrepreneurs. Chief among these was closing the face-to-face Business Link centres across the country and replacing them with a single central hub for information and advice. In a similar move, the Regional Development Agencies (RDAs) were shut down and replaced with new Local Enterprise Partnerships (LEPs), designed to be led and run by businesses rather than government. In sum, the government moved from offering direct support to the enterprise community to creating a more favourable framework in which others, for instance universities and libraries, would be the main providers.
While these changes were arguably necessary, if not popular with everyone, they failed to get to grips with the same duplication and confusion that afflict the support ecosystem of third and private sector providers.
Throughout the recent workshops we held with young entrepreneurs across the country, time and again we heard the same message: the help provided to those who want to start and grow a business is often, somewhat paradoxically, both disjointed and duplicated. For example, a young person may find that both their local library and their university offer a mentorship scheme, while only the library provides desk space and only the university provides grants or loans. This is a problem for the young person, who is left bewildered by the options available to them, and it may also be inefficient and wasteful, since different groups are effectively providing the same service.
Choice is of course a good thing. Young entrepreneurs, and all entrepreneurs for that matter, should be able to pick and choose between different services to suit their needs. But as it stands, the support sector doesn’t operate like a free market. Current provision is dictated by funders, not by the demand of young people seeking services. In other words, the choice that young entrepreneurs face in picking services is to an extent fabricated. In theory, big funders, whether national banks or philanthropic foundations, should be able to make informed decisions about which support services to back based on the results of evaluations. Yet as the Richard Report pointed out, not enough support services conduct rigorous assessments of their activities.
So what to do about it? How can we create an ecosystem of young enterprise support that offers choice and high quality services to end users, and which sees organisations working together to minimise duplication and make the journey of enterprise support as seamless as possible?
A few ideas are already floating about in this space, but one particularly attractive option to have cropped up in a few of my conversations with people in the sector is that of a new kitemark accreditation scheme for support services. This could be similar to the new Project Oracle programme in London, whereby youth services in the city are taught how to undertake basic evaluation and are judged against an agreed grading system (running from 1 to 5). The idea is that this would help funders make more informed decisions about where to channel their money, and that it would also provide some indication to young people about the quality of the services they might receive.
It’s early days but it will be interesting to see if this has any legs. In the meantime, I’d be keen to hear of other people’s ideas.
How often do we ask ourselves: is what I’m doing truly working? It’s a simple question, and one which makes intuitive sense to ask if we ever intend to learn from our mistakes and improve the impact of our work. And yet it is something we tend to ignore time and again. At both an individual and an institutional level, many of us are resistant to the idea of evaluating the policies and practices that shape the effectiveness of our public services and dictate their value for money. Indeed, these days it is quite rare to hear or see the term ‘evidence-based policymaking’.
For youth services at least, Project Oracle is attempting to change all of this. The initiative was set up by the Mayor of London’s office to help smaller organisations that are working with young people evaluate their programmes using ‘rigorous and internationally recognised’ standards. The way it works is that once organisations have signed up they are provided with free advice and support on how to assess their work, and are guided through different ‘levels’ of evaluation that gradually become more sophisticated. The Project has the added benefit of creating a sound structure for collecting and disseminating cross-comparable data that everybody in the sector will find meaningful – at least in the capital.
While attending a recent seminar at NESTA to learn more about the project, I heard a number of interesting points raised about the obstacles to undertaking evaluation schemes and the subsequent difficulty of making use of the data once it has been collected. Many attendees, for instance, said they feel there are cultural differences between people working in the voluntary sector and those in the academic/policy world: academics may insist on gathering quantitative data, but service practitioners find anecdotal evidence far more useful. Another key issue raised was that although many funders are willing to pay for the evaluation of an organisation’s operations, only a handful actually commit resources to implementing the recommendations that emerge from the research.
While all of this is no doubt interesting and useful, it felt as though the conversation side-stepped one of the biggest impediments to the initiation, quality and utility of evaluation schemes: the simple fact that many of us have difficulty accepting defeat and apportioning fault. Whether for a frontline practitioner or a senior manager, taking part in an evaluation process may open up a Pandora’s box of knock-on effects, which at best may lead to the radical restructuring of the organisation and at worst to the termination of projects and ultimately job losses. Vested interests aside, there is also the challenge for service users and colleagues who may find themselves in the uncomfortable position of saying, albeit honestly, that someone’s efforts and practices are ineffective. It is one thing to acknowledge failings in our own work, but to highlight the flaws in someone else’s takes some courage.
This matters doubly because there has rarely been a more pressing time to identify failure in our work and be open to new approaches. Whereas in previous years public service innovation was characterised by the sharing and adoption of universally recognised ‘best practice’ at home and abroad, the next stage is arguably going to be an era of localised, radical experimentation. In other words, organisations providing public services are likely to be encouraged to become their own ‘innovation labs’, testing different methods and practices until they land on what works best for them. In practice, this could mean a school experimenting with different ways of teaching maths, or a GP consortium trying out innovative new health treatments with its patients.
Wherever this new wave of experimentation and rapid evaluation takes place, it will demand that service users, practitioners and those in senior management adopt a mindset that is comfortable with ambiguity and unafraid of failure.
It could be said that in the future there will be two sides to the coin of public service transformation. The first is that success depends on learning what works and adopting those approaches; the second is that we learn what doesn’t, and ensure those approaches gracefully bow out. To date, it seems we have focused too much on the former at the expense of the latter.