

OECD Framework for the Evaluation of SME and Entrepreneurship Policies and Programmes


The full text of this book is available on line via this link: www.sourceoecd.org/industrytrade/9789264040083

Those with access to all OECD books on line should use this link: www.sourceoecd.org/9789264040083

SourceOECD is the OECD’s online library of books, periodicals and statistical databases. For more information about this award-winning service and free trials, ask your librarian, or write to us at SourceOECD@oecd.org.

ISBN 978-92-64-04008-3

85 2007 04 1 P

OECD Framework for the Evaluation of SME and Entrepreneurship Policies and Programmes

This Framework provides policy makers with a concrete, explicit, practical and accessible guide to best practice evaluation methods for SME and entrepreneurship policies and programmes, drawing upon examples from a wide range of OECD countries. It examines the benefits of evaluation and how to address common issues that arise when commissioning and undertaking SME and entrepreneurship evaluations. Key evaluation principles are set out, including the “Six Steps to Heaven” approach, and illustrated with examples of evaluations of national, regional and local programmes that can be explored further by the reader. The publication focuses not only on the evaluation of individual policies and programmes but also on bigger picture peer review evaluations and assessment of the impact on SMEs and entrepreneurship of mainstream programmes that do not have business development as their principal aim.


OECD Framework for the Evaluation of SME and Entrepreneurship Policies and Programmes


ORGANISATION FOR ECONOMIC CO-OPERATION AND DEVELOPMENT

The OECD is a unique forum where the governments of 30 democracies work together to address the economic, social and environmental challenges of globalisation. The OECD is also at the forefront of efforts to understand and to help governments respond to new developments and concerns, such as corporate governance, the information economy and the challenges of an ageing population. The Organisation provides a setting where governments can compare policy experiences, seek answers to common problems, identify good practice and work to co-ordinate domestic and international policies.

The OECD member countries are: Australia, Austria, Belgium, Canada, the Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Italy, Japan, Korea, Luxembourg, Mexico, the Netherlands, New Zealand, Norway, Poland, Portugal, the Slovak Republic, Spain, Sweden, Switzerland, Turkey, the United Kingdom and the United States. The Commission of the European Communities takes part in the work of the OECD.

OECD Publishing disseminates widely the results of the Organisation’s statistics gathering and research on economic, social and environmental issues, as well as the conventions, guidelines and standards agreed by its members.

Also available in French under the title:

Cadre de l’OCDE sur l’évaluation des politiques et des programmes à l’égard des PME et de l’entrepreneuriat

Corrigenda to OECD publications may be found on line at: www.oecd.org/publishing/corrigenda.

© OECD 2007

No reproduction, copy, transmission or translation of this publication may be made without written permission.

Applications should be sent to OECD Publishing rights@oecd.org or by fax 33 1 45 24 99 30. Permission to photocopy a portion of this work should be addressed to the Centre français d’exploitation du droit de copie (CFC), 20, rue des Grands-Augustins, 75006 Paris, France, fax 33 1 46 34 67 19, contact@cfcopies.com or (for US only) to Copyright Clearance Center (CCC), 222 Rosewood Drive, Danvers, MA 01923, USA, fax 1 978 646 8600, info@copyright.com.

This work is published on the responsibility of the Secretary-General of the OECD. The opinions expressed and arguments employed herein do not necessarily reflect the official views of the Organisation or of the governments of its member countries.


The OECD Working Party on Small and Medium-sized Enterprises and Entrepreneurship (WPSMEE), in line with a recommendation of the 2004 Istanbul Ministerial Declaration on Fostering the Growth of Innovative and Internationally Competitive SMEs, has prepared this report aimed at strengthening the conceptual framework for SME policy evaluation. This report seeks to be of direct practical assistance to public administrators and politicians concerned with evidence on the effectiveness of SME and entrepreneurship policies and programmes at a national and local level.

The Framework was written by Dr Jonathan Potter, Principal Administrator, OECD Centre for Entrepreneurship, SMEs and Local Development, and Prof David Storey, Warwick Business School, UK, and prepared under the supervision of Mme Marie-Florence Estimé, Deputy Director of the OECD Centre for Entrepreneurship, SMEs and Local Development (CFE).

A Steering Group, co-chaired by Dr Roger Wigglesworth, New Zealand, and Mr George Bramley, United Kingdom, guided the preparation of the Framework. The Co-Chairs along with the members of the Steering Group offered many valuable comments during the drafting, revisions and review of the Framework: Mrs Sue Weston and Ms Vicki Brown, Australia; Mrs Laura Morin and Ms Kaili Levesque, Canada; Ms Annukka Lehtonen and Mr Pertti Valtonen, Finland; Mr Serge Boscher and Mr Jean-Hugues Pierson, France; Mr Tamas Lesko and Dr Ágnes Jánszky, Hungary; Mr Young-Tae Kim and Dr Sung Cheon Kang, Korea; and Ms Ana María Lagares Pérez, Spain.

Sincere appreciation is extended to the Delegates of the OECD WPSMEE for their numerous comments and inputs during the compilation of the Framework.

Thanks also go to Mr Kevin Williams, Principal Administrator, OECD Council and Executive Committee Secretariat, Mr Hans Lundgren, Head of Section, Evaluation, Development Co-operation Directorate, and Mrs Mariarosa Lunati, Administrator, CFE/SME and Entrepreneurship Division for their drafting suggestions, and Ms Brynn Deprey, Mr Jorge Gálvez Méndez, Mr Damian Garnys, and Ms Elsie Lotthe for their operational support.


Table of Contents

Summary and Route Map 9

Section 1 Evaluation Issues 15

Defining evaluation 16

Why do an evaluation? 17

Typical objections to evaluation and responses 19

Key evaluation debates 22

Doing evaluations 27

Some key principles for evaluation practice 32

Notes 34

Section 2 Evaluation of Individual National Programmes 37

Introduction 38

Evaluations of financial assistance 39

Enterprise culture 42

Advice and assistance 43

Technology 47

Conclusion 48

Section 3 Evaluation of Regional and Local Programmes 53

Introduction 54

Advice, consultancy and financial assistance 55

Clusters and local innovation systems 57

Support to areas of geographical disadvantage 59

Conclusion 64

Section 4 The Role of Peer Review in Evaluation 67

Introduction 68

The peer review methodology 68

OECD national SME reviews 70

OECD regional and local entrepreneurship reviews 71

OECD evaluation guidance 72

Section 5 Reviewing the Aggregate Impact of Public Policies 75

Introduction 76

Impact of mainstream policies on SMEs 77

Capturing the total policy package 91


Conclusion 94

Notes 95

References 97

Appendix A The OECD Istanbul Position 103

Appendix B Six Steps to Heaven: Methods for Assessing the Impact of SME Policy 106

Appendix C Examples of Evaluation Guidance 109

Appendix D Assessing the Quality of an Evaluation 111

Appendix E Framework Condition Indicators: Entrepreneurship Conditions in Denmark in 2005 113

Appendix F Summary of the Evaluation of State Aid to SMEs in the Member States, European Economic Area and the Candidate Countries 120

List of tables

1.1 Qualitative compared with quantitative evaluation 23

1.2 The choice of internal and external evaluators 25

2.1 SME and entrepreneurship policy areas covered 38

2.2 Loan guarantee scheme, Japan 39

2.3 Loan guarantee scheme, Canada 40

2.4 Assistance to new enterprises started by young people, Italy 40

2.5 Grant assistance and small firm performance, Ireland 40

2.6 Public subsidies to business angels: EIS and VCT, UK 41

2.7 Public subsidies to business angels: EIS, UK 41

2.8 Assisting young disadvantaged people to start up businesses, UK 42

2.9 Graduates into business, UK 43

2.10 Investment readiness, New Zealand 43

2.11 Impact of marketing advice, UK 44

2.12 Impact of business advice, Belgium 44

2.13 Impact of advisory support, Bangladesh 44

2.14 Bank customers receiving business advice, UK 45

2.15 Assistance and advice for mature SMEs, UK 45

2.16 Use and impact of business advice, UK 46

2.17 Evaluating entrepreneurial assistance programs, US 46

2.18 Encouraging partnerships amongst SMEs, Sweden 47

2.19 Technology assistance to small firms, US 48

2.20 The SBIR program, US 48

2.21 The UK SMART programme 49

2.22 Impact of science parks, Greece 49

2.23 Impact of science parks, Sweden 50

2.24 University/SME links, New Zealand 50


2.25 Impact of management training on SMEs, UK 51

2.26 Small firms training loans, UK 51

3.1 Regional/local policy areas covered 55

3.2 Subsidised consulting, Belgium, Wallonia 56

3.3 Business advisory services, UK, South West England 56

3.4 Enhancing the capability of the SME owner through use of consultants, UK, Scotland 57

3.5 Export information and advice, Canada, Quebec 57

3.6 Enterprise partnerships for exporting, Sweden, Örebo 58

3.7 Small business grants, UK, North East England 58

3.8 Regional development agency grants, Ireland, Shannon 59

3.9 Local innovation system policy, EU regions 60

3.10 Business networking, UK, North East England 61

3.11 Enterprise Zone evaluation, US, Indiana 61

3.12 Enterprise Zone evaluation, US, Five States 62

3.13 Enterprise Zone evaluation, UK 62

3.14 Evaluation of enterprise support in disadvantaged areas, UK 63

3.15 Regional policy evaluation, UK 63

3.16 Regional policy evaluation, Italy 64

3.17 Rural policy evaluation, Canada, Quebec 64

3.18 Rural enterprise support, United Kingdom, Northumberland 65

5.1 The indicators 83

5.2 Ease of Doing Business ranking 84

5.3 Starting a business in 1999, 2004 and 2006 85

5.4 Average conversion rates young businesses/nascent entrepreneurs, 2000-2004 92

5.5 Selecting policy areas 92

B.1 Six Steps to Heaven: Methods for assessing the impact of SME policy 106

D.1 Grid for a synthetic assessment of the quality of evaluation work 112

List of figures

1.1 New Zealand Trade and Enterprise (NZTE) Growth Range Programme Logic Model 31


Summary and Route Map

This Framework document provides a forum for the international exchange of knowledge on best practice evaluation of Small and Medium-sized Enterprise (SME) and Entrepreneurship policy. Its target readership is public administrators and policy-makers concerned with the formulation, development and implementation of SME policy, together with professionals concerned with evaluation of such policies. It seeks to be concrete, explicit, practical and accessible, drawing upon examples from a wide range of OECD countries. Almost all the evaluations documented are publicly available online. It is also intended that the text will assist SME policy makers in non-member countries.

In line with the OECD Istanbul Position, which underlines the need to strengthen the culture of evaluation of SME and entrepreneurship policies (Appendix A), this document has four objectives:

● To increase the awareness of politicians and public officials of the benefits from having an evaluation culture

● To disseminate examples of good micro evaluation practice at national and sub-national levels

● To highlight key evaluation debates: Who does evaluations? What procedures and methods should be used? When to do the evaluations? What about the dissemination of findings? Should all policies be evaluated in the same way?

● To make a clear distinction between policies that operate at the micro level, i.e. SME and entrepreneurship specific policies, and those that operate at the macro level, i.e. mainstream policies that nonetheless influence SMEs and entrepreneurship

To achieve this end, the Framework is divided into three main parts. The first deals with evaluations of micro entrepreneurship and SME policies formulated and delivered at the national level. The second deals with entrepreneurship and SME policies delivered at the local/regional level. The third section is rather different. It reviews approaches to establishing the aggregate impact of a range of public policies that strongly influence entrepreneurship and SME performance, yet are rarely the responsibility of the main department of government responsible for SMEs. Prior to that, the Framework reviews good practice in evaluation more generally.

It should be noted that the Framework does not seek to be a handbook or manual that sets out the steps that need to be taken to complete an evaluation. A substantial body of such handbooks and manuals exists and selected examples are provided in Appendix C. Rather, the focus of the Framework is on discussing the difficult issues that arise in evaluating SME and entrepreneurship policies and programmes, particularly with respect to quantitative impact evaluation, and providing examples of evaluation approaches that have been used to address these issues. The Framework should therefore be read in conjunction with, rather than in place of, other evaluation guidance in this field.

This summary provides a route-map for the reader, highlighting its key conclusions. It then moves on to setting out the key conclusions from each of the three parts. Finally it sets out a proposal for continuous improvement in the evaluation of SME policy.

So, why do evaluation?

● To establish the impact of policies and programmes

● To make informed decisions about the allocation of funds

● To show the taxpayer and business community whether the programme is a cost-effective use of public funds

● To stimulate informed debate

● To achieve continued improvements in the design and administration of programmes

When and how should programme evaluation be done?

● Evaluation has to be integral to the policy process. Hence there is merit in undertaking prospective evaluations – as policy options are being formulated; formative evaluations as the policy is in operation; and summative evaluations once a clear policy impact can be judged. The summative evaluation findings have to feed back into current policy making.

● For summative evaluations we favour a dual approach. The first is to establish the impact of established large scale programmes by using quantitative, statistical methods using “control groups” that score highly on the “Six Steps to Heaven” metric.

● These can be valuably complemented with qualitative approaches such as case studies and peer reviews for more detail on how policy works and how it may be adjusted. Qualitative approaches are also useful for smaller scale programmes for which the costs of quantitative evaluation may be too high.
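The “control group” logic behind such summative impact evaluations can be sketched in a few lines. The following is an illustrative difference-in-differences calculation, not a method prescribed by the Framework, and all firm data in it are invented; a real evaluation would draw on matched survey or administrative records:

```python
# Illustrative difference-in-differences calculation for a summative
# impact evaluation: assisted firms are compared against a "control
# group" of similar non-assisted firms. All figures are invented.

from statistics import mean

def did_estimate(treated_before, treated_after, control_before, control_after):
    """Average outcome change for assisted firms minus the average change
    for the control group: the programme-attributable effect, under the
    assumption that both groups would otherwise have moved in parallel."""
    treated_change = mean(treated_after) - mean(treated_before)
    control_change = mean(control_after) - mean(control_before)
    return treated_change - control_change

# Hypothetical employment head counts before and after the programme.
treated_before = [10, 12, 8, 15]
treated_after = [14, 15, 11, 19]
control_before = [9, 11, 10, 14]
control_after = [10, 12, 11, 15]

effect = did_estimate(treated_before, treated_after, control_before, control_after)
print(f"Jobs per firm attributable to the programme: {effect:.2f}")  # → 2.50
```

This loosely corresponds to the upper steps of the “Six Steps to Heaven” ladder, where assisted firms are compared with a matched control group rather than judged on take-up or recipients’ opinions alone.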

And by whom?

● Evaluation undertaken by specialists is essential for reliable impact evaluation. Sometimes the necessary independence can only be delivered by “outsiders” but independent evaluation units within government can also perform this role.

The bedrock of good evaluation comprises:

● The programme has to have clearly specified objectives from which it is possible to determine whether or not it succeeded

● The evaluation has to be set in progress and data collection begun as, or even before, the programme is implemented

● The evaluation has to be able to lead to policy change

The evaluation of national programmes

● This section of the Framework provides examples of evaluations that have been undertaken on the following policy areas: Financial Assistance; Enterprise Culture; Advice and Assistance; Technology; and Management Training.

● It concludes that, whilst there are examples of high quality evaluations, this is not the norm.

● Broadly, lower quality evaluations seem to produce more “favourable” outcomes for the project because they attribute observed change to the policy when this may not be justified.

The evaluation of local and regional programmes

● At the regional and local level less costly and less sophisticated approaches are often adopted because the programmes are often smaller and because evaluation structures in terms of information bases, professional evaluation capabilities and understanding of evaluation methods by users may be weaker.

● The work of the OECD Local Economic and Employment Development (LEED) Programme with city and regional governments and development agencies has shown that a critical issue for policy development is increasing understanding of the real policy needs of the region or locality and assessing the alternative options for intervention given the specific local context.


Peer review: a tool for evaluation

● Whilst evaluation of programme impacts is still required, broader “peer reviews” are also useful in providing “big picture” assessments of the full range of entrepreneurship and SME policies, including in selected regions.

Reviewing the aggregate impact on SMEs and entrepreneurship of public policies

● Although explicit and targeted SME and entrepreneurship policies influence the creation of new firms and the development of SMEs, so also do other government policies which do not have such a focus. They are also rarely the responsibility of the main SME department of government. These policies include control of interest rate and tax policies, social policies such as the setting of unemployment benefits, the cost and time of starting a new business and the role of immigration and emigration.

● These policies represent substantial expenditures in many countries and our review shows they impact powerfully on entrepreneurship and SME development. However, control, or influence, over that total expenditure is rarely exercised by the department of government responsible for SME policy. Instead, other departments or organisations of government often have considerably larger budgets, but may have different priorities to those of the main SME department.

● The challenge for SME and entrepreneurship policy makers is to identify these macro policies and their links to enterprise. It is then to seek to ensure that they work in a way which is congruent with the objectives of enterprise support.

● Evaluation approaches need to be developed that permit policy makers with SME and entrepreneurship responsibilities to be able to engage more fully in cross-government discussions on priority setting.


of policy makers, SMEs and the taxpayer. A future text could benefit from the following:

● More examples of high quality evaluations that can be shared between countries; and

● More evidence, probably in a case study format, of the links between evaluations undertaken and policy changes. An example here might be the review by OECD of SME policy in Mexico and the changes that subsequently occurred in that country.

In short, what we are able to provide in this current text are some generic approaches to evaluation and some examples of evaluations undertaken, some of which are better than others in terms of their technical merit.


Section 1

Evaluation Issues


Defining evaluation

In their review of policy evaluation in innovation and technology, Papaconstantinou and Polt (1997) provide a very helpful definition of evaluation. They say: “Evaluation refers to a process that seeks to determine as systematically and objectively as possible the relevance, efficiency and effectiveness of an activity in terms of its objectives, including the analysis of the implementation and administrative management of such activity”.

Several words or phrases in this definition merit strong emphasis. The first key-word is “process”. This emphasises that evaluation is not a “once-off” activity, undertaken once a particular programme has been completed. Instead it is an integral element of a process of improved policy or service delivery.

A second key phrase in the definition of evaluation is “as systematically and objectively as possible”. Given that evaluation traditionally takes place “at the end of the line”1 there are likely to be strong entrenched interests in place once a programme has been in existence for a number of years. These entrenched interests include the direct beneficiaries of the programme, such as the businesses receiving funds, but they will also include those who are responsible for initiating and administering these programmes. All else held equal, it is to be expected that all these groups will choose the programme to continue or expand. The task of the evaluator, however, is to “systematically and objectively” assess the merits of the programme. In this task, the evaluator may well conflict with those committed to the programme. Only through the use of objective techniques, discussed later in the paper, can the evaluator demonstrate their independence to those delivering programmes.

The third key phrase in the definition is “the relevance, efficiency and effectiveness of an activity in terms of its objectives”. The implicit assumption in this statement is that the policy has clear objectives and that these are stated in sufficiently clear terms for them to be used by the evaluator. In practice, this is by no means always the case. As will be shown later, a key role for evaluators is often to formalise for the first time the objectives of programmes, often after such programmes have been in operation for many years.

This definition, and the OECD Istanbul paper,2 emphasised that evaluation has an integral role to play in the policy process. Evaluation cannot be left “at the end of the line”. Instead, it has to be a key element of initial policy formulation. Once the policy is operational, all organisations and individuals responsible for delivery have to be aware that evaluation is to take place. Once the evaluation has been undertaken, and sometimes as it is taking place, it should be used as the basis for dialogue with policy makers, with the objective of delivering better policy. The outcome of the evaluation can then become an input into a debate on the appropriate ways for governments and SMEs to interact.

Why do an evaluation?

Whilst some countries have a long established tradition of undertaking evaluation, others do not. For those seeking to champion a culture of evaluation, the following arguments summarise the case in favour. We then also take the arguments that are often used against evaluation and address them.

To establish the impact of policies and programmes against their objectives

The principal reason for doing evaluation is to establish whether or not policy has contributed to correcting or ameliorating the problem it set out to resolve. This is often thought of in terms of tackling market failures that reduce economic efficiency, such as inadequate availability of finance, skills, advice and technologies, but may also encompass a desire to improve equity among groups of people or places, for example by supporting entrepreneurship among unemployed youth or entrepreneurship in poor localities. Evaluation of these impacts is facilitated by a clear statement of measurable outcomes right at the start of the policy/programme design and the collection of relevant data throughout its life.
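The principle of stating measurable outcomes at the design stage can be sketched as a minimal data structure. The programme, indicator names and target figures below are hypothetical, chosen purely for illustration:

```python
# Minimal sketch of a programme specification that states measurable
# outcomes at design time, so a later evaluation has something concrete
# to test against. All names and figures here are invented examples.

from dataclasses import dataclass

@dataclass
class OutcomeIndicator:
    name: str          # what is measured
    baseline: float    # value at programme start
    target: float      # value the programme aims to reach

    def achieved(self, observed: float) -> bool:
        """True if the observed value meets or exceeds the target."""
        return observed >= self.target

# Hypothetical loan-guarantee programme objectives, fixed before launch.
indicators = [
    OutcomeIndicator("firms receiving guaranteed loans", baseline=0, target=500),
    OutcomeIndicator("three-year survival rate of assisted firms (%)", baseline=60.0, target=70.0),
]

# At evaluation time, observed values are compared against the targets.
observed = {"firms receiving guaranteed loans": 540,
            "three-year survival rate of assisted firms (%)": 66.0}

for ind in indicators:
    print(ind.name, "->", "met" if ind.achieved(observed[ind.name]) else "not met")
```

The point is not the code itself but the discipline it encodes: each objective is written down with a baseline and a target before delivery begins, and data against those indicators are collected throughout the programme’s life.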

To make informed decisions about the allocation of funds

Governments manage a portfolio of policies and programmes, each with its own rationale and justification. Evaluation assists managers to assess the relative effectiveness of these policies and programmes and to make judgements about where to place their efforts in order to obtain the greatest benefits for given costs. Evaluation evidence can help to identify where government can make the biggest difference to its objectives and targets.

To show the taxpayer and business community whether the programme is a cost-effective use of public funds

The scale of taxpayers’ funding for entrepreneurship and small business policies clearly varies from one country to another. It also varies according to precisely what is incorporated into the definition. Nevertheless the amounts are usually substantial. For example, EIM (2004) reports that approximately six billion euros was spent annually by EU Member States on state aid to small and medium-sized enterprises.3 However, even this may be a considerable underestimate. One EU Member State – the United Kingdom – in a comprehensive review of taxpayers’ funding directed towards SMEs, reported that 2.5 billion GBP of public money was spent on direct support to SMEs in England alone (PACEC, 2005, quoted by National Audit Office, 2006). A third example is a programme in the United States – the Small Business Innovation Research Program. Cooper (2003) reports this programme made annual awards of USD 1.1 billion in the 1997-1999 calendar years.4

These examples illustrate that, probably for most developed countries, public funding of SMEs is substantial, even if it is extremely difficult to quantify in aggregate and may still be relatively modest in terms of taxpayers’ support to large enterprises. Given these substantial sums of public money, it is reasonable for taxpayers to be reassured that their funding is being spent in an appropriate manner. It is reasonable for taxpayers to demand evidence that public programmes are spending funds in accordance with their stated objectives. This role is normally played by public auditors. A second role, but one not normally played by auditors, is to assess whether the public funds are achieving the objectives set out by politicians. This is the function of evaluators.

To stimulate democratic debate

In democracies, it is reasonable for the electorate to question the decisions made by governments. In order to facilitate that debate, it is appropriate for organisations to be able to have access to evidence on the impact of policies. In this regard, SME and entrepreneurship policies are no different from other areas of government expenditure. For this reason, the results of evaluations enhance and inform public debate.

This debate only takes place when the results of evaluations enter the public domain. This emphasises not only the importance of undertaking evaluations, but also of their findings being disseminated.

To achieve continued improvement in the design and administration of programmes

Politicians and public servants administering SME and entrepreneurship programmes should be seeking continuous improvements and there is of course a need to ensure adaptation to changing conditions. Evaluation is a key tool for learning about how well policies and programmes are delivering, what problems may be emerging, what elements work well and less well and what could be done better in the future. For example, policy makers may seek to deliver policies to different groups, for example by directing more resources towards enterprises established by the socially disadvantaged or by those likely to employ others, or those in high technology. They may seek to deliver policies using different organisational forms, to stimulate the take-up of policies or to deliver them in a more cost effective manner. All these changes of focus can emerge from undertaking appropriate evaluations. Alternatively, existing policies can be delivered more effectively as a result of accumulated evaluation experience.

Typical objections to evaluation and responses

The discussion above focussed on the positive aspects of evaluation. However, one of the barriers to spreading an evaluation practice is a resistance to evaluation amongst a range of politicians, policy makers and practitioners. Here we discuss some of the most common objections to evaluation and the degree to which they stand up to critical assessment. Our judgement is that although the objections have some weight, on balance, they do not amount to a solid case for rejecting evaluation and hence sacrificing the benefits cited above.

But evaluation is expensive and bureaucratic

Evaluation is not costless. Costs include the payment of consultants/evaluators, the collection of data and the time taken from those delivering programmes to inform the evaluation. The United Kingdom statistical office, for example, requires the time of recipients of the programme in providing their opinions and information about the programme to be costed (i.e. the cost of the respondents’ time must be explicitly included in the cost of the evaluation). Data may also have to be collected from both clients of the programme, and a “control group” of non-clients.

However, the resources committed to evaluation are normally very modest in comparison with the total size of the programme. For example, the review by Sheikh and Steiber [2002], "Evaluating Actions and Measures Promoting Female Entrepreneurship", identified an appropriate budget of between 2% and 5% for the purposes of evaluation. This may be appropriate for small programmes, but for programmes in larger countries a figure of between 0.5% and 1% of annual expenditure would be more usual.

Given the opportunity which evaluation provides for using resources more efficiently, and for the design of new programmes, these seem to be very modest costs indeed.
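To make these orders of magnitude concrete, the shares cited above can be turned into a small illustrative calculation. The programme sizes used here are invented for the example:

```python
# Illustrative calculation of the evaluation budget shares cited in the text:
# 2-5% for small programmes, 0.5-1% of annual expenditure for larger ones.
# The programme sizes below are hypothetical.

def evaluation_budget(annual_expenditure, share):
    """Return the evaluation budget implied by a given share of spend."""
    return annual_expenditure * share

# A small programme of 2 million evaluated at 2-5%:
small_low = evaluation_budget(2_000_000, 0.02)
small_high = evaluation_budget(2_000_000, 0.05)

# A large programme of 200 million evaluated at 0.5-1%:
large_low = evaluation_budget(200_000_000, 0.005)
large_high = evaluation_budget(200_000_000, 0.01)

print(small_low, small_high, large_low, large_high)
```

On these assumed figures, even for a large programme the evaluation budget remains a small fraction of total spend.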


But evaluation does not always lead to policy improvements

Evaluations of programmes can fail to lead to policy change for several reasons. It may be because those responsible for programme management are hostile to the concept of evaluation. It can also happen where evaluators fail to engage programme managers, or where they fail to understand the details of the programme. Evaluators themselves may fail to express their findings in a language that is easily understandable to policy makers and those responsible for policy delivery.

Although there are instances where evaluations have not led to improvement, this is not a sufficient justification for being reluctant to undertake any form of evaluation. To minimise the potential problems, programme managers have to be persuaded that the quality of programme delivery can be enhanced through evaluations, and the consultants have to "reach out" to programme managers to engage them wherever possible.

But ultimately evaluation takes place for the benefit of the tax-payer, and not for the providers of the programme. Those programmes that are shown to be demonstrably ineffective have to be closed, and this has to be recognised by programme managers.

In practice, if evaluation is to lead to change, a balance must be struck between, on the one hand, ensuring the independence of the evaluator and, on the other, engaging the support of those involved with programme delivery.

But evaluation risks diverting attention away from programme delivery

It is the case that there are cultural differences between evaluators and deliverers of programmes. The former are often analytical individuals, often with an academic background, whereas the latter consider themselves practical individuals focused upon delivering services to their clients. Because they are so close to their clients, they view themselves as the best judge of the effectiveness of the programme. They have difficulty seeing what value a "detached" consultant can provide in terms of programme improvement. For this reason, programme deliverers often resent the time taken in completing forms and collecting data which are, however, vital to the success of an evaluation. Programme managers and deliverers understandably can also feel threatened by an evaluation, especially when they know they do not fully understand the techniques used by the evaluators, but fear the evaluators do not fully understand the programme.

For an evaluation to be a success, however, these cultural differences have to be managed. The most effective way of achieving this, as identified above, is to demonstrate that the interests of both the evaluators and the programme managers/deliverers can be more closely aligned by both parties focussing on areas for programme improvement. This can be most effectively achieved by engaging those delivering the policy, ensuring that the issues of concern to them are addressed in the evaluation, and giving them adequate opportunity to comment upon, and offer their interpretation of, provisional findings.

It is, of course, a simplification to imply that the programme managers most hostile to evaluation are those fearing negative feedback. Nevertheless, senior policy makers need to be aware that evaluation, whilst it is in the taxpayer's interests, may provoke considerable hostility from programme deliverers. The latter have to be engaged, but must not have the ultimate voice.

But evaluation is only for advanced countries

It is the case that programme evaluation is more frequently undertaken in advanced, rather than in developing, economies. In part this may be because it is more difficult to find sufficient numbers of individuals with the type of analytical skills necessary to conduct good quality evaluations in developing economies. Major donor organisations, such as the World Bank, can therefore play a role both in undertaking evaluations themselves and in training others to perform these tasks.

Nevertheless, it is not only the most developed countries that undertake evaluation. In its review of state aid to SMEs, EIM [2004] surveyed EU Member States, European Economic Area and candidate countries. A total of 29 countries were identified. Only Ireland, the Netherlands and Slovakia performed state aid evaluations on all schemes, implying that evaluation is not simply characteristic of the wealthier countries. EIM specifically noted that the State Aid Act obliges the Slovak Government to evaluate all state aid using statistical analysis of aid recipients and control groups. They also noted that the analyses are performed at macro and micro levels. Full details of this important survey are provided in Appendix F.

Mexico has also recently committed to undertaking SME evaluation. It believes this will "improve support systems" and identify areas of opportunity, thus granting certainty to the population on the efficient use of resources.

These examples illustrate that it is not necessarily only the most economically developed countries which are committed to undertaking evaluation.

But there is no history of undertaking evaluation

In countries without a tradition of evaluation it can be difficult to make this transition. Nevertheless, it is clear that the electorates in many countries are becoming more sophisticated, in part because of access to the media and the internet. Countries where evaluations do not take place are likely, in the future, to be asked why such policy assessments take place elsewhere. The, perhaps unjustified, inference is that evaluations do not take place because there is something to hide. It is not sufficient to imply that policies are being delivered efficiently merely because there is no information to the contrary.

Key evaluation debates

This section reviews four key evaluation debates. The first is the appropriate technique for evaluating SME and entrepreneurship policies. The second is the appropriate level of sophistication of quantitative evaluation approaches. The third is whether evaluation should be undertaken by "insiders" or "outsiders". The fourth is whether the same evaluation techniques should be used for all programmes.

The choice of technique

There are two basic options in undertaking summative evaluations5 – the quantitative and the qualitative approaches. Quantitative evaluation involves assessment of the impact of programmes through a comparison of outcomes between the group in receipt of aid and some form of "control group", for example a similar group of enterprises that have not benefited from policy, or the same enterprises before and after receipt of policy support. Such data may be collected either directly from the firms themselves or from official data. Qualitative approaches are much more likely to rely upon the opinions of programme stakeholders, including managers and beneficiaries, about the functioning and impact of the programme, gathered through techniques including surveys, case studies and peer reviews. Both approaches will rely upon a careful scrutiny of programme documentation.
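A minimal sketch of the quantitative comparison just described might look as follows. The firms and growth figures are entirely hypothetical, and the calculation is deliberately naive: it compares mean outcomes without any matching or correction for selection bias.

```python
# A minimal sketch of the quantitative approach: comparing an outcome (here,
# hypothetical employment growth, %) between firms in receipt of support and
# a control group of non-assisted firms. All data are invented.

def mean(values):
    return sum(values) / len(values)

assisted_firms = [12.0, 8.5, 15.0, 6.0, 10.5]  # growth of assisted firms
control_firms = [7.0, 5.5, 9.0, 4.0, 6.5]      # growth of control firms

# The naive impact estimate is the difference in mean outcomes. Without
# matching or a correction for selection bias this is a crude comparison,
# not a causal estimate.
impact = mean(assisted_firms) - mean(control_firms)
print(f"Estimated impact: {impact:.2f} percentage points")
```

In a real evaluation the control group would be drawn to resemble the assisted firms as closely as possible, an issue taken up below.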

Table 1.1 reviews the advantages and disadvantages of the quantitative and qualitative approaches.

The principal advantage of qualitative evaluation is the additional information that it can provide beyond that associated with quantitative evaluations. Qualitative evaluation normally involves face-to-face discussions with those in receipt of aid, those responsible for delivering programmes and other stakeholders. These conversations help not only to obtain information from stakeholders that can lead to a deeper understanding of the mechanisms by which policy impact is achieved and how policy might be adjusted, but also to engage stakeholders in policy learning processes. The approach can also pick up a wide range of other information of interest to policy makers, going beyond impact to issues such as client satisfaction, policy appropriateness, sustainability and conflict with other policies.

However, qualitative evaluation has the major disadvantage that it is not good at providing reliable estimates of policy impact, for a number of reasons. First, surveys of a sample of stakeholders run the risk of being unrepresentative of programme participants; increasing the numbers, however, either adds considerably to budgets or reduces the quality or depth of the interviews. Second, despite the best efforts of interviewers, there remains a strong risk of interviewer bias. Third, the outcome of qualitative evaluation is more often to describe a process rather than to evaluate an outcome. Fourth, there is no opportunity for independent verification. Finally, programme participants may be asked questions that are virtually impossible to answer. The classic example is "What impact do you think this programme had on your business?" Implicitly the respondent is required to hold every other influence on their business constant and estimate how a programme which probably took place some years previously has influenced their business in the intervening period. Even if some programme participants were able to undertake such mental gymnastics, others clearly are not, and there is no way of distinguishing between the answers of the two groups.

The principal disadvantages of the quantitative approach concern its technical difficulties and the relatively narrow nature of the results it offers, which focus primarily on issues of effectiveness and efficiency. In terms of the technical issues, effective quantitative evaluation requires extensive data collection on the performance of policy-targeted and control group firms. More importantly, however, in SME and entrepreneurship policy evaluation situations there may sometimes be no natural, uncontaminated control group. Whilst good quantitative analysis seeks to match as closely as

Table 1.1. Qualitative compared with quantitative evaluation

Qualitative evaluation methodologies:
● Advantages: engages participants in policy learning; can vary the scale and depth of the assessment; can assess against a wide range of evaluation criteria; picks up unintended policy options and alternatives.
● Disadvantages: respondents and interviewers may be biased or poorly informed; risks including "un-representative" groups; no opportunity for independent verification; hard to establish cause and effect.

Quantitative evaluation methodologies:
● Advantages: clear answers on impact; should be easy to interpret.
● Disadvantages: cost of data collection and technical demands; absence of pure control groups; possible false impression of precision; narrow focus on effectiveness and efficiency.


possible policy-influenced and non-policy-influenced firms, and seeks to account for possible selection bias between the two groups, there are always some differences between the "treatment" and "control" groups that cannot be taken into account. To address this, evaluators in several OECD countries have collaborated with their own statistical agencies to derive samples of SMEs with prescribed characteristics so as to act as a "control group". Even so, some evaluators may be tempted to give a false impression of precision in reporting their results. In terms of the nature of the results, the main drawback is the problem of the "black box", i.e. that little information is provided on the nature of the policy problem and how it is addressed by policy, and hence on how policy might be adjusted to increase impact. This can be reflected in an unduly narrow focus of quantitative approaches on two evaluation criteria, namely efficiency (impacts against expenditure) and effectiveness (impacts against targets), that can leave other evaluation questions unanswered.

On the other hand, the fundamental advantage of quantitative evaluation is that it should provide clear answers. If it is well done, it will get as close as possible to a value-free assessment of impact. Of course, no evaluation is wholly value-free.

Given the advantages and disadvantages of both approaches, this Framework argues for the use of a plurality of approaches that are able to gain from the complementarities in the information they can provide. The role of the qualitative approach to evaluation is recognised, and the role of survey, case study and peer review approaches is outlined in this respect. However, the Framework focuses in particular on setting out the issues involved in undertaking good quantitative evaluations, reflecting the original concern of the OECD Working Party on SMEs and Entrepreneurship (WPSMEE) to share information on best practices in impact evaluation. This reflects both the perception that quantitative impact evaluations are not sufficiently used in SME and entrepreneurship policy evaluation and the presence of some difficult issues that are not sufficiently well understood by policy makers, particularly in accurately establishing the counterfactual.

Assessing quantitative evaluations: The “Six Steps to Heaven”

A useful guide in developing robust quantitative evaluations and assessing the quality of such evaluation evidence is the so-called "Six Steps to Heaven" approach (Storey [2000]), reviewed and operationalised recently by Lenihan et al. [2007], Bonner and McGuiness [2007], and Ramsey and Bond [2007].


The Six Steps methodology is a categorisation in which Step 1 is the least, and Step 6 the most, sophisticated approach. The six steps are:

● Step 1. Take-up of schemes.
● Step 2. Recipients' opinions.
● Step 3. Recipients' views of the difference made by the assistance.

The above three steps tend to be associated with qualitative approaches, but the following three steps typify quantitative evaluations:

● Step 4. Comparison of the performance of the assisted with typical firms.
● Step 5. Comparison with matched firms.
● Step 6. Taking account of selection bias.

This is an approach that is mainly relevant to quantitative and ex post evaluations rather than to qualitative and ex ante evaluation. It is nonetheless a very helpful framework for assessing the former type of evaluations and is referred to a number of times in this Framework, notably in relation to where the evaluation examples provided later in the report stand in relation to the different levels of sophistication in establishing impact.

Fuller details of the approach are to be found in Appendix B, which is taken from OECD (2004a).
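As an illustration of how Step 5 raises the bar on Step 4, the following sketch pairs each assisted firm with its nearest unassisted neighbour on a single pre-programme characteristic before comparing outcomes. All data are invented; a real Step 5 or Step 6 evaluation would match on several characteristics and explicitly model selection into the programme, for example with propensity scores.

```python
# Illustrative Step 5 "matched firms" comparison with invented data.
# Each tuple is (firm size before the programme, outcome growth %).

assisted = [(10, 12.0), (50, 8.0), (200, 15.0)]
unassisted = [(8, 7.0), (12, 9.0), (45, 5.0), (60, 6.0), (210, 11.0), (500, 4.0)]

def nearest_match(size, pool):
    """Return the outcome of the pool firm whose size is closest to `size`."""
    return min(pool, key=lambda firm: abs(firm[0] - size))[1]

# Impact estimated from matched pairs only, rather than against all
# unassisted firms as in a Step 4 comparison.
gaps = [outcome - nearest_match(size, unassisted) for size, outcome in assisted]
impact = sum(gaps) / len(gaps)
print(f"Matched estimate of impact: {impact:.2f} percentage points")
```

Matching on observables still leaves the Step 6 concern: firms that apply for support may differ in unobserved ways from those that do not, which is why selection-bias corrections sit at the top of the hierarchy.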

Evaluation by insiders or outsiders?

A second key evaluation debate is who should undertake the evaluation – should it be insiders or outsiders? The arguments for and against are set out in Table 1.2 below.

The key argument in favour of using external evaluators is that they are not only less likely to be influenced by the political regime, but are also more likely to be seen, by others, to be independent.6 This independence is likely to provide more objectivity to the evaluation. A second argument for the

Table 1.2. The choice of internal and external evaluators

External evaluator:
● Advantages: less likely to be influenced by the political regime; brings new ideas and fresh approaches.
● Disadvantages: less well informed of the "real" situation.

Internal evaluator:
● Advantages: more insights through "understanding the realities"; more chance of "buy-in" from those delivering the programme; more chance of really changing policy.
● Disadvantages: less likely to be able to "think outside the box".


use of external evaluators is that they can bring new ideas and fresh approaches, not only to the evaluation but also to subsequent policy development.

In contrast, the key advantage of using internal evaluators is that they frequently have a much better knowledge both of the policy itself and of the political context in which it is undertaken. Internal evaluators therefore have to spend less time acquainting themselves with the detailed workings of policy and can focus much more upon producing targeted recommendations. Internal evaluators are also more likely to engage the support of the managers delivering the programmes, because of their greater knowledge and because they are perceived to be less threatening. Finally, internal evaluators are also more likely to be careful about their policy recommendations, since they will perhaps have to live with, and possibly implement, any changes they recommend. Unlike external evaluators, they are less likely to be able to "walk away from the issue".

The OECD (2004a) recognised that the choice of internal or external evaluators was a close call. Much might depend upon whether, in commissioning the evaluation, the purpose was to undertake a "root and branch" review, in which case external evaluators might be preferred. In contrast, an evaluation designed to ensure programmes were "on track" might favour the use of internal evaluators.

Ultimately, therefore, there is a broad choice between selecting evaluators who are more independent but with perhaps less policy insight, and evaluators who may be less radical in their recommendations but who perhaps are more likely to induce changes in programmes. The Istanbul Ministerial Declaration, however, made it clear that it favoured "independent but informed evaluators."

It is also possible to develop alternative models that are neither fully internal nor fully external. For example, some government departments and agencies create independent evaluation units that are not directly attached or responsible to the particular units responsible for the programmes that they evaluate. Another option is to create teams of evaluators, with some coming from inside and some from outside the organisation. This latter approach is typical of the peer review method described in Section 4.

Should the same evaluation techniques be used for evaluating all programmes?

A third debate is whether the same approaches should be used to review all programmes. The central argument favouring a similar approach to evaluation is that, if the tax-payer is to obtain value for money from SME and entrepreneurship policies, all programmes should have the same effectiveness at the margin. In simple terms, it should not be possible to transfer funds from one programme to another and increase the benefits to SMEs and/or the wider economy. So, the economic impact of policies to reduce taxes for SMEs should have the same marginal benefit as policies to provide export advice or management training or access to finance.

There is clear evidence from work on SME evaluation that the methods used for evaluation appear to influence the apparent effectiveness of programmes and policies. Expressed baldly, the less sophisticated the evaluation, the more likely it is to apparently demonstrate benefits. This reflects the more simple evaluations failing to hold constant the myriad of influences on outcomes and, by implication, attributing them to the programme. In contrast, the more sophisticated approaches strip out the other influences, and so attribute to the programme only its "real" effects.

This finding has major implications, because it means that it is not valid to compare the findings from a study which uses a Step 2 or Step 3 approach with one which uses a Step 5 or Step 6 approach. Indeed, it may even be invalid to compare findings between Steps 5 and 6. Hence only by using a uniform methodology can governments really ensure that entrepreneurship and SME policy is efficiently delivered.

The opposing argument is that programmes vary considerably in scale and budget, and that if a fixed proportion of programme funds is to be allocated to evaluation then evaluation budgets will inevitably also vary. More sophisticated evaluations are, of course, generally both more expensive and subject to higher fixed costs than less sophisticated approaches. This means that if the same approach were used across all programmes, small programmes would have to devote a much higher proportion of their funds to evaluation than larger programmes. This is unrealistic.

Both arguments, of course, have validity, but some form of compromise is possible. If the desirability of uniform evaluation procedures is accepted, it may still be possible for individual smaller programmes to be evaluated less frequently, or possibly as part of an evaluation of a package of small programmes. What is clear is that programmes with small budgets should neither escape all evaluation nor be assessed by radically different – and by implication less challenging – procedures.
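The fixed-cost argument can be illustrated with a small calculation. The fixed cost and variable rate below are invented for the example:

```python
# Illustration (invented figures) of why uniform evaluation methods weigh
# more heavily on small programmes: a sophisticated evaluation carries a
# fixed cost, so its share of a small budget is far larger.

def evaluation_share(programme_budget, fixed_cost=100_000, variable_rate=0.005):
    """Evaluation cost as a fraction of the programme budget (illustrative)."""
    cost = fixed_cost + variable_rate * programme_budget
    return cost / programme_budget

small = evaluation_share(1_000_000)    # a 1 million programme
large = evaluation_share(100_000_000)  # a 100 million programme
print(f"small programme: {small:.1%}, large programme: {large:.1%}")
```

On these assumed figures the same methodology consumes over ten per cent of the small programme's budget but well under one per cent of the large programme's, which is the nub of the objection.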

Doing evaluations

This section examines the practical issues of how to prepare, manage and disseminate evaluations. Further useful information is provided in the evaluation guidance documents referred to in Appendix C, including the web resources of the OECD Development Assistance Committee (DAC) Evaluation Resource Centre.


Preparing an evaluation

A number of key issues have to be addressed when preparing an evaluation. The first is to identify precisely what it is that will be evaluated. This can be a problem when one item in a "package" of assistance is being assessed. For example, some SME programmes combine both financial assistance and business advice. It is therefore important, at the outset, to decide whether the whole programme is to be evaluated or whether the component parts are to be evaluated separately. The advantage of examining the whole programme is that an overall assessment of the use of public funds can be undertaken. But separately examining the component parts – the finance and the advice – may show that only one or the other is really effective. For example, the evaluation might show that the impact on firm performance is primarily due to access to finance. In that case, resources might be moved away from the advice towards activities that improve access to finance.

A second question is when the evaluation should be conducted. Here again there is no simple answer, because some forms of assistance take longer to impact on firm performance than others. For example, a programme designed to network firms with one another at a trade fair might be expected to have an impact in terms of additional sales within 3-6 months. In contrast, a programme to provide management training for SME owners might not be expected to have significant impacts for at least 2-3 years. Finally, the effects of entrepreneurship policies – such as those designed to influence the attitudes of school children to enterprise creation – might not be expected to be observable for at least 20 years. Given these varying likely outcomes, the period between the implementation of the policy and the evaluation is also likely to vary. However, a broad rule of thumb is that SME policy initiatives such as loans and grants should plan for the evaluation as soon as the policy is introduced and begin the formal evaluation within 2-3 years.

The objectives of the programme have to be clearly specified…

Unless programmes have objectives which are in principle capable of measurement, a quantitative evaluation cannot be undertaken. Very often these objectives are set out in a logic model that provides policy makers and evaluators with a clear understanding of possible programme outcomes and how they are likely to be achieved. This is important for evaluation because it provides a guide to what should be assessed and measured. Logic models can take many forms, all of which are valid as long as they clearly express what policy is seeking to achieve. An example from New Zealand is set out in Figure 1.1. The University of Wisconsin online reference guide on Enhancing Programme Performance with Logic Models provides further useful information for the development of these tools.7 The involvement of skilled evaluators at the outset of programme formulation will help ensure that objectives and programme logic are clearly stated. In practice, however, this is not always the case. This means that a key function of the evaluator has to be to infer – perhaps even guess – what the objectives of the policy maker were when the programme was designed. Although this might seem curious, it is often in practice a very valuable role of the evaluator, and even more so for policy makers and programme managers.8

Specifying the content of the evaluation

Those responsible for preparing the evaluation have to be clear, particularly when the evaluation is to be undertaken externally, about what it is expected to achieve and what the role of the evaluation manager will be. The latter may have to assist consultants in clarifying both the objectives of the programme and the current requirements of politicians. However, evaluation managers should not normally specify in detail the methodology to be used, but merely identify the questions to be addressed, such as additionality, deadweight or displacement.9 To specify the methodology in detail would be to exclude the possibility of evaluators employing new or novel approaches. The only clear exception would be where the purpose is to directly compare policy impacts at two points in time. Here there would be merit in using a similar methodology, provided the chosen method was deemed satisfactory.

Ensuring good data are available

Whatever the level of sophistication of the evaluation, a minimum requirement is that data are available. For example, an evaluation of the impact of business advice or of loans or grants requires, as a minimum, a complete and up-to-date list of clients. Until such information exists, no evaluation should even be contemplated.

Managing an evaluation

Six major issues arise in managing an evaluation.

Should the evaluator be internal or external?

This issue was discussed in depth above, and it was concluded that whilst the external evaluator was more likely to be independent of the organisation responsible for designing and delivering policy, and hence more likely to be critical of it, the internal evaluator was likely to have greater knowledge and political awareness. There would therefore be circumstances in which either was appropriate but, all else equal, the "independence" of the evaluator was critical.

The scale of the budget

As noted earlier, the scale of the budget strongly influences the methodology which can be undertaken. It also influences the outcome of the evaluation, since inexpensive evaluations seem to produce more "positive" findings. This can produce considerable pressure to undertake only the most simple of evaluations.

Timetable

A timetable for the delivery of the evaluation has to be specified. Very often this coincides with a wider appraisal of policies within the commissioning department. To achieve this, there have to be agreed milestones in the form of interim and final reports to ensure that the research is on track, and it may be necessary to have a small steering committee.

In practice, however, the more sophisticated evaluations frequently tend to overrun in terms of time. This is because of the difficulty of contacting appropriate numbers of enterprises – often because the lists given to the consultants are incomplete. Hence, a crucial element of ensuring that evaluations are delivered on time is to ensure that the base data are of high quality before the evaluators begin their work.

Quality assessment

Initially an assessment has to be made of the quality of the evaluation. One clear purpose of this Framework is to enable an accurate assessment of the quality of the evaluation to be made. Our overwhelming focus is upon the technical quality of the evaluation – defined as the extent to which policy makers can be confident that identified programme impacts genuinely reflect the contribution of the programme. However, other considerations may concern those managing evaluations, and these are outlined in Appendix D.

[Figure 1.1. NZTE programme logic model (graphic not reproduced). Recoverable elements: clients are segmented by growth potential (fluid), with the depth of assessment depending on the needs and growth potential of the firm; the ultimate outcome is the accelerated development of firms with high growth potential (increased revenue, increased profits, increased salaries and wages, increased employment (FTEs)); external factors beyond the control of NZTE, including non-NZTE government business assistance programmes, are noted; not all intermediate outcomes apply to all clients.]

Dissemination

A key decision is the extent to which the results of an evaluation enter the public domain. Some evaluations – normally those conducted by insiders – are specifically targeted towards the interests of, for example, departmental committees. Other evaluations, however, are intended to contribute to a wider public debate on SME or entrepreneurship policy. These are likely to be undertaken by academics with a commitment to disseminating their work. The nature of any dissemination to be undertaken therefore needs to be decided.

A second issue is the extent to which the data collected by the evaluators can be accessed by other researchers in the area. Such researchers may wish to verify the interpretations and conclusions that have been placed upon the data, but much depends upon data protection legislation. The latter also influences the opportunities for researchers to use the data to extend the research over time, by following up these businesses or individuals, or by deriving samples from similar businesses or individuals in other industries or regions. In principle, therefore, subject to complying with data protection legislation, it is desirable for the data derived from evaluations to be available to bona fide researchers.

Ultimately, however, the purpose of evaluation is to stimulate and inform public debate. It is desirable that evaluation reports are made available to the media and other interested parties. Only in this way can evaluation truly lead to policy change.

Some key principles for evaluation practice

Drawing on the discussion above, the following key principles for evaluation can be set out.

1. Evaluation should lead to policy change

The prime purpose of undertaking evaluation is that it informs key decisions. Such decisions may be to change policy. For example, it may lead to a policy budget being increased, decreased or the policy itself being abandoned. It may also lead to different objectives of the policy being specified and, most likely, will lead to the policy being delivered in different ways – possibly to different target groups. Alternatively, the policy decision may be that no change is required and that the programmes are "on track".


2. Evaluation should be part of the policy debate

If it is to be at the heart of policy making, evaluation cannot be confined to providing a historical review of previous policies. Instead, evaluation has to lead to policy learning, so that current policies may be amended in the light of this knowledge and new policies developed from such learning.

3. Evaluators should be “in at the start”

Evaluation is, as noted above, vital in the formulation of new policies. Those skilled in evaluation techniques can make major contributions to policy development, most notably in helping policy makers to clearly formulate policy objectives. Without the input of evaluators, policy objectives may not be specified at all, or be expressed in such a way that they cannot fail to be achieved, or be specified in such opaque language that it is impossible to determine at a future point whether or not they were achieved.

The role of the evaluator is to ensure that policy makers specify clear and tangible objectives as policy is being developed. A second merit of evaluation being included at an early stage is that budgets for evaluation are specified when programmes are being formulated. A third advantage of evaluators being "in at the start" is that the methods to be used to determine the success or otherwise of the programme are clearly brought to the attention of the policy makers. Finally, policy makers are made aware that a programme is to be evaluated, and the criteria of success that will be used to assess effectiveness are agreed in advance by all parties.

4. Evaluation techniques should always use the most appropriate methodology

Subject to the provisos outlined below, the most appropriate evaluation techniques should be employed. The “Six Steps to Heaven” is a simple method of assessing sophistication in this area, and for impact evaluation of medium to large sized programmes we recommend that at least a Step 4 approach should be used. By this we mean that, as a minimum, the beneficiaries of a programme or policy should be compared with a “control group” of otherwise similar firms or individuals who did not participate in the programme. By comparing outcomes for the two groups, a crude estimate can be made of policy impact. Governments, with their substantial access to statistical data, always have the opportunity to formulate such control groups, even if on some occasions their own statistical agencies are unwilling to collaborate in such work. The importance of evaluation, however, is such that the statistical agencies should be mandated to develop this aspect of their evaluation culture.
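As an illustration of this Step 4 logic, the comparison can be sketched as a simple difference in mean outcomes between assisted firms and a control group. All figures and variable names below are invented for illustration; a real evaluation would use matched firm-level data and proper statistical testing.

```python
# Hypothetical sketch of a Step 4 "control group" comparison.
# All figures are invented; real evaluations use matched firm-level data.

assisted = [12.0, 8.5, 15.0, 10.0, 9.5]  # % employment growth, supported firms
control = [7.0, 6.5, 9.0, 5.5, 8.0]      # % growth, similar unsupported firms

mean_assisted = sum(assisted) / len(assisted)
mean_control = sum(control) / len(control)

# Crude impact estimate: the difference in mean outcomes between the groups.
crude_impact = mean_assisted - mean_control
print(f"Crude policy impact: {crude_impact:.1f} percentage points")
```

This difference-in-means estimate is “crude” in exactly the sense the text describes: it attributes the whole outcome gap to the programme, which only holds if the control group really is otherwise similar.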


However, in order to understand more deeply the processes of how policy works, and to involve stakeholders in policy learning, qualitative methods such as peer reviews and case studies also have their place. These methods are not suited for robust impact estimates but are necessary for many other aspects of evaluation.

5. Evaluation should apply to all policies and programmes

It appears that some policies and programmes are evaluated many times, and with some rigour, whereas others seem either to escape scrutiny altogether or to be evaluated in a much less challenging manner. This is extremely unfortunate, since the overall purpose of policy is to ensure that, at the margin, all policies are equally effective.

6. International comparisons should be made where necessary

Finally, this Framework concludes that for some policy areas, evaluation can only be undertaken on an international basis. For example, comparing the impact of tax regimes is best undertaken across countries. As noted above, international comparisons and peer reviews are appropriate for reviewing policies at the regional or local level. In this respect, the OECD can serve a valuable function as a “clearing house” for information on policy effectiveness, in producing harmonised data and in having access to experts in this field.

Notes

1. This term was coined in OECD (2004a). It refers to evaluation only being considered after the objectives and targets have been set and the programme has been in operation for some time. When evaluation is “at the end of the line” it can only serve as a historical auditing function. OECD (2004a) contrast this with the preferred “Integral Evaluation”, in which evaluation is integrated into the policy process: as the policy is being developed, consideration is given to how it will be evaluated. This has four merits. First, it ensures that the objectives and targets of the policy are clearly specified. Second, it ensures that the necessary data collection can begin immediately and is “built in” to the programme. Third, it ensures that the evaluation budget is identified. Fourth, it ensures that the progress of the programme is monitored so that modifications and improvements can be made in the light of evidence.

2. “Evaluation of SME Policies and Programmes”, background report for the 2nd OECD Conference of Ministers Responsible for SMEs, Istanbul, June 2004.

3. Source: European Commission State Aid Scoreboard, Spring 2004 update.

4. The awards specified by Cooper were 1.107 in 1997, 1.067 in 1998 and 1.097 in 1999 – all in billions of USD.

5. For formative and prospective evaluations, the opportunities for undertaking quantitative/statistical approaches are generally much less, unless they draw upon previous summative evaluations.

Trang 37

6. External evaluators are generally those not employed as public servants by the government. These include private sector consultants and research contractors, such as academics.

7. www.uwex.edu/ces/lmcourse/#

8. For example, when the objectives of a programme are unclear, the evaluator might ask the question: “What outcome from the programme would you identify as failure?” Experience of asking this question is that it is initially met with hostile silence. Frequently it is then followed by the specifying of outcomes that are [close to] impossible. These might include … Nobody is interested in the policy. After consideration, however, more realistic objectives normally emerge.

9. Additionality refers to the proportion of the supported activity that would not have gone ahead without the support; deadweight refers to activity that would have gone ahead anyway; displacement refers to reductions of activity elsewhere (e.g. in other firms) as a result of increased competition from the supported activity (e.g. the firms receiving assistance).
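The three terms in this note can be combined in a simple back-of-the-envelope calculation of a programme's net impact. The formula and figures below are a common illustration of how such adjustments are applied, not taken from the OECD text.

```python
# Illustrative only: a common way additionality, deadweight and displacement
# are combined when estimating the net impact of a support programme.

def net_additional_impact(gross_outcome, additionality_rate, displacement_rate):
    """Scale a gross outcome down for deadweight (1 - additionality)
    and for displacement of activity elsewhere."""
    additional = gross_outcome * additionality_rate  # removes deadweight share
    return additional * (1.0 - displacement_rate)    # removes displaced share

# e.g. 100 jobs reported by supported firms, 60% additionality,
# 20% displacement -> 48 net additional jobs
print(net_additional_impact(100, 0.60, 0.20))
```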


OECD Framework for the Evaluation of SME and Entrepreneurship Policies and Programmes

© OECD 2007

Section 2

Evaluation of Individual National Programmes


according to the Six Steps framework, but others are less so. The inclusion of

Table 2.1. SME and entrepreneurship policy areas covered

Financial assistance: Loan Guarantee Schemes; tax relief to business angels; enhancing investment readiness of SME owners.
Enterprise culture: programmes to encourage young disadvantaged individuals to start businesses; … to start businesses.
Advice and assistance: provision of marketing advice; provision of general business advice; encouraging SMEs to export.
Technology: subsidies to new technology based firms; creation of Science Parks.
Management training: subsidies to stimulate the take-up of management training in SMEs.
