A Look at the Many Ways to Evaluate Tax Incentives, Part I
An evaluation expert offers his perspective
In a two-part essay written for The Pew Charitable Trusts, Jim Landers, associate professor of clinical public affairs and Enarson fellow at the John Glenn College of Public Affairs, The Ohio State University, examines the key components of effective incentive evaluations: examining design and administration, reviewing usage data to analyze program administration and operation, and using surveys to collect the data needed for a robust evaluation.
These pieces were originally published in a newsletter distributed to tax incentive evaluators and scholars in August and December of 2019.
Evaluation perspectives
How Do We Evaluate Tax Incentive Programs? There Are Multiple Approaches.
Jim Landers
Associate Professor of Clinical Public Affairs, Enarson Fellow
John Glenn College of Public Affairs
The Ohio State University
How should we evaluate tax incentives? What evaluative approach or approaches should we use? What analytical methods should we employ to conduct our evaluations? What data sources could be useful? What data will we have to collect, and how? What data are accessible or readily available from secondary sources? These are just some of the questions I asked as a fiscal staffer in Indiana when the legislature initiated annual tax incentive evaluations in 2014. I was hardly the only one asking them: my colleagues in Indiana posed similar questions, and these questions have been the subject of much discussion at the annual incentive evaluators' roundtables hosted by the National Conference of State Legislatures and The Pew Charitable Trusts. For those who are initiating or in the early stages of conducting tax incentive evaluations, the important point is that the growing collection of tax incentive evaluations conducted in the states helps provide answers to these questions.
The key is that incentive evaluation is not a one-size-fits-all exercise. The approaches, methods, data, and data sources will vary from one tax incentive to another because the incentives themselves vary in purpose, design, administrative procedures, target population, usage, and the like. The tax incentive evaluations published in recent years make clear that evaluators working in the states understand the need to mix approaches, methods, and data to produce thorough and informative evaluations. While the stock of tax incentive evaluations encompasses approaches ranging from the simple to the very elaborate, evaluators must still be selective in the approaches they employ. First, they have to strike a balance between the approaches they would like to employ under ideal circumstances and what is feasible given the time, talent, and technical constraints they face. Moreover, evaluators need not, and likely cannot, employ every available approach. Beyond parsimony, the approaches evaluators choose will depend on the focus of the evaluation and the information and data they can reasonably obtain about a tax incentive program.
In my first two columns I highlight some of the more noteworthy and informative approaches in recent tax incentive evaluations. This time I focus on the merits of examining incentive design and administration, reporting on incentive usage, and employing surveys for data collection.
Examining Incentive Design and Administration. Examining incentive design and administration may not be as appealing as investigating incentive effectiveness or economic impacts, but it can still be an important component of an incentive evaluation. Information about the design of a tax incentive program, its day-to-day implementation, and the program's outputs can tell policymakers whether administrators are implementing the incentive in compliance with legal requirements. Furthermore, this research can surface design features or administrative processes that hinder incentive use and efficacy, examine the potential impacts of design flaws or problematic administrative practices, and suggest remedies for those problems. In my experience, good research on these matters can inform future summative research by explaining a tax incentive's poor performance. Conversely, good summative research aimed at determining the effectiveness and impact of a tax incentive sometimes reveals important design and administrative process problems.
Incentive evaluations from Florida, Maryland, Minnesota, and Virginia provide good examples of thoughtful analysis and discussion of incentive design flaws and problematic administrative processes.
Reporting Incentive Usage Data to Inform Discussion of Program Administration and Operation. Simple reports of incentive program outputs or usage data can also be an important evaluation tool. Usage data include measures such as application and recipient totals, amounts awarded or claimed, and investment or employment totals; they can also include taxpayer data such as recipient income and tax liability. Reporting usage data enriches the description of an incentive program's operations and activities and is useful for estimating revenue loss or budgetary impact. What's more, these reports provide context and perspective about the scope and importance of the program.
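To make this concrete, the short Python sketch below (using pandas) tabulates hypothetical claim-level records into the kind of usage summary described above. Every column name and figure is illustrative, not drawn from any state's data.

```python
import pandas as pd

# Hypothetical claim-level records; a real evaluation would draw these
# from revenue department or economic development agency files.
claims = pd.DataFrame({
    "tax_year":       [2021, 2021, 2022, 2022, 2022],
    "recipient":      ["A", "B", "A", "C", "D"],
    "amount_claimed": [120_000, 45_000, 130_000, 60_000, 25_000],
    "jobs_reported":  [12, 4, 14, 6, 2],
})

# Basic usage measures by tax year: recipient counts, dollars claimed,
# and reported jobs. The dollars-claimed total doubles as a first-order
# estimate of annual revenue loss (the budgetary impact).
usage = claims.groupby("tax_year").agg(
    recipients=("recipient", "nunique"),
    total_claimed=("amount_claimed", "sum"),
    jobs_reported=("jobs_reported", "sum"),
)
print(usage)
```

However the underlying records are obtained, the basic aggregation is the same: a small table like this is often the backbone of the program overview sections in the state evaluations cited below.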
Usage data also may serve as a "tell," prompting evaluators to examine potential design or administrative process problems. Low usage numbers are not necessarily a sign that an incentive is ineffective; an incentive used by relatively few businesses nonetheless may have encouraged the intended economic activity among the recipient firms. In such cases, low usage could instead indicate flaws in program design or administrative processes that dampen take-up or lead to other unintended consequences.
Various incentive evaluations provide excellent program overviews with extensive reporting of incentive usage data, including evaluations from Indiana, Nebraska, Virginia, and Washington.
Using Surveys to Collect Data. Surveys and interviews can be a rich source of data on incentive use and effectiveness. They can pose questions about incentive use, recipient characteristics, and the extent to which a tax incentive caused the intended behavior. Surveys lend themselves to probability sampling and allow evaluators to generalize findings from a sample of incentive recipients to all beneficiaries. Moreover, surveys of program participants can be made more rigorous by also surveying a comparison group of nonparticipants.
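As a concrete illustration of generalizing from a probability sample, here is a minimal Python sketch that draws a simple random sample from a hypothetical list of incentive recipients and attaches a margin of error to a survey estimate. All names and numbers are invented for illustration.

```python
import math
import random

random.seed(42)

# Hypothetical sampling frame of all incentive recipients.
population = [f"firm_{i}" for i in range(1, 801)]  # 800 recipients

# Simple random sample of recipients to survey.
n = 120
sample = random.sample(population, n)

# Suppose 78 of the 120 respondents report that the incentive was
# decisive in their investment decision (illustrative numbers).
yes = 78
p_hat = yes / n

# 95% confidence interval with a finite population correction, so the
# sample estimate can be generalized to all 800 beneficiaries.
N = len(population)
fpc = math.sqrt((N - n) / (N - 1))
se = math.sqrt(p_hat * (1 - p_hat) / n) * fpc
print(f"Estimated share influenced: {p_hat:.2%} ± {1.96 * se:.2%}")
```

In practice, evaluators might instead stratify the sample by firm size or industry so that smaller subgroups of recipients are adequately represented in the findings.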
On the downside, surveys and survey data are not without challenges. A good survey instrument can take significant time and talent to design. Moreover, survey data can be imprecise or invalid. A respondent may be able to say definitively that a tax incentive led to increased investment or employment, yet be unable to give a precise quantitative estimate of that impact. Survey responses also can be invalid if a respondent is not knowledgeable about the tax incentive or the firm's response to it. This type of response error can be a significant challenge when surveys are sent to large business organizations, where the survey may never reach the people who know about the incentive's use. Finally, invalid responses can occur because a respondent intentionally provides false information; some recipients may be motivated to exaggerate an incentive's impact.
There are several examples of how surveys can be used in an evaluation. Virginia employed surveys and interviews extensively in its 2018 evaluation of workforce and small-business incentives. Evaluators surveyed 1,300 businesses and interviewed state agency personnel, industry stakeholders, and economic developers to assess the incentives' impact on firm location, expansion, job retention, investment, and other decisions, and to identify design and administrative process problems.
Minnesota’s evaluation of its research tax credit and Florida’s evaluation of its film and entertainment industry incentives similarly make extensive use of survey data to assess the effectiveness of these incentives and to examine incentive design and administrative process problems.
Conclusion. Tax incentive evaluation is a formidable undertaking. Thanks to a sizable and growing collection of incentive evaluations, however, analysts have resources to guide them. These evaluations demonstrate a variety of approaches, analytical methods, data collection methods, and data sources, along with many good examples of how to report evaluation results. In particular, a number of evaluations demonstrate the utility of examining incentive design and administrative processes, reporting administrative data, and collecting data for both descriptive and analytical research.