A Review of the 2024 Roundtable on Evaluating Tax Incentives
An expert shares highlights and perspectives from the conference
In this summary, originally published by The Pew Charitable Trusts in 2024, Jim Landers (professional practice associate professor, Enarson Fellow, and director of graduate/professional studies at the John Glenn College of Public Affairs at The Ohio State University) shares highlights from the 2024 NCSL Roundtable on Evaluating Economic Development Tax Incentives. In addition to highlighting sessions on artificial intelligence in evaluation and on thinking beyond economic indicators, Landers features a discussion of the synthetic control method.
Evaluation perspectives
In With the New: Reflections on and Key Takeaways From the 2024 Roundtable
Jim Landers
Professional Practice Associate Professor, Enarson Fellow, Director of Graduate/Professional Studies at the John Glenn College of Public Affairs
The Ohio State University
Introduction
The 2024 NCSL Roundtable on Evaluating Economic Development Tax Incentives revisited some important topics from the past while also focusing on methods and practices that had not been spotlighted at previous roundtables. Right out of the gate, the Roundtable provided an informative discussion of the key considerations and challenges of modeling economic development incentive impacts with regional economic models (e.g., IMPLAN and REMI). I never grow tired of hearing from other incentive evaluators about the challenges and benefits of using regional economic models and learning how they have used them to evaluate incentive program impacts.
The Roundtable also initiated discussion of assessing the methods and processes used for incentive evaluations. The session highlighted reflections by evaluators on the performance of their incentive evaluation programs and the challenges of creating valid and informative approaches for examining the impact that evaluators are having on incentive policy in their states.
There were two key takeaways from this session. First, start an evaluation by using a logic model to describe and explore the full scope of an incentive program, from program mission and inputs to program outputs and outcomes. Second, use a scorecard to rate incentives against recognized best practices—for example, minimum eligibility thresholds, wage requirements, incentive caps, or provisions targeting distressed areas.
Three new topics introduced at the Roundtable yielded considerable takeaways:
- Using artificial intelligence for incentive evaluations.
- Going beyond economic indicators to evaluate incentive programs.
- Causal analysis of incentive program effects using the synthetic control method.
Unleashing AI for predictive modeling of incentive impacts
The 2024 Roundtable offered some new insights on the potential for using AI to model and assess public policies and, perhaps, economic development incentive programs. This session provided informative discussions of AI’s strengths and weaknesses as a research tool, alongside examples of some of its policy analysis uses. Most interesting was a presentation on how auditors with the Kansas Legislative Post Audit Division used “machine learning” to assess the level of unemployment claims fraud in the state during the COVID-19 pandemic.
I queried Microsoft Copilot for a quick definition of machine learning. Copilot responded:
“. . . [m]achine learning is a branch of AI that focuses on using data and algorithms to enable computers to learn from experience, much like humans do. This process allows machines to improve their performance on tasks over time without being explicitly programmed for each specific task.”
The machine learning model developed in Kansas predicted whether an unemployment benefit claim was fraudulent based on 26 fraud risk indicators and information from the benefit application. A sample of 1,000 benefit applications, drawn from more than 1 million, was used to train the model and validate its precision, minimizing prediction error.
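As a rough illustration of this kind of workflow (not the Kansas auditors’ actual model, whose features and algorithm are not described here), a fraud classifier can be sketched with scikit-learn. The data below are simulated, and the 26 indicator columns merely stand in for the real fraud risk indicators:

```python
# Illustrative sketch only: train a classifier to flag potentially
# fraudulent claims from binary risk indicators, then validate its
# precision on a holdout split. All data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score

rng = np.random.default_rng(0)

# Simulated stand-in data: 1,000 labeled claims, 26 binary indicators.
X = rng.integers(0, 2, size=(1000, 26)).astype(float)
# Simulate labels so that more triggered indicators raise fraud probability.
p = 1 / (1 + np.exp(-(X.sum(axis=1) - 13) / 2))
y = (rng.random(1000) < p).astype(int)

# Train on part of the labeled sample; hold out the rest for validation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Precision: of the claims flagged as fraudulent, how many truly are.
precision = precision_score(y_test, model.predict(X_test))
print(f"holdout precision: {precision:.2f}")
```

Precision on the holdout split plays the same role as the validation step described above: it measures how often a flagged claim is actually fraudulent, which matters when each flag triggers a costly manual review.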
While I’m not entirely certain how machine learning could be used to assess the impacts of economic development incentives, this could be a very interesting session for the 2025 Roundtable. I’ve not come across a machine learning study of an economic development incentive program, but I recently read a study testing machine learning for tax revenue forecasting. Thinking outside the box a little, machine learning models could become an important tool for a priori fiscal analysis of proposed incentive programs, potentially allowing fiscal researchers to predict a proposed program’s revenue and economic outcomes before enactment.
Exploring alternative indicators of incentive effectiveness
Most published research on the effects of economic development incentive programs tends to estimate economic impacts. As a result, past Roundtables have focused quite a bit on data, models, and methodologies that lead to conventional estimators of incentive program success, such as increasing jobs, wages, investment, and business establishments.
The session on evaluating beyond fiscal and economic indicators deviated from the norm, providing discussions of alternative measures of program success, such as quality of life, job quality, community health and environmental quality, and measures of who benefits from incentive programs (e.g., businesses, resident households, in-migrants). The session highlighted measures of job quality that could be important additions to incentive evaluators’ toolkits. These measures, including fringe benefits, paid medical or family leave, working conditions, and advancement opportunities, can be linked via existing research to positive worker, family, and community outcomes, and are quantifiable using survey methods.
Laurel Berman’s research measuring community health outcomes of redevelopment provided another approach to assessing quality-of-life indicators. Her approach includes a systematic model for developing community-based indicators, with a website that specifies various community health indicators related to community redevelopment projects.
Additionally, Kansas auditors provided an interesting case study of adapting their approach for a second evaluation of the state’s STAR bond program. Their first evaluation focused on the state’s primary tourism attraction goals. In lieu of repeating that assessment, the auditors focused the second evaluation on the program’s quality-of-life impacts. They measured these impacts directly by surveying graduating students and alumni of Kansas public universities about the factors, including quality of life, that influence where they choose to live, and about the extent to which industries in Kansas affect their quality of life.
Crafting causality with the synthetic control method
The Roundtable also included a session on the synthetic control method (SCM). We learned that SCM allows researchers to conduct causal analysis of public policies impacting geographic units like states, counties, or cities. Initially used to estimate the causal effects of California’s Tobacco Control Program, SCM has recently been applied to causal studies of state minimum wage policies and economic development incentive programs.
SCM systematically constructs a counterfactual of the unit receiving the policy treatment from characteristics of other units that haven’t received the policy treatment. The counterfactual or synthetic unit mimics the characteristics of the treated unit before the policy treatment. Researchers can then estimate the causal effect of the policy treatment by comparing the post-treatment outcomes of the treated and synthetic units. One of SCM’s strengths is its effectiveness when there is a single treated unit and multiple untreated units, which often applies to incentive evaluations that assess a statewide program or a location-specific program within a state.
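The construction just described can be sketched in a few lines: choose nonnegative donor weights summing to 1 that best reproduce the treated unit’s pre-treatment outcome path, then read the estimated effect off the post-treatment gap. This is a minimal illustration with simulated data; real SCM applications match on richer predictor sets and assess significance with placebo tests:

```python
# Minimal synthetic control sketch on simulated data. A treated unit is
# built as a known mix of donor units plus a +4 policy effect after the
# treatment date; SCM should recover an effect near 4.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T_pre, T_post, n_donors = 10, 5, 8

# Simulated outcome paths for untreated donor units (gentle upward trends).
donors = 100 + np.cumsum(rng.normal(1.0, 0.5, size=(T_pre + T_post, n_donors)), axis=0)
true_w = np.array([0.5, 0.3, 0.2] + [0.0] * (n_donors - 3))
treated = donors @ true_w
treated[T_pre:] += 4.0  # policy effect begins after period T_pre

# Objective: squared pre-treatment gap between treated and synthetic paths.
def pre_gap(w):
    return np.sum((treated[:T_pre] - donors[:T_pre] @ w) ** 2)

# Weights constrained to the simplex: nonnegative and summing to 1.
res = minimize(
    pre_gap, x0=np.full(n_donors, 1 / n_donors), method="SLSQP",
    bounds=[(0.0, 1.0)] * n_donors,
    constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0},
    options={"ftol": 1e-12},
)
synthetic = donors @ res.x

# Estimated causal effect: mean post-treatment treated-vs-synthetic gap.
effect = float((treated[T_pre:] - synthetic[T_pre:]).mean())
print(f"estimated post-treatment effect: {effect:.2f}")
```

The simplex constraint on the weights is what keeps the synthetic unit an interpolation of real donor units rather than an arbitrary extrapolation, which is a key reason SCM results are easy to present graphically to policymakers.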
Studies examining the impact of New Jersey’s enterprise zone program and Georgia’s film tax credit demonstrated the practical use of SCM and the explanatory value of the graphical and statistical results it yields. Both studies produced conventional SCM results, suggesting that New Jersey’s enterprise zones had no discernible impact on employment while Georgia’s film tax credit had discernible positive economic effects. The Georgia study even used the SCM results to estimate “but for” percentages for the film credit—that is, the extent to which the incentives are required for a project to proceed or to generate additional economic activity, such as capital investment or employment.
Wrap-up
While the 2024 Roundtable dug deeper into some methods and practices we’ve discussed in the past, it also broke new ground with robust discussions and presentations about new data, performance measures, and methods. Hats off to NCSL once again for organizing an informative and thought-provoking Roundtable and to The Pew Charitable Trusts for being wonderful hosts in Washington, D.C. A big thank you to the incentive evaluators advisory group members and The Pew Charitable Trusts staff who led the efforts in developing an excellent agenda, and to all of you who participated in the Roundtable, especially our presenters.