Program Evaluation Is Critical in Assessing Court Technology
Over the past ten months, our justice system has made giant strides in its use of technology. Videoconferencing and teleconferencing, e-filing, and remote jury trials have been implemented in courts across the country, in many instances for the first time. And when it comes to civil disputes, online dispute resolution (ODR) is one of many digital processes that court leaders are turning to in hopes of balancing the need for an efficient justice system with public health and safety. The question many courts now face is: are these programs working as they should?
Erika Rickard, who serves on IAALS’ US Justice Needs Advisory Committee, and Qudsiya Naqui recently wrote an article exploring how courts can use program evaluation to determine whether virtual processes like ODR are working as intended and meeting the needs of both litigants and the courts.
Rickard and Naqui write that “program evaluation allows for the impact and effects of a program to be measured against its goals.” But it can do much more. Program evaluation allows us to properly diagnose the problems a program should address (needs assessment), design programs to meet those needs (program theory assessment), understand whether our programs are being implemented as designed (process evaluation), and determine whether a program’s observed benefits justify its costs (cost-effectiveness evaluation). These elements of evaluation, along with the outcome and impact evaluations that Rickard and Naqui describe, are essential to developing robust programs and ensuring they achieve their goals.
Program evaluation has been used to great effect across the spectrum of public and social programs for decades, but the legal field lags behind in adopting these techniques to assess its own programs. The encouraging news is that there is a growing call for courts to use research and evaluation in this season of change, which we have discussed in depth here.
Although many within the legal profession have praised the system for its embrace of technology over the past ten months, others have gone one step further and encouraged courts to evaluate these new processes and use the data they collect to make informed decisions moving forward. In fact, last October, the Conference of Chief Justices (CCJ) and Conference of State Court Administrators (COSCA) recommended six principles related to courts’ use of technology going forward, stating that “courts now have a unique opportunity to leverage creative thinking, seize on an emergency-created receptivity to change, and adopt technology to create long-term and much-needed improvements.” The final recommendation was to take an open, data-driven, and transparent approach to implementing and maintaining court processes and supporting technologies.
Court leaders now have the opportunity to take what they’ve learned from ODR and join forces “with researchers and other stakeholders to create and adopt a national framework for evaluating technology tools that could help courts with limited resources learn common lessons from evaluations in other jurisdictions.” This approach, according to Rickard and Naqui, would inspire courts to “employ scientifically sound methods for internal assessments of their technology platforms” and equip court leaders across the country with a playbook for program evaluation—a critical asset as these types of tools continue to evolve.
Many of the technologies implemented throughout the COVID-19 pandemic are likely here to stay. And while these tools and programs are being implemented with the intent of increasing efficiency and accessibility while decreasing costs, we cannot know they’re working as intended without the data—and rigorous data collection processes—to back that up.