Bar News - February 19, 2014
Measuring Judicial Performance
By: Kristen Senz
The nature of a New Hampshire judge’s position and work – public servant appointed for life, constantly picking winners and losers – makes evaluating his or her job performance a more complicated task than evaluating, say, a customer service representative at a bank.
That, combined with recent amendments to the state’s law on judicial evaluations, questions about whether individual evaluation results should be made public, and a historically low response rate on evaluation surveys, is why the NH Judicial Performance Evaluations (JPE) Advisory Committee is looking at a new, more comprehensive way to judge the work of the state’s judges. The approach the committee is considering involves collecting information from a variety of sources, instead of relying only on surveys, to assess a judge’s performance.
New Hampshire is one of only three states where judges are appointed for life (along with Massachusetts and Rhode Island) and one of about 20 jurisdictions, including the District of Columbia and Puerto Rico, where judges’ performance is officially and regularly evaluated by judicial administrators. In some other states, typically those where judges stand for election or retention, bar associations poll their members about judge performance.
The JPE Advisory Committee was formed by Supreme Court order in 2012 and comprises judges, court administrators, lawmakers and representatives from the NH Judicial Council, Public Defender, Bar Association and Attorney General’s Office. The committee aims to make the evaluation process more accurate and fair, while improving the performance of judges and the judicial system as a whole.
Historically in New Hampshire, a total of about 30 or 40 lawyers, litigants and jurors respond to court surveys for each trial judge being evaluated in a given year (judges are evaluated every three years on a rotating cycle). Respondents rate a judge on a scale of 1-5 in seven areas: performance, temperament and demeanor, judicial management skills, legal knowledge, attentiveness, bias and objectivity, and degree of preparedness. From that relatively low number of responses, average numerical scores are calculated.
Confusion on Confidentiality
Among other changes, amendments made in 2012 to the judicial evaluation statute (RSA 490:32) required that a summary of each judge’s evaluation results, including any corrective action taken, be made public in the court’s annual evaluation report, which is due by June 30 each year. The 2013 evaluation report posted on the court’s website does not include individual results summaries. NH Judicial Branch spokeswoman Carol Alfano said that is because: “The report covers the performance evaluations of NH Superior Court justices, Circuit Court judges and marital masters evaluated in 2012, not 2013” and “the report covers the time period before the amendment to RSA 490:32 went into effect on July 22, 2012.”
Other language added to RSA 490:32 in 2012 requires a judge to waive his or her right to confidentiality with respect to evaluation results following two consecutive negative evaluations. JPE committee members say the two confidentiality provisions appear to conflict with each other.
Co-sponsored this session by JPE committee members Sen. Sharon Carson and State Rep. Marjorie Smith, Senate Bill 249 seeks to remove the requirement to name judges and provide individual results summaries in the annual JPE reports. The NH Senate passed the bill by voice vote Jan. 30.
In some states, judges are appointed by a selection commission and face retention elections after their initial terms. These elections vary as to whether they are partisan and whether judges face opponents. In some of these states, judicial evaluation results are made public to promote more informed voting.
Discussion about the extent to which judicial evaluation results should be made public in New Hampshire has been part of the work of the JPE committee, which holds public meetings. Committee member Chris Keating, executive director of the NH Judicial Council, questions the value in naming judges and making individual evaluation summaries public. He points out that evaluation scores are based on surveys with low response rates and says, “I think you’ve got to take into account the nature of what the judge does… The judge presides over a lot of cases where someone wins and someone loses.”
JPE committee Chair and NH Supreme Court Associate Justice Carol Ann Conboy agrees. Human nature dictates that people who were happy with their courtroom experiences are less likely to fill out evaluation forms. That means the results are usually skewed toward the negative. “If you’re getting responses from people who didn’t prevail, how do you put that into perspective?” Conboy said.
Jordan Singer, a professor at New England Law School in Boston and the former director of research for the Denver-based Institute for the Advancement of the American Legal System, is a national expert on judicial performance evaluations. He has worked with New Hampshire court officials on the state’s evaluation process in the past and met with the JPE committee last year. A member of a Massachusetts commission that is reevaluating that state’s JPE process, Singer spoke to Bar News about his personal perspectives, not those of the commission.
Singer thinks the manner in which a state selects judges relates to whether and to what extent evaluation results should be made public. “Half of the judges are going to be below average, so when it comes to a state with a lifetime appointment, I take the position that it’s not necessary to identify judges by individual name.”
In states with lifetime appointments, Singer says, it becomes more important for the public to know how evaluations are conducted and what the court does with the information.
A More Thorough Approach
When it comes to judicial selection, Singer said he believes merit selection of judges followed by retention election is the system that “provides the proper balance of judicial independence and judicial accountability.” But, regardless of the selection method, he says, when it comes to evaluating judicial performance, more information is better.
“I think, as a general rule, the more sources of information that are available to you, the easier it is to get a picture of what people think about the judge and what the court metrics show about how the judge is doing,” he said.
Under the more comprehensive system the JPE committee is considering, a judge’s evaluation portfolio might include a written evaluation by the administrative judge, audio and/or video recordings of court hearings, reports by trained courtroom observers, and the results of surveys tailored to specific groups, such as lawyers, pro se litigants, jurors and witnesses. Justice Conboy says this new approach would likely go further in helping to achieve the goals of the evaluation process.
“I like the idea of a portfolio approach, because we do get such a low response rate,” she said. “I think we ought to be talking about more narrative reporting, rather than the statistical approach we’ve been taking.”
Professor Andrew Smith of the UNH Survey Center met with the JPE committee last month to discuss potential changes to the SurveyMonkey instruments the court uses to conduct judicial evaluation surveys. He urged committee members to make sure each question is linked to the ultimate goals of the evaluation process, and to tailor questions to specific groups of respondents. Professor Smith and a JPE subcommittee planned to work on updating the survey questions over the next few months.
Improving the evaluation process would not come without cost. Keating has pointed out that administering separate surveys and gathering information for about 80 judge portfolios, as well as managing access to the information in the portfolios, would require more time and effort than is expended now. He suggested that some judicial branch staff time might need to shift, or additional personnel might be needed.
“People have to put their money where their mouth is,” he said at the January committee meeting. “If people think this is important, there needs to be some resources available to do it.”
According to Singer, in most states that have retention elections and more comprehensive judicial evaluation processes, there is someone whose job it is to gather the information and release it to the public, with a recommendation from an evaluation commission.
The JPE committee could recommend that additional resources be allocated to the evaluation process when it makes its report to the court this spring.
The next judicial evaluation report will be issued in June. The JPE committee intends to submit its recommendations about the process to the court before then, Conboy said. “I’m very encouraged that this is where we’re headed,” she said of the new approach the committee is considering.
If put forward by the committee and approved by the court, the new evaluation methodology could be implemented within a year.
To read the current NH judicial evaluation report, visit the judicial branch website. For more information about judicial selection and evaluation in other states, visit www.judicialselection.us.