The Effect of Legal Expert Commentary on Lay Judgments of Judicial Decision Making

The public's view of the judiciary is a key factor in the legitimacy of any legal system. Ideally, popular judgments of the adjudicative branch would be independent of the outcomes of the decisions it furnishes. In a previous study (Simon & Scurich 2011), we found that lay people's evaluations of the judicial decision‐making process were highly contingent on the decision outcomes. Participants gave favorable evaluations of the judges and their decisions when they agreed with the judges' outcomes, but reported negative evaluations when they disagreed with them. These results held true across four different types of judicial reasoning, and despite the fact that all decisions were described as having followed proper procedures and been argued by competent lawyers. That study left open the possibility that the public's judgments might be moderated by professional elites, namely, legal experts. Indeed, in real life, much of the public's information about judicial decisions is derived from legal experts who communicate and comment on them in the media. This study examined the effect of professional commentators on lay people's judgments of judicial decision making. We found that the experts' commentaries do not alter participants' evaluations of the courts' decisions, as the evaluations continue to be influenced strongly by the participants' agreement with the outcomes of the judges' decisions. Moreover, lay people's reactions to the experts follow a similar pattern: the experts are deemed competent and their commentaries are deemed reliable when the participants agree with the outcomes propounded by the experts, but the opposite is true when the participants' preferred outcomes are incongruent with the outcomes endorsed by the experts. These findings suggest that the outcome‐dominated judgments of courts cannot easily be tempered by professional elites. This conclusion could also provide some insight into the dynamic process that enables political polarization.


I. Introduction
The authority of the unelected judicial branch to exert its powers in a legitimate and effective manner is a central condition for an ordered society (see, e.g., Bickel 1986; Breyer 2010; Dworkin 1986; Marmor 2007). In this vein, legal institutions and practices have been designed to constrain judicial powers and conform them to the democratic system, with the objective of distinguishing judicial decisions from merely political or otherwise unprincipled ones (Fuller 1978; Hart & Sacks [1958] 1994; see also Eskridge & Frickey 1994; Waldron 2011). The proper role and function of the judicial branch has been the subject of extensive scholarly debate over the years (e.g., Dworkin 1986; Fuller 1978; Kennedy 1986). Less research has been devoted to examining how judicial decision making is viewed by the judiciary's most important constituent, the general public (see, e.g., Gibson & Caldeira 2011).
A study by Simon and Scurich (2011) sought to provide preliminary insight into how lay people evaluate judicial decision making. We presented 700 lay participants with synopses of three cases that were decided by an arbitrator, a judge, and an appellate court. The materials contained a summary of the legal procedures followed (which were always described as proper), three main arguments made by the lawyers of each side, and the court's decision accompanied by the reasons for that decision. Participants were then given eight measures to evaluate the judicial decisions: five measures assessed their evaluation of the quality of the judicial decision (whether it was thorough, thoughtful, etc.), and three measures assessed their evaluation of the judicial decisionmakers (namely, their competence and legitimacy). Finally, participants were asked how they would have decided the case.
One of the key findings was that the evaluations were heavily influenced by the congruence between the outcomes of the judges' decisions and the participants' preferred outcomes. In all three cases, when participants agreed with the outcomes of the courts' decisions, they evaluated both the judges and their decisions very favorably, but they provided low evaluations when they disagreed with those outcomes. These effects were obtained despite explicit instructions to focus on the manner in which the decisions were made and to disregard their outcomes. The study also found that the types of reasoning offered by the judges did not have a major effect on the overall judgments, although one type of reasoning-open-ended, two-sided reasoning-did have a positive effect when the participants disagreed with the judicial outcome.
One would have expected that the participants' verdict preferences would be rather weak. By design, the cases involved primarily mundane and technical questions, and were thus relatively devoid of the ideological and political issues that are typically related to people's "core" or "central" attitudes (see Judd & Krosnick 1982; Chong 2000; Sherman & Gorkin 1980). Nor did the cases pertain to issues that could be deemed to constitute moral mandates (see Mullen & Nadler 2008; Skitka 2002). Indeed, the participants appear to have chosen their preferred outcomes on the merits of the cases, as manifested by the fact that the preferred decisions were almost entirely unrelated to any of the demographic variables, including age, gender, education, ideology, support for the death penalty, religiosity, and political affiliation. In the terminology of the motivated reasoning framework (see Kunda 1990), our participants appear to have been driven primarily by neutral "accuracy goals," rather than by "directional goals." Given the expected neutrality and the roughly even split of the participants' preferred decisions, the likely explanation for their skewed evaluations of the judges is the Gestaltian notion that what goes together must fit together, which is encapsulated by coherence-based reasoning. This body of research shows that when making decisions and forming judgments about complex situations, the cognitive system tends to impose coherence on the mental representation of the task. As the process evolves, the variables that support the emerging conclusion become stronger, while the variables that support the rejected alternative wane. The ensuing lopsided interpretation of the task spreads the vying options apart, thus leading to confident conclusions (see Glöckner et al. 2010; Glöckner & Engel 2013; Holyoak & Simon 1999; Simon et al. 2004; for reviews, see Read & Simon 2012; Simon 2004).
This coherence effect helps explain the polarized evaluations observed by Simon and Scurich (2011), by which judges were evaluated very favorably when the participants agreed with their judicial outcomes, but very poorly when the participants disagreed with their decisions. The coherence effect also explains the high intercorrelations among the eight attributes that were used to evaluate the judges and their decisions. Rather than make fine-grain distinctions among these attributes, participants viewed the judges in a globally coherent way, evaluating them either favorably or unfavorably on all attributes.

A. Implications for the Judicial System
All in all, the findings by Simon and Scurich (2011) do not seem encouraging for the rule of law. Despite the great efforts invested by the legal system to insulate judicial decision making from the public's satisfaction or displeasure with the outcomes it produces (Hart & Sacks [1958] 1994), lay people do not seem to separate their view of the judiciary from their taste for the results. It is not hard to see how this phenomenon could exacerbate the fragmentation of society. In his book Republic.com 2.0, Cass Sunstein (2007) raises the possibility that social fragmentation could perhaps be mitigated by means of "general-interest intermediaries," such as newspapers and other neutral entities. In theory, intermediaries-such as experts-could provide a relatively neutral perspective and an anchor that harnesses the forces of division. A large body of research dating back to the seminal work of Hovland et al. (1953) has shown that persuasion is strongly influenced by the credibility and competence of the source. Indeed, the research indicates that people are particularly susceptible to persuasion by unbiased experts (see Chaiken et al. 1989; Petty & Cacioppo 1984; for a meta-analysis, see Wilson & Sherrell 1993).
Thus, it is possible that lay evaluations of judicial decision making could be influenced by the input of professional elites, namely, professional legal experts who communicate judicial decisions and comment on them in the media. The lay public might well consider experts to be better positioned to understand and evaluate the often difficult legal issues that arise in judicial decisions. In other words, expert commentaries could conceivably enable people to evaluate the judiciary separately from their agreement with the outcomes of the courts' decisions. This prospect is an empirical question, which is the subject of the current study.
This study explores two central issues. First, we examine whether experts' commentaries influence lay people's reactions to judicial decisions. Specifically, we test experimentally whether experts' commentaries can moderate people's heavy reliance on their preferred outcomes in evaluating the appropriateness of judicial decisions and the judges who rendered them. Second, we seek to understand what influences lay people's reactions to experts and their commentaries. In particular, we explore whether judgments of the experts themselves and the commentaries they offer might also be influenced by agreement with the outcomes espoused by the experts. To test these two questions, we constructed the materials in the form of newspaper articles, written by the legal experts.
Testing these cases in the format of newspaper articles has the additional advantage of increasing the ecological validity of the prior study by Simon and Scurich (2011). It seems fair to say that the vast majority of public exposure to judicial decisions occurs through media accounts, and that the majority of media accounts are provided not as verbatim reproductions of the judicial opinions, but are presented and explicated by legal experts, who typically supplement the account with their own commentary. Hence, we designed the materials in the form of newspaper accounts of judicial decisions, written by legal commentators. In this first test, we could not venture to examine the host of mediating variables and boundary conditions that might affect expert commentary. Thus, we designed the materials to mimic the most common format of newspaper articles as they appear in the written press.

B. Study Overview
In this study, lay participants were presented with three newspaper articles that were described as having been written by the papers' legal commentators. Each article dealt with one of the cases that were used in the study by Simon and Scurich (2011). Two of the articles described a decision by a single judge, and one described a decision made by a panel of three appellate judges. The procedures leading up to the decision were said to have been appropriate, and the cases to have been argued by competent lawyers. Importantly, after reporting on the case and the judicial decision, the legal commentators (experts) provided an analysis of the court's decision, and noted whether they agreed or disagreed with it. To support their conclusions, the experts provided three reasons, which were paraphrases of the arguments presented by the lawyers at the hearing. As in the study by Simon and Scurich (2011), participants were asked to evaluate the judge and the judicial decision. In the current study, participants were asked also to evaluate the expert and her commentary. Participants were also asked to indicate their own preferred outcome, that is, how they would have decided the case.

II. Method

A. Participants
Six-hundred-sixteen people participated in the experiment. The sample was comprised of 268 (43.5 percent) males and 348 (56.5 percent) females, with a mean age of 28.79 (S.D. = 15.09) and median 30 (IQR = 24). Eighty-seven percent of the sample had some post-high-school education, with a median of three years of post-high-school education. Thirty-two percent of the participants described themselves as liberal, 25.8 percent described themselves as moderate, 23 percent described themselves as conservative, and the rest described themselves as "other." Participants were recruited through an affiliate of the online survey company Qualtrics, which maintains a very large mailing list of individuals who have consented to participate in online studies in exchange for small fees or rewards.

B. Procedure and Design
Participants first read a consent form and then completed the experiment online by clicking through a series of webpages that contained a set of general instructions, followed by three articles about legal cases. Each article contained the legal and factual issues involved in the case, three key arguments made by each side, the court's decision on the matter, and the expert's commentary on the ruling. The court ruled for one side or the other, and the expert either agreed or disagreed with that ruling. Participants were presented with the dependent measures that gauged their evaluations of the judicial decision and the expert's commentary. The articles were presented in a randomized order, as were the measures that followed each case. On a separate webpage presented at the end of each case, participants were asked to indicate which side they would have found for (i.e., "If you had to decide the case, what would your decision be?"), followed by a measure of confidence in their decision. To ensure that participants had paid attention to the case, two questions tapped their memory for facts mentioned in the case. Consistent with current practice (see Oppenheimer et al. 2009), participants who failed to remember the facts correctly were removed from the analysis. Finally, participants provided demographic information.
This experiment employed a 2 (court decision: for plaintiff vs. for respondent) × 2 (expert agrees with decision vs. expert disagrees with decision) fully crossed between-participants design. In each of the three cases, participants were randomly assigned to one of the four possible experimental conditions.
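
The fully crossed assignment described above can be sketched in a few lines of Python. This is only an illustration of the design; the condition labels are ours, and the actual assignment was handled by the survey platform.

```python
import random

# The two crossed factors of the between-participants design.
COURT_DECISIONS = ("for plaintiff", "for respondent")
EXPERT_STANCES = ("expert agrees", "expert disagrees")

# Fully crossing the two factors yields the four experimental cells.
CONDITIONS = [(decision, stance)
              for decision in COURT_DECISIONS
              for stance in EXPERT_STANCES]

def assign_condition(rng: random.Random) -> tuple:
    """Randomly assign a participant to one of the four cells."""
    return rng.choice(CONDITIONS)
```

Because assignment is independent for each case, a given participant may fall into different cells across the three cases.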

C. Materials
The general instructions introduced the task to the participants. Participants were told that all decisions followed the appropriate legal procedure, the cases were argued by competent lawyers, and the decisionmakers spent considerable effort thinking about the dispute. Participants were also informed that they were not required to have any legal knowledge, that there are no right or wrong answers to these questions, and that their responses should convey how they personally feel about the issues.
Participants were informed that they would be asked to evaluate "whether the judges' decisions were made in an appropriate manner . . . such as whether they were made in a manner that was thoughtful, rigorous, and legitimate." Participants were informed also that they would be asked to evaluate the experts' commentaries, "such as whether they were fair and thoughtful." The instructions cautioned participants: "In making these evaluations, try to ignore your agreement or disagreement with the outcomes of the decisions." In each of the three cases, participants were asked to assume that they had just read a newspaper article written by the newspaper's legal expert (the experts were described as a "legal analyst," "legal correspondent," and "legal commentator"; for the sake of consistency, all experts were female). The legal cases used in this experiment were the same cases used by Simon and Scurich (2011), two of which were adapted from stimuli used in previous research (Holyoak & Simon 1999; Simon et al. 2004). Two of the cases revolved mostly around legal issues and one was concerned primarily with a factual determination. In each case, the lawyers representing each of the sides made three key arguments in support of their position. The court then rendered a decision for either one of the sides, without providing any specific reasoning. The decision was followed by a commentary written by the newspaper's legal expert. For half the participants, the expert agreed with the judges' decision, and for the other half she disagreed with the decision. The experts then supported their opinion with reasons, which were paraphrases of the arguments offered by the winning side's lawyer.
For example, the Waste Disposal Corporation (WDC) case was based loosely on a 2001 decision of the Supreme Court (Solid Waste Agency v. United States Army Corps of Engineers). The article described an appeal filed in a federal appellate court by a garbage disposal company. The company appealed a lower court's decision to uphold a decision by the Army Corps of Engineers to prevent the corporation from developing a landfill in an abandoned gravel pit. The Corps denied the permit because the site had become a habitat for migratory birds. The Corps' decision relied on regulations that it had promulgated in previous years. Since that time, some members of Congress had sought unsuccessfully to repeal those regulations. The dispute revolved mostly around the statutory powers of the Corps to deny the permit. Each of the sides presented three principal arguments to the court in support of its position.
The essence of the arguments made by the lawyers of the Waste Disposal Corporation was:
1. The Army Corps of Engineers had no jurisdiction, since the site contained an isolated, not a navigable, body of water.
2. The fact that Congress did not make a new law to change the definition of navigable water does not imply that Congress endorses the existing state of affairs.
3. There are no alternative sites for a landfill, which is bound to cause considerable hardship for the region.
In defending the Army Corps of Engineers' decision, the federal government raised the following arguments:
1. The Army Corps of Engineers did have jurisdiction over the particular site, since the site was connected with adjacent bodies of water by a seasonal waterway, which made it a navigable body of water.
2. Congress's refusal to redefine "navigable waters" indicates that it endorses that definition.
3. The corporation did not submit any surveys or environmental impact reports, so there is no basis to determine the impact of denying the permit.
After participants were told the court's decision, the expert expressed either support or disagreement with the decision, and articulated her reasoning for that conclusion. Half the participants were exposed to a commentary that supported a decision in favor of the Army Corps of Engineers. The expert explained:
My view is based on the following conclusions: As the site was correctly classified as a "navigable" body of water, the Army Corps of Engineers did have jurisdiction over it. The fact that Congress refused to repeal the Army Corps of Engineers' broad definition of "navigable waters" indicates that it endorsed that understanding of the law. As the Waste Disposal Corporation did not submit any surveys or environmental impact reports about alternative sites, it cannot claim any harm to the region that might be caused by the denial of the permit.
In the other experimental condition, the commentator explained why she supported a decision that favored the corporation.
My view is based on the following conclusions: The Army Corps of Engineers had no jurisdiction over this matter, as the site could not be classified a body of "navigable water." The fact that Congress did not limit the authority of the Army Corps of Engineers over this type of land did not amount to an endorsement of that authority. Due to the unavailability of alternative sites for waste disposal, the denial of the permit will inflict considerable hardship on the region.
Participants were reminded that they should ignore their agreement or disagreement with the outcome of the case before answering seven items designed to tap the appropriateness of the judge's decision. Four of the measures asked for evaluations of the decision. These measures included questions such as, "How satisfied are you with the manner in which the decision was made?" and "How legitimate is the judge's decision?" Three items asked for evaluations of the judges themselves, such as "How competent is the judge?" and "To what extent do you trust this judge to make good decisions in the future?" All ratings were made on an 11-point Likert scale. Given the high intercorrelations that we found in each of the three cases, we collapsed these items into a single measure labeled evaluation of judge.
Participants then answered seven more items, which assessed the participants' evaluations of the experts' commentary. This instrument included questions concerning the commentary (e.g., "Do you think that the commentary is fair?" and "To what extent is the commentary objective?"). Other measures asked for evaluations of the commentator herself (e.g., "How competent is [] as a legal commentator?" and "To what extent do you trust [] as a legal commentator?"). These ratings were also made on an 11-point Likert scale. Again, given the high intercorrelations that we found in each of the three cases, we collapsed these items into a single measure labeled evaluation of expert. The core of the materials used in the other two cases is reproduced in the Appendix.
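
The collapsing of the seven ratings into a single scale rests on the items' internal consistency, conventionally summarized by Cronbach's alpha. The sketch below is a minimal, standard-formula implementation for illustration only; it is not the authors' analysis code, and the variable names are ours.

```python
from statistics import variance

def cronbach_alpha(ratings):
    """Cronbach's alpha for a list of per-participant item-rating lists.

    ratings: n_participants rows, each with k item scores
    (e.g., the seven 11-point judge-evaluation items).
    """
    k = len(ratings[0])
    items = list(zip(*ratings))  # transpose into k item columns
    sum_item_vars = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in ratings])
    # Standard formula: alpha = k/(k-1) * (1 - sum of item variances / scale variance)
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)
```

Alphas near 0.97, as reported for these scales, indicate that the items move together closely enough to justify aggregating them into one measure.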

III. Results
For the following analyses, we constructed three variables. First, the variable participant-judge agreement refers to the agreement or disagreement between the participants' preferred outcome and the court's decision. A second variable, expert-judge agreement, captures the agreement or disagreement between the decision propounded by the expert and the court's decision. Finally, participant-expert agreement captures the agreement or disagreement between the participant's preferred outcome and the decision propounded by the expert.
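
Assuming each outcome is coded simply as the side that prevails, the three dichotomous variables can be sketched as follows (the function and key names are ours, for illustration):

```python
def agreement_variables(participant_outcome, court_outcome, expert_outcome):
    """Derive the three agreement variables from raw outcome codes.

    Each argument is the side favored, e.g. "plaintiff" or "respondent".
    """
    return {
        "participant_judge_agreement": participant_outcome == court_outcome,
        "expert_judge_agreement": expert_outcome == court_outcome,
        "participant_expert_agreement": participant_outcome == expert_outcome,
    }
```

For example, a participant who preferred the plaintiff when both the court and the expert sided with the respondent disagrees with the judge and the expert, while the expert agrees with the judge.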

A. The Jason Wells Case
Five-hundred-fifty-nine participants correctly answered the memory check questions and were included in this analysis. We first examine whether the experts' commentaries had any influence on the participants' evaluations of the judges and their decisions. Aggregating the seven items comprising the evaluation of judge yielded a Cronbach alpha of 0.977. A two-way ANOVA with evaluation of judge as the dependent variable, and participant-judge agreement and expert-judge agreement as the independent variables, detected a significant main effect for participant-judge agreement, F(1, 559) = 299.81, p < 0.001, η² = 0.36. The main effect for expert-judge agreement was not significant, F(1, 559) < 1, nor was the interaction, F(1, 559) = 2.96, p > 0.05.
These findings replicate those of Simon and Scurich (2011) in that participants' evaluations of the judges and their decisions depended heavily on whether the participant's preferred outcome was congruent with the court's decision. Importantly, the evaluations were not significantly affected by whether the expert agreed or disagreed with the court's decision. In other words, the experts' commentaries had no effect on the participants' evaluations of the judges and their decisions.
We conducted a statistical power analysis to rule out the possibility that the null finding for experts' commentaries on evaluations of the judge was the result of a lack of power to detect an effect, had it existed. Assuming a medium effect size of Delta = 0.75 (Delta, sometimes referred to as d, is the difference between the largest mean and the smallest mean, in units of the within-cell standard deviation) (Cohen 1988) and Type I error of 0.05, each case requires more than 32 participants per condition to obtain power greater than 0.80. Given that each cell had approximately four times this quantity, the null finding cannot be explained by a deficiency in statistical power.
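
The power claim can also be checked by simulation. The sketch below is our illustration, not the authors' procedure: it uses a normal-approximation two-group comparison rather than the full ANOVA, and estimates the probability of detecting a true difference of 0.75 within-cell standard deviations with 32 participants per condition.

```python
import random
from statistics import NormalDist, mean, stdev

def simulated_power(n_per_group=32, d=0.75, alpha=0.05, sims=4000, seed=1):
    """Monte Carlo power for detecting a standardized mean difference d."""
    rng = random.Random(seed)
    crit = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value, ~1.96
    rejections = 0
    for _ in range(sims):
        control = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        treated = [rng.gauss(d, 1.0) for _ in range(n_per_group)]
        se = ((stdev(control) ** 2 + stdev(treated) ** 2) / n_per_group) ** 0.5
        z = (mean(treated) - mean(control)) / se
        if abs(z) > crit:
            rejections += 1
    return rejections / sims
```

With these settings the estimated power comes out comfortably above 0.80, consistent with the claim that roughly 32 participants per cell suffice; the exact figure depends on the test statistic used.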
Aggregating the seven items probing the evaluation of expert yielded a Cronbach alpha of 0.97. A two-way ANOVA was conducted with evaluation of expert as the dependent variable, and participant-judge agreement and participant-expert agreement as the independent variables. The ANOVA detected only a significant main effect for participant-expert agreement, F(1, 559) = 273.81, p < 0.001, η² = 0.330. The main effect for participant-judge agreement was not significant, F(1, 559) < 1, nor was the interaction, F(1, 559) < 1. These findings indicate that the sole determinant of the evaluations of the experts is the congruence between the outcomes preferred by the participants and the experts: when they were in agreement, the experts were viewed favorably, but they were viewed unfavorably when the outcomes were incongruent.
The results also demonstrated the coherence effect. As predicted by coherence-based reasoning, participants' evaluations of the seven items strongly cohered with one another. For evaluation of judge, the average interitem correlation was r = 0.86, and for evaluation of expert the average interitem correlation was r = 0.83. Figures 1 and 2 depict the above-mentioned findings in graphic form. As the pattern of results was consistent across the three cases, these figures present the results from all cases combined. Figure 1 displays

[Figure 2 appears here; x-axis: Participant-Expert Agreement]
Notes: Participants' evaluations of the experts (and their commentaries) as a function of the agreement between the participant's preferred outcome and the outcome espoused by the expert. The figure shows that the evaluations were strongly influenced by the participants' agreement with the outcomes espoused by the experts.

the evaluation of judge measures as a function of the participants' agreement with the outcomes of the judicial decisions and of the experts' agreement with the judicial decisions. The main effect depicted in Figure 1 replicates the findings of Simon and Scurich (2011), in that the evaluation of judge was strongly influenced by the congruence between the participants' preferred outcomes and the judges' outcomes. The evaluations were very favorable when the participants agreed with the judges' outcome, but were unfavorable when they disagreed with the outcome. Importantly, the experts' commentary had no effect on the participants' evaluation of the judges and the decisions they made. We now examine the second central question, namely: How do people react to the experts and their commentaries? Figure 2 depicts the evaluation of expert measures as a function of the congruence between the participant's preferred outcome and the outcome propounded by the expert. We see clearly that these evaluations were strongly influenced by the congruence of the preferred outcomes: participants thought highly of the experts when they agreed with the outcomes espoused by the experts, but held them in low regard when they disagreed with those outcomes.
A similar pattern emerges from Tables 1 and 2, which contain the means for the evaluation of judge and the evaluation of expert, respectively, for each individual case.

B. The Waste Disposal Corporation Case
The results from the Waste Disposal Corporation case were very similar to the results of the Jason Wells case. Five-hundred-forty-seven participants correctly answered the memory check questions and were included in this analysis. Aggregating the evaluation of judge items yielded a Cronbach alpha of 0.974. A two-way ANOVA with evaluation of judge as the dependent variable and expert-judge agreement and participant-judge agreement as the independent variables detected a significant main effect for participant-judge agreement, F(1, 547) = 317.46, p < 0.001, η² = 0.44. The main effect for expert-judge agreement was not significant, F(1, 547) = 1.77, p > 0.05, nor was the interaction, F(1, 547) < 1.
These findings indicate that the only factor affecting the participant's evaluation of the judges and their decisions was whether the participant's preferred outcome was congruent with the court's decision.
Aggregating the evaluation of expert items yielded a Cronbach alpha of 0.969. A two-way ANOVA with evaluation of expert as the dependent variable and participant-judge agreement and participant-expert agreement as the independent variables detected only a significant main effect for participant-expert agreement, F(1, 547) = 108.01, p < 0.001, η² = 0.21. The main effect for participant-judge agreement was not significant, F(1, 547) = 3.16, p > 0.05, nor was the interaction, F(1, 547) < 1. These findings indicate that the sole determinant of the evaluations of the experts was whether the participants agreed with them or not. In light of the above-mentioned power analysis, the ineffectiveness of the experts' opinions cannot be explained by a deficiency in statistical power.
Again, the results demonstrated the coherence effect, as the evaluations of the seven items strongly cohered with one another. For evaluation of judge, the average interitem correlation was r = 0.84, and for evaluation of expert the average interitem correlation was r = 0.82.

C. The Quest Case
The results from the Quest case were very similar to the previous cases. Five-hundred-seventy-seven participants correctly answered the memory check questions and were included in this analysis. Aggregating the evaluation of judge items yielded a Cronbach alpha of 0.972. A two-way ANOVA was conducted with evaluation of judge as the dependent variable and expert-judge agreement and participant-judge agreement as the independent variables. The ANOVA detected a significant main effect for participant-judge agreement, F(1, 577) = 286.03, p < 0.001, η² = 0.35. The main effect for expert-judge agreement was not significant, F(1, 577) = 1.83, p > 0.05, nor was the interaction, F(1, 577) = 1.13, p > 0.05. These findings indicate that the only factor affecting the evaluations was whether the participant's preferred outcome was congruent with the judge's decision.
Aggregating the evaluation of expert items yielded a Cronbach alpha of 0.968. A two-way ANOVA with evaluation of expert as the dependent variable and participant-judge agreement and participant-expert agreement as the independent variables detected only a significant main effect for participant-expert agreement, F(1, 577) = 144.86, p < 0.001, η² = 0.21. The main effect for participant-judge agreement was not significant, F(1, 577) < 1, nor was the interaction, F(1, 577) = 2.57, p > 0.05. Again, these findings indicate that the sole determinant of the evaluations of the experts was whether the participants agreed with them or not. In light of the power analysis discussed above, the ineffectiveness of the experts' opinions cannot be explained by a deficiency in statistical power.
The results also demonstrated the coherence effect: for evaluation of judge, the average interitem correlation was r = 0.84, and for evaluation of expert, the average interitem correlation was r = 0.82.

IV. Discussion
This study replicates the key finding made by Simon and Scurich (2011): lay evaluations of judicial decision making are strongly affected by the participants' agreement with the outcomes of the courts' decisions. Participants give high ratings to the judicial decisions and to the judges themselves when they agree with the outcomes of the decisions, but give low ratings when they disagree with those outcomes. The importance of replication in the social sciences has recently received copious attention (see, e.g., Pashler & Wagenmakers 2012). Replicating the finding of Simon and Scurich (2011) speaks to the robustness of the effect, especially given that we tested a different sample of participants, used a different format of presentation, and measured a different set of dependent variables.
As mentioned in that previous study, the heavy influence of decision outcomes on lay evaluations of the judiciary can be deemed problematic for its legitimacy. People appear to treat judicial decisions more like political decisions than like the rarified product of a constrained legal process. It is not difficult to recognize that this can be harmful for the judiciary's legitimacy, which, in turn, can hinder the maintenance of the rule of law (Fuller 1978; Hart & Sacks [1958] 1994; Nadler 2005; Robinson & Darley 1995; Tyler 2006; Waldron 2011). The limited ability of the judiciary to bridge divisions in the polity is bound to limit social coordination and inhibit political efficacy (Sunstein 2007).
The central finding of this study is that lay evaluations of judicial decision making are not tempered by "general-interest intermediaries," such as legal experts. In all three studies, participants continued to evaluate the judges and the judicial decisions in a way that cohered with their agreement with the decision outcomes, regardless of the reasoned opinions provided by the experts. A power analysis indicated that sufficient statistical power existed to detect an effect for the experts' opinions, yet no such effect was detected in any of the three studies.

The second central finding is that "general-interest intermediaries" are viewed in the public's eye neither as general-interest nor as intermediaries. Instead, the evaluation of the experts themselves was strongly influenced by the outcomes they propounded: experts were rated highly when they endorsed the same outcomes as those preferred by the participants, but received low ratings when they disagreed with those outcomes. Not unlike shooting the messenger, people spurn the expert who disagrees with them; by the same token, they herald the like-minded expert. The similar swamping of evaluations of both judges and experts is a testament to the influence of people's preferred outcomes over other judgments. Our findings thus lead us to share Sunstein's (2007) pessimism about the efficacy of general-interest intermediaries, especially given the waning influence of consensus media and the rise of more insular and ideological media outlets.
The current findings also offer a potential contribution to the political science literature concerning the status of political elites in the realm of public opinion (see Baum & Groeling 2009; Bullock 2011). Our study examines not only how elites might influence the lay public but, importantly, how the lay public evaluates those elites. Our findings illustrate the potential of elites to exacerbate political polarization (on the prevalent state of polarization in the U.S. polity, see McCarty et al. 2006). Given that experts are generally bound to seek ways to enhance their standing in the public debate, it would not be surprising to find that they target their message in accordance with the public's reaction to them. Herein lies the danger: the forward-looking expert might be tempted to offer opinions that are expected to be endorsed by his or her target audience. Given the pervasive tendency of selective exposure (see Festinger 1957; Frey 1986; Jonas et al. 2001), those audiences will likely expose themselves to those whom they favor and hold in high regard (Mutz 2001; Sweeney & Gruber 1984). At the same time, people are likely to shun or dismiss the opinions of experts who espouse outcomes that do not fit with their preferences. In sum, rather than provide a neutral position to anchor the public debate, expert opinions can perpetuate and even escalate the state of polarization.
It cannot be overstated that our study does not seek to challenge the well-established literature that demonstrates the persuasive power of experts (see Chaiken et al. 1989; Petty & Cacioppo 1984; Wilson & Sherrell 1993). As stated, this exploratory study was designed to test the effect of legal experts' commentaries on lay evaluations of judicial decisions in a realistic setting. However, it cannot be denied that this is just one setting. As such, it provides no insight into the possible effect of experts under a host of different circumstances and permutations. Indeed, it is quite possible that our findings would have been different had we provided participants with the credentials of the experts, enriched the cases, added creative and elaborate legal commentary, or reversed the order of presentation of the expert's commentary and the judge's decision. Given the inherent limitations of any single study, we opted for ecologically valid materials. Under the conditions tested, our findings indicate that the commentaries of experts do not appear to yield the desirable moderating effects on the public's evaluations of judicial decisions.
It should also be noted that our findings likely underestimate the power of judicial outcomes in real life. Many of the judicial decisions that are reported in the press touch on salient political and cultural topics, which are often related to people's "core" or "central" attitudes (see Judd & Krosnick 1982; Chong 2000; Sherman & Gorkin 1980), or that constitute moral mandates (see Mullen & Nadler 2008; Skitka 2002). In such charged cases, the coherence effect will likely be compounded by the strong pull of motivated reasoning (see Brownstein et al. 2004; Koehler 1993; Kunda 1990; Munro et al. 2002; Wyer & Frey 1983), thus further exacerbating the effect of decision outcomes on the evaluations of the judiciary and its decisions.
In the other experimental condition, she explained why she supported a verdict for Smith.
My view is based on the following conclusions: The company's collapse was not caused by Smith's message, but by its grave mismanagement. Smith was not motivated by greed, but by a desire to prevent other investors from being misled into a bad investment. I consider the Internet to be more like a telephone system than a newspaper. Thus, according to legal precedent, Smith's posting cannot give rise to a claim of libel.