THE QUARTERLY JOURNAL OF ECONOMICS
Vol. CXX, November 2005, Issue 4

A MEASURE OF MEDIA BIAS*

TIM GROSECLOSE AND JEFFREY MILYO

* We are grateful for the research assistance by Aviva Aminova, Jose Bustos, Anya Byers, Evan Davidson, Kristina Doan, Wesley Hussey, David Lee, Pauline Mena, Orges Obeqiri, Byrne Offutt, Matthew Patterson, David Primo, Darryl Reeves, Susie Rieniets, Thomas Rosholt, Michael Uy, Diane Valos, Michael Visconti, Margaret Vo, Rachel Ward, and Andrew Wright. Also, we are grateful for comments and suggestions by Matthew Baum, Mark Crain, Timothy Groeling, Frances Groseclose, Phillip Gussin, James Hamilton, Wesley Hussey, Chap Lawson, Steven Levitt, Jeffrey Lewis, Andrew Martin, David Mayhew, Jeffrey Minter, Michael Munger, David Primo, Andrew Waddell, Barry Weingast, John Zaller, and Jeffrey Zwiebel. We also owe gratitude to the University of California at Los Angeles, the University of Missouri, Stanford University, and the University of Chicago. These universities paid our salaries, funded our research assistants, and paid for services such as Lexis-Nexis, which were necessary for our data collection. No other organization or person helped to fund this research project.
© 2005 by the President and Fellows of Harvard College and the Massachusetts Institute of Technology. The Quarterly Journal of Economics, November 2005.

We measure media bias by estimating ideological scores for several major media outlets. To compute this, we count the times that a particular media outlet cites various think tanks and policy groups, and then compare this with the times that members of Congress cite the same groups. Our results show a strong liberal bias: all of the news outlets we examine, except Fox News’ Special Report and the Washington Times, received scores to the left of the average member of Congress. Consistent with claims made by conservative critics, CBS Evening News and the New York Times received scores far to the left of center. The most centrist media outlets were PBS NewsHour, CNN’s Newsnight, and ABC’s Good Morning America; among print outlets, USA Today was closest to the center. All of our findings refer strictly to news content; that is, we exclude editorials, letters, and the like.

“The editors in Los Angeles killed the story. They told Witcover that it didn’t ‘come off’ and that it was an ‘opinion’ story. . . . The solution was simple, they told him. All he had to do was get other people to make the same points and draw the same conclusions and then write the article in their words” (emphasis in original). Timothy Crouse, Boys on the Bus [1973, p. 116].

Do the major media outlets in the U. S. have a liberal bias? Few questions evoke stronger opinions, but so far, the debate has largely been one of anecdotes (“How can CBS News be balanced when it calls Steve Forbes’ tax plan ‘wacky’?”) and untested theories (“if the news industry is a competitive market, then how can media outlets be systematically biased?”). Few studies provide an objective measure of the slant of news, and none has provided a way to link such a measure to ideological measures of other political actors. That is, none of the existing measures can say, for example, whether the New York Times is more liberal than Senator Edward Kennedy or whether Fox News is more conservative than Senator Bill Frist. We provide such a measure.
Namely, we compute an adjusted Americans for Democratic Action (ADA) score for various news outlets, including the New York Times, the Washington Post, USA Today, the Drudge Report, Fox News’ Special Report, and all three networks’ nightly news shows. Our results show a strong liberal bias. All of the news outlets except Fox News’ Special Report and the Washington Times received a score to the left of the average member of Congress. And a few outlets, including the New York Times and CBS Evening News, were closer to the average Democrat in Congress than the center. These ?ndings refer strictly to the news stories of the outlets. That is, we omitted editorials, book reviews, and letters to the editor from our sample. To compute our measure, we count the times that a media outlet cites various think tanks and other policy groups. 1 We compare this with the times that members of Congress cite the same think tanks in their speeches on the ?oor of the House and Senate. By comparing the citation patterns, we can construct an ADA score for each media outlet. As a simpli?ed example, imagine that there were only two think tanks, and suppose that the New York Times cited the ?rst think tank twice as often as the second. Our method asks: what is the estimated ADA score of a member of Congress who exhibits the same frequency (2:1) in his or her speeches? This is the score that our method would assign the New York Times. A feature of our method is that it does not require us to make a subjective assessment of how liberal or conservative a think tank is. That is, for instance, we do not need to read policy reports of the 1. Our sample includes policy groups that are not usually called think tanks, such as the NAACP, NRA, and Sierra Club. To avoid using the more unwieldy phrase “think tanks and other policy groups” we often use a shorthand version, “think tanks.” When we use the latter phrase, we mean to include the other groups, such as the NAACP, etc. 1192 QUARTERLY JOURNAL OF ECONOMICSthink tank or analyze its position on various issues to determine its ideology. Instead, we simply observe the ADA scores of the members of Congress who cite it. This feature is important, since an active controversy exists whether, e.g., the Brookings Institution or the RAND Corporation is moderate, left-wing, or right-wing. I. SOME PREVIOUS STUDIES OF MEDIA BIAS Survey research has shown that an almost overwhelming fraction of journalists are liberal. For instance, Povich [1996] reports that only 7 percent of all Washington correspondents voted for George H. W. Bush in 1992, compared with 37 percent of the American public. 2 Lichter, Rothman, and Lichter [1986] and Weaver and Wilhoit [1996] report similar ?ndings for earlier elections. More recently, the New York Times reported that only 8 percent of Washington correspondents thought George W. Bush would be a better president than John Kerry. 3 This compares with 51 percent of all American voters. David Brooks notes that for every journalist who contributed to George W. Bush’s campaign, another 93 contributed to Kerry’s campaign. 4 These statistics suggest that Washington correspondents, as a group, are more liberal than almost any congressional district in the country. For instance, in the Ninth California district, which includes Berkeley, 12 percent voted for Bush in 1992, nearly double the rate of the correspondents. In the Eighth Massachusetts district, which includes Cambridge, 19 percent voted for Bush, approximately triple the rate of the correspondents. 
5 Of course, however, just because a journalist has liberal or conservative views, this does not mean that his or her reporting will be slanted. For instance, as Jamieson [2000, p. 188] notes: “One might hypothesize instead that reporters respond to the cues of those who pay their salaries and mask their own ideologi- 2. Eighty-nine percent of the Washington correspondents voted for Bill Clinton, and two percent voted for Ross Perot. 3. “Finding Biases on the Bus,” John Tierney, New York Times, August 1, 2004. The article noted that journalists outside Washington were not as liberal. Twenty-?ve percent of these journalists favored Bush over Kerry. 4. “Ruling Class War,” New York Times, September 11, 2004. 5. Cambridge and Berkeley’s preferences for Republican presidential candidates have remained fairly constant since 1992. In the House district that contains Cambridge, Bob Dole received 17 percent of the two-party vote in 1996, and George W. Bush received 19 percent in 2000. In the House district that contains Berkeley, Bob Dole received 14 percent of the two-party vote, and George W. Bush received 13 percent. A MEASURE OF MEDIA BIAS 1193cal dispositions. Another explanation would hold that norms of journalism, including ‘objectivity’ and ‘balance’ blunt whatever biases exist.” Or, as Crouse [1973] explains: It is an unwritten law of current political journalism that conservative Republican Presidential candidates usually receive gentler treatment from the press than do liberal Democrats. Since most reporters are moderate or liberal Democrats themselves, they try to offset their natural biases by going out of their way to be fair to conservatives. No candidate ever had a more considerate press corps than Barry Goldwater in 1964, and four years later the campaign press gave every possible break to Richard Nixon. Reporters sense a social barrier between themselves and most conservative candidates; their relations are formal and meticulously polite. But reporters tend to loosen up around liberal candidates and campaign staffs; since they share the same ideology, they can joke with the staffers, even needle them, without being branded the “enemy.” If a reporter has been trained in the traditional, “objective” school of journalism, this ideological and social closeness to the candidate and the staff makes him feel guilty; he begins to compensate; the more he likes and agrees with the candidate personally, the harder he judges him professionally. Like a coach sizing up his own son in spring tryouts, the reporter becomes doubly severe [pp. 355–356]. However, a strong form of the view that reporters offset or blunt their own ideological biases leads to a counterfactual implication. Suppose that it is true that all reporters report objectively, and their ideological views do not color their reporting. If so, then all news would have the same slant. Moreover, if one believes Crouse’s claim that reporters overcompensate in relation to their own ideology, then a news outlet ?lled with conservatives, such as Fox News, should have a more liberal slant than a news outlet ?lled with liberals, such as the New York Times. Spatial models of ?rm location, such as those by Hotelling [1929] or Mullainathan and Shleifer [2003] give theoretical reasons why the media should slant the news exactly as consumers desire. 6 The idea is that if the media did not, then an entrepreneur could form a new outlet that does, and he or she could earn 6. 
Some scholars claim that news outlets cater not to the desires of consumers, but to the desires of advertisers. Consequently, since advertisers have preferences that are more pro-business or pro-free-market than the average consumer, these scholars predict that news outlets will slant their coverage to the right of consumers’ preferences (e.g., see Parenti [1986] or Herman and Chomsky [1988]). While our work ?nds empirical problems with such predictions, Sutter [2002] notes several theoretical problems. Most important, although an advertiser has great incentive to pressure a news outlet to give favorable treatment to his own product or his own business, he has little incentive to pressure for favorable treatment of business in general. Although the total bene?ts of the latter type of pressure may be large, they are dispersed across a large number of businesses, and the advertiser himself would receive only a tiny fraction of the bene?ts. 1194 QUARTERLY JOURNAL OF ECONOMICSgreater-than-equilibrium pro?ts, possibly even driving the other outlets out of business. This is a compelling argument, and even the libertarian Cato Journal has published an article agreeing with the view: in this article Sutter [2001] notes that “Charges of a liberal bias essentially require the existence of a cartel [p. 431].” However, contrary to the prediction of the typical ?rm-location model, we ?nd a systematic liberal bias of the U. S. media. This is echoed by three other studies—Hamilton [2004], Lott and Hassett [2004], and Sutter [2004], the only empirical studies of media bias by economists of which we are aware. Although his primary focus is not on media bias, in one section of his book, Hamilton [2004] analyzes Pew Center surveys of media bias. The surveys show—unsurprisingly—that conservatives tend to believe that there is a liberal bias in the media, while liberals tend to believe there is a conservative bias. While many would simply conclude that this is only evidence that “bias is in the eyes of the beholder,” Hamilton makes the astute point that that individuals are more likely to perceive bias the further the slant of the news is from their own position. Since the same surveys also show that conservatives tend to see a bias more than liberals do, this is evidence of a liberal bias. Lott and Hassett [2004] propose an innovative test for media bias. They record whether the headlines of various economic news stories are positive or negative. For instance, on the day that the Department of Commerce reports that GDP grows by a large degree, a newspaper could instead report “GDP Growth Less than Expected.” Lott and Hassett control for the actual economic ?gures reported by the Department of Commerce, and they include an independent variable that indicates the political party of the president. Of the ten major newspapers that they examine, they ?nd that nine are more likely to report a negative headline if the president is Republican. 7 7. One of the most novel features of the Lott-Hassett paper is that to de?ne unbiased, it constructs a baseline that can vary with exogenous factors. In contrast, some studies de?ne unbiased simply as some sort of version of “presenting both sides of the story.” To see why the latter notion is inappropriate, suppose that a newspaper devoted just as many stories describing the economy under President Clinton as good as it did describing the economy as bad. By the latter notion this newspaper is unbiased. 
However, by Lott and Hassett’s notion the newspaper is unbiased only if the economy under Clinton was average. If instead it was better than average, then Lott and Hassett (as many would recognize as appropriate, including us) would judge the newspaper to have a conservative bias. Like Lott and Hassett, our notion of bias also varies with exogenous factors. For instance, suppose that after a series of events, liberal (conservative) think tanks gain more A MEASURE OF MEDIA BIAS 1195Sutter [2004] collects data on the geographic locations of readers of Time, Newsweek, and U.S. News and World Report. He shows that as a region becomes more liberal (as indicated by its vote share for President Clinton), its consumption of the three major national news magazines increases. With a clever and rigorous theoretical model he shows that, under some reasonable assumptions, this empirical ?nding implies that the U. S. newsmagazine industry, taken as a whole, is biased to the left. Notwithstanding these studies, it is easy to ?nd quotes from prominent journalists and academics who claim that there is no systematic liberal bias among the media in the United States, some even claiming that there is a conservative bias. The following are some examples: Our greatest accomplishment as a profession is the development since World War II of a news reporting craft that is truly non-partisan, and non-ideological, and that strives to be independent of undue commercial or governmental in?uence. . . . It is that legacy we must protect with our diligent stewardship. To do so means we must be aware of the energetic effort that is now underway to convince our readers that we are ideologues. It is an exercise of, in disinformation, of alarming proportions. This attempt to convince the audience of the world’s most ideology-free newspapers that they’re being subjected to agendadriven news re?ecting a liberal bias. I don’t believe our viewers and readers will be, in the long-run, misled by those who advocate biased journalism. 8 . . . when it comes to free publicity, some of the major broadcast media are simply biased in favor of the Republicans, while the rest tend to blur differences between the parties. But that’s the way it is. Democrats should complain as loudly about the real conservative bias of the media as the Republicans complain about its entirely mythical bias . . . 9 The mainstream media does not have a liberal bias. . . . ABC, CBS, NBC, CNN, the New York Times, The Washington Post, Time, Newsweek and the rest—at least try to be fair. 10 respect and credibility (say, because they were better at predicting those events), which causes moderates in Congress to cite them more frequently. By our notion, for a news outlet to remain unbiased, it also must cite the liberal (conservative) think tanks more frequently. The only other media-bias study of which we are aware that also constructs a baseline that controls for exogenous events is Groeling and Kernell’s [1998] study of presidential approval. These researchers examine the extent to which media outlets report increases and decreases in the president’s approval, while controlling for the actual increases and decreases in approval (whether reported by the media or not). The focus of the paper, however, is on whether news outlets have a bias toward reporting good or bad news, not on any liberal or conservative bias. 8. 
New York Times Executive Editor Howell Raines accepting the “George Beveridge Editor of the Year Award” at a National Press Foundation dinner, shown live on C-SPAN2 February 20, 2003. 9. Paul Krugman, “Into the Wilderness,” New York Times, November 8, 2002. 10. Al Franken [2003, p. 3] Lies and the Lying Liars Who Tell Them: A Fair and Balanced Look at the Right. 1196 QUARTERLY JOURNAL OF ECONOMICSI’m going out telling the story that I think is the biggest story of our time: how the right-wing media has become a partisan propaganda arm of the Republican National Committee. We have an ideological press that’s interested in the election of Republicans, and a mainstream press that’s interested in the bottom line. Therefore, we don’t have a vigilant, independent press whose interest is the American people. 11 II. DATA The web site, www.wheretodoresearch.com lists 200 of the most prominent think tanks and policy groups in the United States. Using the of?cial web site of Congress, http://thomas. loc.gov, we and our research assistants searched the Congressional Record for instances where a member of Congress cited one of these think tanks. We also recorded the average adjusted ADA score of the member who cited the think tank. We use adjusted scores, constructed by Groseclose, Levitt, and Snyder [1999], because we need the scores to be comparable across time and chambers. 12 Groseclose, Levitt, and Snyder use the 1980 House scale as their base year and chamber. It is convenient for us to choose a scale that gives centrist members of Congress a score of about 50. For this reason, we converted scores to the 1999 House scale. 13 Along with direct quotes of think tanks, we sometimes included sentences that were not direct quotes. For instance, many of the citations were cases where a member of Congress noted “This bill is supported by think tank X.” Also, members of Congress sometimes insert printed material into the Congressional Record, such as a letter, a newspaper article, or a report. If a think tank was cited in such material or if a think tank member 11. Bill Moyers, quoted in “Bill Moyers Retiring from TV Journalism,” Frazier Moore, Associated Press Online, December 9, 2004. 12. Groseclose, Levitt, and Snyder [1999] argue that the underlying scales of interest group scores, such as those compiled by the Americans for Democratic Action, can shift and stretch across years or across chambers. This happens because the roll call votes that are used to construct the scores are not constant across time, nor across chambers. They construct an index that allows one to convert ADA scores to a common scale so that they can be compared across time and chambers. They call such scores adjusted ADA scores. 13. Importantly, we apply this conversion to congressional scores as well as media scores. Since our method can only make relative assessments of the ideology of media outlets (e.g., how they compare with members of Congress or the average American voter), this transformation is benign. Just as the average temperature in Boston is colder than the average temperature in Baltimore, regardless if one uses a Celsius scale or Fahrenheit scale, all conclusions we draw in this paper are unaffected by the choice to use the 1999 House scale or the 1980 House scale. A MEASURE OF MEDIA BIAS 1197wrote the material, we treated it as if the member of Congress had read the material in his or her speech. 
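In computational terms, the quantity recorded here for each think tank is a group-by average: pool every congressional citation of the group together with the citing member’s adjusted ADA score, then average those scores (and count the citations). The R sketch below illustrates that bookkeeping under our own assumptions; the data frame and column names are hypothetical illustrations, not the authors’ actual files.

```r
# Hypothetical citation-level data: one row per congressional citation.
# 'group' is the think tank cited; 'member_ada' is the citing member's
# adjusted ADA score (1999 House scale).
cong_cites <- data.frame(
  group      = c("Heritage Foundation", "Brookings Institution",
                 "Brookings Institution", "Sierra Club"),
  member_ada = c(15.2, 48.0, 61.5, 88.9)
)

# Average adjusted ADA score of the legislators who cite each group,
# plus the number of congressional citations per group.
avg_score <- aggregate(member_ada ~ group, data = cong_cites, FUN = mean)
n_cites   <- as.data.frame(table(cong_cites$group))
names(n_cites) <- c("group", "citations")

merge(avg_score, n_cites, by = "group")
```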
We did the same exercise for stories that media outlets report, except with media outlets we did not record an ADA score. Instead, our method estimates such a score. Sometimes a legislator or journalist noted an action that a think tank had taken—e.g., that it raised a certain amount of money, initiated a boycott, ?led a lawsuit, elected new of?cers, or held its annual convention. We did not record such cases in our data set. However, sometimes in the process of describing such actions, the journalist or legislator would quote a member of the think tank, and the quote revealed the think tank’s views on national policy, or the quote stated a fact that is relevant to national policy. If so, we would record that quote in our data set. For instance, suppose that a reporter noted “The NAACP has asked its members to boycott businesses in the state of South Carolina. ‘We are initiating this boycott, because we believe that it is racist to ?y the Confederate Flag on the state capitol,’ a leader of the group noted.” In this instance, we would count the second sentence that the reporter wrote, but not the ?rst. Also, we omitted the instances where the member of Congress or journalist only cited the think tank so he or she could criticize it or explain why it was wrong. About 5 percent of the congressional citations and about 1 percent of the media citations fell into this category. In the same spirit, we omitted cases where a journalist or legislator gave an ideological label to a think tank (e.g., “Even the conservative Heritage Foundation favors this bill.”). The idea is that we only wanted cases where the legislator or journalist cited the think tank as if it were a disinterested expert on the topic at hand. About 2 percent of the congressional citations and about 5 percent of the media citations involved an ideological label. 14 14. In the Appendix we report the results when we do include citations that include an ideological label. When we include these data, this does not cause a substantial leftward or rightward movement in media scores—the average media score decreased by approximately 0.5 points; i.e., it makes the media appear slightly more conservative. The greater effect was to cause media outlets to appear more centrist. For instance, the New York Times and CBS Evening News tended to give ideological labels to conservative think tanks more often than they did to liberal think tanks. As a consequence, when we include the labeled observations, their scores, respectively, decreased (i.e., became more conservative) by 3.8 and 1.6 points. Meanwhile, Fox News’ Special Report tended to do the opposite. When we included labeled observations, its score increased (i.e., became more liberal) by 1.8 points. We think that such an asymmetric treatment of think tanks 1198 QUARTERLY JOURNAL OF ECONOMICSFor the congressional data, we coded all citations that occurred during the period January 1, 1993, to December 31, 2002. This covered the 103rd through 107th Congresses. We used the period 1993 to 1999 to calculate the average adjusted ADA score for members of Congress. 15 As noted earlier, our media data do not include editorials, letters to the editor, or book reviews. That is, all of our results refer only to the bias of the news of media. There are several reasons why we do not include editorials. 
The primary one is that there is little controversy over the slant of editorial pages; e.g., few would disagree that Wall Street Journal editorials are conservative, while New York Times editorials are liberal. However, there is a very large controversy about the slant of the news of various media outlets. A second reason involves the effect (if any) that the media have on individuals’ political views. It is reasonable to believe that a biased outlet that pretends to be centrist has more of an effect on readers’ or viewers’ beliefs than, say, an editorial page that does not pretend to be centrist. A third reason involves dif?culties in coding the data. Editorial and opinion writers, much more than news writers, are sometimes sarcastic when they quote members of think tanks. If our coders do not catch the sarcasm, they record the citation as a favorable one. (i.e., to give labels more often to one side) is itself a form of media bias. This is why we base our main conclusions on the nonlabeled data, which accounts for this form of bias. 15. Groseclose, Levitt, and Snyder [1999] have not computed adjusted scores for years after 1999. One consequence of this is that members who ?rst entered Congress in 2001 do not have adjusted scores. Consequently, we omitted these observations from our sample. This omission causes little harm, if any, to our estimation procedure. First, the citations of the new members comprised less than one-half of 1 percent our sample. Second, the ideologies of the new members were fairly representative of the old members. Third, even if the new members were not representative, this fact alone would not cause a bias in our method. To see this, suppose that these omitted members were disproportionately extreme liberals. To estimate ADA scores for a media outlet, we need estimates of the citation behavior of a range of members with ideologies near the ideology of the media outlet. If we had omitted some extreme liberal members of Congress, this does not bias our estimate of the citation pattern of the typical liberal, it only makes it less precise, since we have less data for these members. If, on the other hand, new members behaved differently from old members who have the same adjusted ADA score, then this could cause a bias. For instance, suppose that new members with a 70 adjusted ADA score tend to cite conservative think tanks more often than do old members with a 70 score. Then this would mean that Congress’s citation patterns are really more conservative than we have recorded. This means the media’s citation patterns are really more liberal (relative to Congress) than they appear in our data set, which would mean that the media is really more liberal than our estimates indicate. However, we have no evidence to believe this (or the opposite) is the case. And even if it were, because the new members are such a small portion of the sample, any bias should be small. A MEASURE OF MEDIA BIAS 1199This biases the results toward making the editorials appear more centrist than they really are. In Table I we list the 50 groups from our list that were most commonly cited by the media. The ?rst column lists the average ADA score of the legislator citing the think tank. These averages closely correspond to conventional wisdom about the ideological positions of the groups. 
For instance, the Heritage Foundation and Christian Coalition, with average scores of 20.0 and 22.6, are near the conservative end; the Economic Policy Institute and the Children’s Defense Fund (80.3 and 82.0) are near the liberal end; and the Brookings Institution and the World Wildlife Fund (53.3 and 50.4) are in the middle of our mix of think tanks. While most of these averages closely agree with the conventional wisdom, two cases are somewhat anomalous. The ?rst is the ACLU. The average score of legislators citing it was 49.8. Later, we shall provide reasons why it makes sense to de?ne the political center at 50.1. This suggests that the ACLU, if anything, is a right-leaning organization. The reason the ACLU has such a low score is that it opposed the McCain-Feingold Campaign Finance bill, and conservatives in Congress cited this often. In fact, slightly more than one-eighth of all ACLU citations in Congress were due to one person alone, Mitch McConnell (R.-KY), perhaps the chief critic of McCain-Feingold. If we omit McConnell’s citations, the ACLU’s average score increases to 55.9. Because of this anomaly, in the Appendix we report the results when we repeat all of our analyses but omit the ACLU data. The second apparent anomaly is the RAND Corporation, which has a fairly liberal average score, 60.4. We mentioned this ?nding to some employees of RAND, who told us they were not surprised. While RAND strives to be middle-of-the-road ideologically, the more conservative scholars at RAND tend to work on military studies, while the more liberal scholars tend to work on domestic studies. Because the military studies are sometimes classi?ed and often more technocratic than the domestic studies, the media and members of Congress tend to cite the domestic studies disproportionately. As a consequence, RAND appears liberal when judged by these citations. It is important to note that this fact—that the research at RAND is more conservative than the numbers in Table I suggest—will not bias our results. 
To see this, think of RAND as two think tanks: RAND I, the left-leaning think tank which produces the 1200 QUARTERLY JOURNAL OF ECONOMICSTABLE I THE 50 MOST-CITED THINK TANKS AND POLICY GROUPS BY THE MEDIA IN OUR SAMPLE Think tank/policy group Average score of legislators who cite the group Number of citations by legislators Number of citations by media outlets 1 Brookings Institution 53.3 320 1392 2 American Civil Liberties Union 49.8 273 1073 3 NAACP 75.4 134 559 4 Center for Strategic and International Studies 46.3 79 432 5 Amnesty International 57.4 394 419 6 Council on Foreign Relations 60.2 45 403 7 Sierra Club 68.7 376 393 8 American Enterprise Institute 36.6 154 382 9 RAND Corporation 60.4 352 350 10 National Ri?e Association 45.9 143 336 11 American Association of Retired Persons 66.0 411 333 12 Carnegie Endowment for International Peace 51.9 26 328 13 Heritage Foundation 20.0 369 288 14 Common Cause 69.0 222 287 15 Center for Responsive Politics 66.9 75 264 16 Consumer Federation of America 81.7 224 256 17 Christian Coalition 22.6 141 220 18 Cato Institute 36.3 224 196 19 National Organization for Women 78.9 62 195 20 Institute for International Economics 48.8 61 194 21 Urban Institute 73.8 186 187 22 Family Research Council 20.3 133 160 23 Federation of American Scientists 67.5 36 139 24 Economic Policy Institute 80.3 130 138 25 Center on Budget and Policy Priorities 88.3 224 115 26 National Right to Life Committee 21.6 81 109 27 Electronic Privacy Information Center 57.4 19 107 28 International Institute for Strategic Studies 41.2 16 104 29 World Wildlife Fund 50.4 130 101 30 Cent. for Strategic and Budgetary Assessments 33.9 7 89 31 National Abortion and Reproductive Rights Action League 71.9 30 88 32 Children’s Defense Fund 82.0 231 78 33 Employee Bene?t Research Institute 49.1 41 78 34 Citizens Against Government Waste 36.3 367 76 35 People for the American Way 76.1 63 76 36 Environmental Defense Fund 66.9 137 74 37 Economic Strategy Institute 71.9 26 71 38 People for the Ethical Treatment of Animals 73.4 5 70 39 Americans for Tax Reform 18.7 211 67 40 Citizens for Tax Justice 87.8 92 67 A MEASURE OF MEDIA BIAS 1201research that the media and members of Congress tend to cite, and RAND II, the conservative think tank which produces the research that they tend not to cite. Our results exclude RAND II from the analysis. This causes no more bias than excluding any other think tank that is rarely cited in Congress or the media. The second and third columns, respectively, report the number of congressional and media citations in our data. These columns give some preliminary evidence that the media is liberal, relative to Congress. To see this, de?ne as right-wing a think tank that has an average score below 40. Next, consider the ten mostcited think tanks by the media. Only one right-wing think tank makes this list: the American Enterprise Institute. In contrast, consider the ten most-cited think tanks by Congress. (These are the National Taxpayers Union, AARP, Amnesty International, Sierra Club, Heritage Foundation, Citizens Against Government Waste, RAND, Brookings, NFIB, and ACLU.) Four of these are right-wing. For perspective, in Table II we list the average adjusted ADA score of some prominent members of Congress, including some well-known moderates. These include the most conservative Democrat in our sample, Nathan Deal (GA), and the most liberal Republican in our sample, Constance Morella (MD). 
Although Nathan Deal became a Republican in 1995, the score that we list TABLE I (CONTINUED) Think tank/policy group Average score of legislators who cite the group Number of citations by legislators Number of citations by media outlets 41 National Federation of Independent Businesses 26.8 293 66 42 Hudson Institute 25.3 73 64 43 National Taxpayers Union 34.3 566 63 44 Stimson Center 63.6 26 63 45 Center for Defense Information 79.0 28 61 46 Handgun Control, Inc. 77.2 58 61 47 Hoover Institution 36.5 35 61 48 Nixon Center 21.7 6 61 49 American Conservative Union 16.1 43 56 50 Manhattan Institute 32.0 18 54 1202 QUARTERLY JOURNAL OF ECONOMICSin the table is calculated only from his years as a Democrat. 16 The table also lists the average scores of the Republican and Democratic parties. 17 To calculate average scores, for each member we note all of his or her scores for the seven-year period for which we 16. In fact, for all members of Congress who switched parties, we treated them as if they were two members, one for when they were a Democrat and one for when they were a Republican. 17. The party averages re?ect the midpoint of the House and Senate averages. Thus, they give equal weight to each chamber, not to each legislator, since there are more House members than senators. TABLE II AVERAGE ADJUSTED ADA SCORES OF LEGISLATORS Legislator Average score Maxine Waters (D-CA) 99.6 Edward Kennedy (D-MA) 88.8 John Kerry (D-MA) 87.6 Average Democrat 84.3 Tom Daschle (D-SD) 80.9 Joe Lieberman (D-CT) 74.2 Constance Morella (R-MD) 68.2 Ernest Hollings (D-SC) 63.7 John Breaux (D-LA) 59.5 Christopher Shays (R-CT) 54.6 Arlen Specter (R-PA) 51.3 James Leach (R-IA) 50.3 Howell He?in (D-AL) 49.7 Tom Campbell (R-CA) 48.6 Sam Nunn (D-GA) 48.0 Dave McCurdy (D-OK) 46.9 Olympia Snowe (R-ME) 43.0 Susan Collins (R-ME) 39.3 Charlie Stenholm (D-TX) 36.1 Rick Lazio (R-NY) 35.8 Tom Ridge (R-PA) 26.7 Nathan Deal (D-GA) 21.5 Joe Scarborough (R-FL) 17.7 Average Republican 16.1 John McCain (R-AZ) 12.7 Bill Frist (R-TN) 10.3 Tom DeLay (R-TX) 4.7 The table lists average adjusted ADA scores. The method for adjusting scores is described in Groseclose, Levitt, and Snyder [1999]. Scores listed are converted to the 1999 scale and are an average of each legislator’s scores during the 1993–1999 period. The one exception is Nathan Deal, who switched parties in 1995; only his score as a Democrat in 1994 –1995 is included. Deal is the most conservative Democrat over this time period; Constance Morella is the most liberal Republican. A MEASURE OF MEDIA BIAS 1203recorded adjusted scores (1993–1999). Then we calculated the average over this period. Because at times there is some subjectivity in coding our data, when we hired our research assistants we asked for whom they voted or would have voted if they were limited to choosing Al Gore or George Bush. We chose research assistants so that approximately half our data was coded by Gore supporters and half by Bush supporters. For each media outlet we selected an observation period that we estimated would yield at least 300 observations (citations). Because magazines, television shows, and radio shows produce less data per show or issue (e.g., a transcript for a 30-minute television show contains only a small fraction of the sentences that are contained in a newspaper), with some outlets we began with the earliest date available in Lexis-Nexis. 
We did this for (i) the three magazines that we analyze, (ii) the ?ve evening television news broadcasts that we analyze, and (iii) the one radio program that we analyze. 18 III. OUR DEFINITION OF BIAS Before proceeding, it is useful to clarify our de?nition of bias. Most important, the de?nition has nothing to do with the honesty or accuracy of the news outlet. Instead, our notion is more like a taste or preference. For instance, we estimate that the centrist United States voter during the late 1990s had a left-right ideology approximately equal to that of Arlen Specter (R-PA) or Sam Nunn (D-GA). Meanwhile, we estimate that the average New York Times article is ideologically very similar to the average speech by Joe Lieberman (D-CT). Next, since vote scores show Lieberman to be more liberal than Specter or Nunn, our method concludes that the New York Times has a liberal bias. However, in no way does this imply that the New York Times is inaccurate or dishonest— just as the vote scores do not imply that Joe Lieberman is any less honest than Sam Nunn or Arlen Specter. In contrast, other writers, at least at times, do de?ne bias as a matter of accuracy or honesty. We emphasize that our differences with such writers are ones of semantics, not substance. If, say, a reader insists that bias should refer to accuracy or honesty, then we urge him or her simply to substitute another 18. Table III, in Section V, lists the period of observation for each media outlet. 1204 QUARTERLY JOURNAL OF ECONOMICSword wherever we write “bias.” Perhaps “slant” is a good alternative. However, at the same time, we argue that our notion of bias is meaningful and relevant, and perhaps more meaningful and relevant than the alternative notion. The main reason, we believe, is that only seldom do journalists make dishonest statements. Cases such as Jayson Blair, Stephen Glass, or the falsi?ed memo at CBS are rare; they make headlines when they do occur; and much of the time they are orthogonal to any political bias. Instead, for every sin of commission, such as those by Glass or Blair, we believe that there are hundreds, and maybe thousands, of sins of omission—cases where a journalist chose facts or stories that only one side of the political spectrum is likely to mention. For instance, in a story printed on March 1, 2002, the New York Times reported that (i) the IRS increased its audit rate on the “working poor” (a phrase that the article de?nes as any taxpayer who claimed an earned income tax credit); while (ii) the agency decreased its audit rate on taxpayers who earn more than $100,000; and (iii) more than half of all IRS audits involve the working poor. The article also notes that (iv) “The roughly 5 percent of taxpayers who make more than $100,000 . . . have the greatest opportunities to shortchange the government because they receive most of the nonwage income.” Most would agree that the article contains only true and accurate statements; however, most would also agree that the statements are more likely to be made by a liberal than a conservative. Indeed, the centrist and right-leaning news outlets by our measure (the Washington Times, Fox News’ Special Report, the Newshour with Jim Lehrer, ABC’s Good Morning America, and CNN’s Newsnight with Aaron Brown) failed to mention any of these facts. Meanwhile, three of the outlets on the left side of our spectrum (CBS Evening News, USA Today, and the [news pages of the] Wall Street Journal) did mention at least one of the facts. 
Likewise, on the opposite side of the political spectrum there are true and accurate facts that conservatives are more likely to state than liberals. For instance, on March 28, 2002, the Washington Times, the most conservative outlet by our measure, reported that Congress earmarked $304,000 to restore opera houses in Connecticut, Michigan, and Washington. 19 Meanwhile, none of 19. We assert that this statement is more likely to be made by a conservative because it suggests that government spending is ?lled with wasteful projects. This, conservatives often argue, is a reason that government should lower taxes. A MEASURE OF MEDIA BIAS 1205the other outlets in our sample mentioned this fact. Moreover, the Washington Times article failed to mention facts that a liberal would be more likely to note. For instance, it did not mention that the $304,000 comprises a very tiny portion of the federal budget. We also believe that our notion of bias is the one that is more commonly adopted by other authors. For instance, Lott and Hassett [2004] do not assert that one headline in their data set is false (e.g., “GDP Rises 5 Percent”) while another headline is true (e.g., “GDP Growth Less Than Expected”). Rather, the choice of headlines is more a question of taste, or perhaps fairness, than a question of accuracy or honesty. Also, much of Goldberg’s [2002] and Alterman’s [2003] complaints about media bias are that some stories receive scant attention from the press, not that the stories receive inaccurate attention. For instance, Goldberg notes how few stories the media devote to the problems faced by children of dual-career parents. On the opposite side, Alterman notes how few stories the media devote to corporate fraud. Our notion of bias also seems closely aligned to the notion described by Bozell and Baker [1990, p. 3]: “But though bias in the media exists, it is rarely a conscious attempt to distort the news. It stems from the fact that most members of the media elite have little contact with conservatives and make little effort to understand the conservative viewpoint. Their friends are liberals, what they read and hear is written by liberals.” 20 Similar to the facts and stories that journalists report, the citations that they gather from experts are also very rarely dishonest or inaccurate. Many, and perhaps most, simply indicate the side of an issue that the expert or his or her organization favors. For instance, on April 27, 2002, the New York Times reported that Congress passed a $100 billion farm subsidies bill that also gave vouchers to the elderly to buy fresh fruits and vegetables. “This is a terri?c outcome—one of the most important pieces of social welfare legislation this year,” said Stacy Dean of the Center on Budget and Policy Priorities, her only quote in the article. In another instance, on May 19, 2001, CBS Evening News described President Bush’s call for expanding nuclear power. It quoted the Sierra Club’s Daniel Becker: “[S]witching from coal to nuclear power is like giving up smoking and taking up crack.” 20. We were directed to this passage by Sutter’s [2001] article, which adopts nearly the same de?nition of bias as we do. 1206 QUARTERLY JOURNAL OF ECONOMICSMost would agree that these statements are more normative than positive; that is, they are more an indication of the author’s preferences than a fact or prediction. Similarly, another large fraction of cases involve the organization’s views of politicians. 
For instance, on March 29, 2002, the Washington Times reported that the National Taxpayers’ Union (NTU) gave Hillary Clinton a score of 3 percent on its annual rating of Congress. The story noted that the score, according to the NTU, was “the worst score for a Senate freshman in their first year in office that the NTU has ever recorded.”

Finally, many other citations refer to facts that are generally beyond dispute. However, like the facts that reporters themselves note, these facts are ones that conservatives and liberals are not equally likely to state. For instance, on March 5, 1992, CBS Evening News reported a fact that liberals are more likely to note than conservatives: “The United States now has greater disparities of income than virtually any Western European country,” said Robert Greenstein of the Center on Budget and Policy Priorities. Meanwhile, on May 30, 2003, CNN’s Newsnight with Aaron Brown noted a fact that conservatives are more likely to state than liberals. In a story about the FCC’s decision to weaken regulations about media ownership, it quoted Adam Thierer of the Cato Institute, “[L]et’s start by stepping back and taking a look at . . . the landscape of today versus, say, 10, 15, 25, 30 years ago. And by almost every measure that you can go by, you can see that there is more diversity, more competition, more choice for consumers and citizens in these marketplaces.” 21

21. Like us, Mullainathan and Shleifer [2003] define bias as an instance where a journalist fails to report a relevant fact, rather than chooses to report a false fact. However, unlike us, Mullainathan and Shleifer define bias as a question of accuracy, not a taste or preference. More specifically, their model assumes that with any potential news story, there are a finite number of facts that apply to the story. By their definition, a journalist is unbiased only if he or she reports all these facts. (However, given that there may be an unwieldy number of facts that the journalist could mention, it also seems consistent with the spirit of their definition that if the journalist merely selects facts randomly from this set or if he or she chooses a representative sample, then this would also qualify as unbiased.) As an example, suppose that, out of the entire universe of facts about free trade, most of the facts imply that free trade is good. However, suppose that liberals and moderates in Congress are convinced that it is bad, and hence in their speeches they state more facts about its problems. Under Mullainathan and Shleifer’s definition, to be unbiased a journalist must state more facts about the advantages of free trade—whereas, under our definition a journalist must state more facts about the disadvantages of free trade. Again, we emphasize that our differences on this point are ones of semantics. Each notion of bias is meaningful and relevant. And if a reader insists that “bias” should refer to one notion instead of the other, we suggest that he or she substitute a different word for the other notion, such as “slant.” Further, we suggest that Mullainathan and Shleifer’s notion is an ideal that a journalist perhaps should pursue before our notion. Nevertheless, we suggest a weakness of Mullainathan and Shleifer’s notion: it is very inconvenient for empirical work, and perhaps completely infeasible. Namely, it would be nearly impossible—and at best a very subjective exercise—for a researcher to try to determine all the facts that are relevant for a given news story. Likewise, it would be very difficult, and maybe impossible, for a journalist to determine this set of facts. To see this, consider just a portion of the facts that may be relevant to a news story, the citations from experts. There are hundreds, and maybe thousands, of think tanks, not to mention hundreds of academic departments. At what point does the journalist decide that a think tank or academic department is so obscure that it does not need to be contacted for a citation? Further, most think tanks and academic departments house dozens of members. This means that an unbiased journalist would have to speak to a huge number of potential experts. Moreover, even if the journalist could contact all of these experts, a further problem is how long to talk to them. At what point does the journalist stop gathering information from one particular expert before he or she is considered unbiased? Even if a journalist only needs to contact a representative sample of these experts, a problem still exists over defining the relevant universe of experts. Again, when is an expert so obscure that he or she should not be included in the universe? A similar problem involves the journalist’s choice of stories to pursue. A news outlet can choose from a huge—and possibly infinite—number of news stories. Although Mullainathan and Shleifer’s model focuses only on the bias for a given story, a relevant source of bias is the journalist’s choice of stories to cover. It would be very difficult for a researcher to construct a universe of stories from which journalists choose to cover. For instance, within this universe, what proportion should involve the problems of dual-career parents? What proportion should involve corporate fraud?

IV. A SIMPLE STRUCTURAL MODEL

Define $x_i$ as the average adjusted ADA score of the $i$th member of Congress. Given that the member cites a think tank, we assume that the utility that he or she receives from citing the $j$th think tank is

(1)   $a_j + b_j x_i + e_{ij}$.

The parameter $b_j$ indicates the ideology of the think tank. Note that if $x_i$ is large (i.e., the legislator is liberal), then the legislator receives more utility from citing the think tank if $b_j$ is large. The parameter $a_j$ represents a sort of “valence” factor (as political scientists use the term) for the think tank. It captures nonideological factors that lead legislators and journalists to cite the think tank. Such factors may include a reputation for high-quality and objective research, which may be orthogonal to any ideological leanings of the think tank.

We assume that $e_{ij}$ is distributed according to a Weibull distribution. As shown by McFadden [1974] (also see Judge et al. [1985, pp. 770–772]), this implies that the probability that member $i$ selects the $j$th think tank is

(2)   $\exp(a_j + b_j x_i) \Big/ \sum_{k=1}^{J} \exp(a_k + b_k x_i)$,

where $J$ is the total number of think tanks in our sample. Note that this probability term is no different from the one we see in a multinomial logit (where the only independent variable is $x_i$).

Define $c_m$ as the estimated adjusted ADA score of the $m$th media outlet. Similar to the members of Congress, we assume that the utility that it receives from citing the $j$th think tank is

(3)   $a_j + b_j c_m + e_{mj}$.

We assume that $e_{mj}$ is distributed according to a Weibull distribution. This implies that the probability that media outlet $m$ selects the $j$th think tank is

(4)   $\exp(a_j + b_j c_m) \Big/ \sum_{k=1}^{J} \exp(a_k + b_k c_m)$.
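Equations (2) and (4) are conditional-logit (softmax) choice probabilities, so the congressional and media citations can be pooled into a single log-likelihood in which the legislators’ scores $x_i$ enter as data while each outlet’s score $c_m$ is a free parameter. The R sketch below shows one way such a likelihood could be written; the data layout and object names are our own illustration, not the paper’s code, and the normalization and think-tank groupings that the actual estimation requires are described in the paragraphs that follow.

```r
# Choice probability from equations (2) and (4): a softmax over the J think tanks.
# 'a' and 'b' are length-J parameter vectors; 'pos' is the citer's ideological
# position (x_i for a member of Congress, c_m for a media outlet).
cite_prob <- function(a, b, pos) {
  u <- a + b * pos
  exp(u) / sum(exp(u))
}

# Negative joint log-likelihood. 'theta' stacks a (J values), b (J values), and
# the M outlet scores. 'cong' and 'media' are hypothetical data frames with one
# row per citation: 'tank' is the index of the think tank cited, 'ada' is the
# citing member's adjusted ADA score, and 'outlet' is the outlet's index.
neg_loglik <- function(theta, cong, media, J, M) {
  a  <- theta[1:J]
  b  <- theta[(J + 1):(2 * J)]
  cm <- theta[(2 * J + 1):(2 * J + M)]
  ll_cong  <- sum(log(mapply(function(j, x) cite_prob(a, b, x)[j],
                             cong$tank, cong$ada)))
  ll_media <- sum(log(mapply(function(j, m) cite_prob(a, b, cm[m])[j],
                             media$tank, media$outlet)))
  -(ll_cong + ll_media)
}

# Estimates could then be obtained with a nonlinear minimizer, e.g.
# fit <- nlm(neg_loglik, p = start_values, cong = cong, media = media, J = J, M = M)
```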
Although this term is similar to the term that appears in a multinomial logit, we cannot use multinomial logit to estimate the parameters. The problem is that $c_m$, a parameter that we estimate, appears where normally we would have an independent variable. Instead, we construct a likelihood function from (2) and (4), and we use the “nlm” (nonlinear minimization) command in R to obtain estimates of each $a_j$, $b_j$, and $c_m$. Similar to a multinomial logit, it is impossible to identify each $a_j$ and $b_j$. Consequently, we arbitrarily choose one think tank and set its values of $a_j$ and $b_j$ to zero. It is convenient to choose a think tank that is cited frequently. Also, to make most estimates of the $b_j$’s positive, it is convenient to choose a think tank that is conservative. Consequently, we chose the Heritage Foundation. It is easy to prove that this choice does not affect our estimates of $c_m$. That is, if we had chosen a different think tank, then all estimates of $c_m$ would be unchanged.

This identification problem is not just a technical point; it also has an important substantive implication. Our method does not need to determine any sort of assessment of the absolute ideological position of a think tank. It only needs to assess the relative position. In fact, our method cannot assess absolute positions. As a concrete example, consider the estimated $b_j$’s for AEI and the Brookings Institution. These values are .026 and .038. The fact that the Brookings estimate is larger means that Brookings is more liberal than AEI. (More precisely, it means that as a legislator or journalist becomes more liberal, he or she prefers more and more to cite Brookings than AEI.) These estimates are consistent with the claim that AEI is conservative (in an absolute sense), while Brookings is liberal. But they are also consistent with a claim, e.g., that AEI is moderate-left while Brookings is far-left (or also the possibility that AEI is far-right while Brookings is moderate-right). This is related to the fact that our model cannot fully identify the $b_j$’s; that is, we could add the same constant to each and the value of the likelihood function (and therefore the estimates of the $c_m$’s) would remain unchanged.

One difficulty that arose in the estimation process is that it takes an unwieldy amount of time to estimate all of the parameters. If we had computed a separate $a_j$ and $b_j$ for each think tank in our sample, then we estimate that our model would take over two weeks to converge and produce estimates. 22 Complicating this, we compute estimates for approximately two dozen different specifications of our basic model. (Most of these are to test restrictions of parameters. For example, we run one specification where the New York Times and NPR’s Morning Edition are constrained to have the same estimate of $c_m$.) Thus, if we estimated the full version of the model for each specification, our computer would take approximately one year to produce all the estimates.

22. Originally we used Stata to try to compute estimates. With this statistical package we estimate that it would have taken eight weeks for our computer to converge and produce estimates.

Instead, we collapsed data from many of the rarely cited think tanks into six mega think tanks. Specifically, we estimated a separate $a_j$ and $b_j$ for the 44 think tanks that were most-cited by the media. These comprised 85.6 percent of the total number of media citations. With the remaining think tanks, we ordered them left to right according to the average ADA score of the legislators who cited them. Let $p_{\min}$ and $p_{\max}$ be the minimum and maximum average scores for these think tanks. To create the mega think tanks, we defined five cut points to separate them. Specifically, we define cut point $i$ as

(5)   $p_i = p_{\min} + (i/6)(p_{\max} - p_{\min})$.

In practice, these five cut points were 22.04, 36.10, 50.15, 64.21, and 78.27.
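As a quick illustration of equation (5), the five cut points are simply equally spaced interior points between $p_{\min}$ and $p_{\max}$. The small R sketch below shows the arithmetic; the extremes in the example are made up for illustration, since the paper reports only the resulting cut points, not $p_{\min}$ and $p_{\max}$ themselves.

```r
# Equation (5): cut point i lies i/6 of the way from p_min to p_max, i = 1, ..., 5.
cut_points <- function(p_min, p_max) p_min + (1:5) / 6 * (p_max - p_min)

# Illustrative (made-up) extremes; the actual estimation produced cut points
# 22.04, 36.10, 50.15, 64.21, and 78.27.
cut_points(p_min = 8, p_max = 92)
#> [1] 22 36 50 64 78
```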
The number of actual and mega think tanks to include (respectively, 44 and 6) is a somewhat arbitrary choice. We chose 50 as the total number because we often used the mlogit procedure in Stata to compute seed values. This procedure is limited to at most 50 “choices,” which meant that we could estimate $a_j$’s and $b_j$’s for at most 50 think tanks. This still leaves an arbitrary choice about how many of the 50 think tanks should be actual think tanks and how many should be mega think tanks. We experimented with several different choices. Some choices made the media appear slightly more liberal than others. We chose six as the number of mega think tanks, because it produced approximately the average of the estimates. In the Appendix we also report results when instead we choose 2, 3, 4, 5, 7, or 8 as the number of mega think tanks. Our choice to use 50 as the total number of actual and mega think tanks, if anything, appears to make the media appear more conservative than they really are. In the Appendix we report results when instead we chose 60, 70, 80, and 90 as the total number of actual and mega think tanks. In general, these choices cause the average estimate of $c_m$ to increase by approximately one or two points.

V. RESULTS

In Table III we list the estimates of $c_m$, the adjusted ADA scores for media outlets. The ordering of the scores is largely consistent with conventional wisdom. For instance, the two most conservative outlets are the Washington Times and Fox News’ Special Report, two outlets that are often called conservative (e.g., see Alterman [2003]). Near the liberal end are CBS Evening News and the New York Times. Again, these are largely consistent with the conventional wisdom. For instance, CBS Evening News was the target of a best-selling book by Goldberg [2002], a former reporter who documents several instances of liberal bias at the news show. Further, some previous scholarly work shows CBS Evening News to be the most liberal of the three network evening news shows. Hamilton [2004] recorded the congressional roll call votes that the Americans for Democratic Action chose for its annual scorecard, and he examined how often each network covered the roll calls. Between 1969 and 1998, CBS Evening News consistently covered these roll calls more frequently than did the other two networks. 23

23. However, Hamilton also notes that CBS covered roll calls by the American Conservative Union more frequently than the other two networks. Nevertheless, one can compute differences in frequencies between roll calls from the ADA and ACU. These differences show CBS to be more liberal than ABC and NBC. That is, although all three networks covered ADA roll calls more frequently than they covered ACU roll calls, CBS did this to a greater extent than the other two networks.
V. RESULTS

In Table III we list the estimates of cm, the adjusted ADA scores for media outlets. The ordering of the scores is largely consistent with conventional wisdom. For instance, the two most conservative outlets are the Washington Times and Fox News' Special Report, two outlets that are often called conservative (e.g., see Alterman [2003]). Near the liberal end are CBS Evening News and the New York Times. Again, these are largely consistent with the conventional wisdom. For instance, CBS Evening News was the target of a best-selling book by Goldberg [2002], a former reporter who documents several instances of liberal bias at the news show. Further, some previous scholarly work shows CBS Evening News to be the most liberal of the three network evening news shows. Hamilton [2004] recorded the congressional roll call votes that the Americans for Democratic Action chose for its annual scorecard, and he examined how often each network covered the roll calls. Between 1969 and 1998, CBS Evening News consistently covered these roll calls more frequently than did the other two networks.[23]

23. However, Hamilton also notes that CBS covered roll calls by the American Conservative Union more frequently than the other two networks. Nevertheless, one can compute differences in frequencies between roll calls from the ADA and ACU. These differences show CBS to be more liberal than ABC and NBC. That is, although all three networks covered ADA roll calls more frequently than they covered ACU roll calls, CBS did this to a greater extent than the other two networks.

TABLE III
RESULTS OF MAXIMUM LIKELIHOOD ESTIMATION

Media outlet                               Period of observation   Estimated ADA score   Standard error
ABC Good Morning America                   6/27/97–6/26/03         56.1                  3.2
ABC World News Tonight                     1/1/94–6/26/03          61.0                  1.7
CBS Early Show                             11/1/99–6/26/03         66.6                  4.0
CBS Evening News                           1/1/90–6/26/03          73.7                  1.6
CNN NewsNight with Aaron Brown             11/9/01–2/5/04          56.0                  4.1
Drudge Report                              3/26/02–7/1/04          60.4                  3.1
Fox News' Special Report with Brit Hume    6/1/98–6/26/03          39.7                  1.9
Los Angeles Times                          6/28/02–12/29/02        70.0                  2.2
NBC Nightly News                           1/1/97–6/26/03          61.6                  1.8
NBC Today Show                             6/27/97–6/26/03         64.0                  2.5
New York Times                             7/1/01–5/1/02           73.7                  1.6
Newshour with Jim Lehrer                   11/29/99–6/26/03        55.8                  2.3
Newsweek                                   6/27/95–6/26/03         66.3                  1.8
NPR Morning Edition                        1/1/92–6/26/03          66.3                  1.0
Time Magazine                              8/6/01–6/26/03          65.4                  4.8
U.S. News and World Report                 6/27/95–6/26/03         65.8                  1.8
USA Today                                  1/1/02–9/1/02           63.4                  2.7
Wall Street Journal                        1/1/02–5/1/02           85.1                  3.9
Washington Post                            1/1/02–5/1/02           66.6                  2.5
Washington Times                           1/1/02–5/1/02           35.4                  2.7
Average                                                            62.6

The table gives our estimates of adjusted ADA scores for media outlets, converted to the 1999 House scale. As a comparison, 50.06 is our estimate of the average American voter; this is based upon the average adjusted ADA scores of the House and Senate from 1995 to 1999 (Senate scores were population-weighted and included two extreme liberal phantom Senators for Washington, DC). The average score for Republicans was 16.1, and for Democrats, 84.3. All data for the news outlets came from news content only (i.e., editorials, letters, and book reviews were omitted).

One surprise is the Wall Street Journal, which we find to be the most liberal of all twenty news outlets. We should first remind readers that this estimate (as well as all other newspaper estimates) refers only to the news pages of the Wall Street Journal; we omitted all data that came from its editorial page. If we included data from the editorial page, surely it would appear more conservative. Second, some anecdotal evidence agrees with the result. For instance, Irvine and Kincaid [2001] note that "The Journal has had a long-standing separation between its conservative editorial pages and its liberal news pages." Sperry [2002] notes that the news division of the Journal sometimes calls the editorial division "Nazis." "Fact is," Sperry writes, "the Journal's news and editorial departments are as politically polarized as North and South Korea."[24] Third, a recent poll from the Pew Research Center indicates that a greater percentage of Democrats, 29 percent, say they trust the Journal than do Republicans, 23 percent. Importantly, the question did not say "the news division at the Wall Street Journal." If it had, Democrats surely would have said they trusted the Journal even more, and Republicans even less.[25] Finally, and perhaps most important, a scholarly study by Lott and Hassett [2004] gives evidence that is consistent with our result. As far as we are aware, this is the only other study that examines the political bias of the news pages of the Wall Street Journal. Of the ten major newspapers that it examines, it estimates the Wall Street Journal as the second-most liberal.[26] Only Newsday is more liberal, and the Journal is substantially more liberal than the New York Times, Washington Post, Los Angeles Times, and USA Today.

24. Other anecdotes that Sperry documents are (i) a reporter, Kent MacDougall, who, after leaving the Journal, bragged that he used the "bourgeois press" to help "popularize radical ideas with lengthy sympathetic profiles of Marxist economists"; (ii) another Journal reporter who, after calling the Houston-based MMAR Group shady and reckless, caused the Journal to lose a libel suit after jurors learned that she misquoted several of her sources; and (iii) a third Journal reporter, Susan Faludi (the famous feminist), who characterized Safeway as practicing "robber baron" style management practices.

25. See http://people-press.org/reports/display.php3?ReportID=215 for a description of the survey and its data. See also Kurtz [2004] for a summary of the study.

26. This comes from the estimates for the "Republican" coefficient that they list in their Table 7. These estimates indicate the extent to which a newspaper is more likely to use a negative headline for economic news when the president is Republican.
Another somewhat surprising result is our estimate of NPR's Morning Edition. Conservatives frequently list NPR as an egregious example of a liberal news outlet.[27] However, by our estimate the outlet hardly differs from the average mainstream news outlet. For instance, its score is approximately equal to those of Time, Newsweek, and U.S. News and World Report, and its score is slightly less than the Washington Post's. Further, our estimate places it well to the right of the New York Times, and also to the right of the average speech by Joe Lieberman. These differences are statistically significant.[28] We mentioned this finding to Terry Anderson, an academic economist and Executive Director of the Political Economy Research Center, which is among the list of think tanks in our sample. (The average score of legislators citing PERC was 39.9, which places it as a moderate-right think tank, approximately as conservative as RAND is liberal.) Anderson told us, "When NPR interviewed us, they were nothing but fair. I think the conventional wisdom has overstated any liberal bias at NPR." Our NPR estimate is also consistent with Hamilton's [2004, p. 108] research on audience ideology of news outlets. Hamilton finds that the average NPR listener holds approximately the same ideology as the average network news viewer or the average viewer of morning news shows, such as Today or Good Morning America. Indeed, of the outlets that he examines in this section of his book, by this measure NPR is the ninth most liberal out of eighteen.

27. Sometimes even liberals consider NPR left-wing. As Woodward notes in The Agenda [1994, p. 114]: "[Paul] Begala was steaming. To him, [OMB Director, Alice] Rivlin symbolized all that was wrong with Clinton's new team of Washington hands, and represented the Volvo-driving, National Public Radio-listening, wine-drinking liberalism that he felt had crippled the Democratic Party for decades."

28. To test that NPR is to the right of Joe Lieberman, we assume that we have measured the ideological position of Lieberman without error. Using the values in Tables II and III, the t-statistic for this hypothesis is t = (74.2 − 66.3)/1.0 = 7.9. This is significant at greater than 99.9 percent levels of confidence. To test that NPR is to the right of the New York Times, we use a likelihood ratio test. The value of the log likelihood function when NPR and the New York Times are constrained to have the same score is 78,616.64. The unconstrained value of the log likelihood function is 78,609.35. The relevant value of the likelihood ratio test statistic is 2 × (78,616.64 − 78,609.35) = 14.58. This is distributed according to the Chi-Square distribution with one degree of freedom. At confidence levels greater than 99.9 percent, we can reject the hypothesis that the two outlets have the same score.
Another result, which appears anomalous, is not so anomalous upon further examination. This is the estimate for the Drudge Report, which, at 60.4, places it approximately in the middle of our mix of media outlets and approximately as liberal as a typical Southern Democrat, such as John Breaux (D-LA). We should emphasize that this estimate reflects both the news flashes that Matt Drudge reports and the news stories to which his site links on other web sites. In fact, of the entire 311 think-tank citations we found in the Drudge Report, only five came from reports written by Matt Drudge. Thus, for all intents and purposes, our estimate for the Drudge Report refers only to the articles to which the Report links on other web sites. Although the conventional wisdom often asserts that the Drudge Report is relatively conservative, we believe that the conventional wisdom would also assert that, if confined only to the news stories to which the Report links on other web sites, this set would have a slant approximately equal to the average slant of all media outlets, since, after all, it is comprised of stories from a broad mix of such outlets.[29]

29. Of the reports written by Matt Drudge, he cited the Brookings Institution twice (actually once, but he listed the article for two days in a row), the ACLU once, Taxpayers for Common Sense once, and Amnesty International once. On June 22, 2004, the Drudge Report listed a link to an earlier version of our paper. Although that version mentioned many think tanks, only one case would count as a citation. This is the paraphrased quote from RAND members, stating that the media tends to cite its military studies less than its domestic studies. (The above quote from PERC was not in the earlier version, although it would also count as a citation.) At any rate, we instructed our research assistants not to search our own paper for citations.

VI. DIGRESSION: DEFINING THE "CENTER"

While the main goal of our research is to provide a measure that allows us to compare the ideological positions of media outlets with those of political actors, a separate goal is to express whether a news outlet is left or right of center. To do the latter, we must define "center." This is a little more arbitrary than the first exercise. For instance, the results of the previous section show that the average New York Times article is approximately as liberal as the average Joe Lieberman (D-CT) speech. While Lieberman is left of center in the United States Senate, many would claim that, compared with all persons in the entire world, he is centrist or even right-leaning. And if the latter is one's criterion, then nearly all of the media outlets that we examine are right of center.

However, we are more interested in defining centrist by United States views, rather than world views or, say, European views. One reason is that the primary consumers of the twenty news outlets that we examine are in the United States. If, for example, we wish to test economic theories about whether United States news producers are adequately catering to the demands of their consumers, then United States consumers are the ones on which we should focus. A second reason is that the popular debate on media bias has focused on United States views, not world views.
For instance, in Goldberg’s [2002] insider account of CBS News, he only claims that CBS is more liberal than the average American, not the average European or world citizen. Given this, one de?nition of centrist is simply to use the mean or median ideological score of the United States House or Senate. We focus on mean scores since the median tends to be unstable. 30 This is due to the bimodal nature that ADA scores have followed in recent years. For instance, in 1999 only three senators, out of a total of 100, received a score between 33 and 67. In contrast, 33 senators would have received scores in this range if the scores had been distributed uniformly, and the number would be even larger if scores had been distributed unimodally. 31 We are most interested in comparing news outlets to the centrist voter, who, for a number of reasons, might not have the same ideology as the centrist member of Congress. For instance, because Washington, D.C. is not represented in Congress and because D.C. residents tend to be more liberal than the rest of the country, the centrist member of Congress should tend to be more conservative than the centrist voter. Another problem, which applies only to the Senate, involves the fact that voters from small states are overrepresented. Since in recent years small states have tended to vote more conservatively than large states, this would cause the centrist member of the Senate to be more conservative than the centrist voter. A third reason, which applies only to the House, is that gerrymandered districts can skew the relationship between a centrist voter and a centrist member of the House. For instance, although the total votes for Al Gore and George W. Bush favored Gore slightly, the median House district slightly favored Bush. Speci?cally, if we exclude the District of Columbia (since it does not have a House member), Al Gore received 50.19 percent of the two-party vote. Yet in the median House district (judging by Gore-Bush vote percentages), Al Gore received only 48.96 percent of the two-party vote. (Twelve districts had percentages between the median and mean percentages.) The fact that the latter number is smaller than the former number means that House dis- 30. Nevertheless, we still report how our results change if instead we use median statistics. See footnotes 34 and 35. 31. The year 1999 was somewhat, but not very, atypical. During the rest of the 1990s, on average, 17.6 senators received scores between 33 and 67, approximately half as many as would be expected if scores were distributed uniformly. See http://www.adaction.org/votingrecords.htm for ADA scores of senators and House members. 1216 QUARTERLY JOURNAL OF ECONOMICStricts are drawn to favor Republicans slightly. Similar results occurred in the 1996 election. Bill Clinton received 54.66 percent of the two-party vote. Yet in the median House district he received 53.54 percent. It is possible to overcome each of these problems to estimate an ADA score of the centrist voter in the United States. First, to account for the D.C. bias, we can add phantom D.C. legislators to the House and Senate. Of course, we necessarily do not know the ADA scores of such legislators. However, it is reasonable to believe that they would be fairly liberal, since D.C. residents tend to vote overwhelmingly Democratic in presidential elections. (They voted 90.5 percent for Gore in 2000 and 90.6 percent for Kerry in 2004.) For each year, we gave the phantom D.C. 
House member and senators the highest respective House and Senate scores that occurred that year. Of course, actual D.C. legislators might not be quite so liberal. However, one of our main conclusions is that the media are liberal compared with U. S. voters. Consequently, it is better to err on the side of making voters appear more liberal than they really are than the opposite. 32 The second problem, the small-state bias in the Senate, can be overcome simply by weighting each senator’s score by the population of his or her state. The third problem, gerrymandered districts in the House, is overcome simply by the fact that we use mean scores instead of the median. 33 In Figure I we list the mean House and Senate scores over the period 1947–1999 when we use this methodology (i.e., includ- 32. Another possible bias involves the fact that D.C. has slightly fewer people than the average House district. Using 2000 population estimates (source: Almanac of American Politics [2002 edition]), D.C. had 572,000 residents, while the average House district in the country had 646,000. We treat D.C. as one district, whereas a more appropriate analysis would treat D.C. as 572/646 of a district. Again, this will bias our results in the opposite direction of our main conclusions. Speci?cally, this will cause media outlets to appear more conservative than they really are. 33. To see this, imagine a state with three districts, each with the same symmetrical distribution of voters. (Thus, the median voter in each district has an ideology identical to the median voter of the state.) Now suppose that a Republican state legislature redraws districts so that Democratic voters are transferred from districts 1 and 2 to district 3. Suppose that Republican voters are transferred in the opposite direction. Necessarily, the increase in Democratic voters in district 3 is twice the average increase in Republican voters in districts 1 and 2. Next, suppose that the expected ideological score of a representative is a linear function of the fraction of Democratic voters in his or her district. Then it will necessarily be the case that the expected average ideological score of the representatives in this hypothetical state will be identical to the expected average before redistricting. However, the same will not be true of the median score. It will be expected to decrease (i.e., to become more conservative). A MEASURE OF MEDIA BIAS 1217ing phantom D.C. legislators and weighting senators’ scores by the population of their state). The focus of our results is for the period 1995–1999. We chose 1999 as the end year simply because this is the last year for which Groseclose, Levitt, and Snyder [1999] computed adjusted ADA scores. However, any conclusions that we make for this period should also hold for the 2000 –2004 period, since in the latter period the House and Senate had almost identical party ratios. We chose 1995 as the beginning year, because it is the ?rst year after the historic 1994 elections, where Republicans gained 52 House seats and 8 Senate seats. This year, it is reasonable to believe, marks the beginning of a separate era of American politics. As a consequence, if one wanted to test hypotheses about the typical United States voter of, say, 1999, then the years 1998, 1997, 1996, and 1995 would also provide helpful data. However, prior years would not. Over this period the mean score of the Senate (after including phantom D.C. senators and weighting by state population) varied between 49.28 and 50.87. 
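The weighting scheme just described amounts to a few lines of R. The scores and populations below are invented, and the actual calculation uses every senator's adjusted ADA score for each year from 1995 to 1999.

senators <- data.frame(
  state = c("CA", "CA", "WY", "WY"),         # two hypothetical large-state and two small-state senators
  ada   = c(88, 72, 12, 20),                 # invented adjusted ADA scores
  pop   = c(33.9e6, 33.9e6, 0.5e6, 0.5e6)    # state populations used as weights
)
# Add two phantom D.C. senators, assigned the most liberal score observed that year.
dc <- data.frame(state = "DC", ada = max(senators$ada), pop = 0.57e6)
senators <- rbind(senators, dc, dc)

# Population-weighted mean adjusted ADA score of the Senate for this toy year.
weighted.mean(senators$ada, w = senators$pop)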
The mean of these means was 49.94. The similar figure for the House was 50.18. After rounding, we use the midpoint of these numbers, 50.1, as our estimate of the adjusted ADA score of the centrist United States voter.[34]

FIGURE I
Weighted-Average ADA Scores for the House and Senate, 1947–1999

A counterview is that the 1994 elections did not mark a new era. Instead, as some might argue, these elections were an anomaly, and the congresses of the decade or so before the 1994 elections are a more appropriate representation of voter sentiment of the late 1990s and early 2000s. Although we do not agree, we think it is a useful straw man. Consequently, we construct an alternative measure based on the congresses that served between 1975 and 1994. We chose 1975 because this was the first year of the "Watergate babies" in Congress. As Figure I shows, this year produced a large liberal shift in Congress. This period, 1975–1994, also happens to be the most liberal twenty-year period of the entire era over which the ADA has been recording vote scores. The average ADA score of senators during the 1975–1994 period (after including phantom D.C. senators and weighting according to state population) was 53.51. The similar figure for the House was 54.58. After rounding, we use the midpoint of these two scores to define 54.0 as the centrist United States voter during 1975–1994.[35]

34. A clever alternative measure, suggested to us by David Mayhew, is to use a regression-based framework to estimate the expected ADA score of a legislator whose district is perfectly representative of the entire United States. In the 2000 presidential election Gore won 50.27 percent of the two-party vote (including D.C.). Suppose that we could construct a hypothetical congressional district with an identical Gore-vote percentage. It is reasonable to believe that the expected adjusted ADA score of the legislator from such a district is a good measure of the ideology of the centrist United States voter, and this appropriately adjusts for any biases due to gerrymandered districts, exclusion of D.C. voters, and the small-state biases in the Senate. To estimate this, we regressed (i) the 1999 adjusted ADA scores of members of Congress on (ii) Gore's percentage of the two-party vote in the legislator's district. In this regression we included observations from the Senate as well as the House. (Remember that adjusted scores are constructed so that they are comparable across chambers.) The results of the regression were ADA Score = −46.48 + 1.91 × Gore Vote. This implies 49.53 as the expected ADA score of a district in which the Gore vote was 50.27 percent. We repeated this analysis using, instead, adjusted ADA scores from 1998, 1997, 1996, and 1995. In the latter three years we used the Clinton share of the two-party vote, and we used Clinton's national share, 54.74 percent, as the share of the representative district. These years give the following respective estimates of the ADA score of the centrist U.S. voter: 48.83, 48.99, 47.24, and 47.41. The average of these five measures is 48.40. Since this number is 1.7 points less than the mean-based measure of the centrist voter (50.1), if one believes that it is the more appropriate measure, then our main conclusions (based on the mean-based measure) are biased rightward; that is, the more appropriate conclusion would assert that the media are an additional 1.7 points to the left of the centrist voter.
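The regression-based check in footnote 34 can be sketched in a few lines of R. The district-level data below are invented; the paper's estimate uses actual adjusted ADA scores and district-level Gore vote shares.

districts <- data.frame(
  ada  = c(15, 35, 55, 80, 95),        # invented adjusted ADA scores of five hypothetical legislators
  gore = c(38, 46, 51, 58, 66)         # invented Gore shares of the two-party vote in their districts
)
fit <- lm(ada ~ gore, data = districts)

# Expected ADA score of a hypothetical district matching Gore's national
# two-party share of 50.27 percent; with the real data the paper obtains 49.53.
predict(fit, newdata = data.frame(gore = 50.27))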
Yet another measure is based on median scores of the House and Senate. The average Senate median over the five years was 58.19, while the average House median was 40.61. (Again, both these figures include phantom D.C. legislators, and the Senate score is weighted by state population.) The midpoint is 49.4, which is 0.7 points more conservative than our mean-based measure. If one believes that this is the more appropriate measure of centrist, then, once again, this implies that our media estimates are biased in the direction of making them more conservative than they really are.

35. If instead we use medians, the figure is 54.9.

VII. FURTHER RESULTS: HOW CLOSE ARE MEDIA OUTLETS TO THE CENTER?

Next, we compute the difference of a media outlet's score from 50.1 to judge how centrist it is. We list these results in Table IV. Most striking is that all but two of the outlets we examine are left of center. Even more striking is that if we use the more liberal definition of center (54.0), the one constructed from congressional scores from 1975–1994, it is still the case that eighteen of twenty outlets are left of center.

TABLE IV
RANKINGS BASED ON DISTANCE FROM CENTER

Rank   Media outlet                               Estimated ADA score
1      Newshour with Jim Lehrer                   55.8
2      CNN NewsNight with Aaron Brown             56.0
3      ABC Good Morning America                   56.1
4      Drudge Report                              60.4
5      Fox News' Special Report with Brit Hume    39.7
6      ABC World News Tonight                     61.0
7      NBC Nightly News                           61.6
8      USA Today                                  63.4
9      NBC Today Show                             64.0
10     Washington Times                           35.4
11     Time Magazine                              65.4
12     U.S. News and World Report                 65.8
13     NPR Morning Edition                        66.3
14     Newsweek                                   66.3
15     CBS Early Show                             66.6
16     Washington Post                            66.6
17     Los Angeles Times                          70.0
18     CBS Evening News                           73.7
19     New York Times                             73.7
20     Wall Street Journal                        85.1

The table gives our method's rankings of the most to least centrist news outlets. The rankings are based on the distance of the outlet's estimated ADA score (from Table III) to 50.06, our estimate of the average United States voter's ADA score.

The first, second, and third most centrist outlets are, respectively, Newshour with Jim Lehrer, CNN's Newsnight with Aaron Brown, and ABC's Good Morning America. The scores of Newsnight and Good Morning America were not statistically different from the center, 50.1. Although the point estimate of Newshour was more centrist than those of the other two outlets, its difference from the center is statistically significant. The reason is that its margin of error is smaller than the other two's, which is due primarily to the fact that we collected more observations for this outlet. Interestingly, in the four presidential and vice-presidential debates of the 2004 election, three of the four moderators were selected from these three outlets. The fourth moderator, Bob Schieffer, worked at an outlet that we did not examine, CBS's Face the Nation.

The fourth and fifth most centrist outlets are the Drudge Report and Fox News' Special Report with Brit Hume. Their scores are significantly different from the center at a 95 percent significance level. Nevertheless, the top five outlets in Table IV are in a statistical dead heat for most centrist. Even at an 80 percent level of significance, none of these outlets can be called more centrist than any of the others. The sixth and seventh most centrist outlets are ABC World News Tonight and NBC Nightly News. These outlets are almost in a statistical tie with the five most centrist outlets.
For instance, each has a score that is signi?cantly different from Newshour’s at the 90 percent con?dence level, but not at the 95 percent con?- dence level. The eighth most centrist outlet, USA Today, received a score that is signi?cantly different from Newshour’s at the 95 percent con?dence level. Fox News’ Special Report is approximately one point more centrist than ABC’s World News Tonight (with Peter Jennings) and NBC’s Nightly News (with Tom Brokaw). In neither case is the difference statistically signi?cant. Given that Special Report is one hour long and the other two shows are a half-hour long, our measure implies that if a viewer watched all three shows each night, he or she would receive a nearly perfectly balanced version of the news. (In fact, it would be slanted slightly left by 0.4 ADA points.) Special Report is approximately thirteen points more centrist than CBS Evening News (with Dan Rather). This difference is signi?cant at the 99 percent con?dence level. Also at 99 percent con?dence levels, we can conclude that NBC Nightly News A MEASURE OF MEDIA BIAS 1221and ABC World News Tonight are more centrist than CBS Evening News. The most centrist newspaper in our sample is USA Today. However, its distance from the center is not signi?cantly different from the distances of the Washington Times or the Washington Post. Interestingly, our measure implies that if one spent an equal amount of time reading the Washington Times and Washington Post, he or she would receive a nearly perfectly balanced version of the news. (It would be slanted left by only 0.9 ADA points.) If instead we use the 54.1 as our measure of centrist (which is based on congressional scores of the 1975–1994 period), the rankings change, but not greatly. The most substantial is the Fox News’ Special Report, which drops from ?fth to ?fteenth most centrist. The Washington Times also changes signi?cantly. It drops from tenth to seventeenth most centrist. Another implication of the scores concerns the New York Times. Although some claim that the liberal bias of the New York Times is balanced by the conservative bias of other outlets, such as the Washington Times or Fox News’ Special Report, this is not quite true. The New York Times is slightly more than twice as far from the center as Special Report. Consequently, to gain a balanced perspective, a news consumer would need to spend twice as much time watching Special Report as he or she spends reading the New York Times. Alternatively, to gain a balanced perspective, a reader would need to spend 50 percent more time reading the Washington Times than the New York Times. VIII. POTENTIAL BIASES A frequent concern of our method is a form of the following claim: “The sample of think tanks has a rightward (leftward) tilt rather than an ideological balance. For example, the sample does not include Public Citizen and many other “Nader” groups. (For example, the sample includes National Association of Manufacturers, the Conference of Catholic Bishops, or any number of other groups.) Consequently this will bias estimates to the right (left).” However, the claim is not true, and here is the intuition: if the sample of think tanks were, say, disproportionately conservative, this, of course, would cause media outlets to cite conservative think tanks more frequently (as a proportion of citations that we record in our sample). This might seem to cause the 1222 QUARTERLY JOURNAL OF ECONOMICSmedia to appear more conservative. 
However, at the same time it causes members of Congress to appear more conservative. Our method only measures the degree to which media is liberal or conservative, relative to Congress. Since it is unclear how such a disproportionate sample would affect the relative degree to which the media cite conservative (or liberal) think tanks, there is no a priori reason for this to cause a bias. In fact, a similar concern could be leveled against any regression analysis. As a simple example, consider a researcher who regresses the arm lengths of subjects on their heights. Suppose instead of choosing a balance of short and tall subjects, he or she chooses a disproportionate number of tall subjects. This will not affect his or her ?ndings about the relationship between height and arm length. That is, he or she will ?nd that arm length is approximately half the subject’s height, and this estimate, “half,” would be the same (in expectation) whether he or she chooses many or few tall subjects. For similar reasons, to achieve unbiased estimates in a regression, econometrics textbooks place no restrictions on the distribution of independent variables. They only place restrictions upon, e.g., the correlation of the independent variables and the error term. Another frequent concern of our method takes a form of the following claim: “Most of the congressional data came from years in which the Republicans were the majority party. Since the majority can control debate time, this will cause the sample to have a disproportionate number of citations by Republicans. In turn, this will cause media outlets to appear to be more liberal than they really are.” First, it is not true that the majority party gives itself a disproportionate amount of debate time. Instead, the usual convention is that it is divided equally between proponents and opponents on an issue. This means that the majority party actually gives itself less than the proportionate share. However, this convention is countered by two other factors, which tend to give the majority and minority party their proportionate share of speech time: i) many of the speeches in the Congressional Record are not part of the debate on a particular bill or amendment but are from “special orders” (generally in the evening after the chamber has adjourned from of?cial business) or “one minutes” (generally in the morning before the chamber has convened for of?cial business). For these types of speeches there are no restrictions of party balance, and for the most part, any legislator who shows up at the chamber is allowed to make such a speech. A MEASURE OF MEDIA BIAS 1223ii) Members often place printed material “into the Record.” We included such printed material as a part of any member’s speech. In general, there are no restrictions on the amount of material that a legislator can place into the Record (or whether he or she can do this). Thus, e.g., if a legislator has run out of time to make his or her speech, he or she can request that the remainder be placed in written form “into the Record.” But even if the majority party were given more (or less) than its proportionate share of speech time, this would not bias our estimates. With each media outlet, our method seeks the legislator who has a citation pattern that is most similar to that outlet. For instance, suppose that the New York Times cites liberal think tanks about twice as often as conservative think tanks. 
Suppose (as we actually ?nd) that Joe Lieberman is the legislator who has the mix of citations most similar to the New York Times; that is, suppose that he also tends to cite liberal think tanks twice as often as conservative think tanks. Now consider a congressional rules change that cuts the speech time of Democrats in half. Although this will affect the number of total citations that Lieberman makes, it will not affect the proportion of citations that he makes to liberal and conservative think tanks. Hence, our method would still give the New York Times an ADA score equal to Joe Lieberman’s. 36 More problematic is a concern that congressional citations and media citations do not follow the same data-generating process. For instance, suppose that a factor besides ideology affects the probability that a legislator or reporter will cite a think tank, and suppose that this factor affects reporters and legislators differently. Indeed, Lott and Hassett [2004] have invoked a form of this claim to argue that our results are biased toward making the media appear more conservative than they really are. They note: For example, Lott [2003, Chapter 2] shows that the New York Times’ stories on gun regulations consistently interview academics who favor gun control, but use gun dealers or the National Ri?e Association to provide the other side . . . In this case, this bias makes [Groseclose and Milyo’s measure of] the New York Times look more conservative than is likely accurate [p. 8]. 36. Another concern is that, although Republicans and Democrats are given debate time nearly proportional to their number of seats, one group might cite think tanks more frequently than the other. The above reasoning also explains why this will not cause a bias to our method. 1224 QUARTERLY JOURNAL OF ECONOMICSHowever, it is possible, and perhaps likely, that members of Congress practice the same tendency that Lott and Hassett [2004] have identi?ed with reporters; that is, to cite academics when they make an antigun argument and to cite, say, the NRA when they make a progun argument. If so, then our method will have no bias. On the other hand, if members of Congress do not practice the same tendency as journalists, then this can cause a bias to our method. But even here, it is not clear in which direction it will occur. For instance, it is possible that members of Congress have a greater (lesser) tendency than journalists to cite such academics. If so, then this will cause our method to make media outlets appear more liberal (conservative) than they really are. In fact, the criticism we have heard most frequently is a form of this concern, but it is usually stated in a way that suggests the bias is in the opposite direction. Here is a typical variant: “It is possible that (i) journalists care about the ‘quality’ of a think tank more than legislators do (e.g., suppose that journalists prefer to cite a think tank with a reputation for serious scholarship instead of another group that is known more for its activism); and (ii) the liberal think tanks in the sample tend to be of higher quality than the conservative think tanks.” If statements (i) and (ii) are true, then our method will indeed make media outlets appear more liberal than they really are. That is, the media will cite liberal think tanks more, not because they prefer to cite liberal think tanks, but because they prefer to cite high-quality think tanks. 
On the other hand, if one statement is true and the other is false, then our method will make media outlets appear more conservative than they really are. For example, suppose that journalists care about quality more than legislators, but suppose that the conservative groups in our sample tend to be of higher quality than the liberal groups. Then the media will tend to cite the conservative groups disproportionately, but not because the media are conservative, rather because they have a taste for quality.) Finally, if neither statement is true, then our method will make media outlets appear more liberal than they really are. Note that there are four possibilities by which statements (i) and (ii) can be true or false. Two lead to a liberal bias, and two lead to a conservative bias. This criticism, in fact, is similar to an omitted-variable bias that can plague any regression. Like the regression case, however, if the omitted variable (e.g., the quality of the think tank) is not correlated with the independent variable of interest (e.g., the ideology of the think tank), then this will not cause a bias. In the Appendix we examine this criticism further by introducing three variables that A MEASURE OF MEDIA BIAS 1225measure the extent to which a think tank’s main goals are scholarly ones, as opposed to activist ones. That is, these variables are possible measures of the “quality” of a think tank. When we include these measures as controls in our likelihood function, our estimated ADA ratings do not change signi?cantly. For example, when we include the measures, the average score of the twenty news outlets that we examine shifts less than three points. Further, we cannot reject the hypothesis that the new estimates are identical to the estimates that we obtain when we do not include the controls. Finally, some anecdotal evidence provides a compelling argument that our method is not biased. Note that none of the issues discussed above suggest a problem with the way our method ranks media outlets. Now, suppose that there is no problem with the rankings, yet our method is plagued with a signi?- cant bias that systematically causes media outlets to appear more liberal (conservative) than they really are. If so, then this means that the three outlets we ?nd to be most centrist (Newshour with Jim Lehrer, Good Morning America, and Newsnight with Aaron Brown) are actually conservative (liberal). But if this is true, why did John Kerry’s (George W. Bush’s) campaign agree to allow three of the four debate moderators to come from these outlets? IX. DISCUSSION: IMPLICATIONS FOR THE INDUSTRIAL ORGANIZATION OF THE NEWS INDUSTRY At least four broad empirical regularities emerge from our results. In this section we document these regularities and analyze their signi?cance for some theories about the industrial organization of the news industry. First, we ?nd a systematic tendency for the United States media outlets to slant the news to the left. As mentioned earlier, this is inconsistent with basic spatial models of ?rm location such as Hotelling’s [1929] and others. In such models if an equilibrium exists, then there is always an equilibrium in which the median ?rm locates at the ideal location of the median consumer, which our results clearly do not support. Some scholars have extended the basic spatial model to provide a theory why the media could be systematically biased. For instance, Hamilton [2004] notes that news producers may prefer to cater to some consumers more than others. 
In particular, Hamilton notes that young females tend to be one of the most marginal groups of news consumers (i.e., they are the most will- 1226 QUARTERLY JOURNAL OF ECONOMICSing to switch to activities besides reading or watching the news). Further, this group often makes the consumption decisions for the household. For these two reasons, advertisers are willing to pay more to outlets that reach this group. Since young females tend to be more liberal on average, a news outlet may want to slant its coverage to the left. Thus, according to Hamilton’s theory, United States news outlets slant their coverage leftward, not in spite of consumer demand, but because of it. 37 A more compelling explanation for the liberal slant of news outlets, in our view, involves production factors, not demand factors. As Sutter [2001] has noted, journalists might systematically have a taste to slant their stories to the left. Indeed, this is consistent with the survey evidence that we noted earlier. As a consequence, “If the majority of journalists have left-of-center views, liberal news might cost less to supply than unbiased news [p. 444].” Baron [2005] constructs a rigorous mathematical model along these lines. In his model journalists are driven, not just by money, but also a desire to in?uence their readers or viewers. Baron shows that pro?t-maximizing ?rms may choose to allow reporters to slant their stories, and consequently in equilibrium the media will have a systematic bias. 38 A second empirical regularity is that the media outlets that we examine are fairly centrist relative to members of Congress. For instance, as Figure II shows, all outlets but one have ADA scores between the average Democrat and average Republican in Congress. In contrast, it is reasonable to believe that at least half the voters consider themselves more extreme than the party averages. 39 If so, then a basic spatial model, where ?rms are 37. Sutter [2001] similarly notes that demand factors may be the source of liberal bias in the newspaper industry. Speci?cally, he notes that liberals may have a higher demand for newspapers than conservatives, and he cites some suggestive evidence by Goff and Tollison [1990], which shows that as the voters in a state become more liberal, newspaper circulation in the state increases. 38. Perhaps even more interesting, in Baron’s model news consumers, in equilibrium, can be in?uenced in the direction of the bias of the news outlet, despite the fact that they understand the equilibrium of the game and the potential incentives of journalists to slant the news. 39. A simple model supports this assertion. Suppose that in every congressional district, voters have ideal positions that are uniformly distributed between 1 and 1, where 1 represents the most liberal voter and 1 represents the most conservative voter. Assume that a voter is a Democrat if and only if his or her ideal position is less than 0. Four candidates, two Republican and two Democrat, simultaneously choose positions in this space. Next they compete in two primary elections, where the Republican voters choose between the two Republican candidates, and likewise for the Democratic primary. Each voter votes for the candidate who is nearest his or her ideological position, and if two candidates are equidistant, then the voter ?ips a coin. (This assumption implies that voters are myopic in the primary election. 
If, instead, the voters were fully rational, then it can easily be shown that the candidates will choose even more centrist positions, which means that even more voters will consider themselves more extreme than the party averages.) Assume that candidates maximize the votes that they receive in the general election (i.e., the votes they receive in the primary election are only a means to winning votes in the general election). Then this setup implies that in equilibrium both Democratic candidates will locate at −.5, and both Republican candidates will locate at .5. Each winner of the primary has a 50 percent chance of winning the general election. Once this is repeated across many districts, the expected number of voters who consider themselves more extreme than the party averages will be 50 percent.

FIGURE II
Adjusted ADA Scores of Selected Politicians and Media Outlets

constrained to charge the same exogenous price, implies that approximately half the media outlets should choose a slant outside the party averages.[40] Clearly, our results do not support this prediction. Moreover, when we add price competition to the basic spatial model, then, as Mullainathan and Shleifer [2003] show, even fewer media outlets should be centrist. Specifically, their two-firm model predicts that both media firms should choose slants that are outside the preferred slants of all consumers. The intuition is that in the first round, when firms choose locations, they want to differentiate their products significantly, so in the next round they will have less incentive to compete on price. Given this theoretical result, it is puzzling that media outlets in the United States are not more heterogeneous. We suspect that, once again, the reason may lie with production factors. For instance, one possibility may involve the sources for news stories, what one could consider the raw materials of the news industry. If a news outlet is too extreme, many of the newsmakers may refuse to grant interviews to its reporters.

A third empirical regularity involves the question whether reporters will be faithful agents of the owners of the firms for which they work. That is, will the slant of their news stories reflect their own ideological preferences or those of the firm's owners? The conventional wisdom, at least among left-wing commentators, is that the latter is true. For instance, Alterman [2003] titles a chapter of his book "You're Only as Liberal as the Man Who Owns You." A weaker assertion is that the particular news outlet will be a faithful agent of the firm that owns it. However, our results provide some weak evidence that this is not true. For instance, although Time magazine and CNN's Newsnight are owned by the same firm (Time Warner), their ADA scores differ substantially, by 9.4 points.[41] Further, al-

40. For instance, suppose that consumers are distributed uniformly between −1 and 1. Suppose that there are twenty news outlets, and suppose that consumers choose the outlet that is closest to them. It is easy to show that an equilibrium is for two firms to locate at −.9, two firms to locate at −.7, . . . , and two firms to locate at .9.

41. This difference, however, is not statistically significant at the 95 percent confidence level. A likelihood ratio test constraining Time and Newsnight to have the same score gives a log-likelihood function that is 1.1 units greater than the unconstrained function.
This value, multiplied by two, follows a Chi-Square distribution with one degree of freedom. The result, 2.2, is almost signi?cant at the 90 percent con?dence level, but not quite. (The latter has a criterion of 2.71.) We obtained similar results when we tested, the joint hypothesis that (i) NewsA MEASURE OF MEDIA BIAS 1229most half of the other outlets have scores between those of Newsnight and Time. A fourth regularity concerns the question whether one should expect a government-funded news outlet to be more liberal than a privately funded outlet. “Radical democratic” media scholars McChesney and Scott [2004] claim that it will. For instance, they note “[Commercial journalism] has more often served the minority interests of dominant political, military, and business concerns than it has the majority interests of disadvantaged social classes [2004, p. 4].” And conservatives, who frequently complain that NPR is far left, also seem to agree. However, our results do not support such claims. If anything, the government-funded outlets in our sample (NPR’s Morning Edition and Newshour with Jim Lehrer) have a slightly lower average ADA score (61.0), than the private outlets in our sample (62.8). 42 Related, some claim that a free-market system of news will produce less diversity of news than a government-run system. However, again, our results do not support such a claim. The variance of the ADA scores of the privately run outlets is substantially higher (131.3) than the variance of the two government-funded outlets that we examine (55.1). In interpreting some of the above regularities, especially perhaps the latter two, we advise caution. For instance, with regard to our comparisons of government-funded versus privately funded news outlets, we should emphasize that our sample of governmentfunded outlets is small (only two), and our total sample of news outlets might not be representative of all news outlets. night and Time have identical scores and that (ii) all three network morning news shows have scores identical to their respective evening news shows. A likelihood ratio test gives a value of 8.04, which follows a Chi-Square distribution with four degrees of freedom. The value is signi?cant at the 90 percent con?dence level (criterion  7.78), but not at the 95 percent con?dence level (criterion  9.49). Our hunch is that with more data we could show conclusively that at least sometimes different news outlets at the same ?rm produce signi?cantly different slants. We suspect that, consistent with Baron’s [2005] model, editors and producers, like reporters, are given considerable slack, and that they are willing to sacri?ce salary in order to be given such slack. 42. This result is broadly consistent with Djankov, McLiesh, Nenova, and Shleifer’s [2003] notion of the public choice theory of media ownership. This theory asserts that a government-owned media will slant news in such a way to aid incumbent politicians. If so, some reasonable theories (e.g., Black [1958]) suggest that the slant should conform to the median view of the incumbent politicians. We indeed ?nd that the slant of the government-funded outlets in the United States on average is fairly close to the median politicians’ view. In fact, it is closer to the median view than the average of the privately funded outlets that we examine. See Lott [1999] for an examination of a similar public-choice theory applied to the media and the education system in a country. 
1230 QUARTERLY JOURNAL OF ECONOMICSRelated, in our attempts to explain these patterns, we in no way claim to have provided the last word on a satisfactory theory. Nor do we claim to have performed an exhaustive review of potential theories in the literature. Rather, the main goal of our research is simply to demonstrate that it is possible to create an objective measure of the slant of the news. Once this is done, as we hope we have demonstrated in this section, it is easy to raise a host of theoretical issues to which such a measure can be applied. APPENDIX We believe that the most appropriate model speci?cation is the one that we used to generate Table III. However, in this Appendix we consider alternative speci?cations. Recall that we excluded observations in which the journalist or legislator gave an ideological label to the think tank or policy group. The ?rst column of Table V lists ADA estimates when instead we include these observations, while maintaining all the TABLE V ESTIMATED ADA SCORES FOR ALTERNATIVE SPECIFICATIONS Media outlet 1 2 3 4 5 6 7 8 9 ABC Good Morning America 56.7 56.0 55.0 56.0 59.3 59.5 56.2 55.5 45.4 ABC World News Tonight 61.4 61.3 60.9 62.0 61.6 62.4 60.9 59.8 58.7 CBS Early Show 67.5 67.1 64.1 67.5 67.8 68.3 66.0 64.9 56.8 CBS Evening News 72.1 74.0 74.0 74.6 73.2 74.1 72.8 71.7 69.6 CNN NewsNight with Aaron Brown 55.8 55.8 54.8 58.0 56.0 56.4 55.5 53.3 51.7 Drudge Report 55.3 60.6 59.0 62.5 60.8 62.1 60.2 58.1 56.0 Fox News Special Report 41.5 39.0 38.8 41.2 40.5 40.6 39.8 38.8 33.4 Los Angeles Times 67.8 70.4 69.4 71.7 70.5 70.9 69.3 68.5 65.8 NBC Nightly News 62.1 61.7 63.1 63.0 61.3 62.3 61.2 60.2 60.9 NBC Today Show 64.0 64.8 64.7 65.2 65.1 66.1 63.8 62.9 55.9 New York Times 69.9 74.9 72.6 74.3 73.9 74.7 73.3 71.6 70.8 Newshour with Jim Lehrer 55.1 56.0 54.4 57.0 55.8 55.9 56.0 53.6 50.9 Newsweek 65.7 66.7 64.5 67.0 66.9 67.5 65.7 64.4 68.9 NPR Morning Edition 65.6 66.9 66.2 67.4 66.1 67.1 66.1 64.6 59.2 Time Magazine 68.2 65.5 62.4 66.2 64.3 65.4 64.2 63.3 64.7 U.S. News and World Report 65.2 65.8 65.3 67.0 65.8 66.4 64.8 63.6 65.7 USA Today 61.7 63.2 62.5 63.7 62.8 63.9 62.4 60.4 66.9 Wall Street Journal 86.1 85.1 85.8 86.2 85.5 86.4 84.8 82.5 82.1 Washington Post 64.7 67.0 65.5 67.4 66.8 67.2 66.7 64.3 56.7 Washington Times 35.7 33.8 34.4 36.2 35.3 36.2 34.8 32.9 48.0 Average of all 20 outlets 62.1 62.8 61.9 63.7 63.0 63.7 62.2 60.7 59.4 A MEASURE OF MEDIA BIAS 1231other assumptions that we used to create Table III; e.g., that we use 44 actual think tanks and 6 mega think tanks, etc. In column 2 we report the results when we exclude citations of the ACLU (while we maintain all the other model speci?cations we used to construct Table III, including the decision to omit labeled observations). In columns 3 to 8 we report the results when, instead of using 44 actual think tanks and 6 mega think tanks, we use 48 (respectively, 47, 46, 45, 43, and 42) actual and 2 (respectively, 3, 4, 5, 7, and 8) mega think tanks. In column 9 we use sentences as the level of observation, instead of citations. One problem with this speci?cation is that the data are very lumpy; that is, some quotes contain an inordinate number of sentences, which cause some anomalies. One anomaly is that some relatively obscure think tanks become some of the most-cited under this speci?cation. 
For instance, the Alexis de Tocqueville Institute, which, most readers would agree, is not one of the most well-known and prominent think tanks, is the thirteenth most-cited think tank by members of Congress when we use sentences as the level of observation. It is the ?fty-eighth most-cited, however, when we use citations as the level of observation. 43 A related problem is that these data are serially correlated. That is, for instance, if a given observation for the New York Times is a citation to the Brookings Institution, then the probability is high that the next observation will also be a citation to the same think tank (since the average citation contains more than one sentence). However, the likelihood function that we use assumes that the observations are not serially correlated. Finally, related to these problems, the estimates from this speci?cation sometimes are in stark disagreement with common wisdom. For instance, the estimates imply that the Washington Times is more liberal than Good Morning America. For these reasons, we base our conclusions on the estimates that use citations as the level of observation, rather than sentences. In columns 1 to 4 of Table VI we report the results when, instead of using 44 actual think tanks and 6 mega think tanks, we 43. Nunberg [2004], in a critique of an earlier version of our paper, deserves credit for ?rst noting the problems with the sentence-level data involving the Alexis de Tocqueville Institute. Our earlier version gave approximately equal focus to (i) estimates using citations as the level of observation and (ii) estimates using sentences as the level of observation. Partly due to his critique, the current version no longer focuses on sentences as observations. We did not have the same agreement with the rest of his criticisms, however. See Groseclose and Milyo [2004] for a response to his essay. 1232 QUARTERLY JOURNAL OF ECONOMICSuse 54 (respectively, 64, 74, and 84) actual think tanks and 6 mega think tanks. That is, we let the total number of think tanks that we use change to 60, 70, 80, and 90. Columns 5–9 of Table VI address the concern that our main analysis does not control for the “quality” of a think tank or policy group. To account for this possibility, we constructed three variables that indicate whether a think tank or policy group is more likely to produce quality scholarship. The ?rst variable, closed membership, is coded as a 0 if the web site of the group asks visitors to join the group. For instance, more activist groups—such as the NAACP, NRA, and ACLU—have links on their web site that give instructions for a visitor to join the group; while the more scholarly groups—such as the Brookings Institution, the RAND Corporation, the Urban Institute, and the Hoover Institution—do not. Another variable, staff called fellows, is coded as 1 if any staff members on the group’s web site are given one of the following titles: fellow (including research fellow or senior fellow), researcher, economist, or analyst. Both variables seem to capture the conventional wisdom about which think tanks are known for quality scholarship. 
For instance, TABLE VI ESTIMATED ADA SCORES FOR ALTERNATIVE SPECIFICATIONS Media outlet 1 2 3 4 5 6 7 8 9 ABC Good Morning America 56.9 59.9 60.2 60.3 63.2 60.9 62.5 63.9 61.7 ABC World News Tonight 61.6 62.4 62.9 62.9 61.7 58.8 60.6 62.1 59.3 CBS Early Show 67.1 68.9 68.9 69.0 66.0 63.0 64.5 66.1 63.1 CBS Evening News 74.0 74.3 75.0 75.0 77.6 74.2 76.3 78.6 75.3 CNN NewsNight with Aaron Brown 56.2 56.6 57.3 57.3 55.2 52.4 53.3 55.0 52.2 Drudge Report 60.2 61.1 61.0 60.7 63.1 60.6 62.2 63.6 61.0 Fox News Special Report 41.7 42.2 42.5 42.3 40.5 38.7 38.6 40.0 38.1 Los Angeles Times 69.5 70.0 70.1 69.8 71.4 68.2 69.9 70.9 68.0 NBC Nightly News 63.3 63.4 63.6 63.5 63.9 61.4 62.5 64.6 62.1 NBC Today Show 65.2 66.6 66.4 66.6 67.3 64.1 66.0 68.0 64.9 New York Times 74.1 75.0 75.3 74.9 75.7 72.7 74.5 76.3 73.4 Newshour with Jim Lehrer 58.1 58.5 59.0 59.3 60.3 56.4 58.3 59.8 56.0 Newsweek 66.9 67.6 68.4 68.0 68.7 65.0 67.3 68.2 64.9 NPR Morning Edition 67.2 67.9 68.3 68.2 68.9 65.6 67.6 69.3 66.1 Time Magazine 65.6 65.7 65.9 65.7 64.9 61.5 63.9 64.6 61.7 U.S. News and World Report 66.1 67.2 68.4 68.5 69.7 66.3 68.1 69.9 66.7 USA Today 63.3 64.5 64.9 65.0 69.5 65.7 68.0 69.1 65.6 Wall Street Journal 84.9 86.6 86.9 86.8 86.8 83.4 85.2 87.2 83.8 Washington Post 66.2 66.8 66.9 66.8 68.9 66.0 67.6 69.8 66.8 Washington Times 35.4 35.8 36.1 35.1 41.2 40.0 39.3 40.8 39.4 Average of 20 outlets 63.2 64.1 64.4 64.3 65.2 62.2 63.8 65.4 62.5 A MEASURE OF MEDIA BIAS 1233of the top-25 most-cited groups in Table I, the following had both closed membership and staff called fellows: Brookings, Center for Strategic and International Studies, Council on Foreign Relations, AEI, RAND, Carnegie Endowment for Intl. Peace, Cato, Institute for International Economics, Urban Institute, Family Research Council, and Center on Budget and Policy Priorities. Meanwhile, the following groups, which most would agree are more commonly known for activism than high-quality scholarship, had neither closed membership nor staff called fellows: ACLU, NAACP, Sierra Club, NRA, AARP, Common Cause, Christian Coalition, NOW, and Federation of American Scientists. 44 The third variable that we constructed is off K street. It is coded as a 1 if and only if the headquarters of the think tank or policy group is not located on Washington, D.C.’s K Street, the famous street for lobbying ?rms. 45 Recall that in the estimation process for Table III, we estimated individual aj ’s and bj ’s only for the 44 think tanks that the media cited most. All other think tanks were placed into one of six mega think tanks. It is not clear how one should code the quality variables for the mega think tanks. For example, should a mega think tank be coded as one if most of the actual think tanks that comprise it have closed membership? Alternatively, should it receive the average closed-membership score of the think tanks that comprise it? If so, should such an average be weighted by the number of times that the media cite the actual think tanks? Should instead such weights include the number of times that legislators cite it? Another complicating factor is that a few of the more minor think tanks no longer have web sites, which made it impossible for us to code the quality variables for them. Instead, we altered our analysis so that we only used data from the top 50 most-cited think tanks, and we did not include any mega think tanks in this analysis. These think tanks comprised approximately 88 percent of the media citations in our total sample. 
So that we are comparing apples with apples, we construct baseline estimates against which to measure the effect of the quality variables. These estimates, listed in column 5 of Table VI, use data only from the top 50 most-cited think tanks and do not include any quality variables as controls. Note that this specification alone causes the media to appear more liberal than in our main analysis: compared with the estimates of Table III, the average media score is approximately 2.6 points higher.

Next, we incorporate the quality variables into the likelihood function. In Table VI, column 6, we use a likelihood function that assumes that the probability that media outlet i cites think tank j is

(5)   \frac{\exp(a_j + b_j c_i + d_1 \cdot \text{closed membership}_j)}{\sum_{k=1}^{J} \exp(a_k + b_k c_i + d_1 \cdot \text{closed membership}_k)} .

The likelihood function still uses (2) as the probability that a member of Congress cites think tank j; that is, it sets d_1 to zero for the congressional observations. Thus, d_1 measures the extent to which a media outlet is more likely than a legislator to cite a think tank with closed membership. Columns 7 and 8 of the table give estimates when we substitute staff called fellows and off K street, respectively, for closed membership in (5). Column 9 gives estimates when we include all three control variables in the likelihood function.

As these columns show, including the quality variables causes the media scores to appear slightly more conservative. However, the change has very little substantive significance: in none of these specifications does the average score change by more than three ADA points. Further, the change is smaller than the effect of using only data from the top 50 think tanks. That is, when we compare these estimates with those in Table III, we see that if we (i) use data only from the top 50 most-cited think tanks and (ii) include the quality variables, the net effect of the two decisions is to make the media appear more liberal.

The change from including the quality variables also has very little, if any, statistical significance. For instance, for each specification listed in columns 6–9, we reestimated the likelihood function while constraining the media estimates to the values listed in column 5 (and allowing the estimates for the quality variables to reach their optimal values). Using a likelihood ratio test, even at p-values of 30 percent, we could never reject the null hypothesis that the quality variables cause no change to the estimated ADA scores.
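To make the role of d_1 concrete, here is a minimal sketch of the choice probability in (5), written in Python with invented parameter values; it is an illustration of the conditional-logit form, not our estimation code.

    import numpy as np

    def cite_prob(j, a, b, c_i, d1, closed, is_media_outlet):
        # Conditional-logit probability that an actor with ADA score c_i
        # cites think tank j, as in (5); d1 enters only for media outlets
        # and is set to zero for congressional observations.
        d = d1 if is_media_outlet else 0.0
        util = a + b * c_i + d * closed
        num = np.exp(util - util.max())   # subtract the max for numerical stability
        return num[j] / num.sum()

    # Invented values for three think tanks (a_j, b_j, closed membership_j):
    a = np.array([0.2, -0.1, 0.0])
    b = np.array([0.010, -0.015, 0.005])
    closed = np.array([1, 0, 1])

    # The same think tank, cited by a media outlet versus a legislator with
    # the same ADA score, showing how d1 shifts the citation probabilities.
    print(cite_prob(0, a, b, c_i=60.0, d1=0.4, closed=closed, is_media_outlet=True))
    print(cite_prob(0, a, b, c_i=60.0, d1=0.4, closed=closed, is_media_outlet=False))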
DEPARTMENT OF POLITICAL SCIENCE, UNIVERSITY OF CALIFORNIA AT LOS ANGELES
DEPARTMENT OF ECONOMICS AND TRUMAN SCHOOL OF PUBLIC AFFAIRS, UNIVERSITY OF MISSOURI

REFERENCES

Alterman, Eric, What Liberal Media? The Truth about Bias and the News (New York: Basic Books, 2003).
Baron, David, "Persistent Media Bias," Journal of Public Economics, LXXXIX, forthcoming (2005).
Black, Duncan, The Theory of Committees and Elections (London: Cambridge University Press, 1958).
Bozell, L. B., and B. H. Baker, That's the Way It Isn't: A Reference Guide to Media Bias (Alexandria, VA: Media Research Center, 1990).
Crouse, Timothy, Boys on the Bus (New York: Ballantine Books, 1973).
Djankov, Simeon, Caralee McLiesh, Tatiana Nenova, and Andrei Shleifer, "Who Owns the Media?" Journal of Law and Economics, XLVI (2003), 341–381.
Franken, Al, Lies and the Lying Liars Who Tell Them: A Fair and Balanced Look at the Right (New York: Dutton, 2003).
Goff, Brian, and Robert Tollison, "Why Is the Media so Liberal?" Journal of Public Finance and Public Choice, I (1990), 13–21.
Goldberg, Bernard, Bias: A CBS Insider Exposes How the Media Distort the News (Washington, DC: Regnery, 2002).
Groeling, Tim, and Samuel Kernell, "Is Network News Coverage of the President Biased?" Journal of Politics, LX (1998), 1063–1087.
Groseclose, Tim, Steven D. Levitt, and James M. Snyder, Jr., "Comparing Interest Group Scores across Time and Chambers: Adjusted ADA Scores for the U.S. Congress," American Political Science Review, XCIII (1999), 33–50.
Groseclose, Tim, and Jeffrey Milyo, "Response to '"Liberal Bias," Noch Einmal,'" Language Log, viewed December 20, 2004, http://itre.cis.upenn.edu/myl/languagelog/archives/001301.html.
Hamilton, James, All the News That's Fit to Sell: How the Market Transforms Information into News (Princeton, NJ: Princeton University Press, 2004).
Herman, Edward S., and Noam Chomsky, Manufacturing Consent: The Political Economy of the Mass Media (New York: Pantheon Books, 1988).
Hotelling, Harold, "Stability in Competition," Economic Journal, XXXIX (1929), 41–57.
Irvine, Reed, and Cliff Kincaid, "Post Columnist Concerned About Media Bias," Accuracy in Media, viewed September 17, 2001, http://www.aim.org/media_monitor/A900_0_2_0_C/.
Jamieson, Kathleen Hall, Everything You Think You Know About Politics . . . and Why You're Wrong (New York: Basic Books, 2000).
Judge, George G., W. E. Griffiths, R. Carter Hill, Helmut Lutkepohl, and Tsoung-Chao Lee, The Theory and Practice of Econometrics (New York: John Wiley and Sons, 1985).
Kurtz, Howard, "Fewer Republicans Trust the News, Survey Finds," Washington Post (June 9, 2004), C01.
Lichter, S. R., S. Rothman, and L. S. Lichter, The Media Elite (Bethesda, MD: Adler and Adler, 1986).
Lott, John R., Jr., "Public Schooling, Indoctrination, and Totalitarianism," Journal of Political Economy, CVII (1999), S127–S157.
——, The Bias Against Guns (Washington, DC: Regnery Publishing, Inc., 2003).
Lott, John R., Jr., and Kevin A. Hassett, "Is Newspaper Coverage of Economic Events Politically Biased?" manuscript, American Enterprise Institute, 2004.
McChesney, Robert, and Ben Scott, Our Unfree Press: 100 Years of Radical Media Criticism (New York: The New Press, 2004).
McFadden, Daniel, "Conditional Logit Analysis of Qualitative Choice Behavior," in Frontiers in Econometrics, P. Zarembka, ed. (New York: Academic Press, 1974).
Mullainathan, Sendhil, and Andrei Shleifer, "The Market for News," manuscript, Harvard University, 2003.
Nunberg, Geoffrey, "'Liberal Bias,' Noch Einmal," Language Log, viewed December 20, 2004, http://itre.cis.upenn.edu/myl/languagelog/archives/001169.html.
Parenti, Michael, Inventing Reality: The Politics of the Mass Media (New York: St. Martin's Press, 1986).
Povich, Elaine, Partners and Adversaries: The Contentious Connection Between Congress and the Media (Arlington, VA: Freedom Forum, 1996).
Sperry, Paul, "Myth of the Conservative Wall Street Journal," WorldNetDaily, viewed June 25, 2002, www.worldnetdaily.com/news/article.asp?ARTICLE_ID=28078.
Sutter, Daniel, "Can the Media Be So Liberal? The Economics of Media Bias," The Cato Journal, XX (2001), 431–451.
——, "Advertising and Political Bias in the Media," American Journal of Economics and Sociology, LXI (2002), 725–745.
——, "An Indirect Test of the Liberal Media Thesis Using Newsmagazine Circulation," manuscript, University of Oklahoma, 2004.
Weaver, D. H., and G. C. Wilhoit, American Journalist in the 1990s (Mahwah, NJ: Lawrence Erlbaum, 1996).
Woodward, Bob, The Agenda: Inside the Clinton White House (New York: Simon & Schuster, 1994).