Productivity and Impact of Optical Telescopes

In 2001, about 2100 papers appearing in 18 journals reported and/or analyzed data collected with ground‐based optical and infrared telescopes and the Hubble Space Telescope. About 250 telescopes were represented, including 25 with primary mirror diameters of 3 m or larger. The subjects covered in the papers divide reasonably cleanly into 20 areas, from solar system to cosmology. These papers were cited 24,354 times in 2002 and 2003, for a mean rate of 11.56 citations per paper, or 5.78 citations per paper per year (sometimes called impact or impact factor). We analyze here the distributions of the papers, citations, and impact factors among the telescopes and subject areas and compare the results with those of a very similar study of papers published in 1990–1991 and cited in 1993. Some of the results are exactly as expected. Big telescopes produce more papers and more citations per paper than small ones. There are fashionable topics (cosmology and exoplanets) and less fashionable ones (binary stars and planetary nebulae). And the Hubble Space Telescope has changed the landscape a great deal. Some other results surprised us but are explicable in retrospect. Small telescopes on well‐supported sites (La Silla and Cerro Tololo, for instance) produce papers with larger impact factors than similar‐sized telescopes in relative isolation. Not just the fraction of all papers, but the absolute numbers of papers coming out of the most productive 4 m telescopes of a decade ago have gone down. The average number of citations per paper per year resulting from the 38 telescopes (2 m and larger) considered in 1993 has gone up 38%, from 3.48 to 4.81, a form, perhaps, of grade inflation. And 53% of the 2100 papers and 38% of the citations (including 44% of the papers and 31% of the citations from mirrors of 3 m and larger) pertain to topics often not regarded as major drivers for the next generation of still larger ground‐based telescopes.


INTRODUCTION
The ideas that science has a reward structure that corresponds reasonably well to the value of the work and that one can be quantitative about how and how well the science is done are older even than the senior author and are associated with the name of one of the giants of 20th century sociology, Robert K. Merton (e.g., Merton 1942, 1969). Counting citations to a given paper in later papers as part of the quantitative process arose with Eugene Garfield's development in 1964 of the Science Citation Index (SCI), which he has described (Garfield 1979).
Helmut Abt (1981, 1985) was a pioneer in applying these methods to astronomy. His goal was to find out whether the large, publicly operated American telescopes of the time were yielding as much and as valuable data as the privately owned and operated ones. He made use of information available to him as editor of the Astrophysical Journal, counting papers and citations to them. The answer was (and still is) yes, in so far as papers and citations measure whatever it is one wants to know. We plunged into the field four years later (Trimble 1985), wondering how citation rates might differ among subfields of astronomy and between individuals. The main conclusions were that it paid to be a mature, prize-winning theorist working on high-energy astrophysics or cosmology at a prestigious institution. It also paid to be male, though not by much. There is probably still a good deal of truth in these observations. A larger set of American telescopes, papers published in American journals, and impact factors appear in Trimble (1995), and this was expanded (Trimble 1996) to consider all 38 optical and infrared telescopes with mirror diameters of 2 m or larger that had contributed to the world literature between 1990 January and 1991 June in 14 journals, and the citations to them in 1993. There were 1164 papers and 4052 citations, for an average annual rate of 3.48 citations per paper. The largest total numbers of papers and citations came from the CFHT and AAT, with the CTIO 4 m a close runner-up. The largest impact factors belonged to the University of Hawaii's 2.2 m (5.50) and the Multi-Mirror Telescope in Arizona (4.97), then the world's third largest collecting area. Telescopes in the Soviet Union and Eastern Europe typically had impact factors of one citation per paper per year or smaller.
More recently, studies focusing on one or two telescopes have become common (Meylan et al. 2004 on HST; Crabtree & Bryson 2001, 2003 on CFHT and UKIRT). These generally give full credit to the telescope under consideration for all papers (and the corresponding citations) that report or use data from them, even for papers with significant input from two or more (up to at least 12) different telescopes. Because there is some correlation among citation rates, paper lengths, and use of multiple telescopes, this method can easily give the impression that all our children are above average.
The present paper is intended as a decadal update of the 1990-1993 project. The main drivers were (1) curiosity about how HST and 8 m mirrors might have changed things, and (2) the unexpected advent of someone who was willing to carry out the portions of the work impossible for the senior author. The most important internal difference from 1990-1993 is the inclusion of all papers reporting or analyzing data from ground-based optical and infrared telescopes (plus HST), even those of very small diameters. All telescopes with mirrors of 1.86 m or larger were tracked individually, and smaller ones by size class or purpose.

METHODS
An analysis of this sort cannot be done in real time. The interval from first light to routine operation of a telescope is at least a year or two. Important programs generally require multiple observing runs. The papers must be written, shepherded through the refereeing and publication process, and then read and cited, with peak citation rates generally occurring a couple of years after publication. The decision to consider papers published in 2001 and citations from 2002 and 2003 was thus somewhat arbitrary, given our start date of 2004 May and completion date of 2004 September. For certain kinds of comparisons that you might want to make (but we do not), the VLT (Very Large Telescope) should therefore be thought of as consisting of one 8 m mirror (not four) and Keck as consisting of about 1.2 ten-meter mirrors (not two), based on the numbers of papers mentioning each. Gemini North, Magellan, and the Hobby-Eberly Telescope each appeared in a very few papers, but are not really part of the database. The Italian Galileo Telescope (TNG) in the Canary Islands is probably also not yet fully represented. At the other end of telescope lifetimes, the 2.5 m Hooker at Mount Wilson had nearly disappeared from the literature, and the 2.6 m at Byurakan, Armenia, entirely so.
The next decision required is which papers to look at. The thousand most cited over a number of years or those appearing in a very high profile journal are possible choices (Benn & Sanchez 2001; Sanchez & Benn 2004). Our choice of a complete year arose from the recognition that, throughout the sciences, citations are most often to average papers by average authors and are not concentrated either on a few acknowledged superstars or on the esoteric and obscure (White 2004; Zuckerman 1988).
With this in mind, V. T. went page by page through all the issues of 18 journals published in 2001 and identified all the papers that reported or analyzed data from any ground-based optical or infrared telescope, HST, or JCMT (the James Clerk Maxwell Telescope, which hosts the submillimeter SCUBA detector, an accidental interloper). The rule was simply that it had to be possible to discern from the paper itself which telescopes were involved. Thus, uses of the Palomar/ESO/SERC Schmidt sky surveys were credited to those telescopes, the Hubble Deep Field to HST, and 2MASS infrared samples to that project. In contrast, papers that drew a sample of QSOs from an existing catalog compiled from many sources were not, for instance, included.
The journals and paper yields were A&A (including Letters), …, Nature (17), Science (11), Acta Astronomica (9), Ap&SS (8, excluding conference proceedings), Astron. Nachr. (7), J. Astrophys. Astron. (3), and JRASC (2). Of these, Science, PASJ, Astron. Nachr., J. Astrophys. Astron., and Icarus were not included in the 1990-1993 analysis. Icarus, in particular, was added because the discovery of Kuiper Belt objects seems to have increased the number of solar system papers using large telescope data. The only journal lost was Astrofizika, published in Armenia. Journals that might have been included but were not, because they were not available in the UC Irvine library, include Revista Mexicana de Astronomia y Astrofisica, and Earth, Moon, and Planets.
For each paper, the following information was recorded: name of first author, number of additional authors, volume and page number, total number of pages, subject matter (about 25 possibilities, a few later merged), and the identity of all the telescopes contributing data to the paper in the order they were mentioned by the authors. For a few papers, it was not possible to determine which telescopes were used, and for a few others, the subject was unclear. These appear in the grand total averages only.
Assignment of subject was based on what the authors said they had in mind. Thus, a study of Cepheid variables or of supernovae would be counted as "stars" or "supernovae and their remnants" if the goal was an understanding of the objects themselves, but counted as "cosmology" if the primary goal was calibrating distance scales. QSOs were counted as "cosmology" if they were being used to probe the large scale distribution of Lya clouds or the intergalactic D/H ratio, as "galaxies" if the analysis pertained to the properties of the galaxies containing the gas responsible for absorption features, and as "AGNs" if the authors were focused on the objects themselves, their properties and processes. Several classes of papers, each small in number, reporting site conditions, calibrations of instruments, astrometric catalogs, and so forth were eventually collected as "service," but remained credited to the telescopes used. The projects kept separate from their particular telescopes included 2MASS, DENIS, SDSS, OGLE, MACHO, and a couple of other lensing surveys.
Authors P. Z. and T. B. then went to the online SCI database and recorded the numbers of citations to the first 2/3 (P. Z.) and last 1/3 (T. B.) of the papers published in 2001 that had been tabulated in 2002 and 2003. An overlap of about 100 papers at the interface showed consistency, except that T. B. perhaps found a slightly larger fraction of citations in which the citing author had gotten something wrong (volume, page numbers, initials). This seems to be about a 2% effect, comparable to uncertainties in assigning papers to the correct subject areas and telescopes.
The most difficult decisions were how to apportion papers and citations among the telescopes used for a single paper and which telescopes to keep full track of. The first decision was to give equal credit to all telescopes used for a paper, according to the authors. Yes, this sometimes meant 1/7th shares, although halves and thirds were much more common. HST+Keck and HST+VLT were frequent combinations, as were two, three, or four telescopes in the Canary Islands or at La Silla. Equal credit in this sense parallels what was done a decade ago, but is very different from what others have chosen when looking at individual telescopes. Citations were similarly divided equally, except that even the chief bean counter drew the line at assigning 1/7th of a citation to a 2.1 m telescope. Division was as equal as integers could make it, with one extra given to each of the first few telescopes mentioned in the paper to make up the total. Thus, 14 references to three telescopes were divided as 5, 5, 4. For a paper deriving from nine telescopes and getting only five citations, the last four had to cry wee, wee all the way home. This was actually quite rare.
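The integer division described above can be sketched in a few lines (the function name and interface here are ours, purely illustrative, not part of the original bookkeeping):

```python
def apportion(citations: int, n_telescopes: int) -> list[int]:
    """Split a paper's citations among its telescopes as equally as
    integers allow, giving one extra to each of the first few
    telescopes mentioned until the total is made up."""
    base, extra = divmod(citations, n_telescopes)
    return [base + (1 if i < extra else 0) for i in range(n_telescopes)]

# 14 references to three telescopes are divided as 5, 5, 4.
print(apportion(14, 3))  # [5, 5, 4]
# Nine telescopes sharing five citations: the last four get nothing.
print(apportion(5, 9))   # [1, 1, 1, 1, 1, 0, 0, 0, 0]
```

The shares always sum to the original citation count, so no fractional citations (or fractional zeros) ever need to be assigned.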
Which telescopes to keep track of individually? Obviously, all the ones with primary mirror diameters of 2 m or larger (for consistency with the earlier study) and a few interesting ones that just missed the cut: the 1.9 meters at Haute Provence and SAAO in South Africa (the former having risen to prominence with the discovery of extra-solar-system planets), the 1.8 meters at the historically important DDO and DAO, and the relatively new National Astronomy Observatory in Japan. There were 40 of those. Next, each site with a "large" telescope also received an "other" category, although we did not attempt to determine precisely how many 36 inch telescopes at KPNO or 1 m telescopes at La Silla were operating at any given time. There were 13 "other" groups. Astrometric telescopes, prototype interferometers, refractors, and automated photometric telescopes were collected as four groups. The special-purpose facilities and programs kept track of were 2MASS and DENIS (infrared surveys), OGLE, MACHO, EROS, and MOA (searches for gravitational microlens events that also reveal eclipsing binaries and other variables), and the 48 inch Schmidt surveys of the northern and southern skies, done under the auspices of the Palomar Observatory, ESO, and SERC (UK). Finally, all the rest were grouped by mirror diameter, as 1.6-1.84 m, 1.5-1.6 m, 1.0-1.49 m, and less than 1 m.
How many telescopes are represented in the 2001 literature? About 250, but this has an uncertainty of ±10-15. Did the three small Greek telescopes used in one paper include or not include the 0.3 m in Crete used for another? Just how many Meade 14 inch 'scopes do the Backyard Astronomers deploy?
And so forth. At the 2 m level there are no ambiguities: Cananea = Haro, a 2.1 m in Mexico. Catania is something smaller in Italy. The 1.5 m class ones are not so clear. We think Cassini = Loiano = Bologna, though sometimes described by authors as 1.52 m and sometimes as 1.5. Ambiguities of this sort make no difference unless you are trying to make the case that a given number of square inches of glass is most effective when divided one way rather than another. We do not attempt this, being reasonably sure that the quality of the site, the level of support, and the ingeniousness of the users are all more important.
In what ways are these choices more or less fair than others? (1) Selecting papers from a single year of the literature is bad for telescopes that may have been having problems a few years before. In a world where none of the three of us had other responsibilities, we could provide some sort of running mean based on 3-4 years of publications and their subsequent citations.
(2) Choosing citations only 2-3 years post-publication discriminates against survey telescopes and projects whose worth to the community unfolds over a very long period of time. The Palomar Observatory Sky Survey of the 1950s is a golden example. (3) The choice of citation numbers from the SCI database discriminates against anything that was published, or significantly cited, in some language other than English, or whose largest impact is for some reason to be found in conference proceedings. Work done on telescopes in the former Soviet Union and in Eastern Europe will be affected, and the senior author is engaged in a perpetual campaign to bring the Bulletin of the Astronomical Society of India into the fold. But there is no better data set available. ADS numbers seem to be somewhat less complete, given that we find many fewer zero-citation papers than do Meylan et al. (2004). (4) Assigning equal credit to each telescope used in each paper is surely unfair to some facility in every multi-telescope paper, but probably not to the same ones in all cases. Full credit to every telescope used would be unfair in the opposite direction. The global truth, if there is any to be had, must fall above the credit we give for the most useful telescope in a given paper and below the credit we give for the less useful ones. It would, we suspect, be counterproductive to expect every author team to include in their methods section just how much of the information came from each contributing telescope.

RESULTS
These are grouped under headings that reflect what may be common misconceptions.

Most Papers Aren't Read by Anybody
This may actually be true, but there were only 133 (6.3%) for which no citations had been recorded. Science, Nature, and Acta Astronomica published no such papers (out of 11, 17, and 9, respectively), while there were 10% or more uncited papers in Astronomische Nachrichten and Astrophysics and Space Science (which may not surprise you) and Icarus (which surprised us). We do not report citations per paper as a function of the journal in which the paper appears. To do so invariably invites discussion (not very useful or enlightening) on whether Americans are more parochial than other astronomers. In fact, papers in any given journal are more likely to cite other papers that appeared in that journal than a neutral mix would dictate (Trimble 1993; Garfield 1988).
Counting zeros (let alone assigning, say, 1/6th of a zero to a particular telescope) is apparently not quite so easy as you might suppose. The 133 uncited papers are not, of course, uniformly distributed among telescopes. HST, for instance, scores (prorated) 10 2/3 out of 346 1/3, or 3.1%, very close to the value to be read from Figure 5 of Meylan et al. (2004). But for astronomy as a whole, and a set of journals reasonably close to ours, they find 25% of all papers uncited 2-3 years after publication, compared to 6.3% of the optical observation papers compiled here. Perhaps one just has to conclude that there are an awful lot of under-recognized theorists out there, or perhaps that their database has missed a good many citations in obscure journals that give relatively obscure papers 1 to 5 each, rather than zero.

Well, At Least We're Holding Our Own
This seems to be true on average for the 38 telescopes considered in 1990-1993, in the following sense. The number of papers found then was 1163 in 18 months of journals, or 775 for 1 year. And it was 735 for 2001 from the same 38 telescopes and a slightly larger set of journals. The mean impact factor has actually gone up, from 3.48 to 4.81 citations per paper per year, which is perhaps a sort of grade inflation arising as ever more cautious authors strive to cite ever larger portions of the potential referee pool.
The leaders of a decade ago are not quite holding their own. A bit of recounting is needed, because at that time credit for papers and citations was divided equally among all the "large" telescopes involved, and ones of less than 2 m diameter were simply ignored. Thus, we recounted the 2001 sample the same way. Table 1 shows the results for the leaders of 1990-1993 (CFHT, AAT, and the CTIO Blanco, plus the NOAO Mayall, ESO 3.6 m, Palomar and Lick and, out of idle curiosity, UKIRT and IRTF). Remember that the factor 1.5 accounts for 18 months of papers in 1990-1991 and 12 months in 2001. It seems that the 4-5 m optical telescopes are not yielding as many papers as they did a decade ago. Lick has fared even worse. Crabtree & Bryson (2001) had spotted this for CFHT and attributed it to a change in focal plane instrumentation that makes data reduction and processing more complex. But it seems to be a part of a general trend, except that the two infrared facilities, IRTF and UKIRT, are now contributing more papers than before. The changes in both directions are ±10%-20%, and you might want to blame small-number statistics despite the generality of the phenomenon.
Please choose your own explanation before reading ours, which is that important projects that would once have been carried out on the 4 m telescopes are now done more efficiently with 8 m class mirrors. If so, then, a few years hence, the IR-optimized Gemini should also have clobbered UKIRT and IRTF.

500 Pound Gorillas
The most notorious of these is, of course, the Hubble Space Telescope, though we might suggest that it is only a 400 pound gorilla, who has lunch with almost anyone he wants to. Meylan et al. (2004) reported about 455 papers published in 2001, giving full credit for every paper that used HST data in any way. With credit divided among all telescopes used, this drops to 346.3 papers in 2001. It is also possible that there were a few papers whose use of HST data could not be discerned just by reading them, but the custom of mentioning HST and often STScI in acknowledgements as well as footnotes, abstracts, and main text means these are few.
These 346.3 papers were cited 4747 times (again shared among all telescopes used for each paper) for a mean number of 13.73 citations per paper, compared to 11.56 citations per paper for our full sample. Meylan et al. (2004, their Fig. 3) show how the mean total number of citations per paper grows with time, and their number for "three years since publication" is about 12.5, so we are all measuring more or less the same thing.
There is no doubt that the gorilla paper of 2001 was Freedman et al. (2001), the summary of the HST Key Project on cosmic distance scales and the Hubble constant. It weighed 370 pounds (sorry, citations) as Meylan et al. (2004) went to press, and 443 by the time we looked. There is then a factor of more than 2 drop to the second most cited paper. This is a steeper drop than would be expected from Zipf's law (Zipf 1949, and various prediscoveries, including Lotka 1925), which describes various rank-ordered distributions, like GNP per country from large to small, members per scientific society from many to few, and citation rates of papers from high to low. Table 2 lists the chimpanzees and baboons by decreasing numbers of citations, telescope, journal, and subtopic. These top 16 papers account for 10.5% of all the citations.
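For concreteness, a classical Zipf law (size proportional to 1/rank) normalized to the 443 citations of the top paper would predict a runner-up near 221, whereas the observed second-ranked paper sits below that, i.e., the drop is steeper than a factor of 2. A minimal sketch, with the exponent s = 1 being the textbook choice rather than anything fitted to these data:

```python
def zipf_expected(top: float, rank: int, s: float = 1.0) -> float:
    """Expected size of the item at `rank` under a Zipf law with
    exponent s, normalized so rank 1 equals the observed `top` value."""
    return top / rank ** s

# Normalized to the 443 citations of the 2001 top paper:
for r in (1, 2, 3, 4):
    print(r, round(zipf_expected(443, r), 1))  # 443.0, 221.5, 147.7, 110.8
```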
At the level of 50-100 citations in the 2 years, there are 34 papers coming from a wider range of telescopes and addressing a wider range of subjects, including star clusters, white dwarfs, brown dwarfs, exoplanets, and the structure of the Milky Way.

I Don't See Very Many Interesting Papers There
This is a (bowdlerized) version of words spoken about a high-impact journal by a colleague who specializes in, shall we say, the lithium content of extra-solar-system comets. Yes, the superstar papers come from a small range of subdisciplines that do not fully reflect what many members of our community actually do for a living. Table 3 shows numbers of papers, citations, and impact factors for the somewhat arbitrary 20 subdisciplines. The last two columns are fractions of all the published papers that pertain to that subdiscipline and the fraction of all papers deriving from 3 m and larger telescopes that pertain to that subdiscipline.
Indeed, there are topics that rank high or low in impact factor compared to the average of 11.56 citations per paper in 2 years. There is not a strong correlation between impact and the size of the community working on the topic, if total numbers of papers are a suitable proxy for community size. Compare galaxies (many papers, high impact) with stars (many papers, lower impact), and gamma-ray bursts (few papers, high impact) with solar system, cataclysmic variables, and other binaries (intermediate numbers of papers, low impact). Obviously, much of the difference derives from different customs in the subdisciplines about how many papers you should cite in yours and, probably, how long papers should be. We intend to collect data on this topic later, and perhaps the information should be used as some sort of normalization factor. But if one corrects for all systematic effects, the result will be to make all papers average, in somewhat the same way that fine-tuning insurance rates for risk factors will result in each family paying its own medical expenses plus 20% expenses and profit to the insurance company. In the interim, we note that the mean length of galaxy papers is 17 pages, and of star papers 11 pages, excluding "letters" in each case.
The topics whose numbers are starred in the last two columns are those generally not considered as major goals for the next generation of very large optical telescopes (National Research Council 2001). These include gamma-ray bursters and active galactic nuclei for their own sakes (vs. as cosmic probes), individual stars beyond the formation stage, binaries, cataclysmic variables, planetary nebulae, supernova remnants, and all. They make up 53% of the total paper inventory and 46% even of papers from the larger telescopes.

Table 4 is, at long last, the compilation of papers (published in 2001) and citations to them (from 2002-2003) credited to 47 individual telescopes and 33 classes. HST comes first, as befits an alpha primate. The rest are grouped, first by site or administrative unit, ordered by the largest mirror in each (and with the telescopes at a given site also more or less ordered by size). Next come special purpose facilities and their products; and the smaller telescopes, grouped by size, bring up the rear.

My Telescope Can Beat Your Telescope
There is, we suspect, something to offend every observer and observatory director in these numbers. And statisticians can be offended that nearly all the ratios of citations per paper are given to three or four places, when only two are justified. Clearly, big mirrors trump small ones in general. Well-supported sites with multiple telescopes outperform ones maintained under more difficult circumstances. A long-standing tradition of optical astronomy in the supporting and host countries probably also helps. Some sites are obviously also intrinsically better than others in weather, seeing, and so forth. Large telescopes tend to be built in good places (though Mount Maidanak is apparently very good), confounding causes. But the well-known deterioration of Mount Hamilton could be a factor in the 120 inch having slipped further than the 4-5 m telescopes of Table 1.
Rightly or wrongly, the Sloan Digital Sky Survey rather quickly has replaced the Australian Two-Degree Field (2dF) and Las Campanas redshift surveys as the source of many data samples analyzed by astronomers not closely connected to the original surveys. And there are many traces of the fact that the US controls a big chunk of observing facilities, grant money, and journals.
The 15 m James Clerk Maxwell Telescope on Mauna Kea would be indistinguishable from a 4 m optical telescope, if you didn't know what it did. Compare, for instance, the JCMT and CFHT numbers. One implication is that analysis of this sort could easily be extended into the regime of ground-based radio astronomy, since those antennas generally last as long as optical telescopes.
Extension to space-based observatories is much more difficult, unless they have the lifetime of an HST. In 2001, large numbers of papers reported or used data from ISO or Chandra. IUE and ROSAT were gone, but data were still being reported. CGRO was nearly silent under the ocean, and XMM-Newton was not yet quite up to strength. A different method of counting papers will be needed to make even approximate comparisons of productivity and impact between short-lived missions and between those and ground-based installations, which almost never end up beneath the seas (with a small correction for the observatories of Atlantis).

CONCLUSIONS AND FURTHER WORK
In 2001, about 2100 papers published in 18 refereed journals reported or analyzed data from optical telescopes (including HST, UKIRT, IRTF, and by accident JCMT). These were cited 24,354 times in 2002 and 2003 in journals that form part of the Science Citation Index database. Quite generally, expensive telescopes produce more papers and more frequently cited papers than less expensive telescopes. The expensive class comprises HST, 8 m mirrors, and to a lesser extent smaller mirrors on well-supported sites (Gilmozzi & Melnick 2004, who also report paper numbers for several more telescopes, counted in the full-credit-to-each mode).
Other trends include: (1) citation rates for various subdisciplines ranging from more than 30 (cosmology) down to fewer than five (binary stars), (2) the most productive 4 m telescopes of a decade ago having not quite held their own in total productivity, (3) the two established infrared telescopes having more than held their own, (4) a sizable number of papers (567, or 27%) coming from telescopes of less than 2 m diameter and receiving an average of 6.06 citations per paper (14% of the total), (5) larger impact factors for small telescopes (less than 2 m) that share well-supported sites with larger mirrors (8.77 citations per paper) than for those that stand alone or share poorly supported sites (4.14 citations per paper), and (6) sizable citation rates for redshift surveys (SDSS but also 2dF) and microlensing search papers.
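As a consistency check on item (4), the quoted pieces can be multiplied back together (all numbers are those given in the text):

```python
# Internal consistency of item (4): 567 papers from sub-2 m telescopes
# at 6.06 citations per paper, against the 24,354 citations overall.
small_papers = 567
citations_per_paper = 6.06   # mean over 2002-2003
total_citations = 24_354

small_citations = small_papers * citations_per_paper
share = 100 * small_citations / total_citations
print(round(small_citations))  # 3436 citations
print(round(share))            # 14 (percent), matching the quoted 14%
```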
What else might one look at with these data? (1) Correlations of citation rates with numbers of authors and lengths of papers; these were positive in the past (Abt 1984, 1985; Trimble 1995, 1996) and presumably remain so. (2) Citation rates for various journals as an indicator of the American multi-pound gorilla. (3) Citation rates versus nationality of authors (ditto). These could be done without further journal mining. Additional data collection would be needed to look for possible effects of customs in different subdisciplines concerning how many citations there should be in each paper (a number that has increased enormously with time; just look at an ApJ or MNRAS from the 1920s, if you doubt this). Expansion into the radio regime would also require further work, but no new ideas.
The referee, whose underground axes are different from ours, asked for further consideration of the starred topics in Table 3, those generally not listed among the science drivers for the 20-100 m optical telescopes of the future. The accompanying remark that "stars" encompass a wide range of research topics with very little overlap and very few "big questions" that are being pursued is a sentiment with which we simply disagree. The point being skirted around was that non-driver subjects get rather little large telescope time now and don't make good use of what they get. Table 5 contains some of the numbers requested, but it is important to recognize that causality could flow in either direction. At least from the time of the Bahcall report (National Research Council 1991), emphasis on some branches of astronomy over others (about which V. T. protested at the time) has constituted handwriting on the wall, and it now takes a fairly independent-minded or unambitious young astronomer to decide to make a career out of cataclysmic variables or peculiar A stars. The comparison of "stars" and "galaxies" indeed shows that star papers are not only "undercited" in general, but contribute an even smaller fraction of the total citations (15,252 prorated as usual) to work on telescopes of 3 m and larger. The numbers for gamma-ray bursts and neutron stars (etc.), however, show just the opposite, with more than their proportionate share of large optical telescope (LOT) citations. The large topic of active galaxies (for their own sakes, not as probes of cosmology or intervening stuff) doesn't do as well as non-active galaxies in LOT citations, but it accounts for 36% of all citations to SDSS, a major program of long-term importance.
Looking ahead, we suppose that a sample of papers published in, say, 2005 and cited in 2007-2008 might be examined for effects of queue scheduling and use of adaptive optics and the impact of Gemini North on smaller IR telescopes. Still further ahead, the competitiveness of "light bucket" style telescopes (HET and SALT) might be evaluated.
Division of fractional credit among telescopes cannot be done in proportion to the information coming from each. The vast majority of papers do not (and probably should not) provide the requisite information.
Extending the analysis of papers and citations to space-based missions will require some very different way of counting, because most of them don't last very long. One might select the year of maximum productivity of each mission and put it up against a recent good year for the various ground-based facilities. Thus, for instance, choose ISO papers from 1999, Chandra from 2001, and XMM from 2002, giving them the fractions of the papers and citations 2-3 years later that they have "earned," while ignoring the fractions from other facilities in those papers, unless it is also "their year." Not impossible, but a decade or more of papers would have to be examined, with more than 2000 papers per year, given the wider wavelength coverage, and the citations then chased down.
We are grateful to Major Dawn and Colonel Jim Deshafy of the US Air Force Reserve, who, by remarkable chance, brought the first two authors together, and to Kip Thorne, who not only suggested to the third author that UC Irvine might be a good place for her graduate work, but who also formulated a rule for what almost-Ph.D.s should do between writing their theses and being examined on them that brought her into the collaboration. Dr. Richard Green recommended, and NOAO provided, a generous contribution to our page charges.