Anthropology and Computing: The Challenges of the 1990s

Douglas R. White and Gregory F. Truex

The use of computers in the analysis of fieldwork, texts, and socioculturally embedded systems is examined in relation to advances in problems of cultural anthropology, including synthesis of trait with textual and network analysis within a more general conception of cultures as distributed systems. Keywords: ethnography, cognitive testing, representation and inference generators, hypertext and text analysis, hypermedia, spatial autocorrelation, scaling, consensus analysis, modular and didactic programs.
Anthropology as a field is sharply distinguished from other disciplines, not only by the richness and multiplexity of its data, but by the way in which these data are collected. More than in any other social science, fieldworkers in anthropology are central to the entire process of data acquisition. Each anthropologist assembles an intricately layered body of data from human settings, whether from living populations or from archaeological sites. He or she is the chief instrument of data collection, assessment, and review, as an intensely sensitive recorder and interpreter of events and experiences and as a decision maker in research.
The role of the computer in the analysis of field data therefore remains unsettled and, for some, unsettling. Many anthropologists are incredulous that computers should become central to the discipline; others have feared that the introduction of the computer to the center of the field will inevitably decenter the anthropologist. However, it is very clear that anthropologists at one level or another increasingly face problems that make the use of computing imperative for many aspects of their craft. Disciplinary computing in the 1980s builds heavily on the work of earlier decades but simultaneously seeks to redress weaknesses and refine anthropological conceptualizations of earlier periods. Here we raise these conceptual questions as the foreground of our review, focusing on issues central to cultural anthropology but relevant as well to physical, archaeological, and linguistic anthropology. The importance of the computer has increased with the growing sophistication of and reflexivity in anthropological theory. Anthropologists have learned to view with suspicion claims as to the nature of cultural realities and to suspect that the commonsense world is complex and not directly penetrable. Consider, as the simplest case, that the practice of anthropology is a concern with ways of accounting. Anthropologists give accounts of what people say, what they do, of the incongruities between saying and doing, and of the commonalities or divergences in symbolic representation by different actors, including those between native collaborator and observer. Rarely do they encounter "the" other in contemporary textual exegesis; rather, anthropologists generally encounter multiple concrete others and compare and contrast the texts of and about many persons. They are increasingly skeptical of one-sided narratives.
The number of people they encounter is often relatively large, and various models can be constructed and tested to explain variations recorded in various modalities.
Moreover, culture as a whole is implicated at many contrasting levels of analysis. The anthropologist wants to elicit patterns at different scales of spatial resolution, such as the interface between civilizations (world-systems) and local communities; cultures embedded in their ecologies; social networks comprising the local frameworks of social action (hence of social structure and organization); distributed knowledge; rhythms of action and discourse; stereotyped patterns of behavior, including emotive reactions and considered response; and the psychological and biological configurations of individuals (the latter the domain of physical anthropology). The timeframes of the anthropologist's analysis vary, too, from immediate event sequences, event clusters, daily rhythms, and the weekly, monthly, seasonal, or yearly cycles observable by fieldworkers, to longer term processes such as year-to-year economic and migratory fluctuations, cycles of political succession and warfare, intergenerational succession, ecological cycles and historical change, cycles of cultural formation and dissolution, even the amalgamation and succession of civilizations.
The problems of representing elements and their relationships in the data field (involving levels, embeddings and contexts, relationships, contingencies) are more intricate than in other social sciences, and the subjects are as discerning as the observer. In discerning and validating complex patterns and their intertwined relationships, the anthropologist needs to make explicit and test a great deal. The computer is a means of analyzing, if not displaying, these various temporal and spatial patterns.
Before characterizing more precisely the role computers and computation play in anthropology, we must clarify what anthropologists attempt to do as scientists. The folk model of anthropology suggests in essence that field anthropologists gather data, both observations and interviews, on which to base analysis and general statements about a given culture (in fact, much ink has been spilled over the adequacy of such data as reflections of the cultural reality to which they are connected in a complex, interpenetrating world). Implicit in much discussion of the use of computers has been the fact that they greatly facilitate the acquisition, recording, storage, and transport of such data (Werner & Schoepfle, 1987b, pp. 43-71).
Word processors, database managers, and spreadsheets dominate as the instruments with which this kind of anthropology is done. Though useful, this form of computation represents only a limited part of the computer's potential impact on the scientific anthropological enterprise (Werner & Schoepfle, 1987a, pp. 237-238, 382-383). Some suggest that the linearization process, through which ethnographies are produced from raw data, is the primary source of cultural distortion (Howard, in press). A multiplexity of less mediated data and less manipulated media, coupled with less constrained presentational orderings, is seen to be a major benefit of new technologies. White (1972, in press) has emphasized the importance of concatenated models and the need to reconcile the concerns of historical particularism with universal perspectives on human behavior. While Howard's "hypermedia" seem somewhat remote, currently available hypertext programs (e.g., PC-HYPERTEXT and OWL GUIDE) present practical options. Themes and recurrent patterns can be found through simultaneous access to multiple data sources.
If, however, one thinks that the major task of anthropologists in the field is to collect information with which to create the stereotyped facts that are the actual subject of scientific discourse, then the potential advantage of using computers and computation is even greater. Much of the computer work reviewed here facilitates the investigation of traditional anthropological issues by enabling researchers more effectively to carry out some of the more programmatic agendas for fieldwork.
With computers data can be chunked and rechunked for input into very different analytical schemes, subjected to widely divergent perspectives, and disseminated in a variety of formats and styles. This quantum leap in throughput leads to the illusion that the data stream captures reality itself. Our view is that computation has not brought field data into a closer relationship with the cultural realities with which they deal. It is true nevertheless that anthropologists, like the natives themselves, must use these data (stories and stories about stories) to account for cultural reality. They need to be able to approach them in a structured and systematic manner, to use the data in multiple models. What has changed, a little, and promises to change a lot, with the most profound impact on the future of anthropology, is the ability of anthropologists, through continuous feedback, to construct, manipulate, modify, and amplify their accounts.
Validity in Internal and External Accounts

One of anthropology's great strengths and sustained contributions to the social sciences is its having founded theories not only on measurements and data collected by external observers but on cross-validation with local meanings and the views of participants in the phenomena observed. An example of advances in cross-validation made possible by computing follows.
Twenty years ago, Truex (1968) attempted a field-based account of the incipient class structure of a Zapotec village. While in the field, he surmised that the census of material assets he was organizing corresponded in remarkable ways to the natives' claims about invidious social distinctions in the village. Upon returning to the university, Truex used Fisher exact test tables and other archaic statistical analyses to demonstrate a broad correspondence between certain assets and the villagers' social classification. Recently, Sue Weller (interview, May 1988), studying health-seeking activities in a malarial area of coastal Guatemala, used the microcomputer to formulate a pertinent definition of social class that was an important part of the ethnographic input into a World Health Organization project on health seeking, behavior that is often based on relative access to resources.
For the anthropologist, the precise role social class plays in social behavior, while undoubtedly important, is highly problematic. Distinctions about relative social and economic standing are obviously made not just by observers but internally and often invidiously, in the case of Weller's Guatemalans, from the rich to the poorest. In what sense do such distinctions constitute a folk definition of class? Could native claims of relative access to resources be used to understand how people organize and carry out their search for alleviation from pain and disease?
Rather than simply perform the standard epidemiological social survey, using explicit but exogenous variables, Weller produced her account from endogenous variables she elicited directly from native informants. Computer-based cognitive testing allowed Weller to evaluate the structure of the natives' perception of social class relations. She constructed a culturally sensitive index of social class by correlating survey and folk data. She found, for example, that educational attainment did not ensure access to resources, while the class index did, and the natives agreed. This index could then play a significant role in further research on health-seeking behavior.
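The shape of such a cross-validation, correlating a survey-derived measure with a folk-derived one, can be sketched in a few lines. The households, asset counts, and rankings below are wholly invented for illustration; this is only the form of the computation, not Weller's analysis.

```python
# Hypothetical sketch: correlating a survey-based asset index with
# informants' folk rankings of household standing. All data are invented.
from math import sqrt

def pearson(x, y):
    """Plain Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# One row per household: counts of three asset types from a census.
assets = {"h1": [2, 1, 0], "h2": [5, 3, 1], "h3": [1, 0, 0], "h4": [6, 4, 2]}
asset_index = {h: sum(v) for h, v in assets.items()}

# Mean folk ranking of each household's standing (higher = better off).
folk_rank = {"h1": 2.0, "h2": 3.5, "h3": 1.0, "h4": 4.0}

households = sorted(assets)
r = pearson([asset_index[h] for h in households],
            [folk_rank[h] for h in households])
print(round(r, 3))
```

A high correlation, computed while still in the field, is exactly the kind of preliminary finding that can be taken back to informants for confirmation.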
In the past, anthropologists could opt for a little of both approaches by organizing a survey and by interviewing extensively. Back from the field, they would evaluate the relation between the two, sometimes quantitatively. Their ability to present preliminary findings to native collaborators was limited by time and money constraints; thus, the brunt of establishing the supposed relationship between survey and folk data was borne by the anthropologist, who must then present an uncertain account, giving the best brief possible. If the natives heard the account, they might laugh or, worse, shrug their shoulders. To say simply that they had another version of their own ethnography was unsatisfactory. As translators between data, for which we bear responsibility, and their reality, the natives must at least take the account seriously; but the interface of accounts also needs study.
A higher standard calls upon the anthropologist to provide something in the accounting process that ordinarily is not available to the natives themselves. To succeed we must say something about the natives' reality that satisfies their criteria, and that contains levels and connections of which they may be unaware. A proper analysis may also provide insight into the native's reality that the anthropologist has not accessed in the field (Agar, 1980, pp. 77-81). Computers promote a participatory relationship between anthropologists and native collaborators. Salinas Pedraza (1978), Russell Bernard's Otomi colleague, produced, within the general framework of Murdock's (1938) Outline of Cultural Materials, an account of his own culture that could be both searched and rechunked by computer. When this chunking and rechunking occurs in the course of actual fieldwork, as in Weller's work, the outcome is a research enterprise that is manifestly richer for all concerned.

Representation and Inference
The difference that the computer is making has implications for both representation and inference. The procedures for assembling the texts from which anthropologists draw their accounts are changing profoundly. Texts have been treated as somewhat sacrosanct by field anthropologists; analysis of them has been viewed as a necessarily distorting process. The "facts" of scientific discourse are filtered from inchoate, unexamined transactions between anthropologist and collaborator. The computer forces us now to recognize that texts themselves are variable, not "real," and that our apprehension of them can and should be probed in its own right.
The use of computers in the field not only promises but has already begun to integrate the collection of data, constructions of accounts, and formulations of higher level explanations on site. Michael Fischer has embarked on a program of fieldwork in Pakistan that, through the use of artificial intelligence, integrates both the knowledge (data-) base and the inferential mechanisms that create the database. The process is interactive; native research collaborators validate findings in the field (Fischer, 1987a, 1987c, 1987d). There is much promise in this line of work. Dwight Read and Clifford Behrens (1988) have developed the KINSHIP ALGEBRA EXPERT SYSTEM (KAES), which guides the process by which formal models of kinship terminology systems are produced. Many programs already exist for handling genealogical data (see O'Neil, 1987a, 1987b), but these types of programs are mainly for data management, not analysis. Entailment analysis (White & McCann, 1988; Burton, Brudner, & White, 1977), developed by Douglas White, scans binary data on the presence or absence of features in a corpus of occurrences for multiple embedded Guttman scales as well as rules of co-occurrence, embedding (if ... then ...), or complementary distribution, that form inference structures. David Heise (1988) has produced a more generalized program for analysis of qualitative data, ETHNO, which produces graphs for studying conceptual structures and grammars for interpreting events. Though analytical, ETHNO is not field oriented; validations and refutations occur through examination of the data, although they could also occur through field inquiry and informant consultation.
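The core scan in entailment analysis can be illustrated on a toy dataset. The sketch below is not White's ENTAILMENT code; it simply shows, on invented binary data, how exceptionless implications (if A then B) and complementary distributions can be read off a presence/absence matrix.

```python
# Illustrative sketch of the kind of scan entailment analysis performs
# (not White's actual program): given binary presence/absence data, find
# feature pairs related by implication or by complementary distribution.
def entailments(rows, features):
    """rows: list of dicts mapping feature name -> 0/1."""
    implies, complementary = [], []
    for a in features:
        for b in features:
            if a == b:
                continue
            has_a = [r for r in rows if r[a]]
            if has_a and all(r[b] for r in has_a):
                implies.append((a, b))        # a -> b holds without exception
            if a < b and not any(r[a] and r[b] for r in rows):
                complementary.append((a, b))  # a, b never co-occur
    return implies, complementary

# Toy corpus: four cases scored for three invented cultural features.
data = [
    {"plow": 1, "cereal": 1, "matriliny": 0},
    {"plow": 0, "cereal": 1, "matriliny": 0},
    {"plow": 1, "cereal": 1, "matriliny": 0},
    {"plow": 0, "cereal": 0, "matriliny": 1},
]
imp, comp = entailments(data, ["plow", "cereal", "matriliny"])
print(imp)
print(comp)
```

On these invented cases the scan recovers the rule "if plow then cereal" and flags cereal and matriliny as complementarily distributed; chaining such pairwise rules is what yields the embedded scales and inference structures described above.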

Variability and Stereotypy
The integration of the perspective of individual actors in anthropological accounts has played an important role in much theorizing over the last several decades. Issues of psychological reality and etic versus emic viewpoints have given way to the systematic measurement of variation.
Cognitive anthropologists, among the first to exploit computers in cultural anthropology, have developed a range of methods now being elegantly implemented on microcomputers that can be taken to the field. James Boster, for example, has created SIMPAK, a set of programs that produces randomized triads, including complete and incomplete balanced block designs. By using a subject identification as the randomizing seed, the program designs scrambled orders for each subject, avoiding position bias, and unscrambles the triads for analysis.
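The seeding trick is easy to sketch. The code below is not SIMPAK itself, only a minimal illustration, under invented names, of how a subject identifier can seed the scrambling so that each informant's triad order is unique yet exactly reproducible for later unscrambling.

```python
# Minimal sketch of subject-seeded triad randomization (not Boster's
# SIMPAK): the subject ID seeds the generator, so each informant gets a
# distinct but reproducible scrambled order, guarding against position
# bias while letting the analyst regenerate and unscramble it later.
import random
from itertools import combinations

def triads_for_subject(items, subject_id):
    rng = random.Random(subject_id)        # subject ID is the seed
    triads = list(combinations(items, 3))  # complete triad design
    rng.shuffle(triads)                    # scramble order of triads
    return [tuple(rng.sample(t, 3)) for t in triads]  # scramble within triads

terms = ["brother", "cousin", "uncle", "father", "nephew"]
t1 = triads_for_subject(terms, "subj-017")
t2 = triads_for_subject(terms, "subj-017")   # same seed, same order
t3 = triads_for_subject(terms, "subj-042")   # another subject, new order
print(t1 == t2)
```

Because the order is a pure function of the subject ID, no separate scrambling key need be stored: regenerating from the ID unscrambles the responses for analysis.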
The significance of such computer techniques is most apparent in the field, where the researcher can analyze individual variation to find convergent validity for models and these models in turn can open the way to questions that otherwise could not have been formulated. For example, knowing while still in the field that Zapotecs lump male cousins and brothers would have made possible both confirmatory consultation at the ethnographic level and exploration of intriguing economic and network relations, such as land co-tenancy (Truex, 1981).
Modeling cognition in a complex rule environment presents remarkable difficulties, which computers are beginning to help straighten out. The tremendous effort in the 1950s and 1960s to formalize anthropological domains, like systems of folk knowledge (ethnoscience and ethnosemantics), kinship, behavioral schemata, and decision rules, relied on assumptions concerning cultural sharing and on oversimplification of cultural modalities. One of the works that topped this precomputer genre was Berlin and Kay's (1969) demonstration that the way the lexicons of different languages divide the color spectrum (purportedly the example par excellence of cultural relativity) in fact exhibited universal rule-governed properties.
The issue of intracultural variability was raised much earlier (e.g., by Roberts, 1951, 1987) but was not suited to analysis until the advent of anthropological computing in the 1960s. The old absolutes of formal analysis eventually gave way to statistical reduction techniques based on empirical theories of cognition (e.g., Romney, Weller, & Batchelder, 1986). Other transitions were more difficult. To deal with individual variations in color terminologies, Berlin and his associates at the Quantitative Anthropology Laboratory used a wide range of computer techniques and programs to help define interaction of perception and culture. They and others gave color terms the most extensive treatment any domain (except perhaps kinship) received in anthropology.
Paradigms Lost

But if computers have made it possible to redefine such issues as kinship and color systems, they have also undermined the paradigms that informed earlier work. While the salient color terms in all cultures are monolexemic, and monolexemes corresponding to the panhuman physiology of color perception are efficiently distributed throughout human languages, the salience of polylexemic color terms in individual expression may be quite high. Informant variation in color coding makes problematical the validity of assumptions concerning cultural markedness. The perceptions of informants need to be addressed in themselves. The complexity of the problem appears to have arrested this research at the dissertation and working-paper stage. Because of their importance for theory at this impasse, however, data from color studies need to be packaged for export to other anthropologists or social scientists, and computing makes such transport possible.
The collapse of the color-term paradigm demonstrates that principles that hold at one level, as lexemic salience does for a language or cultural system, may not apply at another level, such as that of the individual (Bateson, 1980). Under the scrutiny of the multilevel modeling of variability made possible by computing, other anthropological paradigms, such as culture and personality, national character, and the axiom that culture is shared by "its" individual "members," have simply come apart and need to be replaced by other models.
Anthropologists are finding that culture is not constructed in the image of the computer, as if cultures were sets of plans and programs that unfolded into observed behavior and discourse, as Goodenough (1964) and Geertz (1973), from very different perspectives, asserted. A decade ago biologists discovered, to their great surprise, that the ontogenetic assembly of organisms from DNA proceeds not by programmed steps but by distributed and contextually driven processes (Jacob, 1977); now culture too can be viewed as an assembly of distributed processes. Evolution and bricolage are closer to the roots of these processes than plans and programs.

Applications
Computer applications in anthropology run from micros and minis to mainframes and supercomputers. Boone and Wood (1988) contains the most extensive listing of current projects, available software, and training programs. The Computer Assisted Anthropology Newsletter, edited by James Dow (Oakland University), provides ongoing and updated reviews of products and developments (see Feldesman, 1986; Kohler, 1987; Trotter, 1986). The Bulletin of Information on Computing in Anthropology, edited by J. Davis (Kent University, Canterbury), has a lengthy series of articles on computing and applications. Ellen and Fischer (1987) give a practical discussion of computers in the field. O'Neil (1987c) provides an excellent general review that is fairly comprehensive for archaeology. World Cultures, edited by D. R. White (University of California, Irvine), has to date published 22 diskettes with cultural and physical anthropology and primatology databases, extensive anthropological shareware, and computing reviews (see White, 1987, for a general review of electronic journals). Beyond these sources we attempt here to give only a broad perspective of accomplishments in anthropological computing; the enormous wealth of materials, however scattered they may be, prohibits exhaustive cataloging, much less exhaustive review.

Structuring Text
Computer-oriented anthropologists have been much interested in programs for handling texts, specifically in drawing the structure out of a text by properly chunking and coding it. There are a number of ethnographic text-handling programs beyond ordinary word processors. Wood (1987), whose criterion is essentially the ability to quickly and effectively search and move chunks of text, finds the major text management and analysis programs he reviewed both "good and useful." The main problem with such text managers is that they require extensive coding. Coding as a standard in the field has been taught and practiced much less than praised. Some text-indexing programs (ANYWORD, for example, which is not reviewed by Wood) do index multiple text files without precoding and are very useful for static text databases such as field notes or narratives.
The recent development of hypertext programs opens up possibilities for ethnographic text generation and handling that would transcend the data management and analysis tools in current use. Michael Agar (interview, May 26, 1988) suggests that among the text management options he would find most useful is the ability to simultaneously access 20 or 30 different chunks of text in order to find the thematic patterns and linkages that form the basis of an ethnography. One supposes that the themes and linkages are "there in the text," though they surely reflect to some degree the intervention of the anthropologist in both the production and analysis of the text. Hypertext programs invite anthropologists to splice linkages onto the text itself. Through constructive patterning, chunking, and linkage, the anthropologist who uses hypertext can make his or her intuitions part of the open process of building stereotyped facts for scientific discourse and evaluation.
Hypertext retains the context of discourse and observation without subordinating it to a preconditioned scheme. The hypertext framework calls into question the notion that field notes are linear sequences; it admits the possibility that conversations transcribed as data might be sequenced differently, with equal meaning and coherence. Encoding, superimposing, chunking, embedding, tagging, and linking, all fundamental aspects of text creation, are made explicit. One use of hypertext might be to take the textual bases of written ethnographies and organize them nonlinearly at the start. Intuitions and recurrent patterns would emerge from this process. Only at the end would a linear organization be chosen from the many pathways opened up.
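A toy data structure conveys the idea: chunks of field notes stored nonlinearly as nodes, with thematic links spliced on by the analyst, from which many different linearizations can later be chosen. The notes, names, and themes below are invented.

```python
# Toy illustration of the hypertext idea for field notes (invented data):
# chunks are nodes; the analyst splices on thematically labeled links;
# a linear reading is just one walk through the link structure.
class Chunk:
    def __init__(self, cid, text):
        self.cid, self.text, self.links = cid, text, {}

    def link(self, theme, other):
        self.links.setdefault(theme, []).append(other)

notes = {
    "n1": Chunk("n1", "Market day; R. complains about land rents."),
    "n2": Chunk("n2", "Interview with R.'s cousin on co-tenancy."),
    "n3": Chunk("n3", "Festival sponsorship and household standing."),
}
notes["n1"].link("land", notes["n2"])
notes["n2"].link("class", notes["n3"])
notes["n1"].link("class", notes["n3"])

def follow(start, theme):
    """One possible linearization: walk links for a single theme."""
    path, node = [start], start
    while theme in node.links:
        node = node.links[theme][0]
        if node in path:
            break
        path.append(node)
    return [n.cid for n in path]

print(follow(notes["n1"], "land"))
print(follow(notes["n1"], "class"))
```

Each theme yields its own pathway through the same chunks; choosing a final linear order is deferred, just as the paragraph above suggests.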
Coding and Abstracting Stereotyped Facts

Traits do not, in themselves, carry theoretical interest. The rectangular array common in quantitative anthropology is simply a tableau of traits that, until massaged under some set of guiding assumptions, remain scientifically uninteresting. Much of the quantitative work in anthropology over the last few decades has, necessarily, focused on moving from rectangular arrays to the stereotyped facts that are the grist of our theories. Producing these facts algorithmically has become one of the principal uses of computers in anthropology. Products range from standard commercial statistical packages widely used in classrooms and laboratories to the more tightly constructed toolkits of network (Freeman, 1987), consensus (Freeman, Romney, & Freeman, 1987), and scaling analysis (see "Software Cited"). Among the most useful for current research are Boster's SIMPAK and his modified QUADRATIC ASSIGNMENT PROCEDURE, Harpending's ANTANA statistical analyses, Weller's SCALER and ANOVAREL, Borgatti's AL (succeeded by NETPAC and ANTHROPAC), the UCINET package composed by MacEvoy and Freeman, White's ENTAILMENT package, the Dow, Burton, White, and Reitz (1984) autocorrelation programs, and McCleary and Stiger's (1988) confirmatory spatial modeling by Poisson regressions.
Just as functionalism was criticized for treating cultures as isolates in pseudoequilibrium, systematic data collection and analysis through trait-based rectangular datasets have often been regarded as overly abstracted, out of context, and irrelevant to the more fundamental problems of relational and contextual interdependencies among elements (persons, families, and other social units). So many empirical bricks do not necessarily make a wall, as C. Wright Mills was wont to say of abstract empiricism. Some of the most exciting theoretical work in anthropology rectifies this problem with a combination of trait-based and network-based research, whose elegant complexity has called forth a host of computational procedures. James Boster, A. K. Romney, Susan Weller, and several of their colleagues, for example, have embarked on an ambitious research program to demonstrate how the distribution of knowledge and degree of sharing of distributive elements of culture are functions of individual and group position in social networks. Trait data and relational network data are shown to be intertwined in ways that vigorously suggest an empirical basis for reconceptualization of much of what we know about culture (relatively full citations are given in Boster, 1987; Weller, 1987; and Freeman, Freeman, & Michaelson, 1987). Malcolm Dow, D. R. White, M. L. Burton, and several of their colleagues have also combined network conceptualizations of linkages within and between social systems with problems of modeling causal systems that affect trait distributions. Massive autocorrelation effects in causal modeling have been found to apply to virtually all problems in sociocultural anthropology, physical anthropology, archaeology, and other social sciences (see Burton & White, 1987; Dow, 1987; and Dow et al., 1984).
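The statistic at the heart of such autocorrelation tests can be shown in a few lines. The sketch below implements the generic textbook Moran's I, not the Dow, Burton, White, and Reitz programs themselves; the weight matrix w codes a network or spatial link between societies, and x is a trait score.

```python
# Generic Moran's I for detecting autocorrelation of a trait across a
# network of linked units (textbook formula; illustrative data invented).
def morans_i(x, w):
    n = len(x)
    mean = sum(x) / n
    dev = [v - mean for v in x]
    W = sum(sum(row) for row in w)                       # total link weight
    num = sum(w[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))       # cross-products on links
    den = sum(d * d for d in dev)                        # variance term
    return (n / W) * (num / den)

# Four societies on a line: neighbors are linked; trait values cluster.
w = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
x = [1.0, 1.0, 5.0, 5.0]   # similar neighbors -> positive autocorrelation
print(round(morans_i(x, w), 3))
```

A markedly positive value warns that the linked cases are not independent, which is exactly why trait-only causal models across connected societies mislead.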
The general principle of these conceptual breaks from the trait and attribute framework that has weakened the validity of generalizations throughout the social sciences is to combine the study of linkages (network datasets) with a systematic study and characterization of the standard units of analysis at various levels (e.g., individuals, societies, and networked civilizations). These studies are well complemented by the new generation of relational database systems using structured query language (SQL), which find increasing application in processing the multiple text, rectangular, and network databases involved in the management of fieldsite data.
The Comparative Database: Ethnology and Ethnolinguistics

Although the dominance of a simple language analogy in culture theory has passed, the apparent parallels between the activities of linguists and those of anthropologists remain strong. Linguists are acutely aware that their texts are drawn from data whose stereotypic factuality is problematic. However fine the transcription, the phonetic data are not the physical reality of sound. A second pass, in which the linguist is willing to use all sorts of cues, is necessary to make sense of the analytic accounting process at any level, particularly at higher levels. In ethnology, this second pass is of the greatest interest in world systems, where accounts are being reanalyzed, with the help of computers, in a grandly historical context (White, Burton, Bradley, & Moore, in press). The substance of the account, whether functional, historical, or some combination of the two, is seen to depend on the perspective from which it is taken. Each and every fact is questioned in alternative ways. Terrence Kaufman and Brent Berlin's South American Indian Languages Documentation Project, with its proposed software mapping package (SAPIR), demonstrates the way in which computers have made the second pass over comparative materials central both to gathering further data in the field and to theorizing in general. Both structural and genetic relations will be available through call-up of items across multiple languages, with projection maps simultaneously onscreen, or call-up of classes of relations (for example, languages with six-vowel systems).
Similarly, the computer makes it possible to relate the community level of stereotyped facts in ethnology to a range of analytical perspectives, like world-systems. Data from the community level can be analyzed in regional comparisons. Community data can be clustered by time. The stereotyped facts can be related by making different cuts, looking at different alternatives. The computer frees the anthropologist from the obligation of taking just the first plausible function.
Because of the possibility of simultaneously testing hypotheses from different levels of analysis, anthropologists have become interested in analyzing the levels that lie above the traditional fieldsite, that is, in analyzing the social structure of the world economy as a prelude to the study of how position in this larger structure affects social change in the local community. Douglas White and David Smith have done such an analysis on the CRAY supercomputer in San Diego (Jovanovic, 1988). Similarly, Pauline Kolenda is using the CRAY to analyze network patterns in a 30,000-person regional marriage connubium in southern India that is completely endogamous.
The computer has made comparative analysis widely available; it has also made reconsideration of some basic kinds of comparative analysis possible. Terrence Kaufman (interview, May 1988) has informally proposed that with computers glottochronology can be done right. Use of the full range of appropriately documented languages extant in writing for more than a thousand years, rather than the 13 originally used by Swadesh, and expansion of the 100-word list, determined by hand-computation constraints, would make the underlying sampling and mathematical model of glottochronology more convincing scientifically.
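The arithmetic Kaufman proposes to redo at scale is itself simple. Under Swadesh's model, two languages sharing a proportion c of cognates on the test list diverged roughly t = ln(c) / (2 ln r) millennia ago, where r is the assumed per-millennium retention rate (about 0.86 for the 100-word list); it is the sampling behind c and r, not this formula, that computing would improve.

```python
# Classical glottochronology formula (Swadesh's model): divergence time
# in millennia from shared-cognate proportion c and retention rate r.
# The retention rate of 0.86 is the conventional value for the 100-word
# list; the cognate percentage below is an invented example.
from math import log

def divergence_millennia(c, r=0.86):
    return log(c) / (2 * log(r))

# Two languages sharing 74% cognates on the 100-word list:
print(round(divergence_millennia(0.74), 2))   # roughly one millennium
```

Rerunning this model over hundreds of well-documented languages and expanded word lists, rather than Swadesh's original 13 languages and 100 words, is what the proposal amounts to.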
Thus Kaufman proposes to use the computer to create a superior set of stereotyped facts for scientific discourse about the genetic relationships among languages.
Finally, Allen Johnson and Orna Johnson have begun to produce comparative time allocation studies of world societies, and the electronic journal World Cultures has made comparative databases commonly available.

Computers in the Classroom
As computers come to dominate more and more of current anthropological research, they will be found more often on the training agenda of the discipline. In time, a common core of computer experience will no doubt emerge. At present, most courses are regularly revised in the light of changing needs and available options. The most common elements are the teaching of text- and data-handling skills and a general introduction to the potential and actual uses of computers in human social research (Wood, 1988; Fischer, 1987b). The availability on floppy disk of extensive, coded data for further analysis has had a great impact on the use of computers in the anthropology classroom. Valerie Wheeler (California State University, Sacramento) has developed a course using the World Cultures programs and databases in which students experience the process of cross-cultural comparative methodology from beginning to end. Many other anthropologists are implementing similar courses at other universities. The National Science Foundation Anthropology Program is contemplating funding summer programs in the years 1990 to 1992 to train instructors in ethnological research and didactic software, including computer analysis and coding of ethnographic texts and advanced methods in comparative research.
Didactic uses of computers in anthropology have been made for decades at rare schools such as Dartmouth, but most universities have had to await the advent of the PC. While statistical tutorials and demonstrations predominate, new and innovative approaches are beginning to appear. One of the exciting ones has been the development by Behrens, Daza, Moret, and Savas (1987) of a culturally sensitive, multimedia (almost hypermedia) course in conversational Quechua. O'Neil (1987c) reviewed videodisk multimedia applications by Jerome Smith (University of South Florida) and CAI applications in cultural and physical anthropology by C. Smith and K. Beals (Oregon State). D. White (Irvine) has made extensive use of hypertext systems for undergraduate ethnographic research projects.

The Software Development/Coordination Problem
Among those we interviewed in preparing our review, Dwight Read (UCLA) (interview, May 1988), and to some extent his colleague Behrens, argued that the most important changes in anthropological computing will result from the dominance of the new hypermedia technologies of integrated text, sound, and visuals from videodisk. Howard (in press) seems to share this viewpoint; he sees hypermedia as a means of effectively simulating the reality of ethnographic experience. Although hypermedia are in many respects just over the horizon, James Boster (interview, May 1988) took the position that few anthropologists will be able to afford the luxury of working in an area that does not produce results that can be evaluated by traditional criteria. In our view, the impact of hypermedia (like that of films and videos today) will be relatively minor in relation to problems posed by central issues in the field (cultural reality is unlikely ever to be captured in the field and brought home; it still pays to send out the ethnographer). Remote sensing presents a more plausible possibility, but it is an extremely labor-intensive and high-technology enterprise, beyond the reach of today's low-end microcomputer alternatives.
Of course, computer-generated visual tools have already been used, by A. K. Romney and his associates (Romney, 1980; Whiting, Burton, Romney, Moore, & White, in press), who have worked through multidimensional and optimal scaling (Nishisato, 1980; Greenacre, 1984) as anthropological tools. Although visual, aural, and other hypermedia may present obstacles to one's career goals, as Boster noted, the other route to gaining credit for work with computers in anthropology is to solve theoretical problems that others will then write about and attribute to you. A more pressing difficulty is the lack of reward for getting software solutions to a level of development at which others can use them. Many anthropologists are solving and re-solving the same software problems. Because there is no reward for dissemination, most solutions are left in the workable but incompletely documented stage. We are aware of fundamental reduplication of effort in a surprising number of critical instances because there is no standard method of circulating and crediting solutions.
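For readers unfamiliar with these scaling tools, the classical (Torgerson) metric variant of multidimensional scaling, the ancestor of the nonmetric methods cited above, can be sketched in a few lines: double-center the squared dissimilarities and read coordinates off the leading eigenvectors. This is our own illustrative reconstruction, not code from any package discussed in this review; all names are ours.

```python
import numpy as np

def classical_mds(d, k=2):
    """Classical (Torgerson) multidimensional scaling: embed n objects
    in k dimensions from an n x n matrix of pairwise dissimilarities d."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    b = -0.5 * j @ (d ** 2) @ j              # double-centered squared distances
    vals, vecs = np.linalg.eigh(b)           # eigh returns ascending eigenvalues
    order = np.argsort(vals)[::-1][:k]       # keep the k largest
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))
```

Given, say, judged dissimilarities among kin terms, the recovered coordinates can be plotted to reveal the dimensional structure of the semantic domain, which is how such scaling has typically served as an anthropological tool.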
Computing in anthropology has also suffered from a severe brain drain. Some of the best people have found work in business, industry, and other areas of the social sciences. In part the drain is due to citation and acknowledgment practices in the field, which generally differ from those common to the hard sciences and areas of the social sciences with longer experience in formal modeling and analysis. In the absence of relevant citation and acknowledgment practices, major contributors to computing and computational model development are severely penalized. The considerable loss of computer-trained anthropologists is particularly regrettable at this stage of development in anthropological theory and method, when anthropology has so much to contribute to our understanding of a dramatically changing world.
To encourage further foundational work in computing and anthropology, adequate crediting procedures, such as formats for citation, are needed, as well as discipline-wide pressure to make software citations standard. Publishing the documentation for software creates bibliographic citations and encourages development. Appropriate journals should be urged to provide outlets for these publications, and anthropology journals should be encouraged to review computer software developed by and for anthropologists, along with books and films.
The most effective way to ensure due credit, however, would be to make useful programs "turn-key" operations, where anthropologists with limited computer skills and interests could be guided with little hassle through a menu-driven set of options to an analysis of uniformly encoded data. This is already being done by Steve Borgatti, whose ANTHROPAC contains a user-oriented data manager; a CONSENSUS program, which produces a complete Romney-Weller-Batchelder (1986) consensus model based on true-false, multiple-choice, and other data types; a UDS ("unidimensional scaling") module for Guttman and Likert scaling, scoring, and reliabilities; an MDS module with options for nonmetric multidimensional scaling, MD-pref, or optimal scaling; and TRIADS or PILESORT routines for processing raw field data prior to scaling analysis. Borgatti's (1988) NETPAC software contains user-oriented routines for a variety of ROLE, GROUP, CENTRALITY, and GRAPH analyses and uses the same DATAMAN data manager as ANTHROPAC. These packages have been used instructionally at the NSF summer workshop in primary data construction in cultural anthropology.
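The logic of the Romney-Weller-Batchelder consensus model that such a CONSENSUS program implements can be conveyed in miniature: informants who agree with many others beyond chance are credited with high cultural competence, and the "correct" answers are then inferred by competence-weighted vote. The sketch below is our own illustration, not Borgatti's code; it handles only true/false data and substitutes a simple alternating-least-squares fit for the factoring procedure used in the published model, and all names are ours.

```python
import numpy as np

def consensus_competence(answers, iters=100):
    """Minimal true/false consensus analysis in the spirit of
    Romney, Weller, and Batchelder (1986).
    `answers` is an (informants x questions) 0/1 array.
    Returns (estimated competences, inferred answer key)."""
    n = answers.shape[0]
    # Proportion of matching answers for every pair of informants.
    match = (answers[:, None, :] == answers[None, :, :]).mean(axis=-1)
    # Correct two-choice agreement for chance guessing: m* = 2m - 1.
    m_star = 2.0 * match - 1.0
    # Fit m*_ij ~ c_i * c_j (i != j): each competence is whatever best
    # explains that informant's agreement with everyone else.
    c = np.full(n, 0.5)
    for _ in range(iters):
        for i in range(n):
            others = np.delete(np.arange(n), i)
            c[i] = (m_star[i, others] @ c[others]) / (c[others] ** 2).sum()
    c = np.clip(c, 0.0, 1.0)
    # Answer key: competence-weighted vote on each question.
    key = (c @ (2.0 * answers - 1.0)) > 0
    return c, key
```

Note the inversion that makes the model attractive for field data: the answer key is an output, not an input, so competence and cultural truth are estimated jointly from agreement alone.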
Standardization of input files for anthropological software along the lines already followed in UCINET would be a step toward shareable software. Shareable software can be provided with a standard "front-end" for users fairly easily. Modular programming in languages such as Pascal, C, and FORTRAN can facilitate the development of integrated anthropological workstations. Setting up standards for shareable programs is in the best interests not only of those who use the software but of those writing it, since it is only by sharing software that such work can become recognized and credited. A modular approach to sharing software solutions (White, 1988) in constructing integrated public-domain anthropological software packages for a wide variety of research and didactic purposes has been developed by a group of anthropologists led by Chad McDaniel (University of Maryland). The prototype is UCINET, which represents the programming contributions of dozens of authors in network research, all integrated around a standard programming language and standard data input files.

Conclusion
Computing in anthropology has had an uneven history. Anthropology has been slow to develop its own software and even slower to reward those who have made major and foundational contributions. Development has been encouraged as much from outside the discipline as from within. Computing in anthropology has built on the contributions of many people from the 1960s on, but change has been painful in a field often highly resistant to it, for both good and questionable reasons.
As a result of National Science Foundation support and the ease with which methodologies can now be transported to the field as well as displayed in the classroom, cultural anthropologists have recently begun to come together in groups to develop and disseminate software and training. In 1985 the Anthropology Program of NSF called a meeting of professionals to discuss the construction of primary data in cultural anthropology. This was followed by a highly acclaimed joint article on the subject (Bernard et al., 1986), followed by an NSF summer training workshop. A similar process has stimulated parallel developments in comparative research methodology and training (Ember & White, in press). NSF support of electronic conferencing in the 1970s helped to stimulate the development of social networks research, which has now contributed an enormous wealth of software to the social science community.
In the 1980s anthropology has taken off in areas previously underexplored, areas in which developments have been prompted by the closer encounters with data that computing has made possible. These advances have not been adopted by everyone, but specialists at the leading edge have made great strides in producing new programs and methods. Indeed, there is now considerable replication of programs and software in some well-studied areas, but there is also considerable movement into new problem areas. Computing in anthropology, though controversial, is one of the most vital aspects of the discipline.

Note
Douglas White is editor of the electronic journal World Cultures and professor at the University of California, Irvine. Gregory Truex is professor of anthropology at California State University, Northridge. The authors acknowledge the contributions of the following persons: Michael Agar, Cliff Behrens, James Boster, Robert Dewar, Bob Hsu, Terrence Kaufman, Bert Pelto, Dwight Read, Susan Weller, and Valerie Wheeler, all of whom found time for telephone interviews and provided more leads than could be followed, and some of whom provided documents and programs as well; Margaret Boone, who generously provided a draft of Boone and Wood (1988); and Lilyan Brudner-White, who performed a critical reading of the manuscript.