ial is a refereed journal managed by scholars in the field of applied linguistics. Our aim is to publish outstanding research from faculty, independent researchers, and graduate students in the broad areas of second language acquisition, language socialization, language processing, language assessment, language pedagogy, and language policy, drawing on research methodologies including (but not limited to) discourse analysis, conversation analysis, critical discourse analysis, critical race theory, and psychophysiology. ial publishes articles, book reviews, and interviews with notable scholars.
Volume 1, Issue 2, 1990
Six months ago, in our inaugural issue, Issues in Applied Linguistics called for responses from our readers to two questions: What is applied linguistics? What should applied linguistics be? We were motivated to pose these fundamental questions as founders of a new journal in an emerging field, whose own graduate program in applied linguistics was in the process of becoming an independent department. This transition has raised important issues concerning our academic identity and research agenda for the future, not only for ourselves but for the larger academic community with whom we interact and exchange expertise. Fourteen replies were received in response to our questions from graduate students and researchers in the U.S. and from as far away as Brazil, Finland, Romania, and Israel. In addition to geographical diversity, the respondents represent various departmental affiliations, including sociology, Germanic languages, English, health services, linguistics, psycholinguistics, brain research, and applied linguistics. Moreover, the views expressed in the contributions not only reflect different ways of approaching the questions, they embody many of the current emphases encompassed by our interdisciplinary field. IAL would like to thank all the contributors for helping make this Roundtable possible.
This study examines the accuracy of transliterated messages produced by sign language interpreters in university classrooms. Causes of interpreter errors fell into three main categories: misperception of the source message, lack of recognition of source forms, and failure to identify a target language equivalent. Most errors were found to be in the third category, a finding which raises questions not only about the preparation these interpreters received for tertiary settings, but more generally about their knowledge of semantic aspects of the American Sign Language (ASL) lexicon. Deaf consumers' perceptions of problems with transliteration in the classroom and their strategies for accommodating various kinds of interpreter error were also elicited and are discussed. In support of earlier research, this study's finding that transliteration may not be the most effective means of conveying equivalent information to deaf students in the university classroom raises questions about the adequacy of interpreters' preparation for this task.
This paper discusses the development, implementation, and evaluation of a semi-direct test of oral proficiency: the Rhetorical Task Examination (RTE). Many commonly used speaking instruments assess oral proficiency either in terms of discrete linguistic components (fluency, grammar, pronunciation, and vocabulary) or in terms of a single, global ability rating. The RTE proposes a compromise approach to rating oral skills by using two scales: one ascertaining the functional ability to accomplish a variety of rhetorical tasks, the other addressing the linguistic competence (Canale & Swain, 1980) displayed in the performance. On audiotape in a language laboratory setting, 52 students representing three levels of a university ESL program performed six tasks related to the rhetorical modes covered in their coursework: short questions and answers, description, narration, process (giving directions), opinion, and comparison-contrast. The construction and justification of both the instrument and the rating scales are explained; data obtained from administering the RTE across classes as well as before and after instruction are presented; and the relevant measurement characteristics of the test are discussed. Results of this study indicate that the Rhetorical Task Examination is promising as a measure of oral proficiency in terms of practicality, reliability, and validity.
The Intelligibility of Three Nonnative English-Speaking Teaching Assistants: An Analysis of Student-Reported Communication Breakdowns
The intelligibility of nonnative English-speaking teaching assistants (NNSTAs) is an issue that concerns researchers, administrators, teacher-trainers, and undergraduates. Based primarily on the work of Smith & Nelson (1985), this paper offers a novel method of examining intelligibility: first recording undergraduates' immediate feedback on communication breakdowns while watching three NNSTA presentations, then following with an analysis of those communication breakdowns by a group of ESL specialists. The analysis in this study yielded a taxonomy of factors affecting the intelligibility of the NNSTAs. This study also found pronunciation to be the main cause of unintelligibility in the three NNSTA presentations, whether in isolation or in combination with vocabulary misuse, nonnative speech flow, or poor clarity of speech, a finding which confirms students' perceptions of the language problems of NNSTAs reported by Hinofotis & Bailey (1981) and by Rubin & Smith (1989).