The military has always been interested in language teaching and testing, even though there is little literature on the topic (Green and Wall, 2005). Perhaps it is all just too secret? But it is hard to deny that communication is essential in all conflicts and peace-keeping operations: between military personnel and local people, between military personnel who speak different languages, and between liaison officers in a variety of roles. One of my favourite quotations comes from Kaulfers (1944: 137):
The nature of the individual test items should be such as to provide specific, recognisable evidence of the examinee's readiness to perform in a life-situation, where lack of ability to understand and speak extemporaneously might be a serious handicap to safety and comfort, or to the effective execution of military duty.
Kaulfers was recommending performance tests of language for specific purposes, along with rating scales that reflect the kind of performance generated by specific-purpose tasks. After the language teaching experiences of World War II, work began on the construction of such tests, and the first rating scale was produced by the Foreign Service Institute (FSI) in 1952. Since then, many developments in the assessment of speaking have been linked to military needs.
The Defense Language Institute (DLI) is the language training wing of the United States military. The YouTube video embedded here explains the role of the DLI in training military personnel in languages, and in assessing them using the Defense Language Proficiency Test (DLPT). The story of language training and testing is much larger, however. The Army Specialized Training Program (ASTP) was established in 1942 and taught 140,000 learners before the end of the war (Angiolillo, 1947; Velleman, 2008).
Work on the FSI rating scale for speaking was picked up again during the Korean War (1950-1953), and a new, expanded scale was published in 1958. During the 1960s the FSI scale was adopted by the DLI and many other government and military agencies, leading to a common standardized scale developed under the Interagency Language Roundtable (ILR). For a fuller account, see Chalhoub-Deville and Fulcher (2003) and Fulcher (1997). The aim in the military has always been the same: to certify practical language skills that are directly related to performance in the field.
This link between advances in language testing and training and the language needs of the military is frequently the subject of political comment. Writing in The Washington Post on October 23rd, 2001 (p. A23), Senator Paul Simon observed:
In every national crisis from the Cold War through Vietnam, Desert Storm, Bosnia and Kosovo, our nation has lamented its foreign language shortfalls. But then the crisis 'goes away', and we return to business as usual. One of the messages of September 11 is that business as usual is no longer an acceptable option.
Much investment and great effort have been put into military language training and testing over the years. The link below is to a YouTube video from the company that produces language training software. The software sets up scenarios in which soldiers might find themselves, allowing learners to participate in simulations of the kinds of interactions in which they will have to engage once deployed in the field. It also provides the language elements needed to achieve the objectives of the simulations.
For Discussion
Make a list of possible consequences of miscommunication in a military context.
Look at the websites for the Tactical Language and Culture Training System, and the FSI, below. How do they compare in their approach to learning a language for the specific purpose of military use?
With reference to the video above, evaluate the role of technology in assessing effective military communication. Before you do this, you may wish to watch Carol Chapelle's video on technology in language testing, which provides a theoretical framework for your analysis.
In the links below you will find information about the United Kingdom's and the United States' approaches to teaching and assessing military language. To what extent are these similar or different? Which approach do you think is likely to produce the more effective results? Or are the two likely to be equally effective?
Look at the NATO proficiency levels, and the ILR rating scales, to which you will find links below. What do you think is being tested by each scale? (List the constructs if possible). Do you think the scales are fit for purpose? If not, how might they be improved?
Look at the links on the right-hand side of this page. Choose one that looks interesting to you. What issues does it raise for the training and testing of military personnel?
Angiolillo, P. (1947). Armed forces foreign language teaching. New York: Vanni.
Chalhoub-Deville, M. and Fulcher, G. (2003). The Oral Proficiency Interview: a research agenda. Foreign Language Annals 36(4), 498-506.
Fulcher, G. (1997). The testing of speaking in a second language. In Clapham, C. and Corson, D. (Eds.), Encyclopedia of Language and Education, Volume 7: Language Testing and Assessment (pp. 75-85). Dordrecht: Kluwer Academic Publishers.
Green, R. and Wall, D. (2005). Language testing in the military: problems, politics and progress. Language Testing 22(3), 379-398.
Kaulfers, W. V. (1944). War-time developments in modern language achievement tests. Modern Language Journal 28, 136-150.
Velleman, B. L. (2008). The 'scientific linguist' goes to war: the United States A.S.T. Program in foreign languages. Historiographia Linguistica 35(3), 385-416.