Assessment of Pragmatics
CARSTEN ROEVER
The assessment of second language pragmatics is a relatively recent enterprise. This entry will briefly review the construct of pragmatics, discuss some major approaches to testing it, and highlight some of the challenges for its assessment.
The Construct
The concept of pragmatics is far-reaching and is commonly understood to focus on language use in social contexts (Crystal, 1997). Subareas include deixis, implicature, speech acts, and extended discourse (Mey, 2001). In the second language area, pragmatics is represented in major models of communicative competence (Canale & Swain, 1980; Canale, 1983; Bachman & Palmer, 1996), which inform second language assessment, but it has not been systematically assessed in large-scale language tests. However, assessments have been developed for research purposes, and they follow either a speech act approach or an interactional approach, leading to the coexistence of two distinct assessment constructs.
Assessments situated within a speech act approach are informed by speech act pragmatics (Austin, 1962; Searle, 1976; Leech, 1983) and, to a lesser extent, work on implicature (Grice, 1975). Following Leech (1983), these assessments consider pragmatic ability as consisting of sociopragmatic and pragmalinguistic knowledge and ability for use. Sociopragmatics relates to social rules, whereas pragmalinguistics covers the linguistic tools necessary to express speech intentions. In language testing, the conceptualization of pragmatic competence outlined by Timpe Laughlin, Wain, and Schmidgall (2015) is situated within this approach.
Assessments following an interactional approach are usually informed by the construct of interactional competence (Kramsch, 1986; Young, 2008; Galaczi & Taylor, 2018), which in turn heavily relies on conversation analysis (CA) (for overviews, see Schegloff, 2007; Clift, 2016). Interactional competence is the ability to engage in extended interaction as a listener and speaker, including display of recipiency of interlocutor talk, turn taking, repair, sequence organization, turn formation, as well as the configuration of these generic features of talk for the enactment of social roles in specific contexts (Hall & Pekarek Doehler, 2011). Galaczi and Taylor (2018) describe this approach from a language testing perspective.
Assessment instruments in second language (L2) pragmatics do not usually cover all possible subareas of pragmatics but rather focus on sociopragmatics, pragmalinguistics, or interactional competence. While testing of L2 pragmatics is not (yet) a component of large-scale proficiency tests, some interactional abilities are assessed in such tests as part of the speaking construct (Galaczi, 2014), and implicature as part of the listening construct, for example in the Test of English as a Foreign Language (TOEFL) (Wang, Eignor, & Enright, 2008).
Tests Under the Speech Act Construct
The first comprehensive test development project for L2 pragmatics was Hudson, Detmer, and Brown's (1992, 1995) test battery. They focused on sociopragmatic appropriateness for the speech acts request, apology, and refusal by Japanese learners of English, and designed their instruments around binary settings of the context variables power, social distance, and imposition (Brown & Levinson, 1987). Hudson et al. (1992, 1995) compared several different assessment instruments but, like many studies in interlanguage pragmatics (Kasper, 2006), relied heavily on discourse completion tests (DCTs). A DCT minimally consists of a situation description (prompt) and a gap for test takers to write what they would say in that situation. Optionally, an opening utterance by an imaginary interlocutor can precede the gap, and a rejoinder can follow it. Figure 1 shows a DCT item intended to elicit a request.
Hudson et al.'s (1995) instrument included traditional written DCTs; spoken DCTs, in which the task input was in writing but test takers spoke their responses; multiple-choice DCTs; role plays; and two types of self-assessment questionnaires. Test-taker performance was rated on a five-step scale for use of the correct speech act, formulaic expressions, amount of speech used and information given, formality, directness, and politeness. This pioneering study led to several spin-offs. Yamashita (1996) adapted the test for native-English-speaking learners of Japanese, Yoshitake (1997) used it in its original form, and Brown and Ahn (2011) report on an adaptation for Korean as a target language. In reviews, Brown (2001, 2008) found good reliability for the role plays, as well as the oral and written DCTs and self-assessments, but the reliability of the multiple-choice DCT was low. This was disappointing, as the multiple-choice DCT was the only instrument in the battery that did not require raters, which made it the most practical of all the components. In subsequent work, Liu (2006) developed a multiple-choice DCT for first language (L1) Chinese-speaking learners of English and reported high reliabilities. Tada (2005) used video prompts to support oral and multiple-choice DCTs and obtained reliabilities in the mid .7 range.
A more recent sociopragmatically oriented test battery was developed by Roever, Fraser, and Elder (2014). While the focus of this battery was also on measuring test takers' perception and production of appropriate language use, it was delivered through an online system and designed