The Concise Encyclopedia of Applied Linguistics. Carol A. Chapelle


discourse. Depending on the learners' goals, the texts may represent conversations, lectures, train station announcements, or video clips; and newspapers, novels, college textbooks, or e‐mails. Comprehension tasks oblige the test takers to process the vocabulary in real time, which means they need both automatic recognition of high‐frequency words and the ability to process the input in chunks rather than word by word. This constraint is more obvious with respect to listening tasks, given the fleeting and ephemeral nature of speech, but it also applies to reading if the learners are to achieve adequate comprehension of the overall text. The test takers also need to understand lexical items in a rich discourse context, rather than as independent semantic units.

      One important step in selecting texts for comprehension assessment is to evaluate the suitability of the vocabulary content for the learners' level of proficiency in the language, since it is unreasonable to expect them to understand a text containing a substantial number of unknown lexical items. Traditionally, this step is assisted by applying a standard readability formula, such as the Flesch Reading Ease score or the Flesch–Kincaid Grade Level score (both available in Microsoft Word), which estimate text difficulty from average sentence length and average word length in syllables. Another approach, which draws on word frequency directly, is to submit the text to the VocabProfile section of the Compleat Lexical Tutor (www.lextutor.ca), which offers both color coding and frequency statistics to distinguish common words from those that occur less frequently. It should be noted that both these approaches are word based, so they may underestimate the lexical difficulty of texts containing idiomatic or colloquial expressions.
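      As a minimal sketch of how the two Flesch formulas work, the following Python snippet computes both scores from raw text. The tokenization and syllable counting are deliberately naive approximations (Microsoft Word and dedicated readability tools use more refined rules), so the resulting figures should be read as rough estimates only.

```python
import re


def count_syllables(word: str) -> int:
    """Rough syllable estimate: count vowel groups, trim a silent final 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)


def readability(text: str) -> dict:
    """Compute Flesch Reading Ease and Flesch-Kincaid Grade Level."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / max(len(sentences), 1)   # average words per sentence
    spw = syllables / max(len(words), 1)        # average syllables per word
    return {
        "flesch_reading_ease": 206.835 - 1.015 * wps - 84.6 * spw,
        "flesch_kincaid_grade": 0.39 * wps + 11.8 * spw - 15.59,
    }


if __name__ == "__main__":
    sample = ("Comprehension tasks oblige the test takers to process the "
              "vocabulary in real time. They need automatic recognition of "
              "high-frequency words.")
    print(readability(sample))
```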

      Vocabulary assessment for comprehension purposes is embedded, in the sense that it engages with a larger construct than just vocabulary knowledge or ability. In practical terms, this means that the vocabulary test items form a subset of items within the comprehension test. In addition, the focus of the items shifts from simply eliciting evidence of the ability to recognize or recall word meanings to assessing how well words are understood in context, through reading items like these:

      The word “inherent” in line 17 means …

      Find a phrase in paragraph 3 that means the same as “analyzing.”

      Items may also assess lexical‐inferencing ability by targeting vocabulary items that the test takers are unlikely to know, but whose meaning can reasonably be inferred from clues available in the surrounding text.

      Use

      Use refers to the ability of learners to draw on their vocabulary resources in undertaking speaking or writing tasks, like giving a talk, participating in a conversation or discussion, composing a letter, writing an essay, or compiling a report. This represents a more genuine sense of production than a recall task such as supplying a content word to complete a gap in a sentence. One characteristic of vocabulary use tasks which distinguishes them from the other three approaches outlined above is that the task designer cannot normally target particular lexical items by requiring the learners to incorporate specific words into what they produce. Thus, the choice of words can only be influenced indirectly by the choice of task and topic, or by the selection of appropriate input material: source texts, pictures, diagrams, and so on.

      As with comprehension tasks, use tasks can be assessed purely as measures of the learners' vocabulary ability or as measures of a larger speaking or writing construct in which vocabulary is embedded as one component. Vocabulary researchers have devised a variety of statistics to evaluate the lexical characteristics of texts: How many different words are used? What percentage of the words are low‐frequency items? What percentage are content words? Until recently, these statistics were not very practical tools for assessment purposes, but advances in automated writing assessment (Carr, 2014), in which lexical measures play a prominent role, mean that automated ratings now complement human judgments in the assessment of writing in the Internet‐based Test of English as a Foreign Language (TOEFL) and completely replace human raters in the Pearson Test of English (Academic).
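      The three questions above correspond to lexical measures that are straightforward to compute. The sketch below illustrates one simple version of each: the number of different words (types) and the type–token ratio, the proportion of tokens falling outside a high‐frequency list (lexical sophistication), and the proportion of content words (lexical density). The small HIGH_FREQUENCY and FUNCTION_WORDS sets are placeholders invented for the example; real profiling tools such as VocabProfile draw on full frequency lists.

```python
import re

# Placeholder word lists for illustration only; genuine profiling tools
# use complete frequency bands rather than these tiny sets.
HIGH_FREQUENCY = {"the", "a", "of", "to", "and", "in", "is", "was",
                  "it", "that", "he", "she", "they", "for", "on", "with"}
FUNCTION_WORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "was",
                  "it", "that", "for", "on", "with", "he", "she", "they"}


def lexical_profile(text: str) -> dict:
    """Compute simple lexical diversity, sophistication, and density measures."""
    tokens = [t.lower() for t in re.findall(r"[A-Za-z']+", text)]
    n = len(tokens)
    types = set(tokens)
    low_freq = [t for t in tokens if t not in HIGH_FREQUENCY]
    content = [t for t in tokens if t not in FUNCTION_WORDS]
    return {
        "tokens": n,
        "types": len(types),                      # how many different words
        "type_token_ratio": len(types) / n if n else 0.0,
        "pct_low_frequency": 100 * len(low_freq) / n if n else 0.0,
        "pct_content_words": 100 * len(content) / n if n else 0.0,
    }
```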

      For now, the more common practice in speaking and writing tasks is for the teacher, or the rater in the case of a more formal test, to assess the learners' use of vocabulary by means of a rating scale. For example, in the speaking module of the International English Language Testing System (IELTS), lexical resource is one of four criteria that the examiners apply to each candidate's performance, along with fluency and coherence, grammatical range and accuracy, and pronunciation. Highly proficient candidates are expected to use a wide range of vocabulary accurately and idiomatically, whereas those with more limited speaking proficiency are restricted to talking about familiar topics and lack the ability to paraphrase what they want to say. Thus, such assessments are based on raters' perceptions of general lexical features of the test takers' task performance, rather than on any individual vocabulary items.
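      To make the idea of an analytic rating scale concrete, the sketch below represents a single criterion as a mapping from band levels to descriptors. The band wording is invented for illustration and does not reproduce the published IELTS lexical resource descriptors; it simply shows how a rater‐facing scale can be stored and looked up.

```python
# Illustrative only: an analytic scale for a single criterion.
# The descriptor wording below is invented and does NOT reproduce the
# official IELTS lexical resource band descriptors.
LEXICAL_RESOURCE_SCALE = {
    9: "Uses a wide range of vocabulary precisely and idiomatically.",
    7: "Uses vocabulary flexibly, with some awareness of style and collocation.",
    5: "Manages familiar topics with limited flexibility; attempts paraphrase "
       "with mixed success.",
    3: "Uses simple vocabulary on personal topics; little or no paraphrase.",
}


def describe_band(band: int, scale: dict = LEXICAL_RESOURCE_SCALE) -> str:
    """Return the descriptor for the nearest defined band at or below `band`."""
    eligible = [b for b in scale if b <= band]
    return scale[max(eligible)] if eligible else "Below the lowest defined band."
```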

      SEE ALSO: Assessment of Reading; Assessment of Writing; Corpus Linguistics in Language Teaching; Formulaic Language and Collocation; Teaching Vocabulary; Vocabulary and Language for Specific Purposes

      1 Ackermann, K., & Chen, Y. (2013). Developing the academic collocations list (ACL): A corpus‐driven and expert‐judged approach. Journal of English for Academic Purposes, 12(4), 235–47.

      2 Brezina, V., & Gablasova, D. (2015). Is there a core general vocabulary? Introducing the new general service list. Applied Linguistics, 36(1), 1–22.

      3 Browne, C., Culligan, B., & Phillips, J. (2013). A new general service list. Retrieved April 2, 2019 from www.newgeneralservicelist.org/

      4 Carr, N. T. (2014). Computer‐automated scoring of written responses. In A. J. Kunnan (Ed.), The companion to language assessment (Vol. 2, chap. 64). Chichester, England: Wiley‐Blackwell.

      5 Chang, A. C., & Read, J. (2006). The effects of listening support on the listening performance of EFL learners. TESOL Quarterly, 40(2), 375–97.

      6 Coxhead, A. (2000). A new academic word list. TESOL Quarterly, 34(2), 213–38.

      7 Dang, T. N. Y., Coxhead, A., & Webb, S. (2017). The academic spoken word list. Language Learning, 67(4), 959–97.

      8 Elgort, I. (2011). Deliberate learning and vocabulary acquisition in a second language. Language Learning, 61(2), 367–413.

      9 Gardner, D., & Davies, M. (2014). A new academic vocabulary list. Applied Linguistics, 35(3), 305–27.

      10 Nation, I. S. P. (2013). Learning vocabulary in another language (2nd ed.). Cambridge, England: Cambridge University Press.