14 Leki, I., & Carson, J. (1997). “Completely different worlds”: EAP and the writing experiences of ESL students in university courses. TESOL Quarterly, 31(1), 39–69.
15 Li, J. (2014). Examining genre effects on test takers' summary writing performance. Assessing Writing, 22, 75–90.
16 Plakans, L. (2009). Discourse synthesis in integrated second language assessment. Language Testing, 26(4), 561–87.
17 Plakans, L., & Gebril, A. (2017). Exploring the relationship of organization and connection with scores in integrated writing assessment. Assessing Writing, 31, 98–112.
18 Plakans, L., Liao, J., & Wang, F. (2018). Integrated assessment research: Writing‐into‐reading. Language Teaching, 51, 430–4.
19 Read, J. (1990). Providing relevant content in an EAP writing test. English for Specific Purposes, 9, 109–21.
20 Sawaki, Y., Quinlan, T., & Lee, Y. (2013). Understanding learner strengths and weaknesses: Assessing performance on an integrated writing task. Language Assessment Quarterly, 10, 73–95.
21 Wang, C., & Qi, L. (2013). A study of the continuation task as a proficiency test component. Foreign Language Teaching and Research, 45, 707–18.
22 Weigle, S. C., Yang, W., & Montee, M. (2013). Exploring reading processes in an academic reading test using short answer questions. Language Assessment Quarterly, 10, 28–48.
23 Wolfersberger, M. (2013). Refining the construct of the classroom‐based writing‐from‐readings assessment: The role of task representation. Language Assessment Quarterly, 10, 49–72.
24 Yang, H. C. (2014). Toward a model of strategies and summary writing performance. Language Assessment Quarterly, 11, 403–31.
25 Yang, H. C., & Plakans, L. (2012). Second language writers' strategy use and performance on an integrated reading‐listening‐writing task. TESOL Quarterly, 46, 80–103.
26 Yu, G. (2008). Reading to summarize in English and Chinese: A tale of two languages? Language Testing, 25, 521–51.
27 Zhu, X., Li, X., Yu, G., Cheong, C. M., & Liao, X. (2016). Exploring the relationships between independent listening and listening–reading–writing tasks in Chinese language testing: Toward a better understanding of the construct underlying integrated writing tasks. Language Assessment Quarterly, 13, 167–85.
Assessment of the Linguistic Resources of Communication
JAMES E. PURPURA AND JEE WHA DAKIN
The sociocultural, economic, and geopolitical forces shaping education, the workplace, and daily life have significantly increased the linguistic competencies needed to function successfully in today's world. As language users, we need a range of linguistic resources to understand and express propositions for a variety of purposes in written, spoken, and visual forms, and to interact and cooperate with others. We also need linguistic resources to establish and maintain relationships and to collaborate in multicultural teams, often online. We need linguistic resources to conduct, navigate, and negotiate everyday transactions. And we draw on the same set of resources to process, analyze, and categorize information, to evaluate it critically in order to reason from evidence, and to learn, solve problems, and make decisions. In short, the linguistic resources needed to use a second or foreign language (L2) to communicate accurately, meaningfully, and appropriately, while performing tasks that span a variety of topics and contexts, have become increasingly complex over the years. This complexity in the target of assessment concomitantly presents important challenges for those interested in measuring the L2 linguistic resources needed to communicate in today's world.
Representing the Linguistic Resources of Communication
To measure the linguistic resources of communication, language testers need a means of defining them. What exactly are linguistic “resources of communication” and how can they be represented? Is a list of grammatical structures that learners need to master an adequate representation for assessment? Are the resources forms that are associated with literal meanings? Are they forms that can vary in meaning depending on the context in which they are used? Or are they an amalgam of independent form‐meaning mappings that conspire to convey propositions in context?
These questions remain important in the assessment of grammar even though L2 educators have always acknowledged the importance of linguistic resources, specifically the grammatical resources of communication. Fundamental questions remain because of a lack of agreement on how to represent linguistic resources, as well as on how they can best be taught, tested, and researched. As a result, testers have for decades proposed and refined models of L2 knowledge, each specifying an explicit grammatical component (e.g., Lado, 1961; Canale & Swain, 1980; Bachman, 1990; Bachman & Palmer, 1996; Purpura, 2004, 2016). These models reflect two distinct conceptualizations of language proficiency: one based on knowledge of grammatical forms and the other on a set of linguistic resources for creating contextualized meaning. Both conceptualizations of L2 knowledge are used effectively today as a basis for designing assessments for a range of purposes, but their differences matter because they affect score interpretation and use.
Conceptualizing L2 Proficiency as Knowledge of Grammatical Forms
Drawing on structural linguistics and discrete‐point measurement, Lado (1961) proposed an L2 proficiency model in which L2 knowledge was conceptualized in terms of the linguistic forms, occurring in some variational distribution, that are needed to convey linguistic, cultural, and individual meanings between individuals. While his model highlighted the relationship between grammatical forms and their communicative meaning potential, he prioritized form over meaning, thereby operationalizing proficiency as the accuracy of discrete linguistic elements (phonology, syntax, lexicon) within the skills of language use (reading, listening, speaking, writing). In other words, L2 proficiency was defined solely in terms of discrete grammatical forms, with no categorization of the forms and little explicit acknowledgment of their relationship to meaning. This "traditional" approach to L2 assessment (i.e., grammar assessment) is usually based on a principled list of the possible forms that might be measured. Figure 1 displays a traditional list of grammatical forms, and Figure 2 shows a list of phonological forms based on Ellis and Barkhuizen (2005) and Bonk and Oh (2019).
Figure 1 List of grammatical forms based on Celce‐Murcia and Larsen‐Freeman (1999)