Kalantzis and Cope on New Media and New Assessments


Assessing Social Cognition

Traditionally, assessment has focused on things in your individual head, or even more narrowly, on things that can be remembered. In every part of the world outside of tests, however, knowledge is always at hand and accessible—in communities of practice, in readily available knowledge sources and through search and calculation algorithms. Our society also depends on a division of intellectual labour by means of which, although we may become experts in some things, we can never be experts in everything and so habitually rely on others’ expertise. This is a deep, structural feature of modern societies, a fundamental condition of our epistemic existence that conventional assessments have chosen to ignore. The social web and ubiquitous connectivity make knowledge even more immediately sociable. Our contemporary communications environment makes the cognitive-epistemic assumptions of traditional tests even stranger. Memory tests have forever been the butt of old-school folklore about learning by rote and cramming for exams. Today’s social web makes the always-ridiculed all the more ridiculous.

The most basic premise of conventional testing as we have come to know it is peculiar to testing itself: the idea that there is an exteriorized body of objective knowledge, the facts and logics of which can be transferred to memory. In reality, learning involves social communication (learning in a community of practice). It consists of a series of prompts and reference points into social knowledge. This means that what needs to be learned is not the supposed content of external knowledge, but how to access and deploy knowledge resources. Today’s cognitive science tells us that learning is inextricably social. Knowledge is intrinsically distributed. It is necessarily the result of social interaction. So, assessment should now also attempt to measure this irrepressibly sociable thing—knowledge—and measure it on its own, social terms.

Here are some possible social knowledge assessment scenarios. The world is an open book. There is nothing to be memorised, only strategies for access. What we need to assess, then, is how we can locate and use available knowledge resources. Or several students may be undertaking an assessment together, and the assessable outcome may not even be a single script, but we can nevertheless assess differential contributions to a shared, digital document. In these assessment frames, the heroic individual with prodigious memory may well be good at Trivial Pursuit, but this can no longer be our measure of epistemic excellence. Rather than attempting to calculate individual cognitive competence, we will assess collaborative knowledge competence. The focal interest may still be an individual, but the measure of their learning will be how they construct their understandings within socially embedded knowledge ecologies. To assess this, we allow the test-taker access to as much information as they can get, give them as many tools of search and synthesis as they can use, and afford them as much opportunity as they need to ask others for “answers” or respond to their requests for assistance. Finally, rather than having every isolated individual on the same page of the same test at the same time, we might have individuals collaborate in knowledge ecologies that work because they are based on the logic of differential and complementary rather than identically templated and replicated expertise.
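To make the scenario of assessing differential contributions concrete, here is a minimal sketch in Python of how contribution shares might be computed from the revision history of a shared document. The edit-log format and the character-count measure are illustrative assumptions, not the API of any particular collaborative platform; real environments record far richer traces of who did what.

```python
# Minimal sketch: crediting individual contributions within a shared,
# digital document. The (author, characters_added) log format is an
# assumption made for illustration.

from collections import defaultdict

def contribution_shares(edit_log):
    """Return each author's share of total characters added across revisions."""
    totals = defaultdict(int)
    for author, chars_added in edit_log:
        totals[author] += chars_added
    grand_total = sum(totals.values()) or 1  # guard against an empty log
    return {author: count / grand_total for author, count in totals.items()}

# Three students co-authoring a single document over four revisions.
log = [("Ana", 420), ("Ben", 130), ("Ana", 210), ("Chi", 340)]
print(contribution_shares(log))
# -> {'Ana': 0.5727..., 'Ben': 0.1181..., 'Chi': 0.3090...}
```

Counting characters is, of course, a crude proxy for intellectual contribution; the point of the sketch is only that a shared artefact carries enough trace data to credit individuals within a collaborative product.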

Assessing Metacognition

Research demonstrates that higher levels of academic performance result from the development of metacognition, or the capacity to think about one’s thinking processes.[1] Alongside situated, social cognition, assessment should also have a metacognitive focus. Experts in a subject domain typically organise knowledge into schemas and make sense of new information through processes of pattern recognition. Such knowledge representations are useful tools for understanding, knowledge making and knowledge communication. Furthermore, thinking is more efficient and effective when accompanied by the process of monitoring and reflecting upon one’s own thinking.

Traditional testing measures one’s schema-absorbing capacities rather than one’s schema-forming capabilities. And it measures cognition, bit by bit, at the expense of metacognition, or the ability to think holistically about one’s thinking. This has always been a deficiency of conventional assessment—its mundane epistemic narrowness. However, such tests have become even more anachronistic in an era when our new information order provides peculiar tools for navigating the enormous body of available text. In our textual journeys through the social web we encounter multiple ersatz identifications in the form of icons, links, menus, file names and thumbnails. We work over the algorithmic manifestations of databases, mashups, structured text, tags, taxonomies and folksonomies, in which no one ever sees the same data represented the same way. The person browsing the social web is a machine-assisted fabricator and analyst of meanings.

The social web, in other words, is not just a pile of discoverable information. Users can only navigate their way through its thickets by understanding its architectural principles and by working across layers of meaning and levels of specificity and generality. This is a new cognitive order; some of its elements arose in earlier modern times, but in their present intensity and extensiveness they require a peculiarly abstracting sensibility. They also demand a new kind of critical literacy (not “comprehension”) in which fact is moderated by dialogues about the status of knowledge and critical discussions of authorial interests (in the ‘edit history’ pages in wikis or the comments in blogs, for instance). Meanings and knowledge are more manifestly modal, contingent and conditional—not that serious knowledge has ever been anything but this, despite the implicit assumptions of tests. It’s just that this is less avoidably the case in the era of the social web. We need metacognition to get around in today’s textual and knowledge environments, and we need ways to measure this metacognition.

Assessing the Use and Creation of Multimodal Texts

The communications environments in which we now live have transformed the ways in which students communicate—via email, Facebook, Twitter, instant text and image messaging, blogs, videos and the like. As we have argued many times in this book, this has repeatedly thrown into question the nature of what we conventionally understand to be ‘literacy’. What we do in schools under the rubric of literacy, and particularly what we measure in our literacy assessments, has not caught up with these profound changes. In many of the subject or knowledge areas that use writing for representation, words alone are simply not enough. It is inconceivable, for instance, that a written report of a scientific experiment or a social studies field report could be adequate without reference to a range of tables, diagrams, graphs, images, or audio or video representations. Inexpensive, ubiquitous recording technologies and new social media sites for assembling and sharing multimodal texts make this cheap and easy to do.

For these reasons, effective assessment today needs to be in spaces in which learners can represent their knowledge multimodally—involving a mix of written text, oral record, image, audio and videos of gestural and spatial relations. The web is the first medium of representation and communication to provide such an accessible space for multimodal expression. It is ideally suited for knowledge representation on the part of learners, and assessment of those representations. The result is that curricula and assessments in their traditional formats and media are in need of updating in order to make optimal use of the affordances of these digital spaces, and to create learning and assessment environments which are manifestly contemporary in the communicative options they allow.

Assessment ‘For Learning’, Not Just ‘Of Learning’

The utility of assessment is enhanced if it also provides students and their teachers with specific feedback, not just a holistic score. Key questions, as we attempt to create assessments that play a more constructive role in learning, include the following: How can assessments be constructed so that they make learning goals clearer? How can they interpret student performance in a way that is more meaningful to learners, teachers and parents? How can they suggest what an individual learner or a group of learners still needs to learn? How can assessment be made more interactive and dynamic (as opposed to static and product-based), providing assistance as part of the assessment process and thereby revealing learner potential by influencing and helping to change performance? And how can assessment fulfil the promise of school reform by connecting daily curriculum activity more closely with mandated standards?[2]

Researchers have attempted to address these questions in the area of formative assessment. Considerable empirical support exists for the educational effectiveness of assessment closely linked to instruction.[3] Black and Wiliam provide a meta-analysis of over 250 studies of classroom-based and formative assessment, concluding that students learn better when they receive systematic feedback on the particular qualities of their work with a view to improving that work.[4] Rapid feedback on ‘doing’ provides more powerful learning outcomes.[5]

Specifically in the case of the assessment of writing, one of the demonstrated benefits of the use of technology in formative assessments is the possibility of immediate or rapid feedback. Educators can also use technology-based assessments to make sense of large amounts of data arising from complex performance, bringing to bear psychometric analyses that teachers could not practically apply in a normal classroom situation. Technology can also create new opportunities for seamlessly integrated assessment in support of self-paced instruction. And it can offer unprecedented possibilities for diagnostic assessment to meet the varied needs of diverse learners.[6]
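To illustrate the first of these possibilities, immediate feedback on writing, here is a minimal sketch using only a crude surface heuristic. The function and its threshold are invented for the example; real technology-mediated writing assessments rest on far richer linguistic and psychometric models.

```python
# Minimal sketch: immediate, machine-generated feedback on a draft.
# The overlong-sentence heuristic is an illustrative assumption, not
# a description of any real assessment engine.

import re

def quick_feedback(text, long_sentence=25):
    """Flag sentences longer than `long_sentence` words, as instant feedback."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    feedback = []
    for i, sentence in enumerate(sentences, start=1):
        n_words = len(sentence.split())
        if n_words > long_sentence:
            feedback.append(f"Sentence {i} runs to {n_words} words; consider splitting it.")
    return feedback or ["No overlong sentences detected."]

draft = "This is short. " + " ".join(["word"] * 30) + "."
for comment in quick_feedback(draft):
    print(comment)
```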

The possibilities for thorough formative assessment in the context of the ubiquitous, social web are such that, some day in the not too distant future, we may be able to abolish summative assessments, or at least reduce the distinction between summative and formative assessments to a mere formality, all summative data being created from aggregations of formative data. Working over the noisy data of machine-mediated feedback, the social web environments of our near future will be able to build more accurate views of individual student progress over time and of capabilities relative to any comparison group, however defined: a class, a school cohort or a demographic category. Our goal should be to make assessment integral to all learning, with continuous feedback provided and progress tracked. Assessment would then become so pervasive that it all but disappears. Literacies are an ideal site for the development of such a seamlessly integrated learning and assessment environment.
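As a minimal sketch of what ‘all summative data being created from aggregations of formative data’ could mean in practice, consider logging each piece of formative feedback as an event during learning, and deriving the summative profile as nothing more than a rollup of those events. The event schema and the mastery threshold below are assumptions made for illustration.

```python
# Minimal sketch: a summative profile derived purely by aggregating
# formative feedback events logged during learning. The event schema
# and the 0.8 mastery threshold are illustrative assumptions.

from statistics import mean

def summative_view(formative_events, mastery_threshold=0.8):
    """Roll continuous formative feedback up into a per-skill summative profile."""
    by_skill = {}
    for event in formative_events:
        by_skill.setdefault(event["skill"], []).append(event["score"])
    return {
        skill: {
            "mean": round(mean(scores), 2),
            "attempts": len(scores),
            "mastered": mean(scores) >= mastery_threshold,
        }
        for skill, scores in by_skill.items()
    }

events = [
    {"skill": "argumentation", "score": 0.6},
    {"skill": "argumentation", "score": 0.9},
    {"skill": "citation", "score": 0.85},
]
print(summative_view(events))
# -> {'argumentation': {'mean': 0.75, 'attempts': 2, 'mastered': False},
#     'citation': {'mean': 0.85, 'attempts': 1, 'mastered': True}}
```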

What if we move all assessment data collection into the space of learning? Then all we would ever measure is the substance of learning. There would be no need for after-the-event inferences, because we would be measuring learning itself, at its source. And the more data we collect, and the greater the variety of lenses through which we view it, the more valid and reliable our assessments will become.

Behind the often heated and at times ideologically gridlocked debate is a genuine challenge: to address gaps in achievement. There is an urgent need to lift whole communities and cohorts of students out of cycles of underachievement. The critics of the current system tell us that the tests don’t do what they purport to do, and that what they do, on their own terms, they do badly. Inasmuch as this claim is true, our task is to fundamentally transform our systems of educational measurement. That may mean we do more measurement and do it more accurately. We certainly need to measure different things and measure them differently. To the extent that we can transform assessment, we may also be able to transform learning.


Adapted from: Cope, Bill, Mary Kalantzis, Sarah McCarthey, Colleen Vojak, and Sonia Kline. 2011. “Technology-Mediated Writing Assessments: Paradigms and Principles.” Computers and Composition 28:79-96.

[1] Bransford, John D., Ann L. Brown, and Rodney R. Cocking, eds. 2000. How People Learn: Brain, Mind, Experience and School. Commission on Behavioral and Social Sciences and Education, National Research Council. Washington, DC: National Academy Press.

[2] Resnick, Lauren B. 2006. “Making Accountability Really Count.” Educational Measurement: Issues and Practice 25:33-37. Baker, Eva L. 2004. “Aligning Curriculum, Standards, and Assessments: Fulfilling the Promise of School Reform.” National Center for Research on Evaluation, Standards, and Student Testing (CRESST), University of California, Los Angeles.

[3] Black, P., R. McCormick, M. James, and D. Pedder. 2006. “Learning How to Learn and Assessment for Learning: A Theoretical Inquiry.” Research Papers in Education 21:119-132. OECD Centre for Educational Research and Innovation. 2005. “Formative Assessment: Improving Learning in Secondary Classrooms.” Organisation for Economic Co-operation and Development, Paris.

[4] Black, P. and D. Wiliam. 1998. “Assessment and Classroom Learning.” Assessment in Education 5:7-74.

[5] Cumming, J. Joy, Claire Wyatt-Smith, John Elkins, and Mary Neville. 2006. “Teacher Judgement: Building an Evidentiary Base for Quality Literacy and Numeracy Education.” Centre for Applied Language, Literacy and Communication Studies, Griffith University, Brisbane.

[6] Mislevy, Robert J. 2006. “Cognitive Psychology and Educational Assessment.” Pp. 257-305 in Educational Measurement, edited by R. L. Brennan. New York: Praeger. Baker, Eva L. 2005. “Improving Accountability Models by Using Technology-Enabled Knowledge Systems (TEKS).” National Center for Research on Evaluation, Standards, and Student Testing (CRESST), University of California, Los Angeles.

Articles on ‘Big Data’ in Education

Cope, Bill and Mary Kalantzis. 2015. "Sources of Evidence-of-Learning: Learning and Assessment in the Era of Big Data." Open Review of Educational Research 2:194–217. | download

Cope, Bill and Mary Kalantzis. 2015. "Interpreting Evidence-of-Learning: Educational Research in the Era of Big Data." Open Review of Educational Research 2:218–239. | download