Critical Thinking and Policy

For my final post of this semester, I’d like to treat the question of how we assess critical thinking in the writing classroom. Hillocks notes that although state writing tests claim to prioritize critical thinking, their prompts include no specific concept of what that actually means (2). Indeed, Florida State University President Eric Barron highlighted critical thinking as a central component of the new curriculum that FSU will be adopting starting in the fall of 2015. The problem with defining critical thinking is captured by DasBender in the online book Writing Spaces, who describes it as a term which “makes you draw a blank every time you think about what it means” (1). It is a term which the liberal arts have classically claimed, yet one which no student would list on a resume as a skill.

One of our goals for educational policy, then, should be to work toward a writing construct which provides a clear definition of what we mean when we say we want critical thinking and, most importantly, of what it looks like in practice. Is it simply the ability to question assumptions and synthesize sources? Is it even something that can be directly taught, or is it something learned over time through repeated exposure to people engaged in it as a practice?

More importantly, even if critical thinking can be defined, it remains to be seen whether it can be institutionalized on a state assessment in the way that Hillocks sees it used. If critical thinking is best developed within a community of practice, the distant, impersonal nature of state assessment seems a very poor place to try to assess it.


Access and Assessment – Who Is Still Left Out?

In this post, I want to expand on the claims made by Diane Penrod in the chapter “Access Before Assessment.” The central claim of this chapter is that despite the growing presence of digital technology in the lives of mostly white, middle-class households, there remain significant gaps in access to these technologies that create inequities across income, race, and region. I want to focus specifically on region, and raise some questions for how that can impact the way we think about assessment. Penrod notes that most lower-class families rely on public libraries to facilitate their access to the Internet and computers and that “in many communities throughout the United States, libraries are regionally located, and poor or working-class families outside of America’s urban centers may not have the transportation or the time to get children to libraries on a regular basis.” In short, many rural communities continue to lack not only access to the traditional materials of literacy, such as books and newspapers, but the emerging capabilities of digital literacy as well.

            Indeed, the FCC reported earlier this year that 14.5 million Americans in rural areas still lack broadband internet access. This number, which seems small at first glance, looks much larger when we consider how much of our educational lives are now structured around constant access to high-speed Internet. Assessment practices notwithstanding, university life in America presumes not only access to but competency with the basic tools of digital communication such as e-mail, web browsing, and search engines, not to mention course platforms like Blackboard. We assume that if we send out a mass e-mail to our class the night before a meeting, for instance, not only that our students will read it but that they will think to check their e-mail in the first place. It’s clear that these assumptions may be typical, but they are certainly not universal.

            The question I want to pose, then, is: whose responsibility is it to make sure students enter the composition classroom with basic digital literacy skills? Is it the job of high schools, parents, university orientation programs? And what do we do when these agents fail in their responsibility? With the Internet a constant in the lives of so many students and educators, I don’t think we can take for granted that we all agree on how to answer these questions.

Reduction and Rubrics

This semester, I decided to stop using rubrics as part of my writing assessment pedagogy. In the past, I had employed a rubric for all four papers of my ENC 1102 course. The experience was disheartening because I found that students wrote to fit the rubric rather than to construct their own argument or to conduct their own research. Once, I even allowed my students to create their own rubric. To my chagrin, the rubric they ended up writing looked almost exactly like the one I had designed in the first place.

Rubrics claim to isolate and measure discrete writing skills, but what I have found is that they make writing too easy. By offering students a list of exact points to hit, the rubric has two detrimental side effects: 1) it makes students too easily satisfied with their work, and 2) it shifts the responsibility for the written work from the student to the rubric. In my experience, once students hit the points on the rubric needed to earn the grade they wanted, they stopped. Moreover, both success and failure are subsumed by the rubric – if students do well, it’s not because they came up with an original or creative piece of writing, but because they completed a formula and connected a few dots. Similarly, if students don’t do so well, they can shrug off responsibility for their writing by blaming some aspect or another of the rubric. I want my students to feel self-motivated and responsible for both their successes and failures, and so I decided this semester that I would not use rubrics.

An Underlying Tension

Beneath this week’s readings on validity and reliability lies a tension that I’m not sure how to resolve, a tension that speaks to our contemporary concerns about writing assessment. In their detailed analysis of the concepts, Cherry and Meyer date them to the early twentieth-century testing movement, the movement that informed modern-day psychometric testing (30). The question this brings to mind concerns the congruence of these concepts with the anti-psychometric stance that informs the holistic assessment our field tends to prefer. That is: how have we as a field positioned these concepts, which stem from premises about people, writing, and assessment with which we disagree, within our paradigm? How do we warrant their inclusion?

            Take something like portfolio grading, a method that many of our colleagues use and which resists breaking writing down into discrete, analyzable units. How does one measure validity or reliability within portfolio grading, or can one at all? My point here is not to challenge these concepts or portfolio grading; rather, it is to examine the tension between one school and the other, and to examine how concepts are appropriated between schools of thought. When we use these concepts, do we alter the premises out of which they developed, and how does that change the way we apply them to the assessment methods we design? Perhaps some of you more versed in the literature could direct me to some work which seeks to resolve this – or perhaps I have created conflict where there is none to be had. I welcome any resources you all might have.

A Gap in Knowledge

The readings for this unit have focused on competing models of assessment, but have not addressed to a great extent the underlying theories of writing which inform those models. Although Behizadeh and Engelhard take up the issue when they discuss the competing writing theories of form, idea and content, and sociocultural context, the other readings for this unit emphasized competing theories of assessment and the issues which inform them. This leaves me with a gap in knowledge – a basic grasp on assessment theories, what they measure, why they measure it, and who designed them, but little knowledge of the answer to the question “What is writing?” This is significant because that answer seems like it should come before any enumeration of what we’re assessing and why we’re assessing it.

The central tension seems to be between writing conceived as the transcription of thought from the mind onto the page or screen (as we see in standardized testing movements and machine scoring movements, for instance) and writing as a conscious, creative process (as in teacher-driven and holistic scoring movements). But that sets up a dualism which provides an oversimple answer to the question. For surely we often do have thoughts before we sit down to write, just as we often develop new thoughts as we write.

Yet even combining these two theses into a “best of both worlds” definition seems to oversimplify the question. Within these concepts, as within the assessment theories which flow from them, there are doubtless mutually exclusive claims about the human mind and brain, the influence of culture and social upbringing, and the role of writing in society writ small and large. And so the gap remains, and I think I would benefit from reading some more focused research which tries to address “What is writing?” on its own terms, so that I can more clearly connect theories of writing to theories of assessment.