Frequency of Formal Errors in Current College Writing, or Ma and Pa Kettle Do Research
Robert J. Connors and Andrea A. Lunsford

Robert J. Connors is an associate professor of English at the University of New Hampshire. The author of many articles on rhetorical history and theory, he was the winner of the 1982 Richard Braddock Award. Andrea A. Lunsford is professor of English and Vice-Chair for Rhetoric and Composition at The Ohio State University. She has published numerous articles and was co-winner, with Lisa Ede, of the 1985 Richard Braddock Award. Bob and Andrea are coauthors of the forthcoming St. Martin's Handbook.

College Composition and Communication, Vol. 39, No. 4, December 1988
Proem: In Which the Characters Are Introduced
The labyrinthine project of which this research is a part represents an ongoing activity for us, something we engage in because we like to work together, have a long friendship, and share many interests. As we worked on this error research together, however, we started somewhere along the line to feel less and less like the white-coated Researchers of our dreams and more and more like characters we called Ma and Pa Kettle--good-hearted bumblers striving to understand a world whose complexity was more than a little daunting. Being fans of classical rhetoric, prosopopoeia, letteraturizzazione, and the like, as well as enthusiasts for intertextuality, plaisir de texte, differance, etc., we offer this account of our travails--with apologies to Marjorie Main and Percy Kilbride.
Exordium: The Kettles Smell a Problem
Marking and judging formal and mechanical errors in student papers is one area in which composition studies seems to have a multiple-personality disorder. On the one hand, our mellow, student-centered, process-based selves tend to condemn marking formal errors at all. Doing it represents the Bad Old Days. Ms. Fidditch and Mr. Flutesnoot with sharpened red pencils, spilling innocent blood across the page. Useless detail work. Inhumane, perfectionist standards, making our students feel stupid, wrong, trivial, misunderstood. Joseph Williams has pointed out how arbitrary and context-bound our judgments of formal error are. And certainly our noting of errors on student papers gives no one any great joy; as Peter Elbow says, English is most often associated either with grammar or with high literature--"two things designed to make folks feel most out of it."
Nevertheless, very few of us can deny that an outright comma splice, its/it's error, or misspelled common word distracts us. So our more traditional pedagogical selves feel a touch guilty when we ignore student error patterns altogether, even in the sacrosanct drafting stage of composing. Not even the most liberal of process-oriented teachers completely ignores the problem of mechanical and formal errors. As Mina Shaughnessy put it, errors are "unintentional and unprofitable intrusions upon the consciousness of the reader. ... They demand energy without giving back any return in meaning" (12). Errors are not merely mechanical, therefore, but rhetorical as well. The world judges a writer by her mastery of conventions, and we all know it. Students, parents, university colleagues, and administrators expect us to deal somehow with those unmet rhetorical expectations, and, like it or not, pointing out errors seems to most of us part of what we do.
Of course, every teacher has his or her ideas of what errors are common and important, but testing those intuitive ideas is something else again. We became interested in error-frequency research as a result of our historical studies, when we realized that no major nationwide analysis of actual college essays had been conducted, to our knowledge, since the late 1930s. As part of the background for a text we were writing and because the research seemed fascinating, we determined to collect a large number of college student essays from the 1980s, analyze them, and determine what the major patterns of formal and mechanical error in current student writing might be.
Narratio: Ma and Pa Visit the Library
Coming to this research as historians rather than as trained experimenters has given us a humility based on several different sources. Since we are not formally trained in research design, we have constantly relied on help from more expert friends and colleagues. Creating a sense of our limitations even more keenly, however, have been our historical studies. No one looking into the history of research on composition errors in this country can emerge very confident about definitions, terms, and preconceptions. In almost no other pedagogical area we have studied do the investigators and writers seem so time-bound, so shackled by their ideas of what errors are, so blinkered by the definitions and demarcations that are part of their historical scene. And, ineluctably, we must see ourselves and our study as history-bound as well. Thus we write not as the torchbearers of some new truth, but as two more in the long line of people applying their contemporary perspectives to a numbering and ordering system and hoping for something of use from it.
The tradition of research into error patterns is as old as composition teaching, of course, but before the growth of the social-science model in education it was carried on informally. Teachers had "the list" of serious and common errors in their heads, and their lists were probably substantially similar (although "serious" and "common" were not necessarily overlapping categories).1 Beginning around 1910, however, teachers and educational researchers began trying to taxonomize errors and chart their frequency. The great heyday of error-frequency research seems to have occurred between 1915 and 1935. During those two decades, no fewer than thirty studies of error frequency were conducted.2 Unfortunately, most of these studies were flawed in some way: too small a data sample, too regional a data sample, different definitions of errors, faulty methodologies (Harap 440). Most early error research is hard to understand today because the researchers used terms widely understood at the time but now incomprehensible or at best strange. Some of the studies were very seriously conducted, however, and deserve further discussion later in this paper.
After the middle 1930s, error-frequency research waned as the progressive-education movement gained strength and the "experience curriculum" in English replaced older correctness-based methods. Our historical research indicates that the last large-scale research into student patterns of formal error was conducted in 1938-39 by John C. Hodges, author of the Harbrace College Handbook. Hodges collected 20,000 student papers that had been marked by 16 different teachers, mainly from the University of Tennessee at Knoxville. He analyzed these papers and created a taxonomy of errors, using his findings to inform the 34-part organization of his Harbrace Handbook, a text which quickly became and remains today the most popular college handbook of writing.
However Hodges may have constructed his study, his results fifty years later seem problematic at best. Small-scale studies of changes in student writing over the past thirty years have shown that formal error patterns have shifted radically even since the 1950s. The kinds and quantities of formal errors revealed in Mina Shaughnessy's work with basic writers in the 1970s were new and shocking to many teachers of writing. We sensed that the time had come for a study that would attempt to answer two questions: (1) what are the most common patterns of student writing errors being made in the 1980s in the United States?, and (2) which of these patterns are marked most consistently by American teachers?
Confirmatio I: The Kettles Get Cracking
The first task we faced was gathering data. We needed teacher-marked papers from American college freshmen and sophomores in a representative range of different kinds of schools and a representative range of geographic areas. We did not want to try to gather the isolated sample of timed examination-style writing that is often studied, although such a sample would probably have been easier to obtain than the actual marked papers we sought. We wanted "themes in the raw," the actual commerce of writing courses all across America. We wanted papers that had been personally marked or graded, filled with every uncontrolled and uncontrollable sign of both student and teacher personalities.
Gathering these papers presented a number of obstacles. In terms of ideal methodology, the data-gathering would be untouched by self-selection among teachers, and we could randomly choose our sources. After worrying about this problem, we finally could conceive of no way to gather upwards of 20,000 papers (the number of papers Hodges had looked at) without appealing to teachers who had marked them. We could think of no way to go directly to students, and, though some departments stockpile student themes, we did not wish to weight our study toward any one school or department. We had to ask composition teachers for help.
And help us they did. In response to a direct mail appeal to more than 1,500 teachers who had used or expressed interest in handbooks, we had received by September 1985 more than 21,500 papers from 300 teachers all across America.3
To say that the variety in the papers we were sent was striking is a serious understatement. They ranged in length from a partial page to over 20 pages. About 30% were typed, the rest handwritten. Some were annotated marginally until they looked like the Book of Kells, while others merely sported a few scrawled words and a grade. Some were pathologically neat, and others looked dashed off on the jog between classes. Some were formally perfect, while others approximated Mina Shaughnessy's more extreme examples of basic writing. Altogether, the 21,500+ papers, each one carefully stamped by paper number and batch number, filled approximately 30 feet of hastily-installed shelving. It was an imposing mass.
We had originally been enthusiastic (and naive) enough to believe that with help we might somehow look over and analyze 20,000 papers. Wrong. Examining an average paper even for mechanical lapses, we soon realized, took at the very least ten busy minutes; to examine all of them would require over 3,000 Ma-and-Pa-hours. We simply could not do it. But we could analyze a carefully stratified sample of 3,000 randomly chosen papers. Such an analysis would give us data that were very reliable. Relieved that we would not have to try to look at 20,000 papers, we went to work on the stratification.4 After stratifying our batches of papers by region, size of school, and type of school, we used the table of random numbers and the numbers that had been stamped on each paper as it came in to pull 3,000 papers from our tonnage of papers. Thus we had our randomized, stratified sample, ready for analysis.
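To make the procedure concrete, here is a minimal sketch of stratified random sampling of the kind described above. It is an illustration only, not the authors' actual procedure; the function name, the dictionary fields (region, school_size, school_type), and the proportional-allocation rule are assumptions, with only the three strata and the 3,000-paper target taken from the text.

    import random

    def stratified_sample(papers, total_sample=3000):
        # Each paper is assumed to be a dict carrying its stamped identifiers
        # and stratum labels, e.g.:
        # {"paper_id": 17342, "batch_id": 212, "region": "Midwest",
        #  "school_size": "large", "school_type": "public university"}
        strata = {}
        for p in papers:
            key = (p["region"], p["school_size"], p["school_type"])
            strata.setdefault(key, []).append(p)

        sample = []
        for group in strata.values():
            # Each stratum contributes papers in proportion to its share of
            # the whole collection, so the sample mirrors the full set.
            k = round(total_sample * len(group) / len(papers))
            sample.extend(random.sample(group, min(k, len(group))))
        return sample

Proportional allocation of this sort keeps each region, school size, and school type represented in roughly the same proportions in the sample as in the full collection of batches.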
Confutatio: Ma and Pa Suck Eggs
But--analyzed using what? From very early on in the research, we realized that trying to introduce strict "scientific" definitions into an area so essentially values-driven as formal error marking would be a foolhardy mistake. We accepted Joe Williams' contention that it is "necessary to shift our attention from error treated strictly as an isolated item on a page, to error perceived as a flawed verbal transaction between a writer and a reader" (153). Williams' thoughtful article on "The Phenomenology of Error" had, in fact, persuaded us that some sort of reader-response treatment of errors would be far more useful than an attempt to standardize error patterns in a pseudo-scientific fashion based on Hodges' or any other handbook.
We were made even more distrustful of any absolutist claims by our further examination of previous error-frequency research. Looking into the history of this kind of research showed us clearly how teachers' ideas about error definition and classification have always been absolute products of their times and cultures. What seem to us the most common and permanent of terms and definitions are likely to be newer and far more transient than we know. Errors like "stringy sentences" and "use of would for simple past tense forms" seemed obvious and serious to teachers in 1925 or 1917 but obscure to us today.5
While phenomena and adaptable definitions do continue from decade to decade, we knew that any system we might adopt, however defensible or linguistically sound it might seem to us, would someday represent one more historical curiosity. "Comma splice?" some researcher in the future will murmur, "What a strange term for Connors and Lunsford to use. Where could it have come from?"6 Teachers have always marked different phenomena as errors, called them different things, given them different weights. Error-pattern study is essentially the examination of an ever-shifting pattern of skills judged by an ever-shifting pattern of prejudices. We wanted to try looking at this situation as it existed in the 1980s, but clearly the instrument we needed could not be algorithmic and would not be historically stable.
We settled, finally, on several general understandings. First, examining what teachers had marked on these papers was as important as trying to ascertain what was "really there" in terms of formal error patterns. Second, we could only analyze for a limited number of error patterns--perhaps twenty in all. And finally, we had no taxonomy of errors we felt we could trust. We would have to generate our own, then, using our own culture- and time-bound definitions and perceptions as best we could.
Confirmatio II: Ma and Pa Hit the Road
Producing that taxonomy meant looking closely at the papers. Using the random number tables again, we pulled 300 papers from the remaining piles. Each of us took 150, and we set out inductively to note every formal error pattern we could discover in the two piles of papers. During this incredibly boring and nauseating part of the study, we tried to ignore any elements of paper content or organization except as they were necessary to identify errors. Every error marked by teachers was included in our listing, of course, but we found many that had not been marked at all, and some that were not even easily definable. What follows is the list of errors and the numbers of errors we discovered in that first careful scrutiny of 300 papers:
Error or Error Pattern                              Number Found
Spelling                                                     450
No comma after introductory element                          138
Comma splice                                                 124
Wrong word                                                   102
Lack of possessive apostrophe                                 99
Vague pronoun reference                                       90
No comma in compound sentence                                 87
Pronoun agreement                                             83
Sentence fragment                                             82
No comma in non-restrictive phrase                            75
Subject-verb agreement                                        59
Unnecessary comma with restrictive phrase                     50
Unnecessary words/style rewrite                               49
Wrong tense                                                   46
Dangling or misplaced modifier                                42
Run-on sentence                                               39
Wrong or missing preposition                                  38
Lack of comma in series                                       35
Its/it's error                                                34
Tense shift                                                   31
Pronoun shift/point of view shift                             31
Wrong/missing inflected endings                               31
Comma with quotation marks error                              28
Missing words                                                 27
Capitalization                                                24
"Which/that" for "who/whom"                                   21
Unidiomatic word use                                          17
Comma between subject and verb                                14
Unnecessary apostrophe after "s"                              11
Unnecessary comma in complex sentence                         11
Hyphenation errors                                             9
Comma before direct object                                     6
Unidiomatic sentence pattern                                   6
Title underlining                                              6
Garbled sentence                                               4
Adjectival for adverbial form ("-ly")                          4

In addition, the following errors appeared fewer than 4 times in 300 papers:

Wrong pronoun
Wrong use of dashes
Confusion of a/an
Missing articles (the)
Missing question mark
Wrong verb form
Lack of transition
Missing/incorrect quotation marks
Incorrect comma use with parentheses
Use of comma instead of "that"
Missing comma before "etc."
Incorrect semicolon use
Repetition of words
Unclear gerund modifier
Double negative
Missing apostrophe in contraction
Colon misuse
Lack of parallelism
As expected, many old favorites appear on these lists. To our surprise, however, some errors we were used to thinking of as very common and serious proved to be at least not so common as we had thought. Others, which were not thought of as serious (or even, in some cases, as actual errors), seemed very common.
Our next step was to calibrate our readings, making certain we were both counting apples as apples, and to determine the cutoff point in this list, the errors we would actually count in the 3,000 papers. Since spelling errors predominated by a factor of 300% (which in itself was a surprising margin), we chose not to deal further with spelling in this analysis, but to develop a separate line of research on spelling. Below spelling, we decided to go arbitrarily with the top twenty error patterns, cutting off below "wrong inflected ending." These were the twenty error patterns we would train our analysts to tote up.

Now we had a sample and we had an instrument, however rough. Next we needed to gather a group of representative teachers who could do the actual analysis. Fifty teaching assistants, instructors, and professors from the Ohio State University English Department volunteered to help us with the analysis. The usual question of inter-rater reliability did not seem pressing to us, because what we were looking for seemed so essentially charged with social conditioning and personal predilection. Since we did not think that we could always "scientifically" determine what was real error and what was style or usage variation, our best idea was to rationalize the arbitrariness inherent in the project by spreading out the analytical decisions.
On a Friday afternoon in January 1986, we worked with the fifty raters, going over the definitions and examples we had come up with for the "top twenty," as we were by then calling them. It was a grueling Friday and Saturday. We trained raters to recognize error patterns all Friday afternoon in the dusty, stuffy old English Library at OSU--the air of which Thurber must have breathed, and probably the very same air, considering how hard the windows were to open. On returning to our hotel that night, we found it occupied by the Ohio chapter of the Pentecostal Youth, who had been given permission to run around the hotel giggling and shouting until 3:30 a.m. In despair, we turned our TV volumes all the way up on white-noise stations that had gone off the air. They sounded like the Reichenbach Falls and almost drowned out the hoo-raw in the hallway. After 3:30 it did indeed quiet down some, and we fell into troublous sleep. The next day the Pentecostal Youth had vanished, and Ma & Pa had research to do.
Amplificatio: Ma and Pa Hunker Down
The following day, rating began at 9:00 a.m. and, with a short lunch break, we had completed the last paper by 5:00 p.m. We paused occasionally to calibrate our ratings, to redefine some term, or to share some irresistible piece of student prose. (Top prize went to the notorious "One Night," one student's response to an assignment asking for "analysis." This essay's abstract announced it as "an analysis of the realm of different feelings experienced in one night by a man and wife in love."7) The rating sheets and papers were reordered and bundled up, and we all went out for dinner.8
The results of this exercise became real for us when we totaled up the numbers on all of the raters' sheets. Here was the information we had been seeking, what all our efforts had been directed toward. It was exciting to finally see in black and white what we had been wondering about. What we found appears in Table 1.
Peroratio: The Kettles Say, "Aw, Shucks"
The results of this research by no means represent a final word on any question involving formal errors or teacher marking patterns. We can, however, draw several intriguing, if tentative, generalizations.
First, teachers' ideas about what constitutes a serious, markable error vary widely. As most of us may have expected, some teachers pounce on every "very unique" as a pet peeve, some rail at "Every student . . . their . . ." The most prevalent "error," failure to place a comma after an introductory word or phrase, was a bete noire for some teachers but was ignored by many more. Papers marked by the same teacher might at different times evince different patterns of formal marking. Teachers' reasons for marking specific errors and patterns of error in their students' papers are complex, and in many cases they are no doubt guided by the perceived needs of the student writing the paper and by the stage of the composing process the paper has achieved.
Second, teachers do not seem to mark as many errors as we often think they do. On average, college English teachers mark only 43% of the most serious errors in the papers they evaluate. In contrast to the popular picture of English teachers mad to mark up every error, our results show that even the most-often marked errors are only marked two-thirds of the time. The less-marked patterns (and remember, these are the Top Twenty error patterns overall) are
Table 1

    Error or Error Pattern                          # Found in     % of Total   # Marked     % Marked     Rank by #
                                                    3,000 Papers   Errors       by Teacher   by Teacher   Marked

 1. No comma after introductory element             3,299          11.5%          995        30%           2
 2. Vague pronoun reference                         2,809           9.8%          892        32%           4
 3. No comma in compound sentence                   2,446           8.6%          719        29%           7
 4. Wrong word                                      2,217           7.8%        1,114        50%           1
 5. No comma in non-restrictive element             1,864           6.5%          580        31%          10
 6. Wrong/missing inflected endings                 1,679           5.9%          857        51%           5
 7. Wrong or missing preposition                    1,580           5.5%          679        43%           8
 8. Comma splice                                    1,565           5.5%          850        54%           6
 9. Possessive apostrophe error                     1,458           5.1%          906        62%           3
10. Tense shift                                     1,453           5.1%          484        33%          12
11. Unnecessary shift in person                     1,347           4.7%          410        30%          14
12. Sentence fragment                               1,217           4.2%          671        55%           9
13. Wrong tense or verb form                          952           3.3%          465        49%          13
14. Subject-verb agreement                            909           3.2%          534        58%          11
15. Lack of comma in series                           781           2.7%          184        24%          19
16. Pronoun agreement error                           752           2.6%          365        48%          15
17. Unnecessary comma with restrictive element        693           2.4%          239        34%          17
18. Run-on or fused sentence                          681           2.4%          308        45%          16
19. Dangling or misplaced modifier                    577           2.0%          167        29%          20
20. Its/it's error                                    292           1.0%          188        64%          18
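The percentages in Table 1 can be reproduced by simple division: each "% marked" figure is the number of errors teachers marked divided by the number of errors found. The sketch below recomputes those rates along with two possible summaries of them; the article does not say how the overall 43% figure was averaged, so treating it as the unweighted mean of the twenty per-pattern rates is an assumption (that mean comes to about 43%, while weighting by the number of errors found gives about 41%).

    # Counts transcribed from Table 1:
    # (number found in the 3,000 papers, number marked by teachers).
    table1 = {
        "No comma after introductory element": (3299, 995),
        "Vague pronoun reference": (2809, 892),
        "No comma in compound sentence": (2446, 719),
        "Wrong word": (2217, 1114),
        "No comma in non-restrictive element": (1864, 580),
        "Wrong/missing inflected endings": (1679, 857),
        "Wrong or missing preposition": (1580, 679),
        "Comma splice": (1565, 850),
        "Possessive apostrophe error": (1458, 906),
        "Tense shift": (1453, 484),
        "Unnecessary shift in person": (1347, 410),
        "Sentence fragment": (1217, 671),
        "Wrong tense or verb form": (952, 465),
        "Subject-verb agreement": (909, 534),
        "Lack of comma in series": (781, 184),
        "Pronoun agreement error": (752, 365),
        "Unnecessary comma with restrictive element": (693, 239),
        "Run-on or fused sentence": (681, 308),
        "Dangling or misplaced modifier": (577, 167),
        "Its/it's error": (292, 188),
    }

    # Per-pattern marking rate: share of the errors found that teachers marked.
    rates = {name: marked / found for name, (found, marked) in table1.items()}
    for name, rate in sorted(rates.items(), key=lambda item: item[1], reverse=True):
        print(f"{rate:5.0%}  {name}")

    # Two ways to summarize how much gets marked overall.
    unweighted_mean = sum(rates.values()) / len(rates)       # about 0.43
    weighted_mean = (sum(m for _, m in table1.values())
                     / sum(f for f, _ in table1.values()))   # about 0.41
    print(f"Unweighted mean of rates: {unweighted_mean:.0%}")
    print(f"Overall marked/found:     {weighted_mean:.0%}")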