Monday, July 06, 2015

Pass the MOOCs -- Part 4 (STEM vs Humanities)

Last update: Monday 7/6/15
Part 2 of this series noted my surprise at the beginning of the MOOCs I was taking -- MOOCs offered by two of the nation's leading universities -- when I realized that their course descriptions substantially underestimated both the prerequisite knowledge and the hours of study per week required to pass. By contrast, this note conveys my surprise at the end of the first four MOOCs. At that point I found that their use of student peer evaluations of final projects was (a) appropriate and (b) effective. (Note: only the Hopkins MOOCs offered via Coursera used peer evaluations; the M.I.T. MOOC offered via edX did not.)

Preconceptions
In hindsight, my surprise reflected my unconscious acceptance of some plausible but unsubstantiated "conventional wisdom" about STEM. 
  • There is a widespread notion that "massive" courses, courses enrolling hundreds and possibly thousands of students, are workable in STEM but not in the humanities, because a question in STEM has a "correct" answer, whereas the subjective nature of the humanities allows for many "correct" answers. 
     
  • Indeed, a student's final paper in a humanities course might develop a plausible theme that provides a "correct" answer that no one had ever thought of before -- a highly unlikely occurrence in physics, chemistry, or math.
     
  • It's just a few short hops from these assumptions to the conclusion that STEM courses can produce plausible assessments of a student's understanding via a series of carefully designed multiple-choice questions that can be graded by computers, whereas this method is obviously inappropriate for humanities courses, with the possible exception of remedial or introductory courses. 
     
  • Therefore MOOCs in STEM can enroll thousands of students, whereas enrollments in humanities courses must stay within the traditional limits of the number of students whose essays and other open-ended projects can be fairly assessed by their instructors. So we are talking tens, perhaps scores, of students, but certainly not hundreds or thousands.
This logic is correct up to a point, but what I have just described goes way too far. Yes, it might be argued that a student's understanding of the fundamentals of a field -- the content of most introductory STEM courses -- could be assessed via a series of carefully designed multiple-choice questions. However, intermediate and advanced STEM courses also strive to enhance their students' capacities to address real problems in their fields. 

There may be ultimate agreement that a particular solution of a real STEM problem is "correct". Nevertheless, real problems also afford many "correct" solutions, developed using a range of methods, with varying trade-offs between the mix of methods embodied in one solution and the mix embodied in another. So multiple-choice questions should not be the only method used to assess a student's comprehension in intermediate or advanced STEM courses.

When I was a classroom instructor, it never occurred to me to make final assessments via multiple-choice questions in my upper-level courses, because open-ended questions were obviously more appropriate and my classes were always small enough for me to carefully read and reread my students' responses. But the instructors at Hopkins were committed to teaching intermediate courses to large numbers of students. They couldn't read all of the projects themselves ... and they couldn't just use multiple-choice questions ... so they also used ...

Peer evaluations
I was surprised when I encountered this feature of the Hopkins MOOCs in their course descriptions, and all the more surprised to find that peer evaluations would provide 40 percent of the final grades. Ironically, my most recent encounters with discussions of student peer evaluations had been in the context of experimental courses being developed for MOOCs in the humanities. 

I resented the possibility that my work would be assessed by students who probably knew as little about the subjects as I did ... and possibly less. On the other hand, the notion of my assessing other students' choices of solution strategies with respect to subject matter in which I was still a novice was equally repulsive. So I decided not to participate in the peer evaluation process in the first MOOC I took ...

... until I read the penalty clause. Students who didn't evaluate other students would be penalized with a loss of 20 percent of the points they had earned. Ugh!!! ... OK ... no problem. I'll just ... go through the motions ... which I did ... for my first two MOOCs.

For some reason, I decided to participate fully in the peer evaluation process in my third MOOC, which was far more difficult than the first two. As I had anticipated, I addressed the assigned problems using techniques that I fully understood, whereas other students sometimes used techniques that I barely understood. When I did more reading, I sometimes concluded that their approaches were plausible; in other cases, even after more reading, I still wasn't sure. 

Grade inflation
When I was teaching subjects that I thoroughly understood, I had no problem assigning low grades to students whose work showed low levels of understanding. But I couldn't give low grades to fellow students whose work might really be displaying a far higher level of understanding than I was able to perceive, because I did not yet thoroughly understand the subject myself. So I gave all of the other students higher grades than I thought they really deserved (with one exception, described below). That's the bad news.

An unexpected learning opportunity
The good news for me personally was my discovery of other dimensions to the old adage, "If you want to achieve better understanding of a subject, try teaching it." Preparing my class notes, presentations, and exams required extensive reviews of the subject matter that invariably deepened my understanding of the subjects I was teaching. Engaging in conversations with peers also broadened my perspectives, but perhaps too often I walked away from those conversations thinking, and sometimes saying, "Let's just agree to disagree" ...

... but Hopkins wasn't going to let me use that dodge ... As a peer evaluator, I was now (temporarily) part of the course teaching staff, so my assessments would become part of the summative feedback the other students would receive. But here's the irony. When I didn't understand what another student was saying, now was the "teachable moment" for me, now was the time to do more reading to try to learn what the other student already knew and I didn't. And when I thought I understood but disagreed with what another student said, given my own limited understanding of the subject, I had to consider the possibility that my disagreement arose because I really didn't understand what he or she was saying; so once again I was faced with a "teachable moment" for me.

However, the best news came when I confronted a student's analysis with which I thoroughly agreed. He or she (I couldn't tell from their unfamiliar first name) used the "same" approach that I did and came to the "same" conclusions, but ... their analysis was so much more persuasive, so much more coherent, so much more elegant than mine. Wow!!! When a hotshot instructor with a PhD in a field and ten to fifteen years' experience doing research makes an elegant presentation, I say, "Of course. If I had done that much study and had that much experience, of course." But when another student with, presumably, knowledge and experience similar to mine produces such an elegant report -- so simple, but so powerful -- that was the teachable moment in which I learned that what I already knew could be used far more effectively than I had previously imagined.

A "C" student
A good friend of mine likes to tell the story of a colleague who had a student -- let's call the student "Bill" -- in three undergraduate courses, and in each course he had given Bill a final grade of C. At the start of the next semester Bill asked why he always got C's. The colleague explained that Bill was a C student because the work he did was C-quality work. Bill was discouraged; nevertheless, he enrolled in one more course with the colleague and worked very hard. Indeed, he received A's on all of his quizzes, an A on the midterm, and an A on the final exam. So he was surprised and deeply disappointed when he received his final grade: another C. When he confronted his teacher, he demanded to know, "How can you give me a final grade of C when you gave me A's on all my quizzes, an A on my midterm, and an A on my final exam???" ... The colleague paused for a moment, then replied, "Yes, I was also perplexed when I reviewed the grades for your quizzes, midterm, and final exam." "So why did you give me a final grade of C???" "I gave you a C because of what I learned in the three previous courses you took from me." "What did you learn?" asked Bill. "I learned that you're a C student."

My final grade
Of course, the point of that little fable is one professor's overzealous resistance to grade inflation -- grade inflation like that in the Hopkins MOOC, to which I added my own small contribution. Peer evaluations were 40 percent of the grade, but I received perfect scores on all of my quizzes; so I already had 60 points in the bag before the final projects were graded. Believe me, I sweated each of those 60 points, points that were allocated via multiple-choice tests. But my grasp of the subject was not, and still is not, firm. If I had any doubts about the quality of my understanding, all doubts vanished when I confronted the elegant report from the student who implemented my approach to a project so much better than I did. That's the kind of high-quality work that A-level understanding can produce. By contrast, at this point in this particular subject matter, I'm a C student. So I wasn't surprised to receive my final grade. 100!!! Perfect score ... because I received perfect scores for my projects from my student peers. Of course. The grade inflation gravy ladle pours both ways.
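
For readers who want to check the arithmetic, here is a minimal sketch in Python of the grading scheme as described in this note: quizzes worth 60 percent, peer-evaluated projects worth 40 percent, and the 20 percent penalty for skipping peer evaluation. The weights come from the post itself, but the function and the exact formula are my own reconstruction; Coursera's real computation may differ.

    # A sketch of the grading scheme described above. Names and the exact
    # formula are assumptions; the platform's real computation may differ.
    QUIZ_WEIGHT = 0.60            # quizzes: 60 percent of the final grade
    PROJECT_WEIGHT = 0.40         # peer-evaluated projects: 40 percent
    NON_EVALUATOR_PENALTY = 0.20  # forfeit 20 percent of points earned

    def final_grade(quiz_pct, project_pct, did_peer_evaluate=True):
        """Return a 0-100 final grade from quiz and project percentages."""
        earned = QUIZ_WEIGHT * quiz_pct + PROJECT_WEIGHT * project_pct
        if not did_peer_evaluate:
            earned *= 1.0 - NON_EVALUATOR_PENALTY  # the penalty clause
        return earned

    print(final_grade(100, 100))         # 100.0 -- perfect quizzes and peer scores
    print(final_grade(100, 0))           # 60.0  -- the "60 points in the bag"
    print(final_grade(100, 100, False))  # 80.0  -- cost of skipping peer review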

Final comments
Grade inflation is a bad thing, but my limited experience in the Hopkins MOOC suggests that the learning benefits of student peer evaluations in upper-level STEM courses far outweigh the liabilities of grade inflation. Indeed, participation in peer evaluations enables students to make more realistic assessments of their own understanding of a subject. In my case, without active participation in the peer evaluation process in my third MOOC, I would not have attained such a clear perspective on how much I really understood about the subject matter of the MOOC and how much more I need to learn. So I know that I'm really a C student for now, but not for long ... :-)
