Many instructors use only the students’ end-of-semester course evaluations for feedback on their teaching, and many do so only because the university requires it; often the information is ignored. Yet if the process is an honest one, much useful information can be gathered after the course is finished. At that point students can see the big picture and appreciate the overall structure and flow of the class sessions, and material that seemed difficult when first encountered may now seem simple.
By the end of the course, it is too late to ask students about material and presentations from the first weeks. But more general questions, especially about the pacing of the material, its articulation with labs and other activities, and its connection with previous courses or with courses in students’ major areas, would be useful in planning the next class.
Any time we use observational data to draw conclusions, especially cause-and-effect claims, we are entering dangerous territory. Much of the scientific enterprise today is not truly experimental; it relies on methods developed to improve the collection, analysis, and interpretation of observations so that they will stand up to scientific scrutiny. We know, for instance, that data from a single (ex post facto) study rarely establish cause and effect, and that larger differences are more meaningful than smaller ones. Sources of error include both imprecision and bias.
The implications for our purposes are direct. First, reports from a single class cannot stand alone, since too many confounding factors are involved. To draw any conclusions we need multiple data sets: more than one semester, more than one course, more than one method. We know the data are fallible, and we must act to improve them. Before relying on the evaluations alone, it is probably best to study their relationships with other influences: course difficulty, major area, student disaffection, instructor popularity, and so forth. We should not fall into the trap of settling for what we have when the procedure can be improved. It is incumbent on you to collect as much data as possible, and to do so as precisely as possible, to document your efforts.
A beginning instructor would be wise to attend to the results of these assessment instruments, because they will be used in retention, tenure, and promotion decisions. If a decision is to be made about retention, for example, instruction is typically one of the aspects judged, and part of the file you prepare will contain summaries of the evaluations you have received. You can (and should) also include other assessments: summaries of observations by colleagues or by staff of the office of instructional effectiveness, for example, and work you have done to improve your teaching effectiveness. Although you should be concerned about the student-evaluation component, remember that the people reviewing your file are seasoned veterans whose job is to weigh all sources of information; they do not expect perfection.
© CET, SFSU 2003