What Do You Think?
AP Teachers Do Not Measure Up
Why Colleges Think They're Better Than AP
By Jay Mathews, Washington Post Staff Writer, December 14, 2004
I have written many columns lately about college-level courses in high school, and I apologize for offering one more. Those who are sick of the topic should read no further, but instead e-mail this column to any Advanced Placement (AP) teachers they know, since I am about to reveal that many of them don't measure up and should seek some other line of work.
This is not my opinion, but the view of former college philosophy professor William Casement, expressed in his article "Declining Credibility for the AP Program" in the Fall 2003 issue of the Princeton, N.J.-based journal Academic Questions, published by the National Association of Scholars. And believe me, if you chat up a good sampling of the professoriate at our nation's finest colleges, you will find many agreeing with him.
I have had a long telephone conversation and exchanged e-mails with Casement since I read his provocative article last summer. He is a conscientious and thoughtful person who has done useful research on selective colleges' tightening of credit for AP courses. I also think he is completely out of touch with the reality of both the AP courses and teachers he dismisses and the freshman college introductory courses and professors he embraces. After two decades of observing the depth and power of college-level courses, including AP and International Baccalaureate (IB), in high school and the inferior teaching of the college introductory courses, I think Casement and many of his friends in academia need to take a much closer look at what is actually happening in high schools, and in their own college freshman courses. But it is not fair to ridicule Casement's argument before he has a chance to make it, so here is a summary of what he says. The Academic Questions piece is not available online at the moment, but the journal's managing editor John Irving (email@example.com) might send you a copy if asked, and Casement (Wmrcase@aol.com) says he would be happy to get e-mails.
In the article, Casement points out that many selective colleges have made it more difficult for students who took AP courses and tests in high school to gain college credit for their work. This trend has accelerated in college academic departments as the number of college freshmen who arrive annually with AP tests on their records has reached the one million mark, and it continues to climb.
"Selective colleges are increasingly wary of the program, and tightening further on the award of credit," Casement says. The AP exam is scored on a point system, 5 points being the equivalent of an A in a comparable college introductory course and a 3, the equivalent of a college C-plus, the lowest score that will earn college credit. One way colleges have tightened credit, Casement says, "is to reject not only scores of 3 but also of 4, leaving only scores of 5 as credit worthy."
"Policies like these are not simple administrative fiats, but a collection of decisions by individual departments," he says. "The matter usually lies within departmental control, and at various other schools -- William and Mary, Carnegie Mellon, Washington University (St. Louis), and Carleton, to name a few as examples -- several departments now require a score of 5, although other departments do not."
I hasten to mention that there remains a big difference between the way college departments award credit for AP and IB and the way the admissions offices of those same colleges regard the college-level courses. The faculty Casement has spoken to are suspicious of the whole idea of high school kids taking college courses, but the admissions deans love it, and want to see a big helping of AP or IB on every applicant's resume. Casement doesn't deny that, but his concern is what happens to those bright young people after they are admitted and try to get credit so they can take more advanced courses their freshman year.
"AP is talked about in the meetings of the Ivy League deans, where the growth and quality of the program are a significant concern," Casement says. "One veteran of these meetings summarizes the group's misgivings: . . . 'I think that we are unanimous in wanting to see AP as a means of placing students in appropriate college courses but not as a substitute for the full experience of a residential college. . . . [E]ven when our faculty are willing to allow students with AP to place out of introductory courses, they rarely believe that the AP experience is equivalent to those courses.' "
Casement says that colleges have little confidence in the ability of AP teachers and their students to handle the subject matter as well as college instructors and undergraduates. And because average AP scores have not dropped as much as he would expect given the increase in the number of AP examinations, he thinks the AP program cannot be providing as demanding and useful an academic experience as introductory college courses.
And therein lies a curious double standard. Casement's view of AP is based on a wealth of information about Advanced Placement, willingly provided by the College Board, but he has almost no information about the college introductory courses that he says are so much better. It is like betting on a horse with no record but a well-regarded trainer, not exactly the way to have a good day at the track.
Casement suggests that the College Board has not kept its AP exams at a college level. The College Board says it has, and offers as proof the fact that it gives AP tests to samples of college freshmen every five years or so and compares their introductory course grades to their AP scores. Casement complains that the five-year gaps are too large, since AP test taking is growing so fast, but he uses the College Board data anyway, without any apparent embarrassment, to buttress his points.
Among the questions Casement does not consider are these: how many colleges give a nationally standardized exam to students who have completed their introductory courses to see how much they have learned compared to what they might have learned in an AP course? How can we be certain that the professors who organize those introductory courses and write the exams are not just teaching and assessing a few peculiar issues that interest them, rather than the topics outlined in the college catalogue?
Casement told me he does not know of any colleges that give an AP-like test to confirm that their introductory courses are fulfilling their promises. I contacted 10 colleges and universities around the country and found no evidence of such an exam being given anywhere.
Officials at several of the colleges who are restricting credit for AP and IB told me they have no research that would show if their introductory courses are better or worse than AP or IB. Furthermore, few of them have any research that shows how well a student who has been given credit for AP or IB does in the next level course, when compared to students who took the college's introductory course rather than skipping it with high school credit.
Casement cites one such study by two departments at Harvard that led that university to decide that only grades of 5 on AP tests could be used for credit in any department. This was because students in three courses who got credit for scores of less than 5 did not do as well in follow-up courses as students who had taken the college's introductory courses.
That was one small study. I found that another selective college has done a small study of its own that reaches a different conclusion. At Claremont McKenna College in Claremont, Calif., students who skipped introductory calculus by getting a grade of 4 or 5 in AP calculus AB or BC had an average grade of 11 out of 12 points in its follow-up calculus course. This was better than the average grade of 9.51 for all students in that follow-up course. The same thing happened in Spanish. Students who skipped the equivalent course at Claremont McKenna by scoring 4 or 5 in AP Spanish had an average grade of 11.29 in the college's follow-up course, which was higher than the average grade of 10.68 for all students in that course.
Many colleges cloak the quality of their courses in respectful adjectives without much data to support their claims, and Casement provides cover for them by overlooking several possible explanations for their tightening of AP credit that have nothing to do with any decline in the quality of AP courses. College department heads, for instance, might be downplaying AP because they are unfamiliar with the AP program's strengths, since few professors have spent any time recently inside an AP class. Or they might want to show themselves academically tougher than rival colleges also recruiting high-scoring freshmen. Or they might be concerned that too much AP credit would force them to cut back on their introductory courses, which provide jobs for their graduate students and junior professors. There is not much research to back up these speculations, but neither is there much to support Casement's argument about a decline in the quality of AP.
In an interview, Casement accepted the traditional position of colleges when challenged about the quality of their teaching. He said they are respected institutions with long histories, and that should be enough for everyone when compared to an organization like the College Board that is trying to protect its franchise. Or as Casement put it, ignoring colleges' own tendency to distort facts in their own marketing campaigns, "I would trust whatever assessments the colleges make, whether they have formal data or not, more than I would trust what the present College Board has to say about its data."
You read that right. I showed the quote to him, and he stood by it. I put off writing this column for a while, figuring he would step back from that precipice and see that he was accepting all the college assertions at face value, but he stuck with it. Then last week Linda Seebach, editorial writer and columnist with the Rocky Mountain News, scooped me with a good piece about Casement's article, so I couldn't wait any longer. She suggests that somebody give the same standardized tests, maybe parts of the Graduate Record Examination, to AP and college introductory course students and find out who has been taught best. Casement wants AP students to take college tests and have professors grade them.
Until something like that happens, colleges will never accept the notion that mere high school teachers could ever match their elevated level of instruction. The National Research Council, for instance, indulged in this nose-in-the-air double standard in a 2002 report, "Learning and Understanding," by the Committee on Programs for Advanced Study of Mathematics and Science in American High Schools. It attacked the College Board for producing AP courses and tests that it said were too broad and shallow. But that same panel overlooked the fact that the AP courses and tests were designed by College Board committees to be virtually identical to the introductory college courses that the National Research Council panel members, many of them college professors, chose not to criticize.
College faculty, at least in my experience, rarely suffer from any lack of self-regard when they are comparing themselves to high school teachers. They have impressive degrees, and assume that makes them great educators who know what is best for their students, or at least puts them at a higher level, both intellectually and socially, than those high school AP teachers without the proper credentials.
Casement notes that, according to the College Board, only six percent of AP teachers have doctorates and only about half hold master's degrees "in an academic discipline . . . consistent with the AP course that they teach." He asks: "At what colleges would a faculty profile like this be considered acceptable -- roughly half of the faculty lacking a master's degree in what they teach?"
This sentence awakened sour memories. It was very early in my life as an undergraduate that I learned about the great gap between academic credentials and teaching ability. My introductory college courses were sleep-inducing lectures in large halls with hundreds of students who had little access to their professors and tried, but failed, to get useful insights from the teaching assistants we saw once a week. My sons and their friends attended similarly large private and public universities, and had the same complaint. The message to college freshmen was and is: absorbing this stuff is your job, not ours. This approach encourages memorization and regurgitating the textbook on exams, but you are paying $40,000 a year for it, so you figure it must be good. And the colleges have made sure there is no higher ed version of an independently graded AP or IB test to prove otherwise.
Now let us look at the AP and IB courses. There are usually no more than 20 to 25 students per teacher, and three to four hours of class a week. Students have many opportunities to ask questions, float theories, debate important points and get regular feedback -- rather than the standard college midterm and final -- on how they are doing.
There are some bad AP and IB teachers out there, just as there are bad college instructors, but in my experience, good AP and IB courses are more common than good introductory college courses. AP teachers like Eric Rothschild in Scarsdale, N.Y., or David Keener in Alexandria, or Jaime Escalante in East Los Angeles, Calif., have used simulations and humor and visual aids and knowledge of each student to make those classes work, not something you usually find in big university lecture halls.
College professors are terrific at advising graduate students on their research, but what we are talking about is teaching introductory calculus, or biology or economics to teenagers, which takes patience, clarity and attention to individual student needs. The last time I checked, those were not qualities that universities looked for when handing out PhDs.