An early post today because I just checked out the AP test scores of my students from my last year as a teacher. 51% of my students taking the test got a 3 or better. Over the years my success rate has fallen, but this was a better rate than two years ago.
Although I’d prefer the rate to be above 60%, the College Board doesn’t make it easy for a teacher to determine which areas to focus on to improve it. So, while a rate under 60% irks me each year, I’ve learned to throw my hands up in the air and just move forward.
It’s frustrating, and just another example of teachers wanting to improve but not being given the appropriate information to do so. Instead, the district tells us to get some percentage of students, say 60%, to take the AP test (as if we can control that), with 60% of those getting a 3 or better (accompanied by only a vague analysis of how students performed).
Knowing that my students averaged a 3.4896 on the synthesis essay isn’t helpful unless I know which students got what score (at the very least) and can see what they wrote (most appropriate). I won’t even go into the ridiculousness that makes me incredibly angry about the multiple choice section.
Here’s a test, unlike the State-mandated ones, for which an analytic breakdown would be useful to improve teaching, but we don’t get one. On the other hand, we get a fairly decent breakdown of the State-mandated tests, but that information isn’t useful because of the socio-economic factors that impact educational success, as well as the lack of student investment in doing well.
With a 50% success rate, I can allow myself some peace that I didn’t let my students down during my last year of teaching — when my focus was everywhere but the classroom (largely due to personal concerns, but also because of work related matters).
That’s the current problem in education. Teachers are being forced to chase fairy trails of inappropriate information and engage in fruitless tasks rather than being given access to appropriate information and the time to focus on actual classroom concerns.
Every professional development session that explains the importance of using inappropriate data takes away from planning time with the scant relevant data we have. Every reflection written and cataloged makes us devote more time to ourselves than to improving the work of our students. Every change in schedule or daily format causes us to start over from scratch. Every political reformation causes us to swim with sharks rather than behave collectively as a school.
Ooh… collectively invokes the dreaded thoughts of socialism or communism: heinous words in our American culture. But, competition in education is counterproductive. Teachers need to work together to have students excel. Teachers required to compete don’t work together.
Dana and I are an example of that. She and I have both taught AP language and composition for the past six years. Historically her success rate has been slightly better than mine. This year mine was better than hers. Typically, my first reaction to that difference has been to consider the students in each of our classes… did she have a stronger slate? Did I?
This year, however, my first reaction was a sense of relief. Empirical data that can be used to defend the argument that RUSD’s changes have caused the loss of at least one “good” teacher (in actuality, more than just one). The implication of that thought, however, is that, by those same scores, each year either Dana or I is “better” than the other, and that definitely isn’t the case. But it’s human nature to shift discussions away from using the data to improve student learning toward using the data to justify, defend, or blame our teaching efforts: competition vs. collaboration. Sharks vs. schools.
The ironic thing about using data in this way is the one-size-fits-all paradigm it reinforces in the effort to individualize instruction.
Anyway, the scores I’m looking at, as usual, have a couple of surprises. A couple of unsuccessful scores that I expected to be successful. A couple of successful scores I’m pleasantly shocked to see. And then there’s the range of successful scores… she didn’t get above a 3!? How the hell did he get a 5!? Questions that won’t get answered, but would be ever so helpful.
This is the last year for me looking at those scores and feeling the frustrations and pride that drive me to improve myself and those around me.
I am content.
Strange. Content is typically my goal. But, in this case, content is a minimalistic feeling. Content must be like that glass half-full, half-empty point. Whereas I’m usually a half-full type of guy until one more drop has disappeared, this time it’s more of a half-empty perception. It must be because of the aspect of potential.
When there’s potential, the glass is half-full. When there isn’t, it’s half-empty. At the end, 51% of my students taking the test this year succeeded in getting a 3 or better. Next year I won’t have any students taking the test. Just over half of my final students succeeded. I feel half-empty about that. Content.
(For the uninitiated, 51% in this case isn’t a failing score. It’s a challenging test, designed to filter mostly college-bound students with a diverse range of ability who voluntarily take the test, or are forced to by their parents.)
moving along now, getting married tomorrow, closing that book.