Saturday, March 10, 2018

Student Learning Outcomes

A couple of years ago, we were surprised by an accreditation requirement handed down from the administration called student learning outcomes (SLOs). The idea was that we had to come up with a plan that would demonstrate improvement in student learning. The problem was that we didn't really know what we were expected to do, and the administration didn't really know either. So we slapped together a bunch of lame assessments that were inconsistent, a major time suck, and didn't tell us much at all. Worse, we ended up having to keep doing them for a few years until the accreditation review was over. The agony frustrated all of us, and we promised we would replace it with something better and, more importantly, useful.

Well, the time has come to replace the SLOs. We are having our first division meeting today to discuss it. For a long time we have thought that to make something useful, we need to find out how well our students do when they transfer to a four-year school or an allied health program. If we knew that, then we would know whether our students are leaving LBWCC prepared. If not, then we can pinpoint the areas of weakness and beef them up.

Unfortunately, the data are not readily available. So we are stuck; we have no idea whether our graduates are succeeding elsewhere. The slippery slope leads to standardized testing that would tell us how we compare nationally. No one is thrilled with that prospect. One obstacle is that we would have to pack more material into the semester just to cover what is on the exam. I don't know how other schools do it.

Another idea for creating SLOs is to identify the fundamental skills students need to master for success in future courses and programs. For example, can they use a microscope? That may work for lab-heavy courses like chemistry and physics, but for us biology folks, the list is very limited. My chemistry colleague has suggested we focus on basic content that would embarrass us as teachers if students didn't know when they advance. For example, a biology major who couldn't explain what a gene is.

So far, most of our SLOs are just a series of quiz questions on basic content. An intriguing idea being floated is that, instead of five different quizzes assessing the basics, we make one quiz that focuses on the links between concepts. Since information builds on itself, you have to know such and such to understand thus and so. We could identify at what point the learning failed. Did students fail because they didn't master the beginning material, or was it midway or near the end? For example, take calculating yield in chemistry. You have to know nomenclature first; using that, you can master balancing equations; once you have mastered that, you can do stoichiometry; and that ability then leads to successfully calculating yield.
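To make the linked-quiz idea concrete, here is a minimal sketch of how the diagnosis could work, using the chemistry chain as the example. The concept names, passing threshold, and the per-concept scores are all hypothetical, not anything we have actually built:

```python
# Hypothetical sketch: find where learning breaks down along a chain of
# prerequisite concepts (nomenclature -> balancing -> stoichiometry -> yield).

CHAIN = ["nomenclature", "balancing", "stoichiometry", "yield"]

def first_failure(scores, passing=0.7):
    """Return the first concept in the chain the student did not master,
    or None if every link in the chain was passed."""
    for concept in CHAIN:
        if scores.get(concept, 0) < passing:
            return concept
    return None

# One student's (made-up) per-concept scores on the linked quiz.
student = {"nomenclature": 0.9, "balancing": 0.85, "stoichiometry": 0.55, "yield": 0.40}
print(first_failure(student))  # stoichiometry
```

The point of the structure is that the first failed link, not the overall grade, tells you which earlier topic to beef up.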

My biggest problem with all of this is collecting new data. We already have assessments that measure all of this: quizzes and exams. We already know where learning is weak and which topics are tricky. We assess learning all through the semester. The problem we have with SLO development is matching. To be scientific, the assessments must be similar; identical is best. My quiz on enzymes might be tougher than the ones my colleagues use. Their 80% average may not mean the same as my 80% average. We need something else: some sort of metric, independent of assessment differences, that would be useful but wouldn't require extra standardized assessments.
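One candidate for such a metric, sketched below with made-up numbers, is to standardize each student's score within their own section's results (a z-score), so a score is read relative to that quiz's difficulty rather than as a raw percentage. This is just one option under my assumptions, not a settled plan:

```python
# Sketch: standardize raw scores within a single assessment so that results
# from a hard quiz and an easy quiz become comparable.
from statistics import mean, stdev

def z_scores(raw_scores):
    """Return each score as standard deviations above/below the quiz mean."""
    m, s = mean(raw_scores), stdev(raw_scores)
    return [(x - m) / s for x in raw_scores]

hard_quiz = [55, 60, 65, 70, 80]   # hypothetical: my tougher enzyme quiz
easy_quiz = [75, 80, 85, 90, 100]  # hypothetical: a colleague's easier version

# A raw 80 is near the top of the hard quiz but below average on the easy one.
print(round(z_scores(hard_quiz)[4], 2))
print(round(z_scores(easy_quiz)[1], 2))
```

The obvious limitation is that a z-score compares students to each other, not to a mastery standard, so it would complement rather than replace content-based SLOs.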

So this is my idea. For years I have noticed that students' test scores rarely vary from exam to exam or quiz to quiz. It is amazing to see. You may have one student who, quiz after quiz, scores within 5-10 points of the same grade. In fact, most teachers can predict, after the first few assessments, who will likely pass the class and who won't. The scores are so consistent that you start to wonder: what makes a student who gets B's on every quiz different from a student who gets D's on every quiz? What is going on? Why is there so little fluctuation?
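The consistency itself is easy to check from an ordinary gradebook. A quick sketch, with invented scores for two archetypal students, just measures each student's spread across a semester of quizzes:

```python
# Sketch with hypothetical gradebook data: how much does each student's
# quiz score actually fluctuate over a semester?
from statistics import stdev

gradebook = {
    "student_A": [88, 91, 85, 90, 87],  # the steady B student
    "student_B": [62, 65, 60, 67, 64],  # the steady D student
}

for name, quizzes in gradebook.items():
    spread = max(quizzes) - min(quizzes)          # total range in points
    print(name, spread, round(stdev(quizzes), 1)) # range and std. deviation
```

If real gradebook spreads are as small as the ones I see anecdotally, that stability is itself a measurable baseline for any SLO metric.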

We have a very strong push from the top down to get as many students as possible successfully through our program. Sadly, one easy way to accomplish this is to reduce the rigor of the courses. Classes that are easy A's are popular, packed, and ignored by administrators. Ego, purpose, and the big picture prevent me from following that path. The result is that I am constantly working on the problem: what can I do to help my students learn? How do I help the D students become C students?

                                     _________________________________________

Well, we had our division meeting yesterday on the SLOs, and sadly we are back to what we did three years ago. We spent the hour talking about what basic knowledge would embarrass us as teachers if students moved on to future courses without learning it. The lists look very similar. I had a bit of a breakdown, but thankfully, Brian explained how grades were not enough to answer the question: how do we know that a C student mastered X, Y, and Z?

I am tired. The stress of creating a hybrid course from scratch, teaching a brand-new orientation class, building Canvas sites for newly web-enhanced courses, keeping weekly track of attendance and grades on two new platforms, and trying to improve my classes has done me in. I spent yesterday afternoon and evening painfully processing everything. I was overwhelmed. Thankfully, a long walk at the lake with Mr. B ameliorated a lot of the agony. Two weeks to Spring Break and two more months until the end of the semester; I think I can make it.
