Leviathan: On Technocrats, Accountability and Creative Fabrication
The message was urgent. It said the General Education Program Assessment results absolutely had to be submitted by the end of the month. It was imperative that those of us who had not yet reported our program’s results do so immediately. Or so the message said.
So I spent the greater part of this morning trying to figure out what I still needed to do to submit the results of the Humanistic Traditions I and II courses for last academic year. I have reported these results for seven years now. It did not take me long to figure out why I had not already reported the results and why I find it increasingly onerous to be charged with this little technocratic nightmare.
The Ever Expanding Job Description
Seven years ago when this program began, I was a visiting instructor in the Philosophy Department. My primary job duties at that time were to teach four courses a semester wherever I was needed. Because I am able to teach in three of our disciplines – Humanities, Religious Studies and the Philosophy of Law course – I generally have at least three if not four preparations for courses which range widely in content and methodology. As an example, yesterday I left my Latin American Humanities course, where we had been talking about Cortez and the conquest, to scurry across the campus to meet my Philosophy of Law course, where we were discussing Kohlberg's stages of moral development. While so wide a range of ideas and preparations can be exhausting (and confusing – like remembering which books and student papers you need to have with you en route to classes which inevitably meet across the campus), it is also intellectually stimulating and prevents boredom. This was what I was hired to do nine years ago. And on the dwindling number of good days I still encounter, it's still a job I love.
In the intervening years, my job duties have grown. I have become a regular on the Honors College faculty and among the Honors in the Majors thesis committee members, chairing five such thesis projects myself. I have taken on the prelaw advising for the College of Arts and Humanities, for which I have been given a course release once a year (though my advising duties are year round and the release is now in question). I sit on two different curriculum committees, and over the past two years I have sat on two search committees for new faculty members and chaired yet another. None of these duties were part of my contract as a non-tenure track instructor. And most of these I have willingly gone along with under the rubric of being a team player.
Of all of these additional, unpaid duties, the one which I have come to dislike the most is the GEP Assessment program. At a basic level, it is little more than the same muddled micromanagement under the banner of "accountability" – now applied at the university level – that has reduced the once noble profession of public school teaching to a technocratic nightmare driven by standardized testing. It operates out of the unsupported (and I suspect unsupportable) assumption that, absent some kind of empirical data showing that students are learning, we must assume they aren't. The burden thus falls upon departments to demonstrate that learning is occurring by creating instruments which produce data allowing technocrats and their corporate overlords in Tallahassee and the board rooms beyond to rest easy at night. The same wonderful people who brought us No Child Left Behind, under which about a third of America's children were consistently left behind without a high school diploma, have now come to a university near you.
In the beginning of the program, the goal was to create learning outcomes and measurements – educational technocratese for data-producing instruments. The Philosophy Department did not then have, and has never had, a standard set of ideas marking a bottom line in terms of student learning. The content and pedagogy of courses were left to instructors – people with graduate degrees hired to decide for themselves what and how to teach – under the rubric of intellectual freedom. Hence the emphasis in Class A might well differ from that in Class B, and indeed such differences are expectable in a course which undertakes to examine all of the history of the world through the lens of the arts, religion and philosophy. Without a standard curriculum and pedagogy, measuring all courses by the same artificial standard tells us little of value.
Though I am poorly trained to be a technocrat and have little inclination to work as one, with the help of the department chair I created the original assessments for our two humanities courses. We identified three sets of broad learning outcomes (e.g., "To demonstrate knowledge of the meanings of an artwork, performance, or text in diverse aesthetic, historical and cultural contexts") and devised two measures for testing them. The first was a pre- and post-test of 10 identified concepts which students were to take at the beginning and end of each semester, the idea being that students would correctly identify more of the concepts on the post-test than on the pre-test. The tests were placed on a Webcourses site so students could access them easily. The 10 concepts were derived by asking instructors to name three concepts they thought essential in their courses, tallying the top ten, and then creating a multiple-choice question to test each one. The second measure used embedded questions: instructors were asked to create questions testing the identified concepts and embed them in another written assignment (e.g., an exam or essay).
That was seven years ago. As much of a pain as it had been in the beginning, it seemed as if this new assessment process might be relatively painless. Little did we know then what lay ahead.
A Dinosaur Egg Hatches
Over the years, student participation in the pre- and post-test process has averaged about 35% per semester. While instructors can urge students to take the assessments, it is ultimately up to students – who have been taught that they are consumers operating out of the mantra of "What's in it for me?" – to actually do so. That the results are even this high probably speaks more to the slightly coercive tactics of classes like my own, which assign 5 points of participation credit simply for taking the assessments, than to anything else. As for the embedded questions, over the 13 semesters I have reported results, exactly five instructors have ever reported results. To put this into perspective, we generally offer up to 15 sections of each class every semester.
Were this the only challenge of this process, I would probably just chalk it up to yet another harebrained micromanagement plan imposed upon already overworked instructors under the rubric of "accountability." It's always amusing to hear from people who are most adamant about not trusting public servants who work in government even as they insist upon giving absolutely free rein to the greed-driven servants of profit in the private sector, where transparency is unheard of. It's also amusing to hear talk about educators needing to be accountable to a public whose conduct has been the paragon of social irresponsibility – defunding public education while imposing ever more burdens upon those actual public servants who continue to labor in the profession despite all of this.
A couple of years ago the then-director of the GEP Assessment program asserted that students were not taking the pre- and post-tests because they had no "buy-in," i.e., there was no incentive to take tests that did not somehow benefit their grades. This kind of thinking is expectable in an educational system that regularly confuses its purpose with that of just another provider of consumer goods and services. But the more serious problem with the Assessment program, as I see it, is that there has never been any buy-in on the part of those expected to administer the process itself.
The reality is that no one has ever adequately explained why this process is necessary in the first place. While instructors have repeatedly been told that it is somehow beneficial for their teaching, no real demonstration of that benefit has ever been provided. Hence, to most instructors it appears as little more than one more annoying burden of an already poorly paid and overly demanding job. When pressed on this, administrators of the program often seek to shift their burden of proof onto the instructor: "Don't you want to improve your teaching? Don't you think you could improve?" Of course, without any demonstration that such teaching needs improvement and that this technocratic process can somehow provide valuable insights into the same, why would any reasonable instructor buy in?
As GEP Assessments coordinator for the humanities courses, it has been my job to procure and report the results of the pre- and post-tests and the embedded questions each year. As a result I regularly find myself in the unpleasant position of being caught between technocrats demanding increasingly complicated results on the one hand and colleagues resisting the assessment process on the other. And if that were the only problem, it would probably be bearable. But over the last seven years, the process has taken on a life of its own. All of that came to a head this morning as I attempted to submit the urgently desired results.
The Leviathan Emerges
The original reporting process contained three measures with corresponding outcomes and six sets of drop-down fill-in-the-blank boxes allowing results to be recorded. This was a rather straightforward way of reporting the findings of the assessments. The 2011 version I encountered this morning has grown into a leviathan. The six drop-down boxes have now expanded to 26 sets of multiple-choice inquiries with boxes which must be checked, each box in turn requiring a drop-down box for the narrative explanation of the answer provided.
The tenor of the current reporting process is indicated at the beginning of the form: "We strongly recommend not copying directly from Microsoft Word or Excel to the rich text boxes as the text being copied may contain html and/or xml code which may hinder how the document is viewed. We suggest to first paste the text to notepad, then copy the text from notepad to the rich text box." The notion that the technocrats operating the assessment process might actually provide a user-friendly system – indeed, a usable system at all – to its unpaid servants seems not to have occurred to them. From the very beginning, the burden is on the instructor.
That tenor is reflected in the content of the form. A number of the boxes contain questions that appear to have no connection to the courses whose results are being reported but require answers nonetheless. One such question asked about the use of surveys in assessing our courses – surveys aimed at graduating seniors and alumni as well as those designed to determine "student satisfaction" and "customer satisfaction." The use of surveys to somehow indicate whether students are learning exposes the absurdity of this entire assessment process. True pedagogy has no customers, nor is it assessed by any student satisfaction level which could be reported in a consumer survey. What is amazing is that the Factory has actually tipped its hand here to reveal the depth of its own "buy-in" to the shallow consumerist concerns of its corporate overlords.
This was the point at which I realized why my results had not been previously submitted. I suddenly remembered that when I sought to submit my results in May, just before heading out on my Fulbright trip to Brasil, the website had refused to allow submission of results from the assessments without my answering all the questions, including those that had absolutely no relation to the courses being assessed. Not only were assessment coordinators required to check a box regarding which consumer surveys they had not given and were unlikely ever to administer, they were required to offer a narrative explanation of their choices. Last May I didn't have time to track down the appropriate technocrat to either tell me how to get around the technology or provide an appropriate non-response. So I had not submitted the results. And I have to admit that this morning, when that submission had become so urgent, I simply punted.
Under the question about customer surveys, I noted that the Philosophy Department did not have customers; it taught students. For the question regarding student satisfaction, I simply remarked, "Dear G-d, folks, we're not Burger King – yet!" No doubt this will provide some heartburn for those whose livelihoods depend upon cooperative minion instructors playing the game without question – that "buy-in" thing again. I hasten to add here that I do not wish them ill, personally. But creating increasingly burdensome forms which require answering inapplicable, unanswerable questions and then justifying the answers with narrative explanations merits a little heartburn, if not a reality check, for those who impose such burdens on others. While I doubt it will make any difference, if one of them stops for one nanosecond to perhaps question the sanity – much less the real utility – of any of this stupidity, it will have been worth the effort.
For the record, I’m not holding my breath.
Someone Else Will Get Saddled With It….
After I had finally gotten the site to accept the submission of my actual results, along with all the bullshit I had to make up to get the report accepted, I wrote the coordinator who had admonished us to submit our reports in light of the urgency of the situation. I noted that I had finally gotten the reports to submit, adding that the process had become onerous and time consuming. I didn't mention that the process had become impracticable and required creative fabrication even to submit the required forms. The response I got was to thank me and then to suggest that perhaps someone else in the department should take over "these tasks."
There is no hint in this response of any gratitude for having gotten the process off the ground to begin with or for the seven years of unpaid effort which have followed. As with most technocrats, it's a matter of "What have you done for me lately?" There is also no recognition of my repeated efforts to make clear to the assessment lords that until they make their case for the need for this process, they will get neither instructor nor student "buy-in." Not surprisingly, the response does not address my ongoing challenge to the presumption that learning must be proven through artificially contrived empirical data or that instructors need such data to improve their classes. These are not things that technocrats – either the true believers in the "accountability" ideology or those absolutely desperate to hold managerial positions of questionable value – want to hear.
Of course, the reason I got saddled with this job in the first place is that I was low man in seniority in the department at the time and had no choice. When you're seeking a full-time slot, you're willing to do a lot of things you wouldn't do if you had a real choice. My yearly offers to give up my crown as GEP Assessments Queen have drawn no takers. No one else wanted to do this bullshit then, and my guess is that no one wants to take it over from me now. If I am relieved of this burden, my guess is that it will simply be the result of there finally being someone beneath me in the seniority pecking order.
I was circumspect in my response to the director's suggestion that someone should replace me. "Thank you," I said, adding, "I've been saying that for the last couple of years." And perhaps someone will take it over for me. Of course, that will make this process no more useful to any of us. It will also make it no less onerous and time consuming. It will simply mean that I will no longer have to be bothered with it.
The Rev. Harry Scott Coverston, J.D., Ph.D.
Member, Florida Bar (inactive status)
Priest, Episcopal Church (Dio. of El Camino Real, CA)
Instructor: Humanities, Religion, Philosophy of Law
University of Central Florida, Orlando
If the unexamined life is not worth living, surely an unexamined belief system, be it religious or political, is not worth holding.
Most things of value do not lend themselves to production in sound bites.