I am a fan of AI. So much so that I quit my job in February 1992 (yes, 1992!)
to devote myself to developing an early form of AI, called expert systems. I
had started this work at my employer in 1991. I saw it as the future. With
youthful exuberance, I quit my job as a Senior Engineer at Ingersoll-Rand and started
my consultancy. I became very busy, but I wasn’t making much money. I finally
had to pull the plug on the venture and get a regular salaried job again. Maybe I
took this leap too early (before the internet made data cheap and accessible), but
I probably wasn’t the best businessman out there.
Now I teach.
Many years ago, I taught an undergraduate engineering
design class over the summer. I tried something different: I made the final exam
an oral exam, keeping it gracious and conversational and the grading fair. I was
amazed at the insight I gained into each
student’s capabilities. The exam involved both theory and mathematical
computations, but I could unpack incorrect answers and learn the source of errors.
I never did an oral exam again.
Why? They are very difficult to grade and they take
forever. If the student gave the wrong answer, it took a long time to find out
the source of the student’s misunderstanding. However, we are now being forced back
to this ancient method of assessment in coursework where personal, non-AI-generated
competence must be clearly proven.
ChatGPT and the whole category of generative AI are
turning any class with a writing assignment on its head. They are also forcing all
of us to rethink how we conduct classes and grade assignments.
I have taught writing-enriched classes and know how to use
Turnitin.com. I have caught people plagiarizing. But now it seems AI-fueled
writing, problem solving, and image generation will be buried in everything
and swamp instructors.
The rationalizations are already starting:
- Money and class structure: Without an AI assist, some families will enjoy outside help while others won’t. Rich students will hire ghostwriters; poor students won’t.
- Personalities: Extraverted students with a network of capable friends will do better than introverts.
- Relativism: Everyone is doing it.
There will be many more rationalizations. There is no battle
to win: honor systems create wicked ethical environments for many students, and
threats are shallow (what amount of AI-generated content would result in an
assignment failure or course dismissal? One hundred percent?). The battle is
lost.
How do we provide excellent education?
I have always been a fan of project-based learning, but that
works best in my own area, design. It doesn’t work as well in writing and anatomy
classes. If your doctor or nurse hasn’t memorized anatomical terms, he or she
can’t even join in a serious conversation with peers. The ability to quickly synthesize information is
required in all intellectual discourse and debate. So what do we do in those areas?
Oral exams.
It seems the only way to offset the role of AI is real-time
presentation of knowledge and skills. Remote facial observation and applications that can
detect AI usage are never going to be perfect. Do you fail someone whose
writing is apparently 86 percent ChatGPT? Do you turn all
instructors into lawyers? Do your grades reward the AI users
and punish the non-AI users? That's what will happen. How do we acknowledge
those who do not use AI assistance?
Watermarking
Classes that require extensive writing have it the toughest.
In-class writing of any length is weird; it doesn’t give students a chance to
reflect on their work and write at their own pace. I remember a class I took that
had lots of writing assignments. One week we had to write a thirty-page paper
on five Andalusian poems. I am a fan of Andalusian poetry, but thirty pages is a
lot. I couldn’t write that much about five poems, so I filled the pages with
background on Andalusian culture and Islam before analyzing the poems
themselves. This assignment was decades before generative AI. Now, the
paper could be written in five minutes and probably be better than mine.
One of the joys of my educational journey was defending my
dissertation. I worked very hard on it. My work was the result of several years
of research. I knew my topic well, I had prepared well, and I was at the top of
my form for my defense. One could never be in that
position if ChatGPT did most of the work. Each thought and syllable of my
dissertation was mine. This is also true of my books and articles. A system of
digital watermarking for non-AI-generated content would be useful in
ensuring authenticity. For now, publication dates offer some evidence, but they
will not in the future. What is to be done with works published after 2021?
Students need to have their integrity protected by some sort
of digital watermarking. So do all the others, such as authors, scholars,
artists, and all who push content into the culture. If one uses AI, that’s
okay, it just needs to be acknowledged.
Authenticity
Authenticity has value and needs to be protected. AI will quickly be
buried in much of what we do. It will become seamless, with guiding prompts and
shortcuts that are difficult to ignore. Verification of authenticity must be
developed quickly in response to this new generation of content creation.
People value sacrifice. They value authenticity.