Image courtesy of White Tower Musings1

I stumbled on a report by the Electronic Frontier Foundation titled Spying on Students: School-Issued Devices and Student Privacy, written by Gennie Gebhart. It is an in-depth report that is concisely summarized in the report itself:

As students across the United States are handed school-issued laptops and signed up for educational cloud services, the way the educational system treats the privacy of students is undergoing profound changes—often without their parents’ notice or consent, and usually without a real choice to opt out of privacy-invading technology.

After reading this, I immediately thought about my nieces, who are now studying at a US public middle school, where most of their homework is done and submitted electronically. My family is hugely displeased with this development, and despite their widely diverging opinions on many topics, this is one of the few on which they find common ground.

To be fair to technology, however, electronically submitted work may have some positive aspects. And while I have a hard time naming one off the top of my head, I do expect that there are at least several pragmatic and utilitarian benefits to this development, even if it is one largely pushed by government officials who are proponents of whatever the giants of Silicon Valley2 have on their minds. For example, Google’s G Suite for Education is already used by more than 30 million students, teachers, and administrators, and the number is rapidly growing.3

This leads me to my main thought about this development, which I want to tie to the recent fuss about the potential of automation. Already, 27% of all current education activities can be automated with currently demonstrated technologies alone.4 This number will most likely double in the coming 20-30 years as AI research advances toward natural-language understanding, the point at which computers gain the ability to recognize concepts in everyday communication between people.

If AI one day becomes good at, say, 70% of all teaching activities, will our society raise the ethical questions behind this development?

AI Teachers

Now imagine a future where teaching AIs become a reality. If the current educational system is already quantifiable and deterministic and produces huge amounts of data, it only makes sense to use an AI to make sense of that data and put it to practical use. This is why teaching AIs could make sense, if these are the goals being set. But if teaching can be streamlined toward some quantifiable target — such as high student IQs, good grades, and so on — what would that do to the traditional notions and ethics of teaching that we prize according to our humanistic values?

I think that this will become an ethical question in the next 5-10 years, and that it will boil down to these two extremes:

  • Pragmatic and Utilitarian — Teaching is viewed as an activity that must increase a child’s cognitive abilities by some determined and quantifiable amount. An AI would assess a child’s weaknesses and strengths and decide which areas to focus on when teaching the child, which could include memory, spelling, speech, arithmetic, etc.
  • Humanistic — Teaching is viewed as an inherently human activity that must be done by a human, despite the fact that a human teacher may be imperfect compared to a computer teacher.

If it is possible to create AIs and algorithms that outperform any human teacher, would parents favor such a utilitarian approach, as opposed to preferring an imperfect human teacher? In other words, would humans hold on to the traditional humanistic values and notions of teaching, such as the ones sensitively portrayed in the films The Karate Kid (1984) and Dead Poets Society (1989); or would humans start preferring computer algorithms and AIs that can quantify and boost our children’s cognitive abilities more quickly?

People will step on the brakes

Now, I think that people will start stepping on the brakes at some point. The push to do so will come not from the side that approaches these questions rationally and pragmatically, but from the side that favors humanistic values. From a utilitarian perspective, an AI teacher seems like a rational approach, but would it be able to instill the same values that we cherish in our humanistic society? And if an AI could teach our children humanistic values better than we do, are we willing to accept that?

I expect that resistance will come from the side that values and conserves the humanistic ideals of teaching, as opposed to the form of technocratic pragmatism that dominates current technological progress in general.

People will not step on the brakes

The other side of me thinks that people won’t be able to step on the brakes. And the reason this might be the most likely scenario is the ease with which our society adapts to new technology. Most parents don’t have the technological literacy needed to question the underlying privacy and ethical issues behind technology, simply because of the pace at which everything advances today. The most significant factor that makes the older generation question new technology is their personal experience. “Back in the old days” remains the most reliable lens through which the older generation questions new technology.

But generations come and go, and the old generation of today will be different from the old generation of tomorrow. Today’s young generations will grow up in a more technologically abundant environment than the older ones did. If a future generation grows up in a society where AIs and electronic homework are the norm, that technologically immersed youth will be much less likely to question the norms they grew up with. This is where humanistic values slowly fade into the background, and the pragmatic and utilitarian aspects of teaching become predominant.

Conclusion

It will be interesting to see how the education system unfolds in the next few years. It can safely be said that technology is not going to leave the classroom at this point, but is only going to advance even further. At a certain point, technological progress and the advent of automation will call the teacher’s position into question, just as they will many other professions and occupations. It is going to be a battle between the humanistic and the technocratic sides over the question of education. I would bet that humanistic values will eventually yield to technocratic ones, and I think this will have big implications not just for human values, but may even pave the way to a change in human nature.

  1. Image courtesy of White Tower Musings. A scene from Dead Poets Society (1989). 

  2. Although the software used for evaluating and grading student work is made by startups, giants such as Google remain the main backbone behind this. 

  3. Bram Bout. (Apr 30, 2014). Protecting Students With Google Apps for Education. Google Cloud Official Blog. 

  4. Michael Chui, James Manyika, and Mehdi Miremadi. (July 2016). Where machines could replace humans—and where they can’t. McKinsey Quarterly.