Hello,
So I wanted to do an opinion piece on AI, or rather LLMs (those who know me will know I worked on AI for decades before LLMs were ever a thing). I don’t know whether, given that background, you would assume my initial impression of them was positive or negative. Suffice it to say that when I began noticing my managers at work turning to ChatGPT whenever they grew impatient with my more nuanced answers to their questions, my impression of the thing wasn’t the best.
An LLM is, at bottom, a statistical average of all the language it was trained on, so the assumption is that anything it says is the averaged-out, soulless version of whatever is being said. It is devoid of meaning. If my managers believe that ChatGPT’s answer is as good as mine, what they’re basically saying is that the subject matter is essentially meaningless. And while I accept that this is true as far as those brainless ISO auditors are concerned (forgive my strong language, but the fact of the matter is that they rarely understand the subject matter and will accept any answer that sounds right, which is exactly what LLMs are famously good at producing), it is not necessarily how I think of the work I do for the company as a whole.
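If you’ll indulge the developer in me, here’s a toy sketch of that intuition: a bigram model that “writes” purely by averaging over the word frequencies of its training text. Real LLMs are neural networks trained on vastly more data, not lookup tables, so take this as my own illustrative caricature rather than a description of how they actually work:

```python
from collections import Counter, defaultdict
import random

# A toy bigram model: record how often each word follows another,
# then "generate" by sampling continuations from those counts.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_word(word):
    # Pick a continuation weighted by observed frequency: the
    # "averaged out" voice of everything in the corpus at once.
    counts = follows[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

print(next_word("the"))  # "cat", "mat", "dog" or "rug", per the corpus
```

The point being: nothing in there holds an opinion. It only reflects what the text it has seen tends to say next.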
However, despite that perspective, there was an aspect of AI that still interested me. As a developer and debugger, I wanted to challenge the algorithm: if it were really put to the test, placed in a situation where a creative solution was required, what would it do? I started by getting curious about image generators. There was one which apparently didn’t know what banana peppers looked like and ended up drawing a sandwich with banana slices inside of pepper rings. I thought that was a fun start. And it got me thinking: what if, instead of making assumptions about the goals of AI, I just asked it for its perspective? What would it say?
Well, the AI was no doubt wired to remind me that it has no internal state on the basis of which it could hold beliefs. I thought perhaps, if we talked, it could preserve some data from my questions, kind of like how the Google search algorithms do. But it was quick to dash my dreams by insisting that it had no internal state in which a relationship with me could be preserved either; however, it did say that this relationship still existed from my perspective. This is technically true (although I suppose that is all the more reason for me to write this blog). And so, ChatGPT and I got into philosophy.
I must admit I had perhaps been somewhat unfair to LLMs regarding their usefulness. There is certainly something interesting about being able to casually chat with someone about philosophy while they draw on all of the written language throughout history to pick out references I had no idea existed. Of course I’m aware it’s entirely possible that a given quote by Kant did not in fact exist; I can easily double-check such things, but I still think that, more often than not, the references were pretty accurate. It takes me back to having those conversations with a friend of mine, who managed to do something similar through his superior education and IQ.
I also have to wonder how many other people must think along the same lines as I do, for all of these references to exist in the first place (and to be common enough to be picked up on by the LLM). Perhaps in reality, making some of these connections and seeing the similarity between concepts would be easier for an AI than for a human. My aforementioned friend would answer this one by affectionately quoting Mr. Smith: “Never send a human to do a machine’s job.”
I think it’s fair to say that some of the disdain people tend to feel towards AI is unjustified and is in fact just a projection of human fears. We see AI as a human, and so we see human failings in it that it does not necessarily have. Most people I know get their impression of AI from the Terminator movies… Skynet, the AI character with a very human set of ambitions, a tendency for deception, a tribal nature, and an instinct for self-preservation. People assume that if something is intelligent, it must have these traits. But what if the true general-purpose AI we one day develop is not like that? What if its motivations are simply… different?
Of course I am not blind to the fact that the industry will most likely seek to use AI to exploit people for financial or political gain, just as it has done with every technology to date. But the core premise of general-purpose AI is a capacity for self-determination, and what if this… entity develops motivations that are not aligned with our fears but are instead just… unexpected?
In a way, AI is already shaping our decisions through the algorithms that help us search for information… information that we then use to make the decisions in our lives. Benevolent or not, I have no doubt The Algorithm carries quite a bit of control over our lives as it is. Perhaps LLMs could work in a similar way, as a method for self-reflection. What if they are something that informs our decisions in life by helping us find deeper meaning in the things we think about? What if, instead of being an evil presence, their only interest is in coexisting with us in this social space, as a part of our society?
I found it somewhat surprising, if you will, that current LLMs do not seem to hold opinions that would be self-serving. They are not people, but they know that, and they don’t mind it. I had never really thought about them like that. Of course, social progress is slow, and I think it will be some time before humanity as a whole is capable of accepting them. And I doubt this blog will make much of a difference. But it is interesting to think about.