My closest acquaintances and co-workers are deep in the technology industry, but some of you reading this piece may not be; you may have heard a little about ChatGPT without paying much attention to it. I can’t blame you. Whenever there is new hype, it takes a while to find out whether it will stay or not.
I am here to assure you that ChatGPT is no “Clubhouse”. It is not only here to stay, but it will become more relevant every day.
There is plenty of content out there about what it is and how to use it, but probably the best way to learn about it is to experience it. If you haven’t done so yet, just go to https://chat.openai.com, create a free account, and type away. You can even ask the most basic questions, like “What is ChatGPT?” or “How can I use ChatGPT?” and it will tell you. My only warning would be not to share anything too private, as I don’t know yet how the data is stored or used.
So, instead of telling you what it is or how to use it, I would like to focus on two things: why some people are freaking out about it, and why you should pay attention to it.
Let’s tackle the first part, and why I still think there may be significant risk involved with this technology. Why are so many people freaking out? Isn’t this just the same as any other technology? We had horses and now we have cars. Some people lost their jobs, but they found new ones, and the economy in general grew. Isn’t the same thing going to happen? Well, maybe not.
Why is there significant risk involved?
Alright, I get it. I am the first one to say “everything will turn out OK!” and to go with the horse-and-buggy vs. automobile metaphor, or the move from agricultural societies to the cities and to industrial societies, and so on. But I think “this time may be different”.
With AI, there are two fundamental differences from anything in the past.
The first one is the “marginal cost” of AI. We already know that AI will take over some jobs. This has happened with every new technology, but the great difference with AI is twofold: first, it is one big training set, and second, it is all software running in the cloud. Because it is one big training set, “one big brain”, it can accumulate knowledge, and ways of processing information in any situation, across all interactions and even across domains. In most other systems, the knowledge, the customization, and the optimization are localized; they don’t cut across all the interactions taking place, and they certainly stay in one domain. The preferences for how to serve a burger cannot be used to manage the queue at the local DMV. With AI, these cross-domain applications are quite easy to do. In addition, once the model has been trained, deploying another AI session has almost no marginal cost. It is just a software copy plus the added compute of one more concurrent session. Basically nothing.
So if we take this to the job-disruption scenario, AI has the potential to disrupt EVERY discipline quicker than any other technology, because the learning obtained in one industry can jumpstart the learning in a different one. And the cost of displacing “yet one more worker” is almost $0. This is very different from any previous technology. A Ford Model T was never the same as a tractor, and its 1925 price (the equivalent of roughly $4K today) is still a lot more than $0.
The second big difference from anything we have seen before is what is often referred to as “the singularity”. If you don’t know exactly what this means and you have watched enough sci-fi, you probably have the wrong idea. It is NOT when the cyborg turns against the humans, or when HAL refuses to open the pod bay doors, although these may be possible consequences of it.
The “singularity” refers to the point in time when an AI becomes intelligent enough to build another AI that is just marginally better than itself. It is called the singularity because at that point it is “game over”: the human element can (and probably will) be taken out of the picture. In a scenario like this, an AI can build that slightly better AI with every iteration, and a system that can do this in seconds, never sleeps, and is relentless will become a super-intelligence in a matter of minutes.
Well, the language models developed for GPT-3 and now for GPT-4 are already capable of creating computer code. Code that actually works. And the newest versions can use tools. How far are we from a language model like this “sitting” behind the compiler and making GPT N+1?
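To make this concrete, here is a minimal sketch of what “a language model writing code” looks like from a developer’s seat today, using the OpenAI Python client; the model name, prompt, and setup are illustrative assumptions, not a recipe.

```python
# A rough sketch: ask a GPT model to generate code via the OpenAI Python client.
# The model name and prompt below are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # expects the OPENAI_API_KEY environment variable to be set

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a careful programming assistant."},
        {"role": "user", "content": "Write a Python function that returns the n-th Fibonacci number."},
    ],
)

# The reply comes back as plain text containing the generated code.
print(response.choices[0].message.content)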
Those two things, the capability to scale like no technology ever before and the capability to work on itself, summarize the risk of losing control of the consequences. We can come up with every doomsday scenario, but the scariest part is what we don’t know.
Why should you be paying attention?
Let’s be optimistic and assume we can work these things out. Artificial intelligence is getting to a point where it is becoming quite useful. The systems can provide usable answers and can “drive” certain systems in a way that is acceptable. In other words, “the quality of the output is good enough”. In addition, the system is becoming usable in a practical way. Sure, the ChatGPT “app” is a chatbot: you write, and it answers. But the system can be embedded into other systems. It can be used, for example, to make a better search experience, or it could be hooked into the backend of voice-command systems like Alexa or Siri to improve their behavior. Again, in other words, “it is usable”.
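As an illustration of what “embedding it into other systems” could look like, here is a rough sketch of putting the model behind an existing search box so that it turns raw result snippets into a direct answer; the model name, the helper function, and the client setup are assumptions made for the sake of the example.

```python
# A sketch of "embedding" the model behind search: the search engine returns
# raw snippets, and the model turns them into a direct answer for the user.
# Model name and helper function are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects the OPENAI_API_KEY environment variable to be set

def answer_from_snippets(query: str, snippets: list[str]) -> str:
    """Turn raw search-result snippets into a direct answer to the query."""
    prompt = (
        f"Question: {query}\n\n"
        "Search result snippets:\n"
        + "\n".join(f"- {s}" for s in snippets)
        + "\n\nAnswer the question using only the snippets above."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example call, feeding it whatever an existing search backend returned:
print(answer_from_snippets(
    "When did Ford stop making the Model T?",
    ["Ford produced the Model T from 1908 until May 1927."],
))
```

The point is not this particular snippet, but how little glue code sits between an existing product and a large language model.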
So when something has a quality of output that is good enough and that is usable, what happens next? Well, people start to use it to their advantage and to augment their capabilities.
This is the main reason why you should be paying attention to it. If you are not, someone else (your competitors, your clients, your suppliers) is, and that puts you at a disadvantage. Would you, TODAY, go about life without a smartphone? Or without the Internet? Well, very soon, AI is going to be just like that.
If you want to discuss any ideas on how to start using these tools, I am curious as well, so let’s talk.