Some time ago, I published a note on how systems, when operating outside their design range, start to act in unpredictable ways. I received a comment that went something like “computer (systems) react to something that has been given to them before”, attempting to support the argument that man-made systems have trouble with the unpredictable. I just want to set the record straight.
Nothing could be further from the truth.
Unpredictability, and the random nature of the world, is something that engineers study hard. That’s one of the reasons that telecommunications engineers (and no offense, I am not talking about the good folks who connect stuff together, but about those who actually design the network elements and the software and circuitry that go into them) study statistics and probability to a depth that would make your run-of-the-mill MBA stats class look laughable; I mean, Poisson-distribution-in-eight-dimensions type hard. Most of the design considerations are based on the fact that when a user utilizes the system, whatever they are going to put into it is impossible to predict. Your phone doesn’t know what you are going to say next, whether you are going to whisper or scream, or whether you are perhaps going to pipe concert audio through it. The camera on your phone is the same. It cannot possibly know whether you are taking the nth picture of your cat or just of your toe.
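To make that concrete, here is a minimal sketch, in Python and with made-up numbers, of the kind of reasoning involved: treat call arrivals as a Poisson process, then size the system for a high percentile rather than the average, precisely because nobody knows what users will do next.

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's method: count uniform draws until their running
    # product drops below e^(-lambda).
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

# Simulate call arrivals per second at an average rate of 30/s
# (illustrative), then look at the 99.9th percentile: the level a
# designer would actually provision for, not the mean.
rng = random.Random(42)
samples = sorted(poisson_sample(30, rng) for _ in range(100_000))
mean = sum(samples) / len(samples)
p999 = samples[int(0.999 * len(samples))]
print(f"mean load: {mean:.1f} calls/s, 99.9th percentile: {p999} calls/s")
```

The gap between the mean and the high percentile is the whole point: the design must absorb the statistical spread, not just the typical case.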
So, no. Computers, and indeed most man-made systems, are not just doing something they have been presented with before; they are reacting to a statistical model that has been embedded into the design.
Whenever the “signal” we are trying to put through over-extends that statistical model, the system is operating out of bounds.
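A fixed-range analog-to-digital converter is perhaps the simplest illustration. The sketch below (illustrative parameters, not any particular chip) faithfully encodes anything within the range the design assumed, and hard-clips anything beyond it:

```python
def quantize(sample, full_scale=1.0, bits=16):
    # A fixed-range ADC model: the design assumes signals stay
    # within +/- full_scale. Anything beyond is clipped, i.e. the
    # system is operating out of bounds.
    clipped = max(-full_scale, min(full_scale, sample))
    levels = 2 ** (bits - 1) - 1
    return round(clipped * levels)

print(quantize(0.5))   # within the model: encoded faithfully
print(quantize(3.0))   # over-extends the model: pinned at full scale
```

Within range, more input produces more output; out of range, the system simply cannot represent what it is being given.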
One further advancement is to add an “adaptive” feature, designed to adjust certain design parameters depending on the signal. One ubiquitous adaptive system is the auto exposure of any modern camera, which adjusts sensitivity. Still, there are limitations; that’s why faces sometimes come out too dark when shooting against the sun.
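Here is a toy sketch of that adaptive idea, using an automatic gain control rather than auto exposure, with illustrative `target` and `rate` parameters I picked for the example: the gain is nudged, sample by sample, until the output level tracks the target.

```python
import math

def agc(samples, target=0.5, rate=0.05):
    # A toy automatic gain control: nudge the gain up when the
    # output is below the target level and down when above, much
    # as auto exposure nudges sensitivity. The log-domain update
    # keeps the gain positive and the loop stable.
    gain, out = 1.0, []
    for s in samples:
        y = s * gain
        out.append(y)
        gain *= math.exp(rate * (target - abs(y)))
    return out, gain

# A quiet signal gets boosted, a loud one gets attenuated.
quiet, g_quiet = agc([0.05] * 500)
loud, g_loud = agc([2.0] * 500)
print(f"gain for quiet input: {g_quiet:.2f}, for loud input: {g_loud:.2f}")
```

The adjustment is still bound by the model baked into the loop: a signal the update rule cannot track (too fast, too extreme) leaves the system out of bounds again, which is the camera-against-the-sun situation.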
The newest advancement, with AI, Machine Learning and Neural Networks, is that these new-generation systems can actually characterize the signal. In some way they are able to “figure out” what they should do. Sure, throughout this process we present the system with hundreds, perhaps thousands, of examples of what to expect, but they are then capable of generalizing.
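Generalization in miniature, with made-up data: fit a straight line from a handful of examples, then predict at an input the system never saw. Real neural networks are vastly more capable, but the principle is the same.

```python
# Training examples, roughly following y = 2x + 1 with a bit of noise.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 7.0, 9.1]

# Ordinary least squares for the slope and intercept.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
     / sum((x - mean_x) ** 2 for x in xs))
b = mean_y - a * mean_x

# Predict at x = 10, far outside anything in the training set.
prediction = a * 10 + b
print(f"learned y = {a:.2f}x + {b:.2f}; prediction at x=10: {prediction:.1f}")
```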
So what’s the big deal? Isn’t this just plain progress? One step ahead?
I say no, for two important reasons that we tend to forget: first, everything that is digitized improves with Moore’s law; and second, AI and digital systems have reached a point where they are elastic; they can be grown and shrunk according to demand, at a marginal cost close to zero.
I would be hard-pressed to foresee where this will take us in ten years. It is ironic how our quest to tame the unknown is bringing us further into it.