In this episode, we explore the hype and the reality around so-called ‘artificial intelligence’, and the risks that arise from this…
Gosh! Wow! Look at this shiny new artificial intelligence! Chatbots! Image-creators! Deep-fakes! All that exciting new stuff!
Nah.
Sorry to bust your bubble, folks, but there ain’t no intelligence there. Not a scrap. Sure, okay, there may be some good simulation of some parts of intelligence - sometimes even good enough these days to get about halfway into the Uncanny Valley. But that’s about it right now; and that’s probably about all it will ever be, too.
But maybe the most important point here is that if we ever want to make sense of so-called ‘artificial-intelligence’, we first need to acknowledge and name it for what it really is: artificial unintelligence.
If we don’t do that, we risk getting ourselves into some really serious messes, from which there may be no way out at all…
Remember first that all we’re ever dealing with in this is a bunch of servers in a rack somewhere.
Okay, it may be scaled up, or scaled down, connected by a wire or by wi-fi, it might be digital or analogue, but it’s still only ever going to be something artificial, some kind of machine. And - and this is the important bit - with all of the limitations of a machine, too.
Which, unless something seriously weird starts happening, also means that not only does it not think on its own, but it also cannot think on its own.
All it’s doing is processing data, that’s been provided from somewhere else, or someone else.
That’s it.
Everything else there is, at best, just a clever simulation of something that might look like ‘intelligence’, but actually is not.
There are two key points here, I’d suggest: one is about what we understand ‘intelligence’ to be; and the other is about how we use that intelligence, within which we also run up against the crucial distinction between training and education. We do need to deal with the ‘what is intelligence?’ question first, though, because we can’t meaningfully talk about how to use intelligence if we don’t actually know what it is.
In the classic view, from which we get the concept of ‘IQ’ or ‘Intelligence Quotient’, intelligence is basically about how we gather, process and make decisions from information. And that’s also how we get to the current notion of computer-based ‘artificial intelligence’, because if all that the computer needs to do is gather information, process it in some way, and make decisions based on the outcomes of that processing - and do it far faster than any human could - then yes, that would indeed look like that kind of ‘intelligence’.
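To make that as concrete as possible, here's a deliberately minimal sketch of that 'gather, process, decide' loop in Python - all names and numbers invented purely for illustration, not any real system's API - which also shows just how mechanical the whole thing is:

```python
# A purely illustrative sketch of the 'gather, process, decide' loop.
# Hypothetical names and data throughout - not any real system's API.

def gather() -> list[float]:
    # A real system would pull this from sensors, databases or people;
    # the machine only ever works with whatever it happens to be given.
    return [12.0, 15.5, 9.8, 21.3]

def process(readings: list[float]) -> float:
    # 'Processing' is a fixed, pre-chosen calculation - here, an average.
    return sum(readings) / len(readings)

def decide(score: float, threshold: float = 14.0) -> str:
    # 'Deciding' is a rule that someone else wrote in advance.
    return "approve" if score >= threshold else "reject"

if __name__ == "__main__":
    # Gather -> process -> decide, fast and tireless - but with no step
    # that ever questions the data, the calculation, or the rule.
    print(decide(process(gather())))
```

Run fast enough, and at a large enough scale, that loop really would look like the classic information-processing kind of 'intelligence' - even though every apparent 'decision' in it was actually made earlier, by whoever chose the data-sources, the calculation and the threshold.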
If, that is, we don’t ask any questions about the sources of the information, the assumptions and algorithms used in the processing, or the rules and methods and so on that are used in the decision-making. Because once we do start asking those questions, some truly horrible messes will usually fall out of the woodwork - messes that can, and all too often do, cause horrendous harm further down the line. For just one painful example of that, see the Australian government’s disastrously-flawed ‘Robodebt’ scheme and its social, legal and other consequences - and note that the Netherlands government is apparently just starting to tackle almost exactly the same kind of mistake that had been made over there.
The real danger here is that, in practice, functional intelligence is built around much, much more than just information-processing. If we take the classic four dimensions - physical, mental, emotional, spiritual - then there are actually several distinct forms of intelligence across each of those dimensions. And for a functional, integrated intelligence to be available, not only must all of those ‘intelligences’ be present to some extent across each dimension, but they must also all work together as an integrated intelligence - and it’s at that point that so-called ‘artificial intelligence’ still fails, and probably always will.
Most current ‘artificial intelligence’ sits only in the mental dimension - information-processing and so on. And yes, that can be useful, no doubt about that: but if we think of it as real intelligence - and especially if we think that that’s the only form of intelligence that there is - then we’ll soon find ourselves in real trouble, as per the examples above.
And yes, there are some cases where we can do somewhat better, by adding in a bit more about the physical dimension. Probably the best way to describe physical-intelligence is that it’s about physical interactions that take place faster and/or in finer detail than the brain can process: things like the hand-eye coordination of a tennis player, the speed of a fast touch-typist, the precision of a Japanese chef cutting each piece of a fish to an exact weight of 9.7 grams every time, or the way that all of the players in an entire Irish session can switch from one tune to another exactly on the beat, without knowing in advance what the next tune will be.
There’s information-processing - ‘mental-intelligence’ - also going on in there, of course; but it takes place in local ganglia and the like, where the information-pathways are much shorter than going all the way up to the brain and back, and where the decisions that need to be made, about which muscle to pull or tendon to tense, are more local and more specialised. For those contexts, the brain will guide, but not control. To some extent some current computer-based robotics can sort-of do that - the Boston Dynamics robots being some of the best-known examples at present. Yet remember that, even there, it only works because computers can do simple processing (emphasis: simple processing) much faster than the brain can; so even then, it’s still only a limited simulation of physical-intelligence, and not the real thing.
But what about the other two dimensions? - emotional-intelligence, and spiritual-intelligence? Emotional-intelligence is about how well we can sense out and work with the emotional context in a space, whereas spiritual-intelligence - ‘spiritual’ being just about the only term that fits well with this - is about meaning, and purpose, a sense of self and of relationship with that which is greater than self, which in turn leads to crucial real-world concerns such as honesty and ethics, without which ‘good’ and/or ‘ethical’ decisions cannot be made. These two dimensions of intelligence are also core to decision-making in contexts of high-uncertainty or high-uniqueness.
And conventional computing-systems are well-equipped to handle exactly none of these concerns - which in turn means that the chances of conventional ‘artificial-intelligence’ working well there are very low indeed.
Or, in short, Not A Good Idea…
Which then brings us to the other key point here, about how to use all those various forms of intelligence - artificial or otherwise - and also how to set things up to make such intelligence, if any, actually useful in the real-world.
And here, unfortunately, things often go from bad to worse - especially when poor human intelligence is combined with inadequate artificial-intelligence to multiply the mistakes in every possible way. In other words, artificially-enhanced unintelligence.
Which then brings us to that other key point, about training versus education. ‘Training’ is almost literally putting something on a predefined track: and we must somehow trust that what’s on that track is correct, because once we’re on that track, we have no actual way to test the validity of its assumptions. Which is where one of the really big problems with ‘artificial-intelligence’ arises, because in most cases it depends on some form of pre-training. (That’s what the ‘P’ is in the much-lauded ‘ChatGPT’, by the way: ‘GPT’ as ‘Generative Pretrained Transformer’.) And it has no means, in itself, to test the validity of that training: so if the information is wrong, then the decisions will be wrong, too; and there’s no way to stop that happening. Once again, most definitely Not A Good Idea…
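To see why that matters, here's a deliberately toy sketch in Python - emphatically not how any real GPT works internally, just a word-count 'model' with invented data - of how whatever goes into the pre-training comes straight back out as a confident 'answer', with no step at which the machine could ever question it:

```python
# A toy illustration of dependence on pre-training: the 'model' is just a
# table of which word follows which in the training text. Hypothetical
# names and data - nothing like a real transformer, only the dependency.

from collections import Counter, defaultdict

def pretrain(corpus: list[str]) -> dict[str, Counter]:
    # 'Training': count which word tends to follow which.
    model: dict[str, Counter] = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            model[a][b] += 1
    return model

def predict_next(model: dict[str, Counter], word: str) -> str:
    # The model can only echo its training; it has no way of asking
    # whether that training was right in the first place.
    following = model.get(word.lower())
    return following.most_common(1)[0][0] if following else "<unknown>"

if __name__ == "__main__":
    # A deliberately wrong 'fact' in the training text...
    corpus = ["water boils at 50 degrees celsius", "water boils at 50 degrees"]
    model = pretrain(corpus)
    # ...comes straight back out as a confident answer: '50'.
    print("water boils at", predict_next(model, "at"), "degrees celsius")
```

Scale that up by a few billion parameters and the mechanics become vastly more sophisticated, but the underlying dependency on the training-data stays exactly the same.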
The only way to stop that kind of problem from ballooning out of control is for there to be someone - and not just some other machine - in there in the decision-line, with the full human intelligence to interpret what the machine is doing, and, if necessary, to override its decisions.
Again, though, there are several other traps that can prevent that override from happening at all. One is that the human supervisor needs to know how and why the machine is making the decisions that it does - which is not only hard for most people anyway, but is actually inherently impossible in quite a few forms of current artificial-intelligence, such as evolutionary-programming. Another is that bringing out and supporting the human intelligence needed there will require a true education - literally ‘ex-ducare’, ‘to lead outward from within’ - whereas most so-called ‘education’ these days is still little better than imposed indoctrination and training, and, for political and/or social reasons, is for the most part intended more to suppress the burgeoning of independent intelligence than to support it.
It’s this lack of real education, particularly when combined with what can only be called sheer laziness, that so often leads to an inability to see when the answers given by the machine are functionally meaningless or wrong, or to understand what to do if the answers are wrong. One common example, from some years ago now when students were first allowed to bring calculators into their exams, was that they often assumed that any answer given by the calculator must always be correct, when in fact it might well be wrong by several orders of magnitude. Because the students didn’t actually know how the calculation worked, they had no means to check it, and therefore let the errors pass - with no idea of what had gone wrong, or how or why it happened, or what to do about it to recover before it was too late. That kind of over-reliance on automation, without enough awareness or ability to do a cross-check, can easily get someone killed - and sometimes does, as we can see in sad incidents such as the crash of Air France Flight 447. Compare that to the case of Stanislav Petrov, the Soviet officer who managed to prevent a full-blown nuclear war because he was able to understand and override a mistake being made by the automation.
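For what it's worth, the kind of cross-check those students were missing is not hard to describe. Here's a small, hypothetical sketch in Python - invented numbers, purely for illustration - of an order-of-magnitude sanity test that would flag a calculator answer that's wildly off:

```python
# A hypothetical order-of-magnitude cross-check: compare the machine's
# answer for a product against a rough one-significant-figure estimate
# before trusting it. All numbers here are invented for illustration.

import math

def one_sig_fig(x: float) -> float:
    # Round to one significant figure - the kind of mental rounding a
    # person can do without a calculator.
    exponent = math.floor(math.log10(abs(x)))
    return round(x / 10 ** exponent) * 10 ** exponent

def looks_plausible(machine_answer: float, a: float, b: float) -> bool:
    # Accept the product only if it lies within about one order of
    # magnitude of the estimate made from the rounded operands.
    estimate = one_sig_fig(a) * one_sig_fig(b)
    return abs(math.log10(machine_answer) - math.log10(estimate)) < 1

if __name__ == "__main__":
    a, b = 3200.0, 41.0
    correct = a * b           # 131200.0
    mis_keyed = a * 4100.0    # a slipped decimal point: 13120000.0
    print(looks_plausible(correct, a, b))    # True  - roughly what we'd expect
    print(looks_plausible(mis_keyed, a, b))  # False - stop and re-check
```

The point is not the code, of course, but the habit of mind behind it: having some independent sense of what a sane answer would look like, before handing the decision over to the machine.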
Okay, there’s that old sarcastic joke that “artificial intelligence is no match for human stupidity”. But the reality is that we can’t rely on that kind of luck (if that’s the right word?) to save us every time; and the risk is made worse in every case where people fail to understand the very real limits and inherent incompletenesses of any form of artificial ‘intelligence’. Human unintelligence is bad enough; but yeah, it would be sad indeed if we were to allow that form of foolishness to combine too much with the failings of machines, and thence let our world be destroyed by mere misuse of artificial unintelligence.
New and different perspectives, received with respect, are what (ideally) would give us strength as a species.
I have worked with artificial neural networks (ANNs) since graduation, which is way back. My graduation project was a speech-recognition hardware box implementing an ANN, and that was in 1982. In the phylogenetic tree, we have loved shiny stuff since fish (540 million years). Something in a box that we can interact with verbally and visually is shiny.
Having said all that (and I have to be brief, as this is a comment :) ), I will share what I have learned over the last decades about ANNs:
1) they are better than us at managing patterns in information
2) they implement a certain type of intelligence as extracted out of available data
3) biological minds (ours included) are much, much more than neural networks - while some of that is reflected in the digital creations that an ANN can extract patterns from, the vast majority of our mind's depth is not visible there, hence not available to AI/ANN
4) models of reality built by an ANN are different (in dimension and basis) from those built by biological brains - hence what seems intelligent or stupid on either side might in reality be the other
5) the biggest danger I have experienced with AI/ANN is codependence on something randomly unreliable - as with globalization done for profit rather than integration, with AI/ANN tools one delegates significant functionality of one's own mind - and "use it or lose it" is one of the core features of biological minds
And that was it :) Thank you Tom :)
"Let machines do what machines do, and train and educate ourselves to make much better use of our own intelligence instead." Oh, most definitely Bard, my thoughts on this are twofold.
A). He who owns the algorithm, owns the thought patterns of users.
B). Outcomes of a given algorithm will promote a given agenda where common thought is sacrosanct.
A long way to go still, but governance will have a major role to play.