6 Comments
Bogdan Motoc:

New and different perspectives, received with respect, are what (ideally) would give us strength as a species.

I have worked with artificial neural networks (ANNs) since graduation, which was way back: my graduation project, in 1982, was a speech-recognition hardware box implementing an ANN. In the phylogenetic tree, we have loved shiny stuff since we were fish (540 million years ago). Something in a box that we can interact with verbally and visually is shiny.

Having said all that (and I have to be brief, as this is a comment :) ), I will share what I have learned over the last decades about ANNs:

1) They are better than us at managing patterns in information.

2) They implement a certain type of intelligence, as extracted from the available data.

3) Biological minds (ours included) are much, much more than neural networks. While some of that is reflected in the digital creations that an ANN can extract patterns from, the vast majority of our mind's depth is not visible there, and hence not available to AI/ANN.

4) Models of reality built by an ANN are different (in dimension and basis) from those built by biological brains; hence what seems intelligent or stupid on either side might be either in reality.

5) The biggest danger I have experienced with AI/ANN is codependence on something randomly unreliable. As with globalisation done for profit rather than integration, with AI/ANN tools one delegates significant functionality of one's mind, and "use it or lose it" is one of the core features of biological minds.

And that was it :) Thank you Tom :)

Tom Graves:

Many thanks for that, Bogdan - a lot of _very_ useful points.

To me, this one is probably the most important, in terms of danger: "codependence on something randomly unreliable" - that one would (and too often does) lead directly to lethal mutual-misinformation feedback-loops.

This one is really important, too: "models of reality built by ANN are different (in dimension and basis) than those built by biological brains".

And neural-networks themselves are fundamentally different - much closer to biological brains - from straightforward binary-based ('digital') systems. Neural-networks are far better at pattern-matching, as you say, but the real danger is in the over-hype around simplistic 'digital' systems. The latter really can't handle uncertainty at all, but can be programmed to produce a suitably-misleading _simulation_ of that ability - _pretending_ to have a competence that they do not actually have. (Okay, yeah, we see a lot of humans doing that too - Dunning-Kruger and all that... - but the same point applies.)
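As a purely illustrative sketch (hypothetical Python, not anything from this thread): a rigid rule-based check fails outright on noisy input, while a graded similarity score degrades gracefully - a toy version of the contrast between brittle 'digital' matching and pattern-matching drawn above.

```python
# Hypothetical example: contrasting a rigid 'digital' check with a
# graded similarity score. Names and data are invented for illustration.
import difflib

def exact_match(query: str, known: list[str]) -> bool:
    # A 'digital' rule: either the input is in the list, or it is not.
    # Any noise in the input makes it fail outright.
    return query in known

def graded_match(query: str, known: list[str]) -> tuple[str, float]:
    # A pattern-matching style answer: the closest known item plus a
    # similarity score in [0.0, 1.0], degrading gracefully with noise.
    best = max(known, key=lambda k: difflib.SequenceMatcher(None, query, k).ratio())
    score = difflib.SequenceMatcher(None, query, best).ratio()
    return best, score

commands = ["shutdown", "restart", "status"]
print(exact_match("shutdwon", commands))   # False: one transposed letter breaks the rule
print(graded_match("shutdwon", commands))  # still finds 'shutdown', with a high score
```

The point is not the specific library call, but the shape of the answer: the rigid check gives a confident yes/no with no notion of "close", while the graded check carries its own uncertainty along with it.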

Robert Mckee:

"Let machines do what machines do, and train and educate ourselves to make much better use of our own intelligence instead." Oh, most definitely, Bard. My thoughts on this are twofold:

A) He who owns the algorithm owns the thought patterns of its users.

B) Outcomes from a given algorithm will promote a given agenda where common thought is sacrosanct.

A long way to go still, but governance will have a major role to play.

Robert Mckee:

Loved this article, Tom.

Two major flaws with AI are the size of the data source (currently limited) and the intelligence of the person who wrote the algorithm (also apparently limited). If the answer to the question is already known, how much knowledge is required to create the perfect storm and produce a correct answer?

"Artificial intelligence is becoming a crutch for human stupidity" would be closer to the truth.

Thank you for addressing the hype.

Tom Graves:

"'Artificial intelligence is becoming a crutch for human stupidity' would be closer to the truth" - yes, exactly. And therein lies the danger, too...

Bard C. Papegaaij:

Both flaws you mention are structural, not temporary, Robert. There will never be a complete (let alone flawless) data set, and the algorithms used will never be perfect either. Nature learned long ago to work by approximation. Nothing in nature is perfect, because things would break if they were. It's the approximation that provides the flexibility to move with changing variables. I am not saying we could never see machines that mimic natural intelligence closely enough to be said to be fully intelligent. But a) the current generation of AI/ML is a far cry from this; and b) such machines would have all the quirks and foibles of natural intelligence, so why bother building them? Let machines do what machines do, and train and educate ourselves to make much better use of our own intelligence instead.
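The approximation point can be sketched in code (a hypothetical toy, not from the thread): an exact equality check on a measured value breaks under the slightest real-world noise, while a tolerance band absorbs it - the flexibility that keeps natural systems working.

```python
# Hypothetical example: 'perfect' matching versus approximate matching.
# The sensor scenario and tolerance value are invented for illustration.
import math

def exact_ok(reading: float, target: float) -> bool:
    # 'Perfect' matching: fails as soon as any real-world noise appears.
    return reading == target

def approx_ok(reading: float, target: float, tolerance: float = 0.05) -> bool:
    # Approximate matching: absorbs small variations instead of breaking.
    return math.isclose(reading, target, abs_tol=tolerance)

target = 20.0
noisy = 20.004  # a realistically imperfect measurement
print(exact_ok(noisy, target))   # False: the 'perfect' check breaks
print(approx_ok(noisy, target))  # True: the approximate check flexes
```

The design choice mirrors the argument: a system built around exact equality is brittle by construction, while one built around a tolerance can move with changing variables.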
