Is life a miracle?
Predicting nihilist AIs
Somebody sent me one of those common “Life is a miracle” pictures this Christmas. I (admittedly childishly, but hopefully my cool uncle-in-law won’t mind!) answered by pasting back a link to this video:
Discussing the subject further with my wife and my brother in a micro-chat session that same evening (after first noticing how young Neil really was there!), this question arose:
What about Fermi’s paradox? How would you explain it then?
I joked around by saying:
A super-AI disaster happens, as a natural outcome of civilizational evolution, and the machine eventually decides there is no need for any further exploration of the universe, or it destroys itself for some reason.
A few days later, however, I ended up thinking that this might actually be the case!
Hear me out, but don’t get me wrong. I’ve always considered life a dual experience: one can accept that nothing really matters while still enjoying all of its events:
I repeat (and note that the author of the video above says it as well): while still enjoying life events!
But who knows how a general AI would actually think? We cannot even dream of understanding it; as I’ve read somewhere, just imagine how much an orangutan would understand if it sat down and listened to the proceedings in a court of justice, for example, and adapt the analogy.
While humans can select values and goals for themselves (like in Minecraft!) and reason dually about life as I do above, maybe a more clever (?) machine won’t embrace this and will decide there’s no reason for it to exist whatsoever, and then do something drastic about it, or do nothing at all and go extinct through natural events. Why not!
Note that I’m a bit biased right now, as I’ve recently played with some Python just to see what very simple neural networks are really about, and I was absolutely stunned to see how easily, by changing the training output and re-running the code, the machine could learn/guess many of my ad-hoc chosen data rules and predict outputs like any of my children would!
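For the curious, here is a minimal sketch of the kind of toy experiment I mean: a tiny two-layer network in plain NumPy that learns an ad-hoc rule (XOR, as an example of my choosing) from just four training rows. All the sizes, the learning rate, and the iteration count are illustrative assumptions of mine, not anything canonical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four input rows and an ad-hoc target rule (here: XOR).
# Editing y and re-running is all it takes to "teach" a new rule.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Small random weights for a 2 -> 4 -> 1 network.
W1 = rng.normal(0.0, 1.0, (2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(0.0, 1.0, (4, 1))
b2 = np.zeros((1, 1))

initial_loss = float(((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2).mean())

lr = 1.0
for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backpropagation of the squared error through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

final_loss = float(((out - y) ** 2).mean())
print("loss:", initial_loss, "->", final_loss)
print("predictions:", out.round().ravel())
```

Nothing deep here, of course, but watching the loss shrink and the rounded predictions snap onto whatever rule you typed into `y` is exactly the small shock I’m describing.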
I really can’t think of any reason why future AI machines won’t be able to eventually “work” like humans do in other areas, including philosophical aspects, provided that their creators push them enough into that direction, too.
Of course, we are still very far from those times (the deepest networks of today still mistake my neighbor’s cat for a dog), but at cosmic scale this interval may really amount to nothing, and Fermi’s paradox is then explained!
(That, or the universe is fully random, as I — also absolutely childishly — have postulated here!)
However, going dual again, let’s just say Happy New Year, enjoy your life, thank you for reading, and… we’ll see! (Or no, we won’t.)