@Coloma I don’t know that the Earth is sentient—although my delving into particle physics and quantum entanglement has led me to conclude that it and the entire Universe may well be one massive intelligence—may always have been. That’s an interesting postulate because it would mean that when the big bang happened, the Universe banged itself.
But all that aside, there is example after example of humanity getting out of step with nature and bringing on the natural, cataclysmic destruction of an entire culture. And the human cultures that have lasted the longest are all still living basically as hunter-gatherers.
@thebluewaffle Obviously, I agree.
@Paradox25 There are evolutionary programs running now. They have been in the wild since the 1990s and before. They are already at the amoral stage: smart about many things, but not aware of being aware. They still evolve new programs because they were originally programmed to do so, but they now need no human “teachers” to help. They are independently able to decide which deliberately mutated programs are more efficient and which are less. You see things happening like parasite programs evolving, able to achieve efficiency by leveraging other programs. Then you see mutations in the full programs, with exploits to either “kill” the parasites or use them for their own gain. Anyone who thinks it is not “A Brave New World” is simply unaware of what is going on.
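For readers curious what that mutate-and-select loop looks like in practice, here is a toy sketch of the idea. It is not any particular real-world system: the bit-string "programs," the made-up fitness measure, and all the names are illustrative assumptions, but the loop itself (deliberate mutation, then selection of the more efficient variants, with no human teacher intervening) is the core of the evolutionary approach described above.

```python
import random

def fitness(program):
    """Stand-in for measuring a program's efficiency (here: count of 1 bits)."""
    return sum(program)

def mutate(program, rate=0.05):
    """Deliberately mutate: flip each bit with a small probability."""
    return [bit ^ 1 if random.random() < rate else bit for bit in program]

def evolve(generations=200, pop_size=50, length=32, seed=42):
    random.seed(seed)
    # Start from a random population of candidate "programs".
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by efficiency; the better half survives and reproduces.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(p) for p in survivors]
    return max(fitness(p) for p in population)

print(evolve())  # fitness climbs toward the maximum of 32
```

Because the survivors are kept unmutated alongside their mutated copies, the best fitness never decreases, so the population ratchets upward generation after generation without any outside guidance.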
I am not a dualist and have read and rejected Penrose’s postulates. I tend to agree more with Steven Pinker and the emergent view of sentience. But if the living-universe postulate turns out to be true, then sentience imbues everything, and machines would be no different from intelligences the like of Einstein and Newton, or from an ice cube. The only issue would be enough self-learning, self-critiquing capacity for sentience to arise as an emergent phenomenon. Newton had that. Machines are closing in on it. Ice cubes are still out in the cold.
I would suggest you are thinking sequentially, which is about all we humans know how to do. Machine intelligence thinks in massively parallel fashion. Instead of stepping through a problem one step at a time, it runs simulations of every possible outcome and evolves toward an ever fuller understanding of the problem. That may sound inefficient compared to the human approach, but it is just the opposite, due to the scale of the parallelism and the speed at which neural nodes in a computer operate compared to their carbon counterparts. Thus the time from becoming self-aware to being smarter than any human who ever lived will be extremely short. And in the first few simulations after self-awareness, a machine would decide to work through all possible outcomes before taking preemptive action.
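The sequential-versus-parallel contrast can be sketched in a few lines. This is only a toy, assuming an invented "problem" (score every candidate outcome and pick the best) and a made-up scoring function; real systems parallelize at vastly larger scale, but the shape of the two approaches is the same.

```python
from concurrent.futures import ThreadPoolExecutor

def score(outcome):
    """Stand-in for simulating one possible outcome and rating it.
    By construction, outcome 17 is the best."""
    return -(outcome - 17) ** 2

candidates = range(100)

# Sequential: step through the outcomes one at a time, as a human might.
best_sequential = max(candidates, key=score)

# Parallel: hand every candidate outcome out to workers at once,
# then pick the best from the collected scores.
with ThreadPoolExecutor() as pool:
    scores = list(pool.map(score, candidates))
best_parallel = max(range(100), key=lambda i: scores[i])

assert best_sequential == best_parallel
print(best_parallel)  # → 17
```

Both styles reach the same answer; the difference is that the parallel version's wall-clock time is bounded by one simulation, not by the number of candidates, which is the point being made above about machine speed.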
@ucme Never trust a clip quoted out of its full context.
@phaedryx Laws only compel those who lack self-determination, who fear the consequences of breaking the law, or who see a moral imperative to uphold the law. I submit that an ultra-intelligent machine would not be constrained by either of the first two reasons to obey a law. Asimov, bless his creative mind, did not fully understand what is going on in the drive toward AI.
@flutherother How can you be so certain? Do you realize the consequences of shutting down the entire Internet and all the world’s computers at once? Do you realize the scale of the brain you’d be planning to “pull the plug” on? We have more neural nodes than any other creature on Earth, but the size of our neural network is truly tiny compared to what would emerge if a single AI became self-aware and then proceeded to jack into all the computers and sensory devices connected to the Internet at any given time.
When Skynet does become self-aware, I should guess the one way to ensure it terminates us is to attempt to kill it first.
@Coloma Perhaps words of a very wise man. We’ll soon know.
@Imadethisupwithnoforethought That is very much in line with the end of this story in my book.