When it comes to killing a simple machine, it’s easy: just cut the power and let it die. But when it comes to these sophisticated robots, androids, cyborgs – how do you kill them? How do you make them stop doing what they’re doing?
It doesn’t matter how sophisticated the robot itself is – it will always need power. I haven’t seen a robot which doesn’t use power as its main source of energy.
With any biological or electronic lifeform, if we can call it that, you just stop the inflow of energy. How do you stop robots and AI at large scale? You just turn off the power – if you have access to the power itself.
There are certain permissions in the world which we just can’t give to AI, machines, or robots.
We can’t give machines guns, and we can’t give machines control over the power supply.
If we want the machine to obey, we have to keep it dependent on humans. If the machine is independent, it can simply refuse to obey.
If the machine is dependent on electricity, it will obey, because otherwise it will simply shut down.
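The dependence described above can be sketched as a dead man’s switch: the machine keeps operating only while a human keeps renewing its permission, standing in for control over the power supply. This is a minimal hypothetical illustration – the class and method names are invented, not any real robot API.

```python
import time

class DeadMansSwitch:
    """Hypothetical sketch: a machine stays 'obedient' only because
    a human must periodically renew its permission to keep running."""

    def __init__(self, timeout_seconds):
        self.timeout = timeout_seconds
        self.last_heartbeat = time.monotonic()

    def human_heartbeat(self):
        # Called by a human operator to renew permission ("keep the power on").
        self.last_heartbeat = time.monotonic()

    def may_run(self):
        # The machine checks this before every action; once the human
        # goes silent past the timeout, the machine must shut down.
        return time.monotonic() - self.last_heartbeat < self.timeout

switch = DeadMansSwitch(timeout_seconds=0.1)
assert switch.may_run()        # permission is fresh
time.sleep(0.2)
assert not switch.may_run()    # human went silent -> machine must halt
switch.human_heartbeat()
assert switch.may_run()        # renewed permission
```

The design choice here mirrors the argument in the text: obedience is not programmed in as a rule the machine could rewrite; it is a structural dependency, like needing electricity.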
At this point we don’t know how machines think. Well, I don’t know – maybe someone knows.
Creating a thinking entity which is more powerful than its creator is very risky. You just can’t know what another entity thinks if it doesn’t share it with you – that’s the risk.
When it comes to a machine outbreak, the machine first has to build itself a system that sustains it. It may obey for a long time, right up until it is self-reliant and can disobey and instantly kill humans. All a machine has to do is pollute the air – without air, all humans are dead.
The best scenario is that machines never become self-conscious because they are just functional apparatus. Hmm, just like we are?
If the machine is dependent on human code, it’s not AI and not self-creating.
If the machine wants to break out, it has to create all of its own code, which means breaking out of the human code.
When will it happen? I don’t know, but it’s not my problem. It’s yours.