I find it strange to see and hear the worry that surrounds AI. It’s my belief that we must actively encourage the development of AI if we’re to survive. If we want to survive long enough to reach for the stars, we must become superior to our present biological form.
“The human biological life form is unsuited to space.”
I believe we will never develop the means to adapt our bodies to the rigours of outer space. The difficulties our bodies would have to overcome are so numerous that they are quite simply insurmountable. Where long-distance space travel is concerned, our current thinking is way off track.
“Our current thinking is far too self-centered.”
It would seem that our long-term intention is to put humans on other planets, and we’re prepared to spend vast amounts of time and energy attempting it. We’re developing the propulsion and the means to travel, yet missing the one important proviso already mentioned: humans will never be suited to long-term exposure to space.
If we gave up our current self-centered thinking and instead focused on developing AI technology that can travel the huge distances involved (the only things a robot needs to function well are clever programming and starlight), we might just make it.
“If we get it right our AI robots will be able to raise biological life forms on planets suited for such life.”
If we get it right, robots will monitor frozen embryos (which can stay frozen for thousands of years) on spaceships travelling the vast distances between the stars. Once we arrive at our destination, these same clever AI robots will raise our biological selves.
“Robots, able to think for themselves, will always have a root program controlling this ‘self’, which we’ll be responsible for writing.”
If we get it right, our robots will even reach the stage of being able to terraform other planets before we arrive as frozen embryos. The key to the success of this kind of plan lies in the programming: what we program our AI robots with is the issue.
If the root program of a self-aware robot reflects what we currently believe about love and life, it will most certainly end up self-destructing. If the programming is flawed, our beautiful new AI will evolve in the same direction we’re currently heading: extinction.
“So the key is to ensure the root programming is correct. In this way we will have AI robots that will never be the threat that even the likes of Stephen Hawking believe them to be.”
We need to get it right first. Only when we truly acknowledge that the most important ingredient is a correct and proper understanding of love and life will we create AI that will not only ensure our future survival, but long for it.