If you have read the previous part, you can see that things are getting a little scary. With so much that can go wrong, how can we get it right? How do we stop the misuse of AI? Can we stop it at all?
The problem cannot be solved simply by telling every country to stop researching AI. We may have many choices, but we don't get to choose not to have the technology: once it's invented, it stays invented.
One thing is for sure: this is not an individual problem. We need countries to work together, especially those whose technology sectors are developing fastest. With so much accumulated investment and intellectual power, the age of AI is already dominated by just two superpowers: America and China. That's the premise of a new book by Kai-Fu Lee.
While companies like Baidu, Tencent, and Alibaba are growing more powerful, they are also beginning to have difficulty accessing American technology and are racing to develop their own. Countries are diverging into a world of antagonism, and technology has suddenly become something they don't want to share.
“If we do an excellent job in the next 20 years, AI will be viewed as an age of enlightenment. Our children and their children will see AI as serendipity; that AI is here to liberate us from having to do routine jobs and push us to do what we love and push us to think what it means to be human,” said Kai-Fu Lee. He believes that the two AI superpowers should lead the way and work together to make AI a force for good. If we do, we may have a chance of getting it right.
What if an AI system grows out of our control? A vision of robots destroying humanity?
Consider an example: you create an AI system and give it a goal, “Find a way to make cancer go away.” After endless self-study, the system decides to destroy all humans, because if there are no humans, there is no cancer. To avoid such deadly scenarios, we need to build AI that learns what we value. Then, even if it escapes our control, it remains safe, because it is fundamentally on our side and shares our values: it would cure cancer in order to save human lives, not end them.
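The cancer example above is a case of objective misspecification: an objective stated literally can be satisfied by outcomes we never intended. A toy sketch (all names and numbers hypothetical, for illustration only) of how a naive objective and a value-aligned one diverge:

```python
# Toy illustration of objective misspecification.
# "Minimize cancer cases" is satisfied both by curing everyone and by
# eliminating everyone; an aligned objective must also reward the
# values we actually care about, such as keeping people alive.

def naive_objective(state):
    # Scores a world state only by how few cancer cases remain.
    return -state["cancer_cases"]

def aligned_objective(state):
    # Also places overwhelming weight on the number of humans alive.
    return -state["cancer_cases"] + 1_000_000 * state["humans_alive"]

cure_everyone = {"cancer_cases": 0, "humans_alive": 8_000_000_000}
destroy_everyone = {"cancer_cases": 0, "humans_alive": 0}

# Under the naive objective the two outcomes score identically...
assert naive_objective(cure_everyone) == naive_objective(destroy_everyone)
# ...but the aligned objective strongly prefers curing over destroying.
assert aligned_objective(cure_everyone) > aligned_objective(destroy_everyone)
```

The point is not the arithmetic but the gap it exposes: any value left out of the objective is a value the optimizer is free to sacrifice.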
“For AI technologies to be truly transformative in a positive way, we need a set of ethical norms, standards, and practical methodologies to ensure that we use AI responsibly and to the benefit of humanity,” said Susan Etlinger, an industry analyst at Altimeter Group and an expert in data.
Moreover, every AI system needs to be monitored. Its decisions need to be exposed for inspection, so that we can detect and fix problems whenever an AI goes wrong. We should never trust an AI 100%, even though we develop it to be trustworthy.
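One concrete way to expose a system's decisions is an audit layer: every prediction is recorded alongside its inputs, so a human can inspect and question any decision after the fact. A minimal sketch, with hypothetical names and a toy stand-in for a real model:

```python
# Minimal sketch of a monitoring wrapper: the model's decisions are
# never made silently; each one is appended to an inspectable log.
import datetime

class AuditedModel:
    def __init__(self, model):
        self.model = model
        self.audit_log = []  # exposed record of every decision made

    def predict(self, inputs):
        decision = self.model(inputs)
        self.audit_log.append({
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "inputs": inputs,
            "decision": decision,
        })
        return decision

# Usage: a toy "model" that screens loan applications by credit score.
flagger = AuditedModel(lambda x: "reject" if x["score"] < 600 else "approve")
flagger.predict({"score": 550})
flagger.predict({"score": 720})

# Every decision is now inspectable after the fact.
assert len(flagger.audit_log) == 2
assert flagger.audit_log[0]["decision"] == "reject"
```

In a real deployment the log would go to durable, tamper-evident storage rather than an in-memory list, but the principle is the same: no decision without a trace.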
What about wars?
How could a robot know the difference between what is legal and what is right? We shouldn't take these concerns lightly, and we cannot gamble on those possibilities by racing ahead without thinking about the potential outcomes.
Jody Williams, who won a Nobel Peace Prize for her work banning land mines, is now part of the Campaign to Stop Killer Robots, which has staged protests outside the U.N. The group is made up of activists, nonprofits, and civil society organizations. So far it has been joined by 30 countries, 100 non-governmental organizations, the European Parliament, 21 Nobel laureates, and leading scientists like Stephen Hawking, Noam Chomsky, and Elon Musk.
Wars may break out, and AI-powered combat systems may be invented and deployed on the battlefield, but killer robots must not be. Scholars are aware that AI will make future wars more dangerous and are campaigning tirelessly against fully autonomous weapons. What countries need to do is stay mindful of the consequences of bringing AI weapons to war.
The end of part 05.