What 3 Experts Say About Artificial Intelligence (AI) - Merit Educational Consultants


If you’re like me, you’re excited to hear more about self-driving cars and hopeful that they will save thousands of lives each year. But you’re also worried that if important decisions, like launching nuclear weapons, are made by machines, catastrophic disasters may happen because a human being isn’t part of the decision-making process.

I just read an interesting article, “How worried should we be about artificial intelligence? I asked 17 experts,” by Sean Illing. Here are 3 experts who offer good insight:

1: Early autonomous AI systems will likely make mistakes that most humans would not make. It’s therefore important for society to be educated about the limits and implicit hidden biases of AI and machine learning methods. — Bart Selman, Computer Science Professor, Cornell University

2: There are four issues of concern about artificial intelligence. First, there is a concern about the adverse impact of AI on labor. Technology has already had such an impact, and it is expected to grow in the coming years. Second, there is a concern about important decisions being delegated to AI systems. We need to have a serious discussion regarding which decisions should be made by humans and which by machines. Third, there is the issue of lethal autonomous weapon systems. Finally, there is the issue of “superintelligence”: the risk of humanity losing control of machines.

Unlike the three other issues, which are of immediate concern, the superintelligence risk, which gets more headlines, is not an immediate risk. We can afford to take our time to assess it in depth. — Moshe Vardi, Computational Engineering Professor, Rice University

3: AI is no more scary than the human beings behind it, because AI, like domesticated animals, is designed to serve the interests of the creators. AI in North Korean hands is scary in the same way that long-range missiles in North Korean hands are scary. But that’s it. Terminator scenarios where AI turns on mankind are just paranoid. — Bryan Caplan, Economics Professor, George Mason University

As with CRISPR, using a technology just because we can doesn’t mean it is in our best interest. I believe that we humans need to work with technology to ensure that ethical and moral considerations are part of the decision-making process every step of the way.

[Source]