New robots that people can control with their minds! Russia and China are building autonomous armed robots

Watch this man control a robot with his mind

Danielle Muoio

Dec. 1, 2015, 5:18 PM

A man was able to control a robot using breakthrough technology developed by researchers from EPFL’s Center for Neuroprosthetics in Switzerland.

The technology, known as brain-computer interface (BCI) technology, is revolutionary in its capabilities.

When people are hooked up to BCI technology, it gives them the ability to control an external device with their mind — sort of like mastering the force or telekinesis.

www.businessinsider.com/man-controls-robot-with-his-mind-2015-12

 

What’s the most impressive real-world use of AI technology you’ve ever seen?

SR: One would be DeepMind’s DQN system. It essentially just wakes up, sees the screen of a video game, and works out how to play the game to a superhuman level. It can do that for about 30 different Atari titles. And that’s both impressive and scary, in the sense that it’s as if a human baby were born and, by the evening of its first day, were already beating adult human beings at video games.
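At the core of the DQN system Russell describes is Q-learning, which estimates the long-term value of each action from trial and error. As a rough illustration, here is a tabular sketch of the underlying update rule (DQN replaces the table with a deep neural network trained on raw screen pixels); the toy states and actions are hypothetical, not DeepMind's code:

```python
from collections import defaultdict

# Tabular Q-learning sketch: the Bellman update that underlies DQN.
# Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))

def q_learning_update(q, state, action, reward, next_state, actions,
                      alpha=0.1, gamma=0.99):
    """Nudge Q(state, action) toward the reward plus the best estimated
    value of the next state."""
    best_next = max(q[(next_state, a)] for a in actions)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])

# Toy example: one update after receiving a reward of 1.0.
q = defaultdict(float)          # all values start at 0.0
actions = ["left", "right"]
q_learning_update(q, "s0", "right", 1.0, "s1", actions)
print(q[("s0", "right")])       # 0.1 after a single update from zero
```

Repeated over millions of game frames, updates like this are what let the agent improve from random play to superhuman scores without being told the rules.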

The singularity has nothing to do with consciousness, either.

It’s really important to understand the difference between sentience and consciousness, which are important for human beings. But when people talk about the singularity, when people talk about superintelligent AI, they’re not talking about sentience or consciousness. They’re talking about superhuman ability to make high-quality decisions.

Say I’m a chess player and I’m playing against a computer, and it’s wiping the board with me every single time. I can assure you it’s not conscious, but it doesn’t matter: It’s still beating me. I’m still losing every time. Now extrapolate from a chess board to the world, which in some sense is a bigger chess board. If human beings are losing every time, it doesn’t matter whether they’re losing to a conscious machine or a completely nonconscious machine, they still lost. The singularity is about the quality of decision-making, which is not consciousness at all.

TI: What is the most common misconception of AI?

SR: That what AI people are working towards is a conscious machine. And that until you have a conscious machine, there’s nothing to worry about. It’s really a red herring.

To my knowledge nobody — no one who is publishing papers in the main field of AI — is even working on consciousness. I think there are some neuroscientists who are trying to understand it, but I’m not aware that they’ve made any progress. No one has a clue how to build a conscious machine, at all. We have less of a clue about how to do that than we have about how to build a faster-than-light spaceship.

www.businessinsider.com/artificial-intelligence-machine-consciousness-expert-stuart-russell-future-ai-2015-7

Russia and China are creating highly autonomous weapons, more commonly referred to as “killer robots,” and it’s putting pressure on the Pentagon to keep up, according to US Deputy Secretary of Defense Robert Work.

During a national security forum Monday, Work said that China and Russia are heavily investing in a roboticized army, according to a report from Defense One.

“We know that China is already investing heavily in robotics and autonomy and the Russian Chief of General Staff [Valery Vasilevich] Gerasimov recently said that the Russian military is preparing to fight on a roboticized battlefield,” Work said at the forum, which was hosted by the Center for a New American Security in Washington D.C.

“[Gerasimov] said, and I quote, ‘In the near future, it is possible that a complete roboticized unit will be created capable of independently conducting military operations,’” Work continued.

Work then said it’s important for the US to “dominate” machine learning and artificial intelligence to offset the threats posed by China and Russia.

www.businessinsider.com/russia-and-china-are-building-highly-autonomous-killer-robots-2015-12

Stephen Hawking, Elon Musk, Steve Wozniak and over 1,000 AI researchers co-signed an open letter to ban killer robots


Guia Marie Del Prado

Jul. 27, 2015

www.businessinsider.com/stephen-hawking-elon-musk-sign-open-letter-to-ban-killer-robots-2015-7

Elon Musk just announced a new artificial intelligence research company

Danielle Muoio

Dec. 11, 2015, 5:18 PM
Tesla CEO Elon Musk announced the formation of a new non-profit artificial intelligence research company via Twitter Friday.

Called OpenAI, the research group aims to “advance digital intelligence in the way that is most likely to benefit humanity as a whole,” OpenAI wrote in a post introducing the company. A lack of financial obligation will allow the company to focus more on this mission, the post adds.

The company is co-chaired by Musk and Y Combinator’s Sam Altman. Ilya Sutskever, a research scientist at Google who specializes in machine learning, will serve as research director.

Researchers who are part of OpenAI will be encouraged to publish their work, and any patents the company receives will be shared with the world.

www.businessinsider.com/elon-musk-just-announced-a-new-artificial-intelligence-research-company-2015-12

Scheutz doesn’t think endowing robots with the ability to reason will make them evil. Rather, he thinks if you don’t do it, that’ll be what makes robots harmful.

But giving robots the ability to have deeper forms of understanding poses ethical questions.


“Ethics and robots is becoming more of a pressing issue — we can’t wait until they’re in our homes and on our streets,” he said. “We need to work on it now and make sure robots behave in an ethically sound fashion.”

Scheutz is not alone in that belief.

Yueh-Hsuan Weng, a research associate and co-founder of the ROBOLAW.ASIA Initiative at Peking University, says we need a set of laws that will guide how humans interact with robots.

Whereas Scheutz is more focused on how to create robots with the moral capabilities to adhere to accepted ethical principles, Weng is arguing for the creation of laws to address this problem.

“Maybe robots are OK to be treated as ‘any other product’ at the moment, but when the degree of autonomy has advanced much more, maybe we will need to think of more specific rules and regulations to accommodate the advanced intelligent robots and robot systems,” Weng told Tech Insider.

Check out Scheutz’s research in action — here’s a robot saying no to a command that would cause harm to itself:

www.businessinsider.com/why-robots-must-learn-to-say-no-2015-12
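The idea of a robot refusing a harmful order can be sketched as a pre-execution check: before acting, the robot tests whether it can perform the command and whether doing so would cause harm. The condition names and rules below are illustrative assumptions in the spirit of Scheutz's work, not his lab's actual architecture:

```python
# Hypothetical command-rejection check: a robot screens each command
# against its known capabilities and a harm predicate before acting.

def evaluate_command(command, robot_state):
    """Return (should_execute, reply) for a requested command."""
    if command not in robot_state["capabilities"]:
        return (False, "I do not know how to do that.")
    if robot_state["harm_check"](command):
        return (False, "Sorry, I cannot do that: it would cause harm.")
    return (True, "OK.")

state = {
    "capabilities": {"walk_forward", "turn_left"},
    # Toy rule: walking forward would carry the robot off a table edge.
    "harm_check": lambda cmd: cmd == "walk_forward",
}

print(evaluate_command("turn_left", state))     # accepted
print(evaluate_command("walk_forward", state))  # refused as harmful
```

In a real system the harm check would draw on perception and reasoning rather than a hard-coded rule, but the structure — refuse, and explain why — is the point Scheutz makes.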

 

h/t Digital mix guy
