Scientists at Columbia Univ say they created a robot that has learned to be self-aware!

Associated Press

Published on Feb 4, 2019

(4 Feb 2019) Scientists at Columbia University in New York say they created a robot that has learned to be self-aware and adapt tasks to its abilities. (Feb. 4)


www.apnews.com/8a11f9c827f444b5bdd5e01e1c340c19

Unlike other AI devices, the robot doesn’t require human-provided models and simulators or a long trial-and-error process to operate on its own. “If we want robots to become independent, to adapt quickly to scenarios unforeseen by their creators, then it’s essential that they learn to simulate themselves,” Hod Lipson, a Columbia University mechanical engineering professor, said in the press release.

For the research, Lipson and his Ph.D. student Robert Kwiatkowski tested a four-degree-of-freedom articulated robotic arm. In the beginning, the robot moved randomly and collected roughly 1,000 trajectories, each comprising 100 positional data points. Then, the robot used deep learning on this data to create a model of itself. These first self-models weren't accurate, though: the robot didn't know how its joints were connected.

However, after less than 35 hours of training, the robot produced a self-image that was accurate to within approximately four centimeters. Next, the self-model was used for a pick-and-place task in a closed-loop system, which allowed the robot to pick up objects and deposit them into a receptacle with 100 percent accuracy. Relying on the internal self-image alone, in an open-loop system without external feedback, the robot completed the pick-and-place task with a 44 percent success rate.

www.geek.com/tech/engineers-built-a-self-aware-robot-that-operates-on-its-own-1772409/

Jan 25, 2019

Columbia Engineering researchers have made a major advance in robotics by creating a robot that learns what it is, from scratch, with zero prior knowledge of physics, geometry, or motor dynamics. Once their robot creates a self-simulation, it can then use that self-model to adapt to different situations, to handle new tasks as well as detect and repair damage in its own body.

The authors have warned: “Self-awareness will lead to more resilient and adaptive systems, but also implies some loss of control.

“It’s a powerful technology, but it should be handled with care.”

However, some scientists have claimed robots will never be able to develop intelligence as complex as that of humans.

www.express.co.uk/news/science/1080325/science-robots-self-aware-mechanical-arm-artificial-intelligence-columbia-university

article, Jan 2018:

Nico, a humanoid, gazes at the mirror kept in front of it, raises one arm, and recognizes that this is its own reflection in the mirror. Nico is the first robot to pass this spatial reasoning task. While this is not an amazing feat by human standards, it’s definitely an important step toward creating self-aware robots.

Nico was developed by scientists at Yale University, led by Ph.D. student Justin Hart. So far, the robot is only able to recognize its arm, but the group is working to help Nico pass the full-body test soon.
This successful experiment also paves the way for creating advanced robots that can identify if some part of their body is damaged or is not functioning properly, so they can possibly even repair it themselves.

techgenix.com/self-aware-robots/

Feb 22, 2018
Everyone is worried about robots stealing manufacturing jobs, but the real value (and threat) in robots may lie in whether they can become smart enough to actually think on their own.

One of the major milestones in creating human-level intelligence is for machines to attain self-awareness. And Columbia University’s Creative Machines Lab may have already done it. “These robots learn, over time, to simulate themselves in a future situation they haven’t actually experienced,” said Dr. Hod Lipson, the mechanical engineering professor leading the lab’s push to create self-aware robots.

“In other words, they don’t have to learn by doing,” Lipson told VICE News. “They can learn by thinking.”

The robotics department at UC Berkeley has made similar advancements with its self-teaching robot BRETT. Using trial and error, BRETT can learn how to fold laundry, assemble LEGO blocks and fit pegs into holes.

news.vice.com/en_us/article/59kkm8/robots-could-be-coming-for-more-than-just-your-job

December 17, 2018–Researchers at Columbia University used a voxel-printing technique to create digital wood – a material that is highly complex for 3D printing because of its combination of internal grain and external color textures. The technique is described in an article published in 3D Printing and Additive Manufacturing, a peer-reviewed journal from Mary Ann Liebert, Inc., publishers.

In “Digital Wood: 3D Internal Color Texture Mapping,” coauthors Fabian Stute, Joni Mici, Lewis Chamberlain, and Hod Lipson of Columbia University, New York, NY, describe the inability to copy 3D objects with complex internal textures as a gap in the current 3D printing field. The researchers used destructive tomographic imaging to capture images of a wood sample at very fine slice intervals and then fed the stack of images to a voxel-capable 3D printer. The authors also discuss less invasive and destructive methods that can be used to image the interior of materials with complex internal textures.
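
To make the imaging-to-printing step concrete, here is a minimal Python sketch (an illustration, not code from the paper) that stacks an ordered set of cross-section scans into a color voxel volume of the kind a voxel-capable printer can consume. The file-naming pattern, layer spacing, and the build_voxel_volume helper are assumptions made for the example.

import glob

import numpy as np
from PIL import Image

def build_voxel_volume(image_dir, layer_height_mm=0.1):
    """Stack ordered cross-section scans into an RGB voxel grid of shape (z, y, x, 3)."""
    # Scans are assumed to share dimensions and be named slice_000.png, slice_001.png, ...
    paths = sorted(glob.glob(f"{image_dir}/slice_*.png"))
    layers = [np.asarray(Image.open(p).convert("RGB")) for p in paths]
    volume = np.stack(layers, axis=0)
    z_positions_mm = np.arange(len(layers)) * layer_height_mm
    return volume, z_positions_mm

# Each voxel now carries the sampled color of the wood at that point, which is the
# internal texture information a conventional surface-only 3D model discards.
volume, z_positions = build_voxel_volume("wood_scans")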

www.eurekalert.org/pub_releases/2018-12/mali-dwp121718.php

AI and Automation Will Replace Most Human Workers Because They Don’t Have to Be Perfect—Just Better Than You
BY ELLEN RUPPEL SHELL ON 11/20/18 AT 5:04 PM

Fully 20 percent of men aged 24 to 55 do not have full-time jobs, and nearly half of all new college graduates are unable to find a job that comports with their education. (Contrary to popular thinking, college students are not impractical “basket weaving” majors—roughly 40 percent earn degrees in “occupational” disciplines such as business, legal studies and public administration, an 80 percent increase since 1970.)

And while Uber drivers and freelance dog walkers technically count as “employed,” they are not employed in the sort of occupation that typically offers a living wage. The bottom line is this: Technology has advanced at a breathtaking pace, while the policy designed to help workers deal with these changes has lagged far behind. Hence, the financial benefits of technological change accrue mainly to the few, while the majority of Americans are left with crumbs—precarious, unstable employment that reflects neither their abilities nor their potential.

Dog walkers are “employed,” according to labor statistics. But is this really a full-time job for someone who wants to work?

“We’re at a unique point in human history,” Rice University computer scientist Moshe Vardi says. “We are sitting on the cusp of an enormous change.”

In retail, this is the challenge: When it comes to profits, no brick-and-mortar store—no matter how efficient—can hold a candle to e-commerce, which since 2014 has become the fastest-growing retail sector by far. China’s Alibaba Group—Asia’s most valuable company—is the world’s largest player in this keenly competitive arena. But Alibaba has so far failed to gain a foothold in the United States.

In America, Amazon—the nation’s fastest-growing employer—reigns supreme. Analysts predict that by 2020, one-fifth of the multitrillion-dollar U.S. retail market will have shifted to the web and that Amazon alone will reap two-thirds of that bounty. The company already captures one of every two dollars Americans spend online and is by far the nation’s biggest seller of books, music, video games, cellphones, electronics, small appliances, toys, magazine subscriptions and what seems like almost everything else—hence its nickname, “The Everything Store.”

Unlike Roomba and PackBot, Sawyer looks almost human—it has an animated flat-screen face and wheels where its legs should be. Simply grabbing and adjusting its monkey-like arm and guiding it through a series of motions “teaches” Sawyer whatever repeatable procedure one needs it to get done. The robot can sense and manipulate objects almost as quickly and as fluidly as a human and demands very little in return: While traditional industrial robots require costly engineers and programmers to write and debug their code, a high school dropout can learn to program Sawyer in less than five minutes. Brooks once estimated that, all told, Sawyer (and his older brother, the two-armed Baxter robot) would work for a “wage” equivalent of less than $4 an hour.
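
The “teach by guiding the arm” workflow boils down to recording joint positions while a person moves the arm, then replaying them. Below is a rough Python sketch of that idea; the robot.read_joints() and robot.command_joints() calls stand in for whatever interface a collaborative robot exposes and are not Sawyer’s actual SDK.

import time

def record_demo(robot, duration_s=10.0, hz=20):
    """Sample joint angles while a person physically guides the arm through the task."""
    trajectory = []
    for _ in range(int(duration_s * hz)):
        trajectory.append(robot.read_joints())   # hypothetical joint-state query
        time.sleep(1.0 / hz)
    return trajectory

def replay_demo(robot, trajectory, hz=20):
    """Send the recorded joint angles back as commands to repeat the procedure."""
    for joints in trajectory:
        robot.command_joints(joints)             # hypothetical position command
        time.sleep(1.0 / hz)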

www.newsweek.com/2018/11/30/ai-and-automation-will-replace-most-human-workers-because-they-dont-have-be-1225552.html

Research has traditionally been broken down along simple lines. There’s basic, fundamental research into how the world works, and there’s applied research that attempts to take these insights and make something useful out of them. The two have very different end goals and require very different approaches to the research process.

But there’s a large gray area in between, where the approach is more applied but the end goal may be little more than “make something cool”: things like tiny flying robots or 3D computer displays that rely on beads levitated by lasers. How do researchers find direction for these open-ended engineering challenges?


The answer is “it depends.” That’s the answer we got when we had the chance to visit some of Columbia University’s robotics labs.

Sailing west
Hod Lipson’s lab is inspired by a seemingly simple question he posed at one point in our interview: “Can you 3D print a robot that can walk out of the printer, batteries included?” To get there, however, his approach is to head into the unknown—”just sailing west,” as he put it. As a result, our time in the lab included discussions of projects like using drones to identify crops that are being damaged by infestations and the use of 3D printers to create elaborate food.

 

 

January 30, 2019—Robots that are self-aware have been science fiction fodder for decades, and now we may finally be getting closer. Humans are unique in being able to imagine themselves—to picture themselves in future scenarios, such as walking along the beach on a warm sunny day. Humans can also learn by revisiting past experiences and reflecting on what went right or wrong. While humans and animals acquire and adapt their self-image over their lifetime, most robots still learn using human-provided simulators and models, or by laborious, time-consuming trial and error. Robots have not learned to simulate themselves the way humans do.

Columbia Engineering researchers have made a major advance in robotics by creating a robot that learns what it is, from scratch, with zero prior knowledge of physics, geometry, or motor dynamics. Initially the robot does not know if it is a spider, a snake, an arm—it has no clue what its shape is. After a brief period of “babbling,” and within about a day of intensive computing, their robot creates a self-simulation. The robot can then use that self-simulator internally to contemplate and adapt to different situations, handling new tasks as well as detecting and repairing damage in its own body. The work is published today in Science Robotics.

For the study, Lipson and his PhD student Robert Kwiatkowski used a four-degree-of-freedom articulated robotic arm. Initially, the robot moved randomly and collected approximately one thousand trajectories, each comprising one hundred points. The robot then used deep learning, a modern machine learning technique, to create a self-model. The first self-models were quite inaccurate, and the robot did not know what it was, or how its joints were connected. But after less than 35 hours of training, the self-model became consistent with the physical robot to within about four centimeters. The self-model performed a pick-and-place task in a closed loop system that enabled the robot to recalibrate its original position between each step along the trajectory based entirely on the internal self-model. With the closed loop control, the robot was able to grasp objects at specific locations on the ground and deposit them into a receptacle with 100 percent success.
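
As a rough illustration of the approach described above (a sketch, not the authors’ code), the self-model can be framed as a network that predicts the arm’s next state from its current state and motor command, trained on the randomly collected “babbling” data. The network size, the state encoding, and the placeholder data below are assumptions; the real system logged measured joint states and trained for roughly 35 hours.

import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 4, 4   # assumed encoding for a four-degree-of-freedom arm

class SelfModel(nn.Module):
    """Predict the next joint state from the current state and the commanded action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, STATE_DIM),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

# Placeholder for the ~1,000 random trajectories of 100 points each described above.
states = torch.randn(1000 * 100, STATE_DIM)
actions = torch.randn(1000 * 100, ACTION_DIM)
next_states = states + 0.1 * actions              # synthetic stand-in for measured outcomes

model = SelfModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(10):                           # toy loop; the real training ran far longer
    predicted = model(states, actions)
    loss = nn.functional.mse_loss(predicted, next_states)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# In closed-loop use, the robot would compare the self-model's prediction with the
# measured position after each step and recalibrate before the next move.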

engineering.columbia.edu/press-releases/lipson-self-aware-machines


That BBC article set out the terms: whether an AI system acts as though it has a soul will be determined by the beholder. For the religious and spiritual among us, a sufficiently advanced algorithm may seem to present a soul. Those people may treat it as such, since they will view the AI system’s intelligence, emotional expression, behavior, and perhaps even a belief in a god as signs of an internal something that could be defined as a soul.

As a result, machines containing some sort of artificial intelligence could simultaneously be seen as an entity or a research tool, depending on who you ask. Like with so many things, much of the debate over what would make a machine conscious comes down to what of ourselves we project onto the algorithms.

“I’m less interested in programming computers than in nurturing little proto-entities,” Nancy Fulda, a computer scientist at Brigham Young University, told Futurism. “It’s the discovery of patterns, the emergence of unique behaviors, that first drew me to computer science. And it’s the reason I’m still here.”

Fulda has trained AI algorithms to understand contextual language and is working to build a robotic theory of mind, a version of the principle in human (and some animal) psychology that lets us recognize others as beings with their own thoughts and intentions. But, you know, for robots.

futurism.com/artificial-consciousness

The point is that there’s every reason to believe that, given enough time and technological advancement, robots can best man in every physical and intellectual endeavor. This could yield a world where machines do all the work and humans are left to play and luxuriate, which presents its own problems — “An idle mind is the Devil’s workshop.” Or it could lead to something else quite devilish.

Remember that what’s conceptualized here isn’t just Robbie the Robot or Gort. On the horizon lie advances not just in robotics, but in nanotechnology and artificial DNA that have staggering implications. What if future self-replicating, self-aware robots were programmed not only with the capacity to learn but also had, or could develop, ambitions, a will, and emotions (Lipson says the latter are “possible in sort of more technical ways”)? Whether these things would be precisely what man possesses or computer analogs — and many would argue there’d be no difference — isn’t the point.

It’s that if these automatons surpassed us in worldly capacities and had or developed autonomy, well, “Biological species almost never survive encounters with superior competitors.”

Or, at least, so said Sun Microsystems cofounder Bill Joy in his famous essay “Why the Future Doesn’t Need Us.” Writing in 2000, Joy worried — and noted how many other scientists agreed — that “we may well not survive the encounter with the superior robot species,” as he put it. The kicker?

With the rate of technological advancement, he predicted that we’d likely have the capacity to create such mechanical menaces by 2030.

That time frame may now seem a tad unrealistic, as Lipson and other AI experts apparently agree, and, in truth, futurists have a terribly poor track record. Then there’s Oxford University computer science professor Dr. Nigel Shadbolt, who rejects the doom-and-gloom scenarios and says, “It is not artificial intelligence that should terrify you; it is natural stupidity.”

But that may be the point. Professor Lipson told RT that an AI “robot, like a child, will keep on learning on its own, and if it was raised in a good way, then I think things will go well.” That’s a big “if” for people in any age and brings us to what’s perhaps the crux of the problem. With our morals decaying rapidly as our scientific knowledge explodes, we face the prospect of being people with low-tech virtue trying to manage a hi-tech world.

www.thenewamerican.com/tech/computers/item/31374-rise-of-the-machines-and-descent-of-man-scientists-create-quasi-self-aware-robot

It spent approximately 35 hours moving about at random; however, intensive machine-learning computing was going on during this time.

Eventually, the robot was able to understand its own dimensions and capabilities.

It then began successfully performing exercises, picking up objects and placing them elsewhere.

Professor Hod Lipson, who leads the Creative Machines lab where the research was carried out, said: “This is perhaps what a newborn child does in its crib, as it learns what it is.

“We conjecture that this advantage may have also been the evolutionary origin of self-awareness in humans.

“While our robot’s ability to imagine itself is still crude compared to humans, we believe that this ability is on the path to machine self-awareness.”

The self-modeling robot was also used for other tasks, such as writing text using a marker.

Meanwhile, to test whether the self-model could detect damage to itself, the researchers 3D-printed a deformed part to simulate damage, and the robot was able to detect the change and re-train its self-model.

The new self-model enabled the robot to resume its pick-and-place tasks with little loss of performance.

“Self-awareness will lead to more resilient and adaptive systems, but also implies some loss of control,” the authors warn.

“It’s a powerful technology, but it should be handled with care.” The study is published in the journal Science Robotics.

finance.yahoo.com/news/robot-thinks-itself-scratch-brings-190000563.html

www.telegraph.co.uk/science/2019/01/30/robot-thinks-scratch-brings-forward-rise-self-aware-machines2/

alternate dissenting view on this!!!!!

THAT ROBOT IS NOT SELF-AWARE
The way the media cover AI, you’d almost think they had invented being hopelessly naïve
JAY RICHARDS FEBRUARY 5, 2019

…the really big morsel comes in the final sentence. “Self-imaging,” they speculate, “will be key to allowing robots to move away from the confinements of so-called narrow AI toward more general abilities. We conjecture that this separation of self and task may have also been the evolutionary origin of self-awareness in humans.”

This is the only reference to self-awareness in the paper. But even here, the authors apply it to humans, not to the robot, and they qualify it as a conjecture. They leave it to readers—and reporters—to connect the dots along the following lines: “If we became self-aware in this way, then perhaps the same thing is happening with this robot arm.” That’s no doubt the inference they’re hoping for, but they don’t risk stating it explicitly.

It is surely this brief concluding conjecture that provided the thread that the science correspondent then used to knit together a story about the rise of self-aware robots. Without that thread, the paper from Science Robotics would not likely have made international news (and found its way to news headline site Drudge, which loves robot stories).

The scientists are hardly blameless here. One of them even provides a quote not found in the paper itself: “This is perhaps what a newborn child does in its crib, as it learns what it is.”

If this is how The Telegraph reports on a robotic arm, can you imagine what it will sound like when we get humanoid robots who seem to carry on conversations? We had best inoculate ourselves now against AI hype from science reporters while most of us still have enough self-awareness to realize what’s going on.

mindmatters.ai/2019/02/that-robot-is-not-self-aware/

There are hundreds of companies competing to install all manner of food-preparation robots for restaurants. The upsurge seems driven by a rather predictable common denominator. In nearly every article I’ve read about Flippy or its competitors, high employee turnover, workplace injuries and above-market wage laws are cited.

Still, flipping burgers and teaching children are worlds apart, right? Consider that two weeks ago, Google unveiled an experimental service called Duplex. The service deploys a digital “personal assistant” which autonomously books appointments for its “client” by placing voice-based telephone calls and engaging in real-time conversations with humans who can’t tell they’re conversing with an artificial-intelligence machine.

In a demonstration, Duplex called a hair salon and negotiated an appointment time. It independently called a restaurant and successfully navigated an awkward conversation with the store’s cashier.

It did all this by recognizing the context of the conversation and responding appropriately, including using “ums,” “ahhs,” “hmms,” and human-sounding pauses. You can watch the revolutionary demonstration yourself right now. Just ask your smartphone to “Google it” for you.

After watching, ask yourself how difficult it will really be for artificial intelligence to revolutionize virtually any industry where natural intelligence is either regulated away or deliberately suppressed.

www.coloradoan.com/story/opinion/2018/05/17/opinion-teachers-being-replaced-ai-isnt-far-fetched-sounds/613302002/

As the robot explored, it discovered that some blocks are looser and require less pressure to move, while others are harder to budge. Like a human Jenga player, the robot has no way of knowing by sight alone what is going to be a good brick to tackle. “You look at the tower and your eyes don’t tell you anything about which piece you should touch,” says MIT mechanical engineer Alberto Rodriguez, coauthor on the paper. “That information comes from probing it—it requires interactive perception.” With both sight and touch, the physics of a Jenga tower become more apparent.

At least that was this robot’s experience. “We found that with about 200 to 300, sometimes 400 pushes, it builds a sufficiently rich model of physics that it can then play the game with,” says Fazeli. So like a human child, the robot learns basic physics not by going to school to get a Ph.D., but through real-world play. (For now, though, it’s only playing against itself.)
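
As a loose sketch of “learning physics by probing” (not the MIT team’s actual pipeline), each exploratory push can be logged as a small feature vector plus an outcome, and a simple classifier then estimates which blocks are likely to slide out freely. The feature names and the synthetic data below are assumptions made purely for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_pushes = 300                                    # the article cites roughly 200 to 400 pushes

# Synthetic probe log: applied force and felt resistance for each exploratory push.
force = rng.uniform(0.1, 2.0, n_pushes)
resistance = rng.uniform(0.0, 1.5, n_pushes)
moved_freely = (resistance < 0.6 * force).astype(int)   # stand-in outcome labels

features = np.column_stack([force, resistance])
model = LogisticRegression().fit(features, moved_freely)

# At play time, probe a candidate block gently, then ask the learned model how likely
# it is to slide out before committing to the extraction.
print(model.predict_proba([[0.5, 0.2]]))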

www.wired.com/story/a-robot-teaches-itself-to-play-jenga/

 

h/t Digital mix guy
