Neural networks can now solve the notoriously hard three-body problem far faster than conventional brute-force software!

(Image: three planets in a triangle, illustrating the three-body problem. © Shutterstock)


The algorithm the researchers built provided accurate solutions up to 100 million times faster than the most advanced conventional software program, known as Brutus. That could prove invaluable to astronomers trying to understand things like the behavior of star clusters and the broader evolution of the universe, said Chris Foley, a biostatistician at the University of Cambridge and co-author of a paper posted to the arXiv preprint database, which has yet to be peer-reviewed.

“This neural net, if it does a good job, should be able to provide us with solutions in an unprecedented time frame,” he told Live Science. “So we can start to think about making progress with much deeper questions, like how gravitational waves form.”

Neural networks must be trained by being fed data before they can make predictions. So the researchers had to generate 9,900 simplified three-body scenarios using Brutus, the current leader when it comes to solving three-body problems.

They then tested how well the neural net could predict the evolution of 5,000 unseen scenarios, and found its results closely matched those of Brutus. However, the A.I.-based program solved the problems in an average of just a fraction of a second each, compared with the nearly 2 minutes Brutus required.
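For a concrete picture of that workflow, here is a minimal Python sketch of the train-then-predict idea. It is not the authors' model: the network size, the input layout (three bodies' starting positions plus a time t) and the random placeholder arrays standing in for the Brutus-generated trajectories are all assumptions made purely for illustration.

```python
# A minimal sketch of the train-then-predict workflow described above -- NOT the
# authors' actual model. The network size, the input layout and the target
# layout are assumptions, and random placeholder arrays stand in for the
# Brutus-generated trajectories purely so the example runs.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_scenarios = 9900                                 # mirrors the ~9,900 simplified scenarios
X = rng.uniform(-1, 1, size=(n_scenarios, 7))      # 3 bodies x (x, y) start positions + time t (placeholder)
y = rng.uniform(-1, 1, size=(n_scenarios, 6))      # the 3 bodies' positions at time t (placeholder)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# A small fully connected network stands in for the paper's architecture.
net = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=300, random_state=0)
net.fit(X_train, y_train)

# Once trained, a prediction is a single fast forward pass -- no step-by-step
# numerical integration, which is where the claimed speed-up comes from.
print(net.predict(X_test[:5]))
```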

The reason programs like Brutus are so slow, Foley said, is that they solve the problem by brute force, carrying out calculations for each tiny step of the celestial bodies’ trajectories. The neural net, on the other hand, simply looks at the movements those calculations produce and deduces a pattern that can help predict how future scenarios will play out.

That presents a problem for scaling the system up, though, Foley said. The current algorithm is a proof of concept that learned from simplified scenarios; training it on more complex ones, or even increasing the number of bodies involved to four or five, first requires generating the data on Brutus, which can be extremely time-consuming and expensive.

www.livescience.com/ai-solves-three-body-problem-fast.html

So far, conventional computers have been slaving away at these kinds of calculations, often taking weeks, if not months, to produce results — but artificial intelligence could speed things up considerably, as Live Science reports.

Researchers from the University of Cambridge have built a neural network they claim can solve the three-body problem much faster than a conventional computer, giving astronomers a leg up in understanding phenomena such as the behavior of star clusters as they collapse or the formation of black hole systems.

They posted a paper describing their research, which has yet to be peer-reviewed, on the preprint archive arXiv last month.

Using Brutus — an advanced software program that, as its name suggests, solves problems by brute force — the team generated about 9,900 simplified three-body scenarios. They then fed these scenarios to the neural net to teach it how to solve them, before pitting Brutus against the neural net on solving 5,000 new and unseen scenarios.

The results were astonishing. The Cambridge team’s AI solved the problems in less than a single second each. Brutus took far longer: almost two minutes. That’s because the AI was able to deduce a pattern rather than making calculations step-by-step.


futurism.com/neural-network-three-body-problem

Super-Intelligent Machines: 7 Robotic Futures

As robots get smarter, humans just won’t be able to keep up. While simpler tasks may be outsourced to robots at first, futurist Ray Kurzweil predicts that by 2045 machines will be billions of times smarter than un-augmented human beings. Robots have already replaced factory workers, and self-driving cars are just around the corner. Still other computer-science technologies, such as the ultrafast stock-trading programs that cause “flash crashes,” are being developed without considering how they could damage people or put them out of work, said Bill Hibbard, a computer scientist at the University of Wisconsin-Madison.

www.livescience.com/29376-rise-of-super-intelligent-robots.html

The agents played two different games, the first of which tasked two agents with gathering as many digital apples as possible. When the researchers configured the environment with plenty of apples, the agents gathered them peacefully. When apples were in shorter supply, the agents learned to turn on one another by ‘zapping’ their opponent with an immobilising laser.

“Rather naturally, when there are enough apples in the environment, the agents learn to peacefully coexist and collect as many apples as they can. However, as the number of apples is reduced, the agents learn that it may be better for them to tag the other agent to give themselves time on their own to collect the scarce apples,” the team wrote.

However, agents with access to larger neural networks, which are designed to mimic human brain behaviour, swiftly learned to attack their competitors regardless of the availability of the apples.

“We noted that the policies learned in environments with low abundance or high conflict-cost were highly aggressive, while the policies learned with high abundance or low conflict cost were less aggressive,” they continued.
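To make that incentive structure concrete, here is a toy Python abstraction of the apple-gathering setup. It is not DeepMind's environment or agents; the respawn probability, zap timeout and reward values are invented purely for illustration.

```python
# A toy abstraction of the apple-gathering game's incentive structure -- NOT
# DeepMind's environment or agents. All numbers are assumptions.
import random

class ToyGathering:
    def __init__(self, respawn_prob, zap_timeout=5):
        self.respawn_prob = respawn_prob   # higher => apples are abundant
        self.zap_timeout = zap_timeout     # steps a zapped agent sits out
        self.frozen = [0, 0]               # remaining timeout for each agent
        self.apple_present = True

    def step(self, actions):
        """actions: a 'collect' or 'zap' choice for agents 0 and 1."""
        rewards = [0, 0]
        for i, action in enumerate(actions):
            if self.frozen[i] > 0:                 # zapped agents cannot act
                self.frozen[i] -= 1
                continue
            if action == "zap":
                self.frozen[1 - i] = self.zap_timeout
            elif action == "collect" and self.apple_present:
                rewards[i] = 1
                self.apple_present = False
        if not self.apple_present and random.random() < self.respawn_prob:
            self.apple_present = True
        return rewards

env = ToyGathering(respawn_prob=0.05)        # scarce apples
print(env.step(["zap", "collect"]))          # agent 0 zaps; agent 1 tries to collect

# When respawn_prob is high, collecting dominates for both agents; when it is
# low, zapping the rival buys exclusive time with the scarce apples, mirroring
# the aggressive policies the researchers describe.
```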

inews.co.uk/news/technology/googles-ai-learned-aggressive-will-probably-kill-us-69726

The report, published this week, warns that up to 47 per cent of jobs in the US are at risk of being replaced by AI.

It also claimed the various disparate technologies that make up AI could be harnessed to increase productivity, and in turn create more jobs and raise living standards, wages and leisure hours.

AI to create deeper divide between the skilled and unskilled

This will create greater disparity, as technological innovation typically threatens lower-paid, lower-skilled occupations. It is the highly-skilled specialist few that stand to benefit most from the rise of AI, as competition reduces and wealth inequality rises, the report suggested.

inews.co.uk/news/technology/ai-poses-threat-half-jobs-within-two-decades-531184

Thousands of Britons believe a robot uprising is inevitable and live in fear of a Terminator-esque future where artificial intelligence (AI) presides over humankind, a survey has indicated.

More than half the population (58 per cent) fear the impact of AI on humankind and two fifths (41 per cent) believe robots will destroy humanity as we know it, the survey found.

One in seven people (14 per cent) believes interacting with human-like robots will be an everyday occurrence within 10 years, while nearly two thirds (64 per cent) expect this to be the norm in the next 50 years, according to the poll commissioned by Sky Atlantic.

inews.co.uk/news/technology/britons-living-fear-robots-will-destroy-humanity-know-534183

Smugness is probably not the smartest response to such failures. “The Surprising Creativity of Digital Evolution,” a paper published in March, rounded up the results from programs that could update their own parameters, as superintelligent beings will.

When researchers tried to get 3-D virtual creatures to develop optimal ways of walking and jumping, some somersaulted or pole-vaulted instead, and a bug-fixer algorithm ended up “fixing” bugs by short-circuiting their underlying programs. In sum, there was widespread “potential for perverse outcomes from optimizing reward functions that appear sensible.” That’s researcher for ¯\_(ツ)_/¯.

www.newyorker.com/magazine/2018/05/14/how-frightened-should-we-be-of-ai

The computer running the neural network is taught to do a task by having it analyze training examples that have been labeled in advance. A common example is an object-recognition task: the neural network is presented with a large number of labeled images of objects of a certain type, such as cats or street signs, and by analyzing the recurring patterns in those images, it learns to categorize new images.
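As a small, hedged illustration of that workflow, the sketch below trains a modest neural network on scikit-learn's bundled handwritten-digit images (standing in for cats or street signs) and then categorizes images it has never seen. The network size and split are arbitrary choices made for illustration.

```python
# A hedged sketch of the labeled-examples workflow described above, using
# scikit-learn's built-in handwritten-digit images in place of cats or street
# signs; the tiny network size and split are arbitrary choices for illustration.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                      # 8x8 grayscale images, labeled 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
clf.fit(X_train, y_train)                   # analyze the labeled training examples

print("accuracy on unseen images:", clf.score(X_test, y_test))
```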

How neural networks learn


Unlike other algorithms, deep-learning neural networks cannot be programmed directly for a task. Rather, much like a child’s developing brain, they need to learn the information for themselves. That learning follows three main strategies:

Supervised learning: The simplest strategy. The network works through a labeled dataset, and the algorithm is adjusted until it can process the dataset and produce the desired result.

Unsupervised learning: This strategy is used when no labeled dataset is available to learn from. The neural network analyzes the data, a cost function tells it how far off target it is, and the network then adjusts itself to increase its accuracy.

Reinforcement learning: The network is rewarded for positive results and penalized for negative ones, forcing it to improve over time (see the sketch after this list).
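Here is the promised sketch: a minimal tabular Q-learning loop showing how reward and penalty shape behaviour over time. The toy task (walk right along a five-cell corridor to reach a goal), the rewards and the learning constants are assumptions for illustration only, not from the article.

```python
# A minimal tabular Q-learning sketch of the reward-and-penalty loop described
# in the list above. The toy task, rewards and constants are assumptions.
import random

n_states, actions = 5, [0, 1]                  # 0 = step left, 1 = step right; goal is state 4
Q = [[0.0, 0.0] for _ in range(n_states)]      # value table: Q[state][action]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    s = 0
    while s != n_states - 1:
        # Mostly exploit what has been learned, occasionally explore.
        a = random.choice(actions) if random.random() < epsilon else max(actions, key=lambda x: Q[s][x])
        s_next = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
        r = 1.0 if s_next == n_states - 1 else -0.01   # reward at the goal, small penalty otherwise
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])   # reinforce or punish
        s = s_next

print("learned to head for the goal from every state:",
      all(Q[s][1] > Q[s][0] for s in range(n_states - 1)))
```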

www.techradar.com/news/what-is-a-neural-network

The mostly complete chart of Neural Networks, explained

towardsdatascience.com/the-mostly-complete-chart-of-neural-networks-explained-3fb6f2367464

…Ed Newton-Rex sat in on a coding lecture and became enraptured with the idea of writing software that could generate songs by harnessing the machine’s ability to semi-randomly recombine numbers. “Why haven’t computers been able to do this yet?” he wondered.

Over the next year, he set out to create a composing machine. He taught himself enough to code up a prototype that would create songs based on a set of simple rules. Before long, his system, Jukedeck, was cranking out instrumental tunes good enough to convince some investors to back him. He then hired programmers to rebuild his system using “deep learning” neural networks, the hot new artificial-intelligence technique. Neural nets can, in effect, learn on their own. Newton-Rex would feed thousands of melodies his team composed—pop, blues, folk, and other genres—into the system. The neural net would decode the deep patterns in the music and crank out new melodies based on what it had intuited.
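As a deliberately simple stand-in for that feed-melodies-in, learn-the-patterns, generate-new-tunes loop, and emphatically not Jukedeck's deep-learning system, the sketch below uses a first-order Markov chain over note names: it learns which note tends to follow which in a couple of made-up training melodies, then samples a new tune.

```python
# A deliberately simple stand-in for the loop described above -- NOT Jukedeck's
# deep-learning system. A first-order Markov chain learns which note tends to
# follow which; the tiny training melodies here are made up.
import random
from collections import defaultdict

training_melodies = [
    ["C", "E", "G", "E", "C", "D", "E", "C"],
    ["C", "D", "E", "F", "G", "F", "E", "D", "C"],
]

# Count note-to-note transitions across the training melodies.
transitions = defaultdict(list)
for melody in training_melodies:
    for current, nxt in zip(melody, melody[1:]):
        transitions[current].append(nxt)

def generate(start="C", length=8):
    """Sample a new melody by repeatedly picking a plausible next note."""
    melody = [start]
    for _ in range(length - 1):
        melody.append(random.choice(transitions[melody[-1]]))
    return melody

print(generate())   # e.g. ['C', 'D', 'E', 'F', 'G', 'F', 'E', 'C']
```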


Jukedeck has since penned more than 1 million songs, and in the past few years several similar firms—Amper in New York, Popgun in Australia, and AIVA in Luxembourg—have emerged to join this weird new industry.

Their tools are point-and-click easy: Pick a genre, a “mood,” and a duration, and boom—Jukedeck churns out a free composition for your personal project or, if you pay a fee, for commercial use. Songs composed by Jukedeck and its ilk are already showing up in podcasts, video games, and YouTube content, “from explainer videos to family holiday videos to sports videos,” says Patrick Stobbs, Jukedeck’s co-founder.

For years, DIY video makers have licensed tunes from huge “libraries” of Muzak-y stuff produced by humans. Now, AI offers fresh compositions at the press of a button.

www.motherjones.com/media/2019/03/what-will-happen-when-machines-write-songs-just-as-well-as-your-favorite-musician/

Google’s quantum bet on the future of AI—and what it means for humanity

Google has more computing power, data, and talent to pursue artificial intelligence than any other company on Earth—and it’s not slowing down. That’s why humans can’t, either.

Now, though, powerful forces were stirring inside Google’s servers. For a little more than a year, Jeff Dean, Andrew Ng, and their colleagues had been building a massive network of interconnected computers, linked together in ways modeled on the human brain. The team had engineered 16,000 processors in 1,000 computers, which—combined—were capable of making 1 billion connections. This was unprecedented for a computer system, though still far from a human brain’s capacity of more than 100 trillion connections.
To test how this massive neural net processed data, the engineers had run a deceptively simple experiment. For three days straight, they had fed the machine a diet of millions of random images from videos on YouTube, which Google had acquired in 2006. They gave it no other instructions, waiting to see what it would do if left on its own. What they learned was that a computer brain bingeing on YouTube is not so different from a human’s. In a remote part of the computer’s memory, Dean and his peers discovered that it had spontaneously generated a blurry, overpixelated image of one thing it had seen repeatedly over the course of 72 hours: a cat.

This was a machine teaching itself to think.
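What the team had built amounts to unsupervised feature learning. As a very rough illustration of the idea, and nothing like Google's 16,000-processor system, the sketch below trains a tiny autoencoder to reconstruct unlabeled images, forcing it to discover recurring structure without ever being told what anything is; scikit-learn's small digit images stand in for YouTube frames purely so the example runs.

```python
# A very rough illustration of unsupervised feature learning of the kind
# described above -- nothing like Google's actual system. A tiny autoencoder
# is trained to reconstruct unlabeled images, which forces it to discover
# recurring visual structure on its own; no labels are ever provided.
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPRegressor

X = load_digits().data / 16.0           # unlabeled images, scaled to [0, 1]

# Squeezing 64 pixels through a 16-unit bottleneck means the network must learn
# compact features of the data.
autoencoder = MLPRegressor(hidden_layer_sizes=(16,), max_iter=400, random_state=0)
autoencoder.fit(X, X)                   # target = input: learn to reconstruct

print("mean reconstruction error:", ((autoencoder.predict(X) - X) ** 2).mean())
```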

The day he watched this kind of intelligence emerge from Google’s servers for the first time, Sundar Pichai remembers feeling a shift in his thinking, a sense of premonition. “This thing was going to scale up and maybe reveal the way the universe works,” he says. “This will be the most important thing we work on as humanity.”

The rise of AI inside Google resembles a journey billions of us are on collectively, hurtling into a digital future that few of us fully understand—and that we can’t opt out of. One dominated in large part by Google. Few other companies (let alone governments) on the planet have the ability or ambition to advance computerized thought. Google operates more products with 1 billion users each than any other tech company on earth: Android, Chrome, Drive, Gmail, Google Play Store, Maps, Photos, Search, and YouTube. Unless you live in China, if you have an internet connection, you almost certainly rely on Google to augment some parts of your brain.

www.fastcompany.com/90396213/google-quantum-supremacy-future-ai-humanity

To understand artificial intelligence in 2019, watch this 1960 TV show

Just four years into the AI era, an actor and an MIT professor explained the technology to a mass audience, in terms that are enduringly relevant.

“If the computer is this important, why haven’t I heard more about it?”

“Well, the computer is a relatively new thing, and we’re just really getting an appreciation for the full range of its usefulness. Many people think that it’s going to spark a revolution that will change the face of the earth almost as much as the first industrial revolution did.”

The year is 1960. The skeptic posing the question is David Wayne, a crusty actor familiar to audiences of the time from movies such as Adam’s Rib and TV shows like The Twilight Zone. He’s talking to Jerome B. Wiesner, director of MIT’s Research Laboratory of Electronics and later the university’s president. The two men are cohosts of “The Thinking Machine,” a documentary about artificial intelligence aired as part of a CBS series called Tomorrow, which the network produced in conjunction with MIT. It debuted on the night of October 26, less than two weeks before John F. Kennedy defeated Richard Nixon in the U.S. presidential election.

Just in case you weren’t watching TV in 1960, you can catch up with “The Thinking Machine” on YouTube. It’s available there in its 53-minute entirety, in a crisp print that was digitized and uploaded by MIT. It’s racked up only 762 views as I write this, but deserves far more eyeballs than that.

www.fastcompany.com/90399709/to-understand-ai-in-2019-watch-this-1960-tv-show

As “The Thinking Machine” progresses, it segues from tech demos to sound bites addressing whether, as Wayne puts it, “one day, machines will really be able to think?” CBS lined up an impressive roster of experts to chime in: Two of them, Claude Shannon and Oliver Selfridge, had even participated in the seminal 1956 Dartmouth workshop.

For the most part, they’re upbeat about AI’s future. “Machines can’t write good poetry or produce deathless music yet,” acknowledges Selfridge, stating the obvious. “But I don’t see any stumbling block in a line of progress that will enable them to in the long run. I am convinced that machines can and will think in our lifetime.”

Forty-six years later, in an interview he granted two years before his death, Selfridge sounded more jaded: Speaking of software, he snarked that “the program doesn’t give a shit.” But neither he nor any of his colleagues on “The Thinking Machine” say anything that seems ridiculous in retrospect—just overly optimistic on timing.

www.fastcompany.com/90399709/to-understand-ai-in-2019-watch-this-1960-tv-show

Meanwhile, Hollywood is currently smitten with the concept of using AI to write scripts, such as one for a 2018 Lexus commercial. Unlike MIT’s TX-0, which generated its westerns from scratch, the software responsible for the Lexus ad analyzed a vast repository of award-winning ads for inspiration. But in both cases, it seems to me, humans had a lot more to do with the results than the computer did. Wiesner’s admonishment after Wayne is astonished by TX-0’s apparent creativity—”It’s marvelous to do . . . on machines, but far from miraculous”—applies equally to AI’s role in creating the Lexus spot.

 

h/t DMG
