AI everywhere! New method allows full AI on basic laptops! Supercomputers could run vastly larger neural networks!

Scientists have developed a method that quadratically accelerates artificial intelligence (AI) training algorithms.

This brings full AI capability to inexpensive computers and, within one to two years, could allow supercomputers to run artificial neural networks that quadratically exceed the capabilities of today's networks. The scientists presented their method on June 19 in the journal Nature Communications.

Read more at:

phys.org/news/2018-06-ai-method-power-artificial-neural.html#jCp

The proposed method, dubbed Sparse Evolutionary Training (SET), takes inspiration from biological networks and in particular neural networks that owe their efficiency to three simple features: networks have relatively few connections (sparsity), few hubs (scale-freeness) and short paths (small-worldness). The work reported in Nature Communications demonstrates the benefits of moving away from fully-connected ANNs (as done in common AI), by introducing a new training procedure that starts from a random, sparse network and iteratively evolves into a scale-free system. At each step, the weaker connections are eliminated and new links are added at random, similarly to a biological process known as synaptic shrinking.
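The prune-and-regrow cycle is simple enough to sketch in code. Below is a minimal, illustrative NumPy version of one SET evolution step for a single weight matrix; the function names, the 30 percent prune fraction and the layer sizes are assumptions made for the example, not values taken from the paper.

```python
# A minimal, illustrative sketch of one Sparse Evolutionary Training (SET) step
# for a single weight matrix. Names and constants are placeholders, not the
# paper's implementation.
import numpy as np

rng = np.random.default_rng(0)

def init_sparse_weights(n_in, n_out, density=0.05):
    """Start from a random, sparse connectivity mask with the given density."""
    mask = rng.random((n_in, n_out)) < density
    weights = rng.normal(0.0, 0.1, size=(n_in, n_out)) * mask
    return weights, mask

def evolve_connections(weights, mask, prune_fraction=0.3):
    """Remove the weakest fraction of existing links and regrow the same
    number of new links at random empty positions (new weights start near zero)."""
    active = np.flatnonzero(mask)
    n_prune = int(prune_fraction * active.size)
    if n_prune == 0:
        return weights, mask

    # Prune: drop the connections with the smallest absolute weight.
    weakest = active[np.argsort(np.abs(weights.flat[active]))[:n_prune]]
    mask.flat[weakest] = False
    weights.flat[weakest] = 0.0

    # Regrow: add the same number of links at random empty positions.
    empty = np.flatnonzero(~mask)
    new_links = rng.choice(empty, size=n_prune, replace=False)
    mask.flat[new_links] = True
    weights.flat[new_links] = rng.normal(0.0, 0.01, size=n_prune)
    return weights, mask

# Usage: between training epochs, call evolve_connections on each layer.
w, m = init_sparse_weights(784, 300)
w, m = evolve_connections(w, m)
print("active connections:", int(m.sum()), "of", m.size)
```

In a full implementation this evolution step would be interleaved with ordinary gradient-descent training of the surviving weights, so the sparse topology and the weights adapt together.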

The striking acceleration effect of this method has enormous significance, as it will allow the application of AI to problems that are not currently tractable due to the vast number of parameters. Examples include affordable personalized medicine and complex systems. In complex, rapidly changing environments such as smart grids and social systems, where frequent on-the-fly retraining of an ANN is required, improvements in learning speed (without compromising accuracy) are essential. In addition, because such training can be achieved with limited computational resources, the proposed SET method is well suited to the embedded intelligence of the many distributed devices connected to a larger system.

Frog brain

Concretely, with SET any user can build an artificial neural network of up to 1 million neurons on their own laptop, whereas with state-of-the-art methods this was previously reserved for expensive computing clouds.

This does not mean that cloud computing is no longer useful. It is. Imagine what you can build on it with SET. Currently the largest artificial neural networks, built on supercomputers, are roughly the size of a frog brain (about 16 million neurons). Once some technical challenges are overcome, SET may let us build, on those same supercomputers, artificial neural networks approaching the size of the human brain (about 80 billion neurons).

Read more at:

phys.org/news/2018-06-ai-method-power-artificial-neural.html#jCp

Identifying rare pathologies in medical images has presented a persistent challenge for researchers, because of the scarcity of images that can be used to train AI systems in a supervised learning setting.

Professor Shahrokh Valaee and his team have designed a new approach: using machine learning to create computer generated X-rays to augment AI training sets.

“In a sense, we are using machine learning to do machine learning,” says Valaee, a professor in The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE) at the University of Toronto. “We are creating simulated X-rays that reflect certain rare conditions so that we can combine them with real X-rays to have a sufficiently large database to train the neural networks to identify these conditions in other X-rays.”
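As a rough illustration of how synthetic images can pad out a scarce class, the sketch below tops up a rare condition with generator-produced X-rays before training. The synthesize_xrays placeholder stands in for whatever generative model the MIMLab actually uses, and all image sizes and counts are invented for the example.

```python
# A hedged sketch of the augmentation idea: pad a scarce class with synthetic
# images from a generative model before training a classifier. The "generator"
# here is a stand-in (random noise images), not the MIMLab model.
import numpy as np

rng = np.random.default_rng(1)

def synthesize_xrays(n_images, height=128, width=128):
    """Placeholder for a trained generative model that produces synthetic
    X-rays showing a rare condition."""
    return rng.random((n_images, height, width)).astype(np.float32)

def build_training_set(real_images, real_labels, rare_label, target_count):
    """Top up a rare class with synthetic examples until it reaches target_count."""
    n_rare = int(np.sum(real_labels == rare_label))
    n_needed = max(0, target_count - n_rare)
    synthetic = synthesize_xrays(n_needed, *real_images.shape[1:])
    images = np.concatenate([real_images, synthetic])
    labels = np.concatenate([real_labels, np.full(n_needed, rare_label)])
    return images, labels

# Usage with toy data: 200 real X-rays, only 5 showing the rare condition.
real_images = rng.random((200, 128, 128)).astype(np.float32)
real_labels = np.array([1] * 5 + [0] * 195)
images, labels = build_training_set(real_images, real_labels, rare_label=1, target_count=100)
print(images.shape, int(np.sum(labels == 1)))  # (295, 128, 128) 100
```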

Valaee is a member of the Machine Intelligence in Medicine Lab (MIMLab), a group of physicians, scientists and engineering researchers who are combining their expertise in image processing, artificial intelligence and medicine to solve medical challenges. “AI has the potential to help in a myriad of ways in the field of medicine,” says Valaee. “But to do this we need a lot of data—the thousands of labelled images we need to make these systems work just don’t exist for some rare conditions.”

Read more at:

phys.org/news/2018-07-artificial-intelligence-x-rays.html#jCp

Researchers have shown that it is possible to train artificial neural networks directly on an optical chip. The significant breakthrough demonstrates that an optical circuit can perform a critical function of an electronics-based artificial neural network and could lead to less expensive, faster and more energy efficient ways to perform complex tasks such as speech or image recognition.

“Using an optical chip to perform neural network computations more efficiently than is possible with digital computers could allow more complex problems to be solved,” said research team leader Shanhui Fan of Stanford University. “This would enhance the capability of artificial neural networks to perform tasks required for self-driving cars or to formulate an appropriate response to a spoken question, for example. It could also improve our lives in ways we can’t imagine now.”

An artificial neural network is a type of artificial intelligence that uses connected units to process information in a manner similar to the way the brain processes information. Using these networks to perform a complex task, for instance voice recognition, requires the critical step of training the algorithms to categorize inputs, such as different words.


Although optical artificial neural networks were recently demonstrated experimentally, the training step was performed using a model on a traditional digital computer and the final settings were then imported into the optical circuit. In Optica, The Optical Society’s journal for high impact research, Stanford University researchers report a method for training these networks directly in the device by implementing an optical analogue of the ‘backpropagation’ algorithm, which is the standard way to train conventional neural networks.
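For readers unfamiliar with the algorithm being emulated, backpropagation is just the chain rule applied layer by layer. The tiny NumPy network below is a generic textbook sketch of it, not the Stanford group's optical implementation.

```python
# A generic NumPy sketch of backpropagation for a one-hidden-layer sigmoid
# network, included only to show what the optical circuit has to emulate.
import numpy as np

rng = np.random.default_rng(2)

# Toy data: learn XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: chain rule from the squared error back to every parameter.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out);  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # approaches [[0], [1], [1], [0]] for most seeds
```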

Read more at:

phys.org/news/2018-07-closer-optical-artificial-neural-network.html#jCp

One research team wanted the help of an AI system that could analyze and classify Fermilab's detector data. MENNDL, a system developed at Oak Ridge National Laboratory (ORNL), evaluated 500,000 candidate neural networks in 24 hours. Its final solution proved superior to custom models developed by human scientists.

In another case involving a collaboration with St. Jude Children’s Research Hospital in Memphis, MENNDL improved the error rate of a human-designed algorithm for identifying mitochondria inside 3D electron microscopy images of brain tissue by 30 percent.

“We are able to do better than humans in a fraction of the time at designing networks for these sort of very different datasets that we’re interested in,” Young says.

What makes MENNDL particularly adept is its ability to identify the optimal hyperparameters—the key variables—for tackling a particular dataset.

“You don’t always need a big, huge deep network. Sometimes you just need a small network with the right hyperparameters,” Young says.
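MENNDL's evolutionary search itself is not spelled out here, but the underlying idea, i.e. evaluating many candidate hyperparameter settings and keeping the best one, can be sketched generically. The random-search loop below, and its train_and_score stand-in, are illustrative assumptions rather than MENNDL's actual algorithm.

```python
# A generic random hyperparameter search, sketched to illustrate the idea
# behind tools like MENNDL; train_and_score is a stand-in for actually
# training a network and reporting validation accuracy.
import random

random.seed(3)

SEARCH_SPACE = {
    "learning_rate": [1e-1, 1e-2, 1e-3, 1e-4],
    "hidden_layers": [1, 2, 3, 4],
    "hidden_units": [32, 64, 128, 256],
    "dropout": [0.0, 0.25, 0.5],
}

def sample_config():
    return {name: random.choice(values) for name, values in SEARCH_SPACE.items()}

def train_and_score(config):
    """Placeholder: train a network with this config and return validation
    accuracy. Here it is faked with a deterministic toy score."""
    return (0.5 + 0.1 * config["hidden_layers"]
            - 0.05 * config["dropout"]
            - abs(config["learning_rate"] - 1e-2))

best_config, best_score = None, float("-inf")
for _ in range(50):   # a real system like MENNDL evaluates hundreds of thousands
    config = sample_config()
    score = train_and_score(config)
    if score > best_score:
        best_config, best_score = config, score

print(best_config, round(best_score, 3))
```

An evolutionary search would additionally mutate and recombine the best-scoring configurations between generations instead of sampling each candidate independently.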

The team at ORNL expects MENNDL to make an even bigger impact beginning next year, when the lab's next supercomputer, Summit, comes online. While Summit will have only 4,600 nodes, it will sport the latest GPU technology from Nvidia and CPUs from IBM, delivering more than five times the computational performance of Titan, currently the world's fifth-most powerful supercomputer.

“We’ll be able to look at much larger problems on Summit than we were able to with Titan and hopefully get to a solution much faster,” Young says.

singularityhub.com/2018/01/03/ai-uses-titan-supercomputer-to-create-deep-neural-nets-in-less-than-a-day/#sm.0001rgaehmj35evwtip1742gt87v4

Artificial Neural Networks (ANNs) are at the very heart of the AI revolution that is shaping every aspect of society and technology. But the ANNs we have been able to handle so far are nowhere near solving very complex problems. The very latest supercomputers would struggle with a 16 million-neuron network (roughly the size of a frog brain), while a powerful desktop computer would need over a dozen days to train a mere 100,000-neuron network.


technologist.eu/ai-method-increases-the-power-of-artificial-neural-networks/

Meanwhile, the next step is fast approaching: technology that makes full use of quantum effects, such as quantum computers, but also quantum sensors, simulation and communication. Because of the broad impact this technology will have on society, in areas such as the development of new medicines, data security and energy conversion, the European Commission (EC), among others, speaks of a 'second quantum revolution'. This year, the EC launched the Flagship on Quantum Technologies, with no less than one billion euros in research funding over ten years to ensure that this new technology finds its way into industry and society.

technologist.eu/tu-e-launches-center-for-quantum-materials-and-technology/

h/t Digital mix guy
