That’s it. Scientists are ready to unleash computers that can “THINK”!

This mapping indicated the possibility of training physical wave systems to learn complex features in temporal data using the standard training techniques developed for neural networks. As a proof of principle, they demonstrated an inverse-designed, inhomogeneous medium that performed English vowel classification on raw audio signals as the waveforms scattered and propagated through it. The scientists achieved performance comparable to a standard digital implementation of a recurrent neural network. The findings pave the way for a new class of analog machine learning platforms that process information quickly and efficiently in the signal’s native domain.

The recurrent neural network (RNN) is an important machine learning model widely used to perform tasks including natural language processing and time series prediction. The team trained wave-based physical systems to function as an RNN and passively process signals and information in their native domain, without analog-to-digital conversion, which yielded a substantial gain in speed and a reduction in power consumption. In this framework, instead of implementing circuits that deliberately route signals back to the input, the recurrence relationship occurs naturally in the time dynamics of the physics itself: the waves, as they propagate through space, provide the memory capacity for information processing.
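
To make the recurrence analogy concrete: a standard RNN updates its hidden state from the previous state plus the current input, and a leapfrog discretization of the scalar wave equation has exactly that shape, with the field’s two most recent time steps playing the role of the hidden state. Here is a rough 1-D sketch; the grid size, wave speed, and injection point are arbitrary choices of mine, not the paper’s setup:

```python
import numpy as np

# RNN recurrence:   h_t = f(W @ h_{t-1} + x_t)
# Wave recurrence:  u_{t+1} = 2*u_t - u_{t-1} + (c*dt/dx)**2 * Laplacian(u_t) + input
# The pair (u_t, u_{t-1}) acts as the hidden state; the physics supplies the
# "weight matrix" for free, and inverse design shapes it by patterning the medium.

n, c, dt, dx = 128, 1.0, 0.5, 1.0                  # grid points, speed, steps (arbitrary)

def wave_step(u, u_prev, x_t):
    lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)   # discrete Laplacian (periodic ends)
    u_next = 2 * u - u_prev + (c * dt / dx) ** 2 * lap
    u_next[0] += x_t                               # inject the raw input signal
    return u_next, u                               # new hidden state

u, u_prev = np.zeros(n), np.zeros(n)
for x_t in np.sin(0.2 * np.arange(200)):           # toy input waveform
    u, u_prev = wave_step(u, u_prev, x_t)
```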

Phase-change memory

Our device uses phase-change memory (PCM) for in-memory computing. PCM records synaptic weights in its physical state along a gradient between amorphous and crystalline. The conductance of the material changes along with its physical state and can be modified using electrical pulses. This is how PCM is able to perform calculations. Because the state can be anywhere along the continuum between zero and one, it is considered an analog value, as opposed to a digital value, which is either a zero or a one, nothing in between.
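
Why a continuum of conductances amounts to computation: store each weight as a conductance, apply the inputs as voltages, and Ohm’s law plus Kirchhoff’s current law deliver the weighted sums of a neural-network layer as currents, with no digital multiply at all. A toy model with invented numbers:

```python
import numpy as np

G = np.array([[0.2, 0.7, 0.1],    # synaptic weights stored as conductances, each an
              [0.9, 0.3, 0.5]])   # analog value on the amorphous-crystalline continuum
V = np.array([1.0, 0.4, 0.8])     # input signal applied as voltages

# Ohm's law per cell (I = G*V) and Kirchhoff's law per output wire (currents add):
I = G @ V                         # the crossbar's output currents ARE the dot products
print(I)
```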

phys.org/news/2020-01-physics-analog-recurrent-neural-network.html

The algorithm they built provided accurate solutions up to 100 million times faster than the most advanced software program, known as Brutus. That could prove invaluable to astronomers trying to understand things like the behavior of star clusters and the broader evolution of the universe, said Chris Foley, a biostatistician at the University of Cambridge and co-author of a paper posted to the arXiv database, which has yet to be peer-reviewed.

“This neural net, if it does a good job, should be able to provide us with solutions in an unprecedented time frame,” he told Live Science. “So we can start to think about making progress with much deeper questions, like how gravitational waves form.”

Neural networks must be trained by being fed data before they can make predictions. So the researchers had to generate 9,900 simplified three-body scenarios using Brutus, the current leader when it comes to solving three-body problems.

They then tested how well the neural net could predict the evolution of 5,000 unseen scenarios, and found its results closely matched those of Brutus. However, the A.I.-based program solved the problems in an average of just a fraction of a second, compared with nearly 2 minutes.

The reason programs like Brutus are so slow is that they solve the problem by brute force, said Foley, carrying out calculations for each tiny step of the celestial bodies’ trajectories. The neural net, on the other hand, simply looks at the movements those calculations produce and deduces a pattern that can help predict how future scenarios will play out.
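
The contrast is easy to see in code. In the toy sketch below, a naive Euler integrator in G = 1 units stands in for Brutus (which uses far more careful arithmetic), and the network’s interface is schematic:

```python
import numpy as np

def brute_force(pos, vel, masses, dt, n_steps):
    """Advance every body through many tiny steps, recomputing all
    pairwise gravitational forces each time (toy Euler integrator)."""
    for _ in range(n_steps):
        acc = np.zeros_like(pos)
        for i in range(len(masses)):
            for j in range(len(masses)):
                if i != j:
                    r = pos[j] - pos[i]
                    acc[i] += masses[j] * r / np.linalg.norm(r) ** 3
        vel = vel + acc * dt
        pos = pos + vel * dt
    return pos

# The trained network replaces the whole loop with one learned mapping:
#   pos_at_t = model.predict(initial_conditions, t)    # a single forward pass
```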

That presents a problem for scaling the system up, though, Foley said. The current algorithm is a proof of concept and learned from simplified scenarios, but training on more complex ones, or even increasing the number of bodies involved to four or five, first requires generating the data with Brutus, which can be extremely time-consuming and expensive.

www.livescience.com/ai-solves-three-body-problem-fast.html

Analog Neural Circuit and Hardware Design of Deep Learning Model

In this study, we used analog electronic multiplier and sample-and-hold circuits, with the connection weights represented by input voltages, which makes it easy to change the connection coefficients. The model runs entirely on analog electronic circuits, so it can finish the learning process in a very short time and enables more flexible learning. However, the original structure included only a single-input, single-output network; we increased the number of units and network layers. Moreover, we suggest the possibility of realizing a hardware implementation of the deep learning model.

www.sciencedirect.com/science/article/pii/S1877050915022644

Benedek said a second-generation Bzigo device, already built but kept out of the public eye for now, will automatically dispatch a flying “nano-drone” to kill targeted mosquitoes, sparing people from getting blood on their hands.

“A nano-drone flies from a docking station on the device, goes to the mosquito, kills it, and it comes back to recharge,” Benedek said, prompting a nearby visitor to the CES booth to laugh uncontrollably.

Bzigo has already raised a million dollars in funding, and is out to raise $5 million to begin mass production of the device, which will be about the size of an apple when it reaches market, according to Benedek.

techxplore.com/news/2020-01-bzigo-mosquitoes-death.html

from 2013:

Objectives: Recently, systems based on brain signals have been developing rapidly. In contrast with the usual computer-controlled systems, these systems are controlled by the “human brain”: the brain signal is used to control the system through a brain-computer interface. A system under such human-computer integrated control via a brain-computer interface is called a brain control system. “Brain control” touches several fields, including neuroscience, cognitive science, control science, medicine, computer science, and psychology. It represents a new frontier of interdisciplinary research and has attracted wide attention from researchers.

The development of brain control has reached a new stage. We systematically introduce the achievements of brain control research, carefully analyze the problems encountered at the current stage, and outline the future development directions and requirements of brain control systems.

f5000.istic.ac.cn/f5000_en/paper?code=S026201303002&utm_source=TrendMD&utm_medium=cpc&utm_campaign=F5000_TrendMD_1

from December 2018:
This week, at the International Electron Devices Meeting (IEDM) and the Conference on Neural Information Processing Systems (NeurIPS), IBM researchers will showcase new hardware that will take AI further than it’s been before: right to the edge. Our novel approaches for digital and analog AI chips boost speed and slash energy demand for deep learning, without sacrificing accuracy. On the digital side, we’re setting the stage for a new industry standard in AI training with an approach that achieves full accuracy with eight-bit precision, accelerating training time by two to four times over today’s systems.

On the analog side, we report eight-bit precision—the highest yet—for an analog chip, roughly doubling accuracy compared with previous analog chips while consuming 33x less energy than a digital architecture of similar precision.
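
For readers wondering what “eight-bit precision” means mechanically, here is a generic sketch of symmetric linear quantization; this is my illustration of the general idea, not IBM’s actual scheme:

```python
import numpy as np

def quantize_int8(x):
    """Map float values to int8 with one shared scale (generic symmetric scheme)."""
    scale = float(np.max(np.abs(x))) / 127.0
    if scale == 0.0:
        scale = 1.0                      # avoid division by zero on all-zero input
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
print(np.max(np.abs(w - dequantize(q, s))))  # worst-case rounding error stays small
```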

These achievements herald a new era of computing hardware designed to unleash the full potential of AI.

phys.org/news/2018-12-dual-bit-breakthroughs-ai-edge.html

from 2014!!!!!


We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has low memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, by which Adam was inspired, are discussed.

We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm.
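
For reference, the update rule this abstract describes fits in a few lines. Below is a plain NumPy transcription of Algorithm 1 from the paper, with the paper’s default hyper-parameters (variable names are mine):

```python
import numpy as np

def adam_step(theta, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update (Kingma & Ba 2014, Algorithm 1); t starts at 1."""
    m = beta1 * m + (1 - beta1) * g          # biased first-moment estimate
    v = beta2 * v + (1 - beta2) * g ** 2     # biased second-moment estimate
    m_hat = m / (1 - beta1 ** t)             # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# AdaMax, mentioned at the end, replaces v with an infinity-norm running max:
#   u = max(beta2 * u, abs(g));  theta -= (lr / (1 - beta1**t)) * m / u
```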

arxiv.org/abs/1412.6980

from LAST MONTH!!!

A new perspective in understanding of Adam-Type algorithms and beyond
Zeyi Tao, Qi Xia, Qun Li
25 Sep 2019 (modified: 24 Dec 2019) · ICLR 2020 Conference Blind Submission
TL;DR: A new perspective in understanding of Adam-Type algorithms
Abstract: First-order adaptive optimization algorithms such as Adam play an important role in modern deep learning thanks to their very fast convergence on large-scale optimization problems. However, Adam’s non-convergent behavior and disappointing generalization ability have given it a love-hate relationship with the deep learning community. Previous studies of Adam and its variants (referred to as Adam-type algorithms) rely mainly on theoretical regret-bound analysis, which overlooks the natural characteristics of such algorithms and limits our understanding of them.

In this paper, we seek a different interpretation of Adam-type algorithms so that we can intuitively comprehend and improve them. The approach we chose is based on a traditional online convex optimization scheme known as the mirror descent method.
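
For context, the mirror descent scheme they refer to is the classical online update (standard textbook form, not the paper’s notation): pick the next iterate by trading the linearized loss off against a Bregman divergence from the current point,

$$x_{t+1} = \arg\min_{x} \; \eta \, \langle g_t, x \rangle + B_\psi(x, x_t), \qquad B_\psi(x, y) = \psi(x) - \psi(y) - \langle \nabla \psi(y), \, x - y \rangle.$$

With $\psi(x) = \tfrac{1}{2}\lVert x \rVert^2$ this is plain gradient descent; letting $\psi$ vary over time as a quadratic weighted by the second-moment estimate gives, roughly, an Adam-style preconditioned step, which is the bridge the authors exploit.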

By bridging Adam and mirror descent, we obtain a clear map of the functionality of each part of Adam. In addition, this new angle brings new insight into identifying the non-convergence issue of Adam. Moreover, we provide a new Adam-type variant, AdamAL, which naturally mitigates Adam’s non-convergence issue and improves its performance.

We further conduct experiments on various popular deep learning tasks and models, and the results are quite promising.

openreview.net/forum?id=SyxM51BYPB

FROM TWO WEEKS AGO!!! THIS IS IT, PEOPLE!!

To the best of our knowledge, we are the first to propose an adaptive moment estimation (Adam) algorithm based on batch gradient descent (BGD) to design a time-domain equalizer (TDE) for pulse-amplitude modulation (PAM)-based optical interconnects. The Adam algorithm has been widely applied in the field of artificial intelligence.

For TDE, the BGD-based Adam algorithm can obtain globally optimal tap coefficients without being trapped in locally optimal tap coefficients. Therefore, fast and stable convergence can be achieved by the BGD-based Adam algorithm with low mean square error. Meanwhile, the BGD-based Adam algorithm is implemented by parallel processing, which is more efficient than conventional serial algorithms, such as least mean square and recursive least square algorithms.
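
As a rough picture of what such an equalizer looks like, here is a hypothetical NumPy sketch; the tap count, step size, and circular delay matrix are all my assumptions, not the paper’s design:

```python
import numpy as np

def train_taps(x, target, n_taps=15, iters=500, lr=0.01,
               beta1=0.9, beta2=0.999, eps=1e-8):
    """Fit FIR equalizer tap coefficients by full-batch (BGD) MSE
    minimization with Adam. Hypothetical illustration only."""
    # Column k holds the received signal delayed by k samples
    # (np.roll wraps around; a real equalizer would zero-pad instead).
    X = np.column_stack([np.roll(x, k) for k in range(n_taps)])
    taps = np.zeros(n_taps)
    taps[0] = 1.0                          # start from a pass-through filter
    m = np.zeros(n_taps)
    v = np.zeros(n_taps)
    for t in range(1, iters + 1):
        e = X @ taps - target              # full-batch residual
        g = 2.0 * X.T @ e / len(x)         # exact gradient of the MSE
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        taps -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return taps
```

Because every iteration sees the whole batch, the gradient is exact rather than noisy, which is what gives the fast, stable convergence the authors describe; the matrix products are also embarrassingly parallel, unlike the sample-by-sample updates of LMS or RLS.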

The experimental results demonstrate that the BGD-based Adam feed-forward equalizer works well in 120-Gbit/s PAM8 optical interconnects. In conclusion, the BGD-based Adam algorithm shows great potential for converging the tap coefficients of TDE in future optical interconnects.

www.osapublishing.org/ol/abstract.cfm?uri=ol-45-1-141

from 2018!! from GOOGLE!!!

Our analysis suggests that the convergence issues can be fixed by endowing such algorithms with “long-term memory” of past gradients, and propose new variants of the Adam algorithm which not only fix the convergence issues but often also lead to improved empirical performance.
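
The best-known variant from this work is AMSGrad, and its “long-term memory” is a single extra line: the second-moment estimate is never allowed to shrink, so the effective step size cannot bounce back up. A minimal sketch:

```python
import numpy as np

def amsgrad_step(theta, g, m, v, v_max, t, lr=1e-3,
                 beta1=0.9, beta2=0.999, eps=1e-8):
    """One AMSGrad step: Adam, but the denominator uses the running
    maximum of the second-moment estimate. (As published, AMSGrad
    also drops Adam's bias-correction terms.)"""
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    v_max = np.maximum(v_max, v)              # the "long-term memory" fix
    theta = theta - lr * m / (np.sqrt(v_max) + eps)
    return theta, m, v, v_max
```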

research.google/pubs/pub47409

 

this is from JANUARY 2020!!

Based on this idea, we propose the weighted adaptive gradient method framework (WAGMF) and implement the WADA algorithm on this framework. Moreover, we prove that WADA can achieve a weighted data-dependent regret bound, which could be better than the original regret bound of ADAGRAD when the gradients decrease rapidly.

This bound may partially explain the good performance of ADAM in practice. Finally, extensive experiments demonstrate the effectiveness of WADA and its variants in comparison with several variants of ADAM on training convex problems and deep neural networks.
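
The excerpt doesn’t spell out the weighting scheme, so purely as an illustration of the general idea (not WADA’s actual algorithm), a weighted variant of ADAGRAD’s accumulator might look like:

```python
import numpy as np

def weighted_adagrad_step(theta, g, s, w_sum, t, lr=0.1, eps=1e-8):
    """Accumulate squared gradients with time-dependent weights.
    The choice w_t = t is my placeholder, not the paper's scheme."""
    w = float(t)
    s = s + w * g ** 2                  # weighted second-moment accumulator
    w_sum = w_sum + w
    theta = theta - lr * g / (np.sqrt(s / w_sum) + eps)
    return theta, s, w_sum
```

Plain ADAGRAD is the special case w_t = 1 with no normalization; up-weighting recent gradients is one way to let the step size recover when gradients decrease rapidly, which is the regime where the paper claims a better bound.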

link.springer.com/article/10.1007/s11704-019-8457-x


November 12, 2019 — Blaize today emerged from stealth and unveiled a groundbreaking next-generation computing architecture that precisely meets the demands and complexity of new computational workloads found in artificial intelligence (AI) applications. Driven by advances in energy efficiency, flexibility, and usability, Blaize products enable a range of existing and new AI use cases in the automotive, smart vision, and enterprise computing segments, where the company is engaged with early access customers.

These AI systems markets are projected to grow rapidly as the disruptive influence of AI transforms entire industries and AI functionality becomes a “must-have” requirement for new products.

ces20.mapyourshow.com/8_0/exhibitor/exhibitor-details.cfm?ExhID=T0012455

 

h/t dmg
