Artificial Intelligence News

Is Deep Learning a necessary ingredient for Artificial Intelligence?


The earliest artificial neural network, the Perceptron, was introduced about 65 years ago and consisted of only one layer. However, to handle more complex classification tasks, a more advanced architecture consisting of many consecutive feed-forward layers was later introduced. This multi-layer design is the critical component of today's deep learning algorithms. It improves the performance of analytical and physical tasks without human intervention, and lies behind everyday automation products such as new technologies for self-driving cars and autonomous chatbots.
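To make the contrast concrete, here is a minimal sketch of the original single-layer Perceptron, the architecture the article says preceded deep multi-layer networks. This is an illustrative toy implementation (the task, learning rate, and epoch count are arbitrary choices, not from the research discussed), trained on a linearly separable problem that one layer can solve:

```python
import numpy as np

# Minimal single-layer Perceptron (Rosenblatt-style): one weight vector,
# one bias, and a hard threshold -- no hidden layers at all. Deep learning
# architectures stack many such layers in sequence.
def perceptron_train(X, y, epochs=20, lr=0.1):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            # Classic perceptron update: move weights toward misclassified points.
            w += lr * (yi - pred) * xi
            b += lr * (yi - pred)
    return w, b

def perceptron_predict(X, w, b):
    return (X @ w + b > 0).astype(int)

# Linearly separable toy task (logical OR) -- solvable by a single layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])
w, b = perceptron_train(X, y)
print(perceptron_predict(X, w, b))  # prints [0 1 1 1]
```

A single layer like this cannot solve non-linearly-separable tasks (the classic XOR limitation), which is what motivated the multi-layer feed-forward architectures the article describes.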

Credit: Prof. Ido Kanter, Bar-Ilan University


The key question driving the new research, published today in Scientific Reports, is whether efficient learning of a non-trivial classification task can be achieved using shallow, brain-inspired feedforward networks, while potentially requiring less computational complexity. "Positive answers call into question the need for deep learning architectures, and perhaps lead to the development of unique hardware for efficient and fast implementation of shallow learning," said Prof. Ido Kanter, from the Department of Physics and the Multidisciplinary (Goldschmied) Brain Research Center at Bar-Ilan University, who led the research. "In addition, it will demonstrate how brain-inspired shallow learning has advanced computational capabilities with reduced complexity and energy consumption."

"We have shown that efficient learning on artificial shallow architectures can achieve the same level of classification success previously achieved by deep learning architectures consisting of many layers and filters, but with less computational complexity," said Yarden Tzach, a PhD student and contributor to this work. "However, the realization of an efficient shallow architecture requires changing the technological properties of advanced GPUs, and the development of dedicated hardware in the future," he added.
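A back-of-the-envelope calculation illustrates the trade-off at stake. The layer sizes below are invented for illustration and are not the architectures from the paper; the sketch only shows that a deep, narrow network and a shallow, wide one can have a comparable operation count while differing sharply in sequential depth, which is what matters for latency and for dedicated hardware:

```python
# Hypothetical comparison (layer sizes are illustrative, not from the paper):
# count multiply-accumulate operations (MACs) for one forward pass of a
# dense feedforward network given its layer widths.
def forward_macs(layer_sizes):
    """MACs for a dense net: sum of (fan_in * fan_out) over adjacent layers."""
    return sum(a * b for a, b in zip(layer_sizes[:-1], layer_sizes[1:]))

deep = [784] + [128] * 10 + [10]   # 10 hidden layers, 128 units each
shallow = [784, 314, 10]           # 1 wide hidden layer, similar MAC budget

print(forward_macs(deep))     # prints 249088
print(forward_macs(shallow))  # prints 249316
```

Although the two budgets are nearly equal, the shallow network needs only two sequential matrix multiplications instead of eleven, which is one reason the researchers suggest that shallow learning would benefit from hardware designed differently from today's GPUs.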

The efficient learning of brain-inspired shallow architectures is in line with Prof. Kanter's previous experimental research on the efficient learning of dendritic trees, which demonstrated sub-dendritic adaptation in neuronal cultures, along with other anisotropic properties of neurons, such as different spike waveforms, refractory periods and maximal transmission rates (see also videos on dendritic learning: https://vimeo.com/702894966).

For many years brain dynamics and the development of machine learning were researched independently, but more recently brain dynamics has been revealed as a source for new types of efficient artificial intelligence.



