Publications

Articles

Pugavko, M.M., Maslennikov, O.V. & Nekorkin, V.I. Multitask computation through dynamics in recurrent spiking neural networks. Sci Rep 13, 3997 (2023).

https://doi.org/10.1038/s41598-023-31110-z

In this work, inspired by cognitive neuroscience experiments, we propose recurrent spiking neural networks trained to perform multiple target tasks. These models are designed by treating neurocognitive activity as computation through dynamics. Trained on input–output examples, these spiking neural networks are reverse engineered to find the dynamic mechanisms that are fundamental to their performance. We show that considering multitasking and spiking within one system provides insight into the principles of neural computation.

Maslennikov, O.V., Pugavko, M.M., Shchapin, D.S. & Nekorkin, V.I. Nonlinear dynamics and machine learning of recurrent spiking neural networks. Phys. Usp. 65 (2022).

https://doi.org/10.3367/UFNe.2021.08.039042

The review describes the main results in the field of design and analysis of recurrent spiking neural networks for modeling functional brain networks. Key terms and definitions from the field of machine learning are given. The main approaches to constructing and studying spiking and rate neural networks trained to perform specific cognitive functions are presented. Modern neuromorphic hardware systems that imitate information processing in the brain are described. Principles of nonlinear dynamics that make it possible to identify how neural networks perform their target tasks are discussed.

Pugavko, M.M., Maslennikov, O.V. & Nekorkin, V.I. Dynamics of a recurrent spiking neural network in the two-alternative choice task. Radiophys. Quantum Electron. 64, 736–749 (2022).

https://doi.org/10.1007/s11141-022-10175-2

We reveal the dynamic mechanism by which an artificial recurrent network of spiking neurons solves the cognitive task of two-alternative choice. An approach to designing a functional network model based on machine learning methods is described. The formation of a modular coupling structure during training is established. The properties of the network response that underlie the performance of the target task are found.

Pugavko, M.M., Maslennikov, O.V. & Nekorkin, V.I. Dynamics of a network of map-based model neurons for supervised learning of a reservoir computing system. Izvestiya VUZ. Applied Nonlinear Dynamics 28, 77–89 (2020).

https://doi.org/10.18500/0869-6632-2020-28-1-77-89

The purpose of this work is to develop a reservoir computing system containing a network of discrete-time model neurons and to study its characteristics when it is trained to autonomously generate a harmonic target signal. The methods combine nonlinear dynamics (phase-space analysis as a function of parameters), machine learning (reservoir computing, supervised error minimization), and computer modeling (implementation of numerical algorithms, plotting of characteristics and diagrams).

Results. A reservoir computing system based on a network of coupled discrete model neurons was constructed, and the possibility of its supervised training to generate a target signal using the FORCE error-minimization method was demonstrated. It was found that the mean square learning error decreases with increasing network size. The dynamic regimes arising in the individual activity of reservoir neurons at various stages of training were studied. It is shown that during training the reservoir network transitions from a state of spatiotemporal disorder to a state with regular clusters of spiking activity. The values of the coupling coefficients and of the parameters of the intrinsic neuron dynamics corresponding to the minimum learning error were found.

Conclusion. A new reservoir computing system is proposed whose basic unit is the Courbage–Nekorkin discrete-time model neuron. The advantage of a network based on such a spiking neuron model is that the model is specified as a map, so no numerical integration is required. The proposed system proved effective in learning to autonomously generate a harmonic function, as well as a number of other target functions.

Pugavko, M.M., Maslennikov, O.V. & Nekorkin, V.I. Dynamics of spiking map-based neural networks in problems of supervised learning. Commun. Nonlinear Sci. Numer. Simul. 90, 105399 (2020).

https://doi.org/10.1016/j.cnsns.2020.105399

Recurrent networks of artificial spiking neurons trained to perform target functions are a promising tool for understanding the dynamic principles of information processing in computational neuroscience. Here, we develop a system of this type based on a map-based model of neural activity that produces various biologically relevant regimes. The target signals used to train the network in a supervised manner are sinusoidal functions of different frequencies. The impact of individual neuron dynamics, coupling strength, network size, and other key parameters on the learning error is studied. Our findings suggest, among other things, that firing-rate heterogeneity, as well as a mixture of spiking and nonspiking regimes among the neurons comprising the network, can improve its performance over a wider range of target frequencies. At the level of single-neuron activity, successful training gives rise to well-separated domains with qualitatively different dynamics.

Conferences

The Dynamics of a Spiking Neural Network in a Two-Alternative Choice Task

https://bik.sfu-kras.ru/ft/LIB2/ELIB/b22/free/i-559413.pdf

We present the results of studying an artificial spiking neural network that, after training, performs a target function modeling the process of solving the cognitive task of two-alternative choice. In this task a subject (a human or animal in an experiment, or some computational system) draws a conclusion, based on a presented stimulus, about which of two possible properties it possesses. In a classical experiment, a monkey observes a cloud of moving dots on a screen for a finite period; some of the dots move randomly, while others move in one of two designated directions. The monkey then communicates its decision about the direction of predominant motion through eye movements. In this study, the task is formalized as a target function: comparing the mean values of two noisy input signals and choosing the larger one. A spiking neural network of integrate-and-fire neurons is implemented and trained in a supervised manner with the e-prop method to perform the target function. The dynamic mechanisms underlying its operation are identified using methods of nonlinear dynamics.
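The target function described above — compare the means of two noisy input streams and choose the larger — can be sketched with a generic evidence accumulator. This is a hypothetical formalisation for illustration, not the trained spiking network itself; the trial length, noise level, and means are assumed values.

```python
# Two-alternative choice as a target function: accumulate the difference
# of two noisy inputs and report the sign of the accumulated evidence.
# A generic integrator stands in for the trained spiking network.
import numpy as np

def two_afc_trial(mu_a, mu_b, steps=200, noise=1.0, seed=0):
    rng = np.random.default_rng(seed)
    a = mu_a + noise * rng.standard_normal(steps)  # noisy stimulus A
    b = mu_b + noise * rng.standard_normal(steps)  # noisy stimulus B
    evidence = np.cumsum(a - b)                    # accumulated difference
    return +1 if evidence[-1] > 0 else -1          # +1: choose A, -1: choose B

# With a clear difference in means, the accumulator almost always
# selects the stream with the larger mean.
choices = [two_afc_trial(0.5, 0.0, seed=s) for s in range(100)]
accuracy = sum(c == +1 for c in choices) / len(choices)
```

Averaging over many noise samples is what makes the decision reliable, which mirrors the evidence-integration interpretation of the random-dot experiment.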

Network Dynamics of Spiking Neurons in the Two-Alternative Choice Task

https://hpc-education.unn.ru/files/conference_hpc/2021/MMST2021_Proceedings.pdf

In this work, an artificial neural network based on spiking neuron models is developed and trained to perform the cognitive task of two-alternative choice. A dynamic mechanism for solving this cognitive task is identified. The characteristics of the trained network are analyzed for various network sizes, input-stimulus characteristics, and learning parameters such as the learning rate and training time.

Spatiotemporal Dynamics of a Network of Discrete Neuron Models under Supervised Learning

https://hpc-education.unn.ru/files/conference_hpc/2020/MMST2020_Proceedings.pdf

Recurrent networks of artificial spiking neurons trained to perform target functions represent a promising tool for understanding the dynamic principles of information processing in computational neuroscience. This paper presents a system of this type based on a discrete neuron model. The target signals used for supervised network training are sinusoidal functions of various frequencies. The spatiotemporal dynamics before and after training are analyzed. The influence of the neurons' intrinsic dynamics and of the number of neurons in the network on the quality of learning is investigated. We also consider the case of a heterogeneous network.

Machine Learning and the Dynamics of Spiking Reservoir Neural Networks

http://sessiann.ru/files/24_teh_est_mat.pdf

Here we present the results of machine learning using systems that replicate one of the basic properties of real neurons: the ability to generate spike sequences under certain conditions. It is shown that spiking machine-learning systems have several advantages: competition-based rules can be used for their training, energy consumption is reduced, and so on.

Chaotic spatiotemporal dynamics of a chain of bistable maps

https://www.dropbox.com/scl/fi/918hbwlditqu9xhgvxzty/BookShWsh18.pdf?rlkey=cv2dq4uysdmje6og6m1pgn19m&dl=0

We study a model of a chain of bistable maps with piecewise-linear nonlinearity. It is shown that this model demonstrates two modes of behavior. In the first mode, spatial disorder is realized; in the second, spatiotemporal chaos. The transition from the first mode to the second occurs through a period-doubling bifurcation. It is shown that a chaotic attractor is the mathematical image of the spatiotemporal chaos. Its characteristics, such as the Lyapunov and fractal dimensions, are calculated numerically. The parameter region corresponding to the chaotic attractor is also estimated and compared with numerical calculations. Spatiotemporal plots demonstrating spatial disorder and spatiotemporal chaos are constructed.
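The frozen "spatial disorder" regime described above can be illustrated with a generic coupled-map lattice. Note the assumptions: a smooth bistable map f(x) = tanh(a·x) stands in for the authors' piecewise-linear nonlinearity, and the chain length, coupling strength, and iteration count are chosen for illustration only.

```python
# Chain of diffusively coupled bistable maps: for weak coupling the
# lattice freezes into a site-dependent pattern pinned near the two
# stable fixed points of the local map (spatial disorder).
import numpy as np

a, d, n_sites, steps = 2.0, 0.01, 32, 2000
rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, n_sites)               # random initial pattern
for _ in range(steps):
    f = np.tanh(a * x)                            # bistable local map
    lap = np.roll(f, 1) - 2 * f + np.roll(f, -1)  # periodic diffusive coupling
    x_new = f + d * lap
    delta = np.max(np.abs(x_new - x))             # per-step change of the pattern
    x = x_new
# After the transient, delta is essentially zero: the spatial pattern is
# disordered but stationary in time.
```

Increasing the coupling or steepening the local map destabilizes such frozen patterns, which is the qualitative route toward the spatiotemporal chaos studied in the paper.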

Chaotic Spatiotemporal Dynamics in a Chain of Interconnected Bistable Mappings

http://old.rf.unn.ru/rus/sci/books/18/pdf/oscill.pdf

Studying the processes of formation and evolution of spatiotemporal chaos in distributed dissipative systems is one of the topical problems of modern nonlinear physics. An important class of such media consists of systems with a discrete spatial coordinate, in which the role of the coordinate is played by the index of the active element in the system. Examples of systems in this class include networks of self-sustained oscillators, neural networks, laser arrays, and others. In some cases, the state of the system changes discretely not only in space but also in time.