Special Issue in Honor of Dr. Alexander Poznyak's 80th Birthday
Submission Date: 2026-01-31
Dr. Alexander Poznyak, a distinguished scholar and pioneer in the field of artificial neural networks, will celebrate his 80th birthday in December 2026. His groundbreaking research and visionary leadership have significantly shaped the trajectory of neural network research, inspiring countless researchers and practitioners worldwide. To commemorate his extraordinary contributions, we propose a special issue of Neurocomputing dedicated to honoring Dr. Poznyak's legacy. This special issue aligns with the journal's mission to advance the theory and application of neural networks. By showcasing the latest advancements in the field, we aim to inspire future generations of researchers and highlight the enduring impact of Dr. Poznyak's work.
Guest editors:
Dr. Isaac Chairez
Tecnologico de Monterrey, Monterrey, Mexico
Email: isaac.chairez@tec.mx
Fields of interest: Robotics, Neural networks, Fuzzy systems, Adaptive control, Data mining, Biomedical systems, Metabolic networks, and Dynamic games
Dr. Wen Yu
Centro de Investigación y Estudios Avanzados del Instituto Politécnico Nacional (CINVESTAV-IPN), Ciudad de México, Mexico
Email: yuw@ctrl.cinvestav.mx
Fields of interest: Robotics, Neural networks, Fuzzy systems, Adaptive control, Data mining
Special issue information:
This special issue invites comprehensive survey/review papers that address the latest advancements in neural networks and their applications. We encourage submissions of survey/review papers that build upon Dr. Poznyak's pioneering work and explore new frontiers in the field. Please note that only survey/review papers will be considered for this special issue; other types of papers, including original research articles, will not be accepted. Potential topics include, but are not limited to:
* Novel neural network architectures: Exploring innovative topologies for recurrent neural networks (RNNs) and differential neural networks (DNNs) with a focus on improved performance and efficiency.
* Game theory and neural networks: Developing theoretical frameworks for differential and static games with uncertain models using RNNs and DNNs, with applications in various domains.
* System identification and learning: Advancing parametric and non-parametric identification methods for deriving effective learning laws for RNNs and DNNs.
* Control and estimation: Designing robust controllers and estimators for systems with approximate dynamics based on RNN and DNN models.
* Stability analysis and learning: Applying Lyapunov stability theory to develop novel learning algorithms for neural networks with guaranteed stability and performance.
* Constrained optimization: Addressing state and control constraints in the dynamics of artificial neural networks for practical applications.
* Distributed parameter systems: Approximating complex distributed parameter systems using neural networks for efficient modeling and control.
* Physics-informed neural networks: Developing and applying physics-informed neural networks to solve challenging engineering and scientific problems.
* Biomedical and biotechnological applications: Exploring the use of neural networks in biomedical imaging, drug discovery, and other related areas.
* Chemical systems modeling and control: Applying neural networks to model and control complex chemical processes.
Manuscript submission information:
Important Dates:
Manuscript Submission Deadline: January 31, 2026
Final Acceptance Deadline: September 30, 2026
Prospective authors should follow the standard author instructions for Neurocomputing and submit their manuscripts online at https://www.editorialmanager.com/neucom/default.aspx. Authors must select "VSI: Dr. Alexander Poznyak 80th birthday" when they reach the "Article Type" step.
Please refer to the Guide for Authors (https://www.sciencedirect.com/journal/neurocomputing/publish/guide-for-authors) to prepare your manuscript.
For any further information, authors may contact the Guest Editors.
Keywords:
Differential Neural Networks; Recurrent Neural Networks; Advanced learning laws; Intelligent control