Abstract: Recurrent Neural Networks (RNNs) have long been central to modeling sequential data, powering applications in natural language processing, speech recognition, and time-series analysis. However, traditional RNN architectures face challenges such as vanishing gradients, limited scalability stemming from their inherently sequential computation, and difficulty in capturing long-range dependencies. Recent advances have introduced modern variants and hybrid approaches that significantly expand the capabilities of recurrent models.
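To make the vanishing-gradient challenge concrete, here is a brief illustrative sketch using the standard vanilla-RNN formulation, which the abstract itself does not spell out: with hidden state h_t = tanh(W h_{t-1} + U x_t), backpropagation through time multiplies the recurrent Jacobians across steps,

\[ \frac{\partial \mathcal{L}}{\partial h_1} \;=\; \frac{\partial \mathcal{L}}{\partial h_T} \prod_{t=2}^{T} \frac{\partial h_t}{\partial h_{t-1}}, \qquad \frac{\partial h_t}{\partial h_{t-1}} = \mathrm{diag}\!\big(1 - h_t^2\big)\, W, \]

so whenever the norm of these Jacobians stays below 1 the product shrinks geometrically with sequence length, which is why long-range dependencies are hard for plain recurrent models to learn and why the architectural remedies discussed in this talk matter.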
In this talk, Dr. Bapi Chatterjee and Dr. Paweł Wawrzyński from IDEAS NCBR will present the latest developments in recurrent neural networks, focusing on how these innovations are reshaping sequential learning. The session will explore architectural improvements, integration with attention mechanisms, and the role of RNNs in complementing transformer-based models.
Key themes will include:
Advances in recurrent architectures for improved efficiency and stability.
Techniques for handling long-term dependencies in complex sequences.
Applications of modern RNNs in language, vision, and reinforcement learning.
Comparative insights on the evolving relationship between RNNs and transformers.
The presentation will highlight both theoretical progress and practical case studies, demonstrating how modern RNNs continue to play a vital role in AI research and applications. By revisiting and reimagining recurrent models, the talk will underscore their enduring relevance in the rapidly evolving landscape of machine learning.