Recurrent neural networks (RNNs) are a special type of artificial neural network that can process sequences, such as time series or sentences, through an internal state that serves as a memory. However, RNNs are known to be difficult to train, especially on long sequences. Indeed, when gradients are backpropagated through a large number of timesteps, they are prone to either vanish or explode, making it difficult to learn long-term dependencies. Previous work (Vecoven et al., 2021) introduced RNNs with multistable dynamics and showed that multistability can improve the learning of such dependencies. In this new paper (Lambrechts et al., 2023), we extend this idea by first deriving a measure of multistability, called the variability amongst attractors (VAA). This metric is then used to reveal the correlation between the reachable multistability of an RNN and its ability to learn long-term dependencies, in both supervised and reinforcement learning settings. Second, we establish a differentiable approximation of this measure. Gradient ascent steps can then be performed on a standard RNN using batches of sequences, in order to maximise that approximation. This promotes multistability within the RNN's internal dynamics, and it applies to any RNN, including the classical GRU and LSTM networks. Finally, we test this new pretraining method, called the warmup, on both supervised and reinforcement learning benchmarks. RNNs pretrained with the warmup are shown to learn long-term dependencies faster and better than their non-pretrained counterparts.
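The idea of a multistability score and its differentiable relaxation can be sketched on a toy scalar recurrence. Everything below is an illustrative assumption rather than the paper's exact definitions: the map `tanh(w * h)`, the hard threshold `eps`, and the sigmoid `temperature` are all placeholders chosen for clarity.

```python
import numpy as np

def iterate(h0, w=3.0, steps=50):
    """Toy scalar recurrence h_{t+1} = tanh(w * h_t).

    For w > 1 this map is bistable: it has two attracting fixed points
    of opposite sign, so different initial states can settle into
    different attractors (a minimal form of multistability)."""
    h = np.asarray(h0, dtype=float)
    for _ in range(steps):
        h = np.tanh(w * h)
    return h

def vaa(finals, eps=1e-2):
    """Hard multistability score: the fraction of pairs of final states
    that remain distinct, i.e. that reached different attractors."""
    n = len(finals)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    distinct = sum(abs(finals[i] - finals[j]) > eps for i, j in pairs)
    return distinct / len(pairs)

def soft_vaa(finals, temperature=100.0):
    """Differentiable relaxation: the hard threshold is replaced by a
    sigmoid of the pairwise distance, so the score admits gradients and
    could be maximised by gradient ascent on the RNN parameters."""
    n = len(finals)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    scores = [1.0 / (1.0 + np.exp(-temperature * abs(finals[i] - finals[j])))
              for i, j in pairs]
    # A sigmoid of a non-negative distance lies in [0.5, 1]; rescale to [0, 1].
    return sum(2.0 * (s - 0.5) for s in scores) / len(pairs)

# Two initial states in each basin of attraction: 4 of the 6 pairs
# end up in different attractors.
finals = iterate([-1.0, -0.5, 0.5, 1.0])
print(round(vaa(finals), 3))  # → 0.667
```

In the actual warmup, the role of `iterate` is played by the RNN unrolled on a batch of sequences, and gradient ascent is performed on the soft score with respect to the network's parameters before the usual task training begins.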
Synopsis: In a spirited discussion about energy strategies, Anna champions the cause of cost-effectiveness, while Lucas emphasizes the importance of energy return on investment. As tensions rise, Eva introduces a groundbreaking approach that could bridge their differences.
Who would not dream of a prosperous Hainaut where the employment rate would reach unprecedented levels? Who would not dream of Belgium benefiting from the energy produced by its wind turbines in the North Sea in order to decarbonise its economy? A decarbonised energy that would serve Belgian businesses, the economy and citizens. This dream was within our reach. But the politicisation of a dossier that is crucial for our region moves us a little further away from it every day.
The research positions involve combining modelling, simulation, optimisation, and machine learning techniques in order to investigate several technical, economic and regulatory aspects raised by major upcoming changes in energy (and in particular, electricity) generation, [...]
Despite achieving impressive performance on various tasks, modern artificial intelligence (AI) systems have become complex black-box models. A growing body of work aspires to open the box and understand its internal functioning. In this new article (Lambrechts et al., 2022), we follow this line of research by studying the internal representations that intelligent agents learn through reinforcement learning (RL) when acting in partially observable environments (POEs). In particular, we study the informational content of the memory of those agents once they are trained to act optimally in maze and orientation tasks.