The model architecture can be seen in Fig. 1C. In the CRBM, the past nodes are conditioned on, serving as a trial-specific bias; these units are shown in orange in Fig. 1C. Again, learning with this architecture requires only a small change to the energy function of the RBM and can be achieved through contrastive divergence. The CRBM is arguably the most successful of the temporal RBM models to date and has been shown to both model and generate data from complex dynamical systems such as human motion capture data and video textures (Taylor, 2009).
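For orientation, the conditioning used by the CRBM can be written in the standard form of Taylor and colleagues, in which the history enters the energy only through dynamic biases; this is a generic restatement for the reader, not necessarily the exact parameterization used here:

E(v_t, h_t \mid v_{t-1}, \ldots, v_{t-M}) = -v_t^\top W h_t - \hat{b}^\top v_t - \hat{c}^\top h_t, \quad \hat{b} = b + \sum_{k=1}^{M} A_k v_{t-k}, \quad \hat{c} = c + \sum_{k=1}^{M} B_k v_{t-k}.

Because the history only shifts the biases, the contrastive divergence updates of the static RBM carry over essentially unchanged.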

Much of the motivation for this work is to gain insight into the typical evolution of the learned hidden-layer features, or receptive fields (RFs), present in natural movie stimuli. With the existing CRBM this is not possible, as it cannot explicitly model the evolution of hidden features without resorting to a deep network architecture. Sparse coding models, as proposed by Cadieu and Olshausen (2008), overcome this restriction by learning complex filters, allowing for phase dynamics by multiplying the filters by complex weights whose dynamics are governed by phase variables. However, the evolution of the filters is modelled only indirectly through the phase variables, which does not allow for a direct biological interpretation. The TRBM, in comparison, provides an explicit representation of the evolution of hidden features but, as we show, can be difficult to train using the standard algorithm. While this model is not directly biologically motivated, its artificial neural network structure allows for a biological interpretation of its function, and indeed, producing a spiking neural network implementation of this approach would make for interesting future research. Here we present a new pre-training method for the TRBM, called Temporal Autoencoding (aTRBM), that dramatically improves its performance in modelling temporal data.

Training procedure: The energy of the model is given by Eq. (1); the model is essentially an M-th order autoregressive RBM, which is usually trained by standard contrastive divergence (Sutskever and Hinton, 2007). Here we propose to train it with a novel approach that highlights the temporal structure of the stimulus. The training method is summarized in Table 1. First, the individual RBM visible-to-hidden weights W are initialized through contrastive divergence learning with a sparsity constraint on static samples of the dataset. Then, to ensure that the hidden-to-hidden weights Wt encode the dynamic structure of the ensemble, we initialize them by pre-training in the fashion of a denoising autoencoder, as described in the next section. Once the Temporal Autoencoding is completed, the whole model (both visible-to-hidden and hidden-to-hidden weights) is trained jointly using contrastive divergence (CD).
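To make the schedule concrete, the following is a minimal NumPy sketch of the three stages. The layer sizes, learning rate, noise level, sparsity heuristic and placeholder data are illustrative assumptions rather than the exact implementation used here, and the dynamic-bias terms of the full energy are left out for brevity.

# Minimal sketch of the three-stage training schedule described above.
# Sizes, learning rate, noise level and update rules are assumptions for
# illustration, not the exact aTRBM setup; dynamic-bias terms are omitted.
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid, M = 64, 32, 3           # assumed layer sizes and model order
lr, sparsity_target = 0.01, 0.05      # assumed hyperparameters

W = 0.01 * rng.standard_normal((n_vis, n_hid))                       # visible-to-hidden weights
Wt = [0.01 * rng.standard_normal((n_hid, n_hid)) for _ in range(M)]  # hidden-to-hidden weights, one per delay
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_static(v0):
    """Stage 1: one CD-1 update of W on a single static frame,
    with a crude sparsity pressure on the hidden biases."""
    global W, b_v, b_h
    h0 = sigmoid(v0 @ W + b_h)
    v1 = sigmoid(h0 @ W.T + b_v)
    h1 = sigmoid(v1 @ W + b_h)
    W += lr * (np.outer(v0, h0) - np.outer(v1, h1))
    b_v += lr * (v0 - v1)
    b_h += lr * ((h0 - h1) + (sparsity_target - h0))

def temporal_autoencode(frames):
    """Stage 2: initialize the Wt by predicting the current hidden activation
    from noise-corrupted past hidden activations (denoising-autoencoder style),
    keeping W fixed."""
    H = sigmoid(frames @ W + b_h)                 # hidden activations under the frozen static weights
    for t in range(M, len(H)):
        noisy_past = [H[t - k - 1] + 0.1 * rng.standard_normal(n_hid) for k in range(M)]
        pred = sigmoid(sum(p @ Wt[k] for k, p in enumerate(noisy_past)) + b_h)
        err = H[t] - pred                         # error against the clean target activation
        for k, p in enumerate(noisy_past):
            Wt[k] += lr * np.outer(p, err)

# Stage 3 (joint CD over W and the Wt, with the past frames contributing a
# dynamic hidden bias) would follow the same pattern as cd1_static and is omitted.
movie = rng.random((200, n_vis))                  # placeholder data standing in for movie patches
for frame in movie:
    cd1_static(frame)
temporal_autoencode(movie)

The important point is the ordering: W is fixed by static contrastive divergence before the Wt are fitted, so the temporal weights are forced to explain the dynamics of features that have already been learned.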
