DeepPCR: Parallelizing Sequential Operations in Neural Networks
Authors: Federico Danieli, Miguel Sarabia, Xavier Suau, Pau Rodríguez, Luca Zappella
Parallelization techniques have become ubiquitous for accelerating inference and training of deep neural networks. Despite this, several operations are still performed in a sequential manner. For instance, the forward and backward passes are executed layer-by-layer, and the output of diffusion models is produced by applying a sequence of denoising steps. This sequential approach results in a computational cost proportional to the number of steps involved, presenting a potential bottleneck as the number of steps increases. In this work, we introduce DeepPCR, a novel algorithm which parallelizes typically sequential operations in order to speed up inference and training of neural networks. DeepPCR is based on interpreting a sequence of L steps as the solution of a specific system of equations, which we recover using the Parallel Cyclic Reduction algorithm. This reduces the complexity of computing the sequential operations from O(L) to O(log2 L), thus yielding a speedup for large L. To verify the theoretical lower complexity of the algorithm, and to identify regimes for speedup, we test the effectiveness of DeepPCR in parallelizing the forward and backward pass in multi-layer perceptrons, and reach speedups of up to 30× for the forward and 200× for the backward pass. We additionally showcase the flexibility of DeepPCR by parallelizing training of ResNets with as many as 1024 layers, and generation in diffusion models, enabling up to 7× faster training and 11× faster generation, respectively, when compared to the sequential approach.
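To give a concrete feel for the mechanism, here is a minimal illustrative sketch (not the paper's implementation) for the simplest kind of sequential operation: a scalar linear recurrence x[i] = a[i]·x[i-1] + b[i]. Viewing all L steps as one bidiagonal system and repeatedly letting each equation absorb the one a power-of-two positions earlier, as cyclic reduction does, collapses the L sequential updates into roughly log2(L) parallel rounds. All function and variable names below are hypothetical; the settings in the paper involve nonlinear, vector-valued steps rather than this scalar linear case.

```python
import numpy as np

def pcr_linear_recurrence(a, b, x0):
    """Illustrative sketch: solve x[i] = a[i] * x[i-1] + b[i] (with x[-1] = x0)
    in O(log2 L) parallel rounds instead of L sequential steps.

    After every round, equation i has the form x[i] = A[i] * x[j] + B[i],
    where j moves twice as far back each time until every equation
    depends on x0 only.
    """
    A, B = a.copy(), b.copy()
    L, d = len(a), 1
    while d < L:
        # Each equation absorbs the equation d positions earlier; all
        # positions can be updated simultaneously on parallel hardware.
        A_prev, B_prev = A.copy(), B.copy()
        A[d:] = A_prev[d:] * A_prev[:-d]
        B[d:] = A_prev[d:] * B_prev[:-d] + B_prev[d:]
        d *= 2
    return A * x0 + B  # every x[i] now depends on x0 only

# Sanity check against the plain sequential loop.
rng = np.random.default_rng(0)
a, b, x0 = rng.normal(size=8), rng.normal(size=8), 1.0
x_seq, prev = [], x0
for ai, bi in zip(a, b):
    prev = ai * prev + bi
    x_seq.append(prev)
assert np.allclose(pcr_linear_recurrence(a, b, x0), x_seq)
```

The sequential loop performs L dependent updates, while the sketch above performs log2(L) rounds of independent elementwise operations, which is where the speedup for large L comes from.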
ParaRNN: Unlocking Parallel Training of Nonlinear RNNs for Large Language Models
January 16, 2026 | Research areas: Methods and Algorithms; Tools, Platforms, Frameworks
Recurrent Neural Networks (RNNs) laid the foundation for sequence modeling, but their intrinsic sequential nature restricts parallel computation, creating a fundamental barrier to scaling. This has led to the dominance of parallelizable architectures like Transformers and, more recently, State Space Models (SSMs). While SSMs achieve efficient parallelization through structured linear recurrences, this linearity constraint limits their expressive…
Stochastic Weight Averaging in Parallel: Large-Batch Training that Generalizes Well
January 7, 2020 | Research area: Methods and Algorithms | Conference: ICLR
We propose Stochastic Weight Averaging in Parallel (SWAP), an algorithm to accelerate DNN training. Our algorithm uses large mini-batches to compute an approximate solution quickly and then refines it by averaging the weights of multiple models computed independently and in parallel. The resulting models generalize as well as those trained with small mini-batches but are produced in a substantially shorter time. We demonstrate the reduction…
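As a rough illustration of the weight-averaging refinement step described above (an assumed sketch, not the authors' implementation), the PyTorch snippet below averages the parameters of several independently trained copies of the same architecture; the helper name average_weights is hypothetical.

```python
import torch

def average_weights(models):
    """Hypothetical helper: average the parameters of independently trained
    copies of the same architecture, in the spirit of SWAP's refinement phase."""
    state_dicts = [m.state_dict() for m in models]
    avg_state = {}
    for key, value in state_dicts[0].items():
        if value.is_floating_point():
            # Elementwise mean of the corresponding tensors across models.
            avg_state[key] = torch.mean(
                torch.stack([sd[key] for sd in state_dicts]), dim=0
            )
        else:
            # Integer buffers (e.g. step counters) are copied, not averaged.
            avg_state[key] = value
    merged = models[0]
    merged.load_state_dict(avg_state)
    return merged
```

This only covers the averaging of weights; in SWAP the averaged model is obtained after an initial large-batch training phase followed by independent parallel refinement runs.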