DeepPCR: Parallelizing Sequential Operations in Neural Networks
Authors: Federico Danieli, Miguel Sarabia, Xavier Suau, Pau Rodríguez, Luca Zappella
Parallelization techniques have become ubiquitous for accelerating inference and training of deep neural networks. Despite this, several operations are still performed sequentially. For instance, the forward and backward passes are executed layer-by-layer, and the output of diffusion models is produced by applying a sequence of denoising steps. This sequential approach results in a computational cost proportional to the number of steps involved, presenting a potential bottleneck as the number of steps increases. In this work, we introduce DeepPCR, a novel algorithm which parallelizes typically sequential operations in order to speed up inference and training of neural networks. DeepPCR is based on interpreting a sequence of L steps as the solution of a specific system of equations, which we recover using the Parallel Cyclic Reduction algorithm. This reduces the complexity of computing the sequential operations from O(L) to O(log₂ L), thus yielding a speedup for large L. To verify the theoretical lower complexity of the algorithm, and to identify regimes for speedup, we test the effectiveness of DeepPCR in parallelizing the forward and backward pass in multi-layer perceptrons, and reach speedups of up to 30× for the forward and 200× for the backward pass. We additionally showcase the flexibility of DeepPCR by parallelizing training of ResNets with as many as 1024 layers, and generation in diffusion models, enabling up to 7× faster training and 11× faster generation, respectively, when compared to the sequential approach.
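As a rough illustration of the mechanism described above, the following minimal NumPy sketch applies Parallel Cyclic Reduction to a scalar linear recurrence x_t = a_t·x_{t-1} + b_t (a simplifying assumption; the paper operates on full network states and nonlinear steps, which this sketch does not cover). Each round substitutes equation t−s into equation t, doubling the dependency stride s, so every state becomes an explicit function of the known x_0 after roughly log₂ L parallel rounds:

```python
import numpy as np

def sequential_recurrence(a, b, x0):
    """Baseline: x_t = a_t * x_{t-1} + b_t, evaluated in L sequential steps."""
    out, x = [], x0
    for a_t, b_t in zip(a, b):
        x = a_t * x + b_t
        out.append(x)
    return np.array(out)

def pcr_recurrence(a, b, x0):
    """Parallel Cyclic Reduction for the same recurrence.

    Invariant after each round with stride s: x_t = A[t] * x_{t-s} + B[t],
    where any state with index < 0 means the known x0. Each round doubles s,
    and every update below is vectorized (parallel across t), so all L states
    are resolved in about log2(L) rounds instead of L sequential steps.
    """
    A = np.asarray(a, dtype=float).copy()
    B = np.asarray(b, dtype=float).copy()
    L, s = len(A), 1
    while s < L:
        idx = np.arange(s, L)              # equations still tied to an unknown state
        new_A = A[idx] * A[idx - s]        # substitute equation t-s into equation t
        new_B = A[idx] * B[idx - s] + B[idx]
        A[idx], B[idx] = new_A, new_B      # dependency stride is now 2s
        s *= 2
    return A * x0 + B                      # every x_t now depends only on x0

rng = np.random.default_rng(0)
L = 1024
a, b = rng.uniform(-0.9, 0.9, size=L), rng.normal(size=L)
assert np.allclose(sequential_recurrence(a, b, 1.0), pcr_recurrence(a, b, 1.0))
```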
Parallel Track Transformers: Enabling Fast GPU Inference with Reduced Synchronization
February 10, 2026 · Research area: Methods and Algorithms
Efficient large-scale inference of transformer-based large language models (LLMs) remains a fundamental systems challenge, frequently requiring multi-GPU parallelism to meet stringent latency and throughput targets. Conventional tensor parallelism decomposes matrix operations across devices but introduces substantial inter-GPU synchronization, leading to communication bottlenecks and degraded scalability. We propose the Parallel Track (PT)…
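To make the synchronization cost concrete, here is a hedged single-process NumPy simulation of Megatron-style tensor parallelism for a transformer MLP block (device count and layer sizes are illustrative; this is background for the abstract, not the Parallel Track method itself): the weights are sharded column-wise and row-wise, each simulated device computes a partial output independently, and the partials must be combined with a blocking all-reduce before the next layer can proceed.

```python
import numpy as np

rng = np.random.default_rng(1)
n_dev, d_model, d_ff = 4, 8, 32              # hypothetical device count and layer sizes
x  = rng.normal(size=(2, d_model))           # a small batch of activations
W1 = rng.normal(size=(d_model, d_ff))        # MLP up-projection
W2 = rng.normal(size=(d_ff, d_model))        # MLP down-projection

# Reference: the unsharded MLP block.
ref = np.maximum(x @ W1, 0) @ W2

# Tensor-parallel simulation: W1 is split by columns, W2 by rows, so each
# "device" can run its two matmuls and the ReLU entirely locally.
W1_shards = np.split(W1, n_dev, axis=1)
W2_shards = np.split(W2, n_dev, axis=0)
partials = [np.maximum(x @ W1_i, 0) @ W2_i
            for W1_i, W2_i in zip(W1_shards, W2_shards)]

# The per-device partial outputs only become the layer output after a blocking
# all-reduce over the interconnect; the sum below stands in for that
# synchronization point, incurred once per sharded block.
out = sum(partials)

assert np.allclose(ref, out)
```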
ParaRNN: Unlocking Parallel Training of Nonlinear RNNs for Large Language Models
January 16, 2026 · Research areas: Methods and Algorithms; Tools, Platforms, Frameworks · Conference: ICLR
Recurrent Neural Networks (RNNs) laid the foundation for sequence modeling, but their intrinsic sequential nature restricts parallel computation, creating a fundamental barrier to scaling. This has led to the dominance of parallelizable architectures like Transformers and, more recently, State Space Models (SSMs). While SSMs achieve efficient parallelization through structured linear recurrences, this linearity constraint limits their expressive…
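As background for the linearity point (a minimal sketch, not taken from the paper; ParaRNN's own treatment of nonlinear recurrences is not described in this excerpt): each step of a linear recurrence is an affine map, and affine maps compose associatively, so the whole sequence can be evaluated with a log-depth parallel scan rather than a step-by-step loop. This is the prefix-scan view of the same kind of recurrence that cyclic reduction parallelizes above; a nonlinear step has no such closed-form composition, which is the barrier the abstract refers to.

```python
import numpy as np

def compose(first, second):
    """Compose two affine steps h -> a*h + b (apply `first`, then `second`).

    This closed-form composition is what linearity buys: the steps become
    associative and can be combined in a log-depth parallel scan. A nonlinear
    step such as h -> tanh(w*h + u*x) admits no such closed form.
    """
    a1, b1 = first
    a2, b2 = second
    return a2 * a1, a2 * b1 + b2

def parallel_linear_recurrence(a, b, h0):
    """Evaluate h_t = a_t * h_{t-1} + b_t for all t via an inclusive parallel scan."""
    A, B = np.asarray(a, float).copy(), np.asarray(b, float).copy()
    L, s = len(A), 1
    while s < L:                           # about log2(L) rounds
        # All positions t >= s absorb the composed map ending s steps earlier,
        # simultaneously (Hillis-Steele style scan).
        A[s:], B[s:] = compose((A[:-s], B[:-s]), (A[s:], B[s:]))
        s *= 2
    return A * h0 + B                      # h_t = A[t] * h_0 + B[t]

# Check against the plain sequential recurrence.
rng = np.random.default_rng(2)
L = 512
a, b = rng.uniform(-0.9, 0.9, size=L), rng.normal(size=L)
h, seq = 0.5, []
for t in range(L):
    h = a[t] * h + b[t]
    seq.append(h)
assert np.allclose(seq, parallel_linear_recurrence(a, b, 0.5))
```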