Diffusion models have achieved impressive results in world modeling tasks, including novel view synthesis (NVS) from sparse inputs. However, most existing diffusion-based NVS methods generate target views jointly via an iterative denoising process, which makes it difficult to impose a strictly causal structure along a camera trajectory. In contrast, autoregressive (AR) models operate causally, generating each token from all previously generated tokens. In this work, we introduce \textbf{ARSS}, a novel framework that leverages a GPT-style decoder-only AR model to generate novel views from a single image, conditioned on a predefined camera trajectory. We employ an off-the-shelf video tokenizer to map continuous image sequences into discrete tokens and propose a camera encoder that converts camera trajectories into 3D positional guidance. Then, to enhance generation quality while preserving the autoregressive structure, we propose an autoregressive transformer module that randomly permutes the spatial order of tokens while maintaining their temporal order. Qualitative and quantitative experiments on public datasets demonstrate that our method achieves overall performance comparable to state-of-the-art view synthesis approaches based on diffusion models. Our code will be released upon paper acceptance.
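To make the permutation scheme concrete, below is a minimal sketch of shuffling spatial order within each frame while preserving temporal order. The (T, S) token layout, the function name, and the use of PyTorch are illustrative assumptions on our part, not the authors' released implementation.

import torch

def permute_within_frames(tokens):
    # tokens: (T, S) token ids -- T frames, S spatial tokens per frame.
    # (Shapes and names are assumptions for illustration.)
    T, S = tokens.shape
    # Draw an independent random spatial order for every frame; the frames
    # themselves stay in temporal order, so the causal structure along the
    # camera trajectory is preserved.
    perm = torch.stack([torch.randperm(S) for _ in range(T)])  # (T, S)
    permuted = torch.gather(tokens, 1, perm)
    # Return the permutation as well: positional embeddings must be gathered
    # with the same indices so each token keeps its spatial identity.
    return permuted, perm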
Left: we apply a video tokenizer to convert the input image sequence into latent codes, and a camera autoencoder to map camera Plücker raymaps to latent camera tokens. The camera tokens are inserted before the visual tokens of each view as a 3D positional instruction. Right: the interleaved sequence is the input to a decoder-only causal transformer. The tokens of the first view serve as condition tokens and are thus always visible to all subsequent tokens. We use the ground-truth token sequence from the tokenization process to supervise the autoregressive model.
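As a rough illustration of the two left-hand components, the sketch below first builds a per-pixel Plücker raymap (ray direction d and moment o × d for each pixel) from a camera pose, then interleaves latent camera tokens before the visual tokens of each view. Function names, tensor layouts, and the treatment of both streams as flat per-frame token rows are our assumptions, not the paper's implementation.

import torch

def plucker_raymap(K_inv, R, t, H, W):
    # K_inv: (3, 3) inverse intrinsics; R (3, 3), t (3,): world-to-camera pose.
    # Returns an (H, W, 6) raymap of (direction, origin x direction).
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32),
                            indexing="ij")
    pix = torch.stack([xs + 0.5, ys + 0.5, torch.ones_like(xs)], dim=-1)
    dirs = pix @ K_inv.T @ R                     # pixel rays in the world frame
    dirs = dirs / dirs.norm(dim=-1, keepdim=True)
    origin = (-R.T @ t).expand_as(dirs)          # camera center o = -R^T t
    return torch.cat([dirs, torch.cross(origin, dirs, dim=-1)], dim=-1)

def interleave(cam_tokens, vis_tokens):
    # cam_tokens: (T, C) latent camera tokens; vis_tokens: (T, S) visual
    # tokens. Camera tokens are placed before the visual tokens of each
    # view; the flattened (T * (C + S),) sequence feeds the decoder-only
    # causal transformer, whose standard causal mask already keeps the
    # first view's tokens visible to every subsequent token.
    return torch.cat([cam_tokens, vis_tokens], dim=1).reshape(-1)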
@article{teng2025arss,
title={ARSS: Taming Decoder-only Autoregressive Visual Generation for View Synthesis From Single View},
author={Teng, Wenbin and Chen, Gonglin and Chen, Haiwei and Zhao, Yajie},
journal={arXiv preprint arXiv:2509.23008},
year={2025}
}
Supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DOI/IBC) contract number 140D0423C0075. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DOI/IBC, or the U.S. Government.