Unlocking Turing Completeness: How Large Language Models Achieve Universal Computation Without Assistance
A research team from Google DeepMind and the University of Alberta presents evidence that transformer-based LLMs using autoregressive decoding can support universal computation on their own, with no external tools, scaffolding, or modifications to model weights.
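To give a feel for the style of argument, here is a minimal sketch, not taken from the paper itself: a tag system, a classic Turing-complete rewriting formalism related to the string-rewriting systems used in such universality proofs. Each step reads the head symbol, drops a fixed number of symbols, and appends that symbol's production to the end of the string, so the sequence's own output becomes its future input, loosely analogous to autoregressive decoding where each generated token is appended to the context.

```python
def run_tag_system(productions, word, deletion=2, max_steps=10_000):
    """Run a tag system: repeatedly remove the first `deletion` symbols
    and append the production for the symbol that was at the head.
    Halts when the word is too short or the head symbol has no production."""
    steps = 0
    while len(word) >= deletion and word[0] in productions and steps < max_steps:
        word = word[deletion:] + productions[word[0]]
        steps += 1
    return word

# Example: De Mol's 2-tag system that traces the Collatz process.
# With productions a -> bc, b -> a, c -> aaa and a^n as input,
# the word passes through a^m for each Collatz value m, halting at "a".
productions = {"a": "bc", "b": "a", "c": "aaa"}
print(run_tag_system(productions, "aaa"))  # prints "a" (3 -> ... -> 1)
```

The point of the analogy is that a fixed, finite rewrite rule applied over and over to a growing sequence suffices for universal computation; the paper's contribution is showing that an unmodified LLM's decoding loop can play the role of such a rule.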



