Maximilian Beck

Working on efficient Large Language Model architectures.

I am a final-year PhD student at the Institute for Machine Learning at Johannes Kepler University (JKU) Linz, advised by Sepp Hochreiter, and a part-time PhD Researcher at NXAI.

My research focuses on efficient architectures for Large Language Models (LLMs) with sub-quadratic complexity. I’m particularly interested in optimizing training and inference efficiency, understanding scaling laws, and exploring the application of these architectures in domains such as computer vision and robotics.

From May to October 2025, I interned at Meta FAIR on the CodeGen team, where I worked on LLMs for code generation and understanding. Specifically, I explored architectures for code world models, evaluated code execution prediction capabilities, and worked on neural debuggers.

Before starting my PhD, I completed my Bachelor’s and Master’s studies in Mechatronics and Information Technology at the Karlsruhe Institute of Technology (KIT), with a focus on Control Theory, graduating in 2017 and 2021, respectively.

news

Oct 24, 2025 I have successfully finished my internship at Meta FAIR with Jonas Gehring. I am happy to have contributed to and co-authored two papers, with one more in preparation. Stay tuned!
Oct 03, 2025 We published our paper xLSTM Scaling Laws: Competitive Performance with Linear Time-Complexity on arXiv.
Sep 06, 2025 I have handed in the pre-defense report for my PhD thesis:
xLSTM: Recurrent Neural Network Architectures for Scalable and Efficient Large Language Models
May 12, 2025 I started a summer internship at Meta FAIR in Paris, working on Large Language Models for code.
May 10, 2025 Our two papers xLSTM 7B and A Large Recurrent Action Model were accepted at ICML 2025!

selected publications

  1. xLSTM: Extended Long Short-Term Memory
    Maximilian Beck*, Korbinian Pöppel*, Markus Spanring, and 6 more authors
    In Advances in Neural Information Processing Systems (NeurIPS), 2024
  2. Tiled Flash Linear Attention: More Efficient Linear RNN and xLSTM Kernels
    Maximilian Beck, Korbinian Pöppel, Phillip Lippe, and 1 more author
    In Advances in Neural Information Processing Systems (NeurIPS), 2025
  3. xLSTM 7B: A Recurrent LLM for Fast and Efficient Inference
    Maximilian Beck*, Korbinian Pöppel*, Phillip Lippe*, and 5 more authors
    In International Conference on Machine Learning (ICML), 2025
  4. xLSTM Scaling Laws: Competitive Performance with Linear Time-Complexity
    Maximilian Beck, Kajetan Schweighofer, Sebastian Böck, and 2 more authors
    In arXiv, 2025
  5. FlashRNN: I/O-Aware Optimization of Traditional RNNs on Modern Hardware
    Korbinian Pöppel, Maximilian Beck, and Sepp Hochreiter
    In The International Conference on Learning Representations (ICLR), 2025
  6. Vision-LSTM: xLSTM as Generic Vision Backbone
    Benedikt Alkin, Maximilian Beck, Korbinian Pöppel, and 2 more authors
    In The International Conference on Learning Representations (ICLR), 2025