Published in January 2026
I started exploring torch.compile in mid-2025 for some work-related projects. As I got to know more about it, I grew more and more interested. The way it is designed in PyTorch is extremely intricate, which fascinated me. So I learnt how it works under the hood, all the way down to the Python interpreter!
In parallel, and perhaps inevitably, I started to get familiar with the broader field of Deep Learning (DL) compilers. Although intimidated at first, I could not resist the curiosity it sparked in me. I learnt about the various compile backends that PyTorch supports, including the default one, Inductor. Digging deeper into Inductor and beyond, I learnt about MLIR, LLVM, and Triton. I also went breadth-wise from torch.compile() and discovered other existing compilers such as TVM, Glow, and XLA.
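
As a quick taste of what this series will dig into, here is a minimal sketch of torch.compile in action with the default Inductor backend. The toy function and tensor shape are arbitrary choices of mine, purely for illustration:

```python
import torch

# An arbitrary toy function, just to have something to compile.
def f(x):
    return torch.sin(x) ** 2 + torch.cos(x) ** 2

# Compile with the default backend, Inductor. On the first call, Dynamo
# captures the function into an FX graph and Inductor generates optimized
# code for it (Triton kernels on GPU, C++/OpenMP code on CPU).
compiled_f = torch.compile(f, backend="inductor")

x = torch.randn(1_000)
print(torch.allclose(compiled_f(x), f(x)))  # True: matches eager mode results

# The other registered backends (the list varies by install) can be inspected too.
print(torch._dynamo.list_backends())
```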
At present, I have a rough idea of the lay of the land of DL compilers, with depth only in torch.compile and Inductor. Hence, my plan is to go deeper into the topics I have found most interesting and document what I learn in this series of blog posts.
If you are someone who is fascinated by this space as well, I invite you to tag along!