Lightning AI launches next-gen AI compiler ‘Thunder’ to accelerate model training

Open-source AI development platform Lightning AI, in partnership with Nvidia, today announced the release of Thunder, a source-to-source compiler for the open-source machine learning (ML) framework PyTorch. The company says the new offering is designed to speed up the training of AI models by using GPUs, including multi-GPU setups, more efficiently.
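A source-to-source compiler rewrites a program into an equivalent, faster program in the same language before it runs, rather than emitting machine code directly. As a toy illustration of the general idea only (this is not Thunder's actual implementation), here is a minimal constant-folding pass over Python source using the standard-library ast module:

```python
import ast
import operator

# Map AST operator nodes to the Python operations they represent.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}

class ConstantFolder(ast.NodeTransformer):
    """Rewrite a binary operation on two literal constants into a single literal."""

    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold innermost sub-expressions first
        if (isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)
                and type(node.op) in OPS):
            value = OPS[type(node.op)](node.left.value, node.right.value)
            return ast.copy_location(ast.Constant(value=value), node)
        return node

src = "def f(x):\n    return x * (2 + 3)"
tree = ConstantFolder().visit(ast.parse(src))
print(ast.unparse(tree))  # the (2 + 3) is folded away: x * 5
```

Real compilers of this kind apply far more sophisticated rewrites, such as fusing GPU kernels and routing operations to optimized backends, but the input and output are both ordinary program source.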

According to Lightning AI, Thunder achieves up to a 40% speed-up for training large language models (LLMs) compared to unoptimized code in real-world scenarios. Thunder is freely available as open source under the Apache 2.0 license.

Lightning AI unveiled Thunder at Nvidia GTC, positioning it as an answer to the challenge of using GPUs to their fullest potential, rather than simply increasing the number of GPUs used. Since 2022, Lightning AI has set out to build next-generation deep learning tooling for PyTorch that works alongside executors and libraries such as torch.compile, Nvidia's nvFuser, Apex and CUDA Deep Neural Network library (cuDNN), as well as OpenAI's Triton.

Formerly known as Grid AI, Lightning AI – creator of the open-source Python library PyTorch Lightning – aims to accelerate workloads through these optimizations, positioning its work alongside that of other open-source contributors such as OpenAI, Meta AI and Nvidia.


The Thunder project is led by PyTorch core developer Thomas Viehmann, known for his work on TorchScript and for making PyTorch operational on mobile devices. The company said the compiler will accelerate generative AI models across multiple GPUs. Lightning AI CEO and founder William Falcon expressed his excitement about working with Viehmann: "Thomas literally wrote the book on PyTorch. At Lightning AI, he will lead the upcoming performance breakthroughs we will make available to the PyTorch and Lightning AI community," Falcon said in a statement.

Between data collection, model configuration and supervised fine-tuning, the model training process can be time-consuming and costly. Add in other factors such as technological expertise, management and optimization, and those challenges are heightened. In the age of adversarial AI, attackers train LLMs to manipulate and deceive AI systems, which can pose a major threat to organizations.

Lightning AI Chief Technology Officer Luca Antiga says performance optimization and profiling tools are key to scaling model training. Without them, enormous amounts of time, resources and money – amounting to billions of dollars – are spent training LLMs. "What we are seeing is that customers aren't using available GPUs to their full capacity, and are instead throwing more GPUs at the problem," Antiga said in a statement. He added that, combined with Lightning Studios and its profiling tools, Thunder's code optimization lets customers use GPUs more effectively and train LLMs faster and at greater scale.

Thunder is now available for use, following the company's release of Lightning 2.2 in February. Lightning Studios products are offered at four pricing tiers: free for individual developers; pro for engineers, researchers and scientists; teams for startups and teams; and enterprise for larger organizations.