Luminal raises $5.3 million to build a better framework for GPU code

Originally published by TechCrunch.

Three years ago, Luminal co-founder Joe Fioti was working on chip design at Intel when he came to a realization: no matter how good the chips he helped build were, the most important bottleneck was the software used to program them.

“You can make the best hardware on Earth, but if it’s hard for developers to use, they won’t use it,” he told me.

Now he has founded a company focused entirely on that problem. On Monday, Luminal announced $5.3 million in seed funding in a round led by Felicis Ventures, with angel investment from Paul Graham, Guillermo Rauch, and Ben Porterfield.

Fioti’s co-founders, Jake Stevens and Matthew Gunton, come from Apple and Amazon, respectively, and the company was part of Y Combinator’s Summer 2025 batch.

Luminal’s core business is simple: the company sells compute, much like newer cloud providers such as CoreWeave or Lambda Labs. But while those companies focus on the GPUs themselves, Luminal focuses on optimization techniques that let it extract more performance from the infrastructure it owns. In particular, the company is working to improve the compiler that sits between written code and GPU hardware, the same developer tooling that gave Fioti so much trouble in his previous job.

Right now, the industry’s dominant compiler is Nvidia’s CUDA system, an underrated component of the company’s runaway success. But many elements of CUDA are open source, and Luminal is betting that, with much of the industry still scrambling to get its hands on GPUs, there will be a lot of value in building out the rest of the stack.
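To make the compiler angle concrete, a classic optimization a GPU compiler performs is kernel fusion: collapsing several elementwise operations into a single pass so intermediate results never round-trip through memory. The following is a purely illustrative NumPy sketch of that idea, not Luminal’s actual code; all function names are made up:

```python
import numpy as np

def unfused(x):
    # Three separate "kernels": each one reads and writes a full array,
    # so the intermediates a and b each cost a round trip through memory.
    a = x * 2.0
    b = a + 1.0
    return np.maximum(b, 0.0)

def fused(x):
    # One fused kernel: the same math in a single pass over the data,
    # with no materialized intermediates.
    return np.maximum(x * 2.0 + 1.0, 0.0)

x = np.array([-1.0, 0.5, 2.0])
assert np.allclose(unfused(x), fused(x))  # same result, fewer memory passes
```

On a GPU, where elementwise operations are usually bound by memory bandwidth rather than arithmetic, eliminating those intermediate reads and writes is where much of the speedup comes from.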

It’s part of a growing group of startups working on inference optimization, a field that has grown in value as companies search for faster and cheaper ways to run their models. Inference providers like Baseten and Together AI have long specialized in optimization, while smaller companies like Tensormesh and Clarifai are emerging to focus on more specific technical tricks.

Luminal and the rest of the group will face stiff competition from optimization teams at the major AI labs, which have the advantage of optimizing for a single family of models. Because it works for outside clients, Luminal has to adapt to whatever model comes its way. But even with the risk of being outdone by the hyperscalers, Fioti says the market is growing fast enough that he isn’t worried.

“It’s always going to be possible to spend six months manually tuning a model architecture on a given machine and potentially outperform any compiler,” Fioti says. “But our big bet is that anything short of that, the general-purpose use case is still very economically valuable.”

Posted on November 18, 2025
