
Former Apple Face ID Engineers Just Raised $107M to Give Robots a “Visual Brain”

Remember Face ID? The technology that lets your iPhone recognize your face even in the dark, even when you’re wearing sunglasses, even when you’ve got a new haircut? The people who built that just launched a robotics startup, and they raised $107 million before most of us even knew they existed.

Lyte AI came out of stealth mode today, and their pitch is pretty straightforward: robots are kind of dumb about understanding the world around them, and we know how to fix that.

The company was founded in 2021 by former top engineers from Apple’s Face ID team, including CEO Alexander Shpunt. And they’re not just making better cameras for robots. They’re trying to build what they call a “visual brain”—a system that fundamentally changes how robots perceive and understand their environment.

The Problem With How Robots See

Here’s the issue with most robots today: they’re using basic sensors or cobbled-together systems that treat vision and motion as separate problems. It’s like giving a robot eyes and a sense of balance, but never connecting the two. The robot can see a door, and it can sense its own motion, but it struggles to combine those two streams in a way that lets it navigate smoothly through the doorway while avoiding obstacles.

Lyte’s solution is LyteVision, which combines three types of data into one integrated perception system: visual imaging from cameras, inertial motion sensing, and an advanced 4D sensor that doesn’t just measure distance but also tracks how objects are moving over time.
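
Lyte hasn’t published technical details, but here’s a rough sketch, in Python, of what “one integrated perception system” could mean in practice. Every name in it (CameraFrame, ImuReading, FourDPoint, fuse) is hypothetical, invented purely for illustration; the point is that all three sensor streams update a single shared estimate of the world instead of living in separate pipelines.

```python
# Toy sensor-fusion sketch. All names are hypothetical, not Lyte's
# actual API; the idea is one shared world estimate, not three silos.
from dataclasses import dataclass

@dataclass
class CameraFrame:
    timestamp: float
    objects: list            # e.g. detected labels / bounding boxes

@dataclass
class ImuReading:
    timestamp: float
    linear_accel: tuple      # (ax, ay, az) in m/s^2
    angular_vel: tuple       # (wx, wy, wz) in rad/s

@dataclass
class FourDPoint:
    timestamp: float
    range_m: float           # distance to the object, meters
    radial_velocity: float   # m/s along the line of sight; negative = approaching

@dataclass
class WorldEstimate:
    timestamp: float
    ego_moving: bool             # is the robot itself in motion?
    obstacle_range_m: float
    obstacle_closing_mps: float  # positive = object approaching

def fuse(cam: CameraFrame, imu: ImuReading, pt: FourDPoint) -> WorldEstimate:
    """Combine all three streams into one timestamped estimate."""
    ego_moving = any(abs(a) > 0.2 for a in imu.linear_accel)
    return WorldEstimate(
        timestamp=max(cam.timestamp, imu.timestamp, pt.timestamp),
        ego_moving=ego_moving,
        obstacle_range_m=pt.range_m,
        obstacle_closing_mps=-pt.radial_velocity,
    )

est = fuse(
    CameraFrame(0.00, objects=["door"]),
    ImuReading(0.01, linear_accel=(0.0, 0.3, 0.0), angular_vel=(0.0, 0.0, 0.0)),
    FourDPoint(0.01, range_m=0.9, radial_velocity=-2.2),
)
print(est)  # ego_moving=True, obstacle ~0.9 m away, closing at ~2.2 m/s
```

If the article’s framing is right, the interesting part is the fusion step, not any individual sensor: the “visual brain” is whatever plays the role of fuse here, at scale and in real time.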

That last part is key. Traditional sensors can tell a robot “there’s something three feet away.” A 4D sensor can tell it “there’s something three feet away, and it’s moving toward you at five miles per hour, so you need to react.” That’s the difference between a robot that freezes when it detects motion and one that can actually anticipate and respond intelligently.
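
To put numbers on that example, here’s a back-of-the-envelope calculation, assuming a straight-line approach at constant speed (a deliberate simplification; a real perception stack does far more than this).

```python
# Time-to-contact for the example above: an object three feet away,
# closing at five miles per hour. Straight-line, constant-speed
# assumption; purely illustrative, not Lyte's actual math.
FEET_TO_METERS = 0.3048
MPH_TO_MPS = 0.44704

range_m = 3 * FEET_TO_METERS    # ~0.91 m
closing_mps = 5 * MPH_TO_MPS    # ~2.24 m/s

print(f"{range_m / closing_mps:.2f} s to react")  # ~0.41 s
```

Four tenths of a second is enough time to brake or steer only if the robot already knows the object’s velocity. A distance-only sensor has to infer velocity by differencing positions across frames, which adds latency and noise at exactly the moment the robot can least afford either.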

Why the Apple Connection Matters

The fact that these folks came from Apple isn’t just a credibility boost—it’s central to their approach.

“We are trying to take the best things that Apple taught us—on attention to detail, operational excellence and how to excite and wow the customers—in order to bring this to the robotics market,” Shpunt told Bloomberg. “Apple was definitely a good school.”

He’s not wrong. Apple’s whole thing is making complex technology feel simple and intuitive. Face ID works so well that most people don’t even think about it—you just look at your phone and it unlocks. That’s incredibly hard to pull off, and it required solving massive challenges in perception, processing, and reliability.

The robotics industry desperately needs that kind of thinking. Too many robots work great in labs and then completely fall apart when you put them in a messy warehouse or on a busy sidewalk. They can’t handle unexpected obstacles. They freeze when lighting changes. They’re basically zombie robots, as Shpunt puts it: moving through the world without really understanding it.

“We realize that perception and more generally, having robots understand what they do, be safe and immediately react to the world—not be a zombie robot—is something that we would like to solve,” he said. “So we went to solve that problem.”

What This Actually Enables

Lyte’s technology is designed to work across a wide range of robotic platforms. We’re talking mobile robots in warehouses, robotic arms in factories, humanoid robots, autonomous vehicles like robotaxis—basically anything that needs to move through and interact with the physical world.

The big promise is that LyteVision makes robotic vision robust enough for real-world conditions, not just controlled environments. A factory floor is one thing. But what about a construction site with variable lighting, dust in the air, and people walking unpredictably? Or a sidewalk delivery robot that needs to navigate around pedestrians, dogs, scooters, and street vendors?

That’s where most robotic perception systems struggle. They’re optimized for specific scenarios and fall apart when conditions change. Lyte is betting that their unified approach—combining multiple sensing technologies with AI into one coherent system—can handle the messiness of reality.

The $107 Million Question

Raising $107 million for a robotics startup that just came out of stealth is no small feat. Investors are clearly betting that the team’s Apple pedigree translates to the robotics world, and that perception is a big enough bottleneck that solving it unlocks massive value.

They’re probably right about that second part. If you look at why robots aren’t more widespread in everyday life, it’s not usually because they can’t physically do the tasks. It’s because they can’t reliably navigate and understand their environment. A delivery robot that constantly gets stuck or confused isn’t useful, no matter how well it can carry packages.

The harder question is whether Lyte’s approach is the right one. There are other companies working on robotic perception, including some with their own impressive pedigrees and technology. And there’s always the risk that rapid advances in AI—particularly in computer vision and multimodal learning—could make specialized hardware less necessary.

What Success Looks Like

If Lyte pulls this off, we’ll know because robots will suddenly start showing up in more places, doing more things, and doing them reliably. The delivery robots won’t get confused by shadows. The warehouse robots won’t need to shut down when someone moves a pallet to a slightly different spot. The robotaxis will actually feel safe to ride in.

That’s the vision, anyway. Making robots that can truly understand their environment, react in real time, and operate safely in unpredictable conditions. It’s the kind of problem that sounds straightforward until you actually try to solve it, and then you realize how phenomenally difficult it is.

The good news for Lyte is they’ve got a team that’s already solved a similarly difficult problem at Apple. The bad news is that robotics is an even harder domain than smartphones—the stakes are higher, the environments are more variable, and there’s a lot less room for error when a 200-pound robot is moving through a space with humans.

But with $107 million in the bank and a team that knows how to build perception systems that actually work, they’ve got a real shot at making “zombie robots” a thing of the past.

We’ll see if they can pull it off.
