Synaptics CEO on how touch, AI, and sensors are giving us smart edge devices

Synaptics isn’t a household name, but it makes a lot of the underlying technology behind things like smartphone touchscreens, automotive displays, voice-enabled smart speakers, and virtual reality headsets.

As we make our internet of things (IoT) devices smarter and smarter so that we can get dumber and dumber, or at least more relaxed, we’re creating a tsunami of data. Rather than sending all that data over the internet to a datacenter, we can process, analyze, and act upon much of it right where it’s collected.

The result is that devices at the edge of the network — things like tech-enabled cars, Alexa smart speakers, and security cameras — are getting smarter. And Synaptics has to adapt to this changing world by using technologies like artificial intelligence to improve the chips and other technologies that do the sensing and processing in these devices.

I talked with Rick Bergman, CEO of Synaptics, which makes touch display controllers and a variety of other smart underlying technologies, about what’s coming down the road.

Here’s an edited transcript of our interview.

Above: Synaptics CEO Rick Bergman shows off AudioSmart products.

Image Credit: Synaptics

VentureBeat: What do you see coming into the market soon, this year?

Rick Bergman: You had an article on our new SoC solutions, with neural network and voice enablement. We just began sampling at the tail end of last year, beginning of this year. We'll see that in mass production within our fiscal year, by the end of June. You'll see some interesting new solutions come to market with that capability.

VentureBeat: Is that a lot more voice-based, smart hubs and things like that?

Bergman: Voice-enabled anything, pretty much. The initial ones, as you can imagine — the smart speaker category is the fastest-moving. But it applies to mesh routers, mirrors, toilets, TVs, set-top boxes, all utilizing the far field voice capabilities that we offer to the market. More specifically now, with the new ones we announced, the neural network capability can improve the voice capability.

VentureBeat: We’re used to having that pause when we talk to Alexa, or having to say things twice. Do you think those days are going away? Or might we still have that kind of problem?

Bergman: It's getting better and better. I saw a presentation a couple of days ago suggesting that smart speakers may now be better than humans at interpreting speech. It's pretty close as is. As for latency, moving the recognition, or at least a limited vocabulary, to the edge could improve it a bit.

VentureBeat: Is that a trend you also see, that a lot of the intelligence is moving to the edge?

Bergman: Yes. We’re just sampling these devices, so there’s going to be some lag time or latency from when we get the hardware capabilities to when the software fully takes advantage of it. But in the second half of this year, when you see some of these systems announced, that intelligence is clearly moving to the edge. The chip we announced has a billion operations per second.

I know you've tracked gaming for a long time. I remember distinctly, in 2008, shortly after AMD acquired ATI, they introduced the first billion-operations-per-second GPU. That was a $499 solution. The chips we announced will be $2 or $3, with that same level of capability, at least measured by the number of operations. It's the same idea. It's matrix math. One is intended for graphics and the other is intended for AI.
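Bergman's comparison can be made concrete with a quick back-of-envelope calculation. The figures below are the ones he cites in the interview (a $499 GPU in 2008 versus a $2-$3 edge chip today, each at roughly a billion operations per second); the midpoint price and the resulting ratio are illustrative, not official benchmarks.

```python
# Back-of-envelope cost per billion operations/second (GOPS),
# using the figures Bergman cites (illustrative only).
gpu_2008 = {"price_usd": 499.0, "gops": 1.0}   # ~1 GOPS-class GPU, per Bergman
edge_chip = {"price_usd": 2.50, "gops": 1.0}   # midpoint of the $2-$3 range

gpu_cost_per_gops = gpu_2008["price_usd"] / gpu_2008["gops"]
edge_cost_per_gops = edge_chip["price_usd"] / edge_chip["gops"]

print(f"2008 GPU:     ${gpu_cost_per_gops:.2f} per GOPS")
print(f"Edge AI chip: ${edge_cost_per_gops:.2f} per GOPS")
print(f"Cost reduction: ~{gpu_cost_per_gops / edge_cost_per_gops:.0f}x")
```

By this rough measure, the price of a unit of AI compute has fallen by roughly two orders of magnitude, which is the economic shift that makes putting inference at the edge viable at all.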

VentureBeat: When you put some of these things together, like 5G and this intelligence at the edge and new interfaces, what do you think comes out? What are some things you’re looking forward to?

Bergman: To break it down a little bit, on smartphones certainly 5G is going to drive gaming — as we move to foldable and bendable and that sort of thing, as well. We’re being encouraged by our customers to increase display refresh rate because of gaming. For a long time it’s been 60Hz on a phone, but now we’re getting requests for 90Hz or 120Hz. We’re moving in that direction. Some of it — not all, of course, because 5G exists today — but some of it is being delivered by 5G enabling higher performance gaming. People want that response rate associated with it.

Above: Rick Bergman shows off VideoSmart products.

Image Credit: Synaptics

VentureBeat: I saw Razer was really touting the 120Hz on its phones. Apple still hadn't quite gotten there. They had 120Hz touch, but people want 120Hz refresh on the screen.

Bergman: It also helps with videos, for the same reason TVs have a higher refresh rate. Some people in the industry are still a little bit dubious. It's one thing to see it on a 70-inch TV. On a 7-inch smartphone, can you actually see a football move in a way that makes that meaningful? I'm not sure. But in any case, people are very discerning. That's what they want. We certainly will offer these capabilities. Asus is another maker with a high-refresh phone.

VentureBeat: On the VR side, do you guys have things that are seeing use there?

Bergman: We have display drivers that are actually dedicated just for VR, to drive 2K x 2K screens. Of course, that doesn’t sound like a lot, but the screens are one inch on a side, so it’s a very high DPI. You don’t see the screen door effect with the glasses an inch from your eyeballs. You’ll see that again, actually, with those coming out in systems in the second half of the year, as well. Very cool stuff.

VentureBeat: I started seeing some new models at CES that were talking about 4K per eye. It sounds like VR systems are moving ahead of where this first generation was, with the Oculus and Vive, the kind of visual quality you could get with them. For those, are you confident that we’ll get a whole new generation of these pretty soon?

Bergman: Oh, yes. I think everybody’s waiting. There are many opportunities to improve the experience with VR and the visual quality is one of them. Obviously, the size of the headset is another one, and tethered versus non-tethered, and being able to see the environment around you so you’re not stumbling into the coffee table. They’re all fundamental challenges, along with the long-term issue of a small percentage of the population just having a fundamental issue with latency and so forth. We’re working with our customers on a few of those challenges, as well. Foveated rendering is another one where we can help out.

VentureBeat: With 5G in particular, do you also have some solutions there? Or is that going to drive a different kind of use of what you already have there?

Bergman: It’s more the latter. We don’t do any 5G components. We don’t have any connectivity capability at all. It’s more about how 5G impacts our devices. I already mentioned the refresh rates for displays and touch controllers. Also, because it ends up requiring a little more RF and a little more power, we’re getting pushed hard on our display drivers to reduce power and help create thinner screens, so more battery can get squeezed in there. Ultimately, of course, we hope it causes a new wave of smartphone replacement.

VentureBeat: What do you see taking hold in cars?

Bergman: From a Synaptics perspective, we're primarily in the display area. In there, the trend is clear. I don't need to tell you that you're going to see much larger displays, and curved displays, some of those with more of a minimal curve still being LCD. It'll take a while before you see OLED displays in vehicles. We're seeing that type of approach. But you'd be amazed at how big the displays are that people are contemplating putting in their vehicles in the coming years. They're almost like TVs.
