
IoT FEATURE NEWS

Synaptics and Google Open Up on AI-Native IoT

By Carl Ford, Partner, Crossfire Media

Sally Ward-Foxton of EE Times interviewed Billy Rutledge, Google’s Director of Edge AI, and Nebu Philips, Senior Director of Strategy and Business Development at Synaptics, about their collaboration, the future of AI at the IoT edge, and what the partnership means for developers.

In particular, the companies have collaborated on adding an open-source AI accelerator core to the next generation of Synaptics’ Astra line of IoT chips. For those not familiar with Astra, paraphrasing Philips: the platform was announced “just over a year ago and it comprises Arm-based SoCs with built-in AI accelerator engines that are targeted, designed, and built for the IoT.”

IoT is a fragmented market, and not every application lends itself to silicon solutions, even when those solutions could improve cost, power, or performance. Synaptics sees this as an opportunity to solve the problem by developing the next generation of Astra as an “AI-native” solution – what I consider part of AIoT.

Philips describes this next generation as a “line of processing solutions which pair very well with our connectivity portfolio that is very specifically targeted for an IoT …[and]… these different modalities, whether it’s vision, audio, voice, or graphics. All these functions are inherently becoming more AI aware. So, we want to enter the market with the right class of silicon that is built from a performance, cost, and power standpoint that is ideal for these workloads in IoT.”

To overcome the fragmentation in the market, Synaptics worked with Google to use standards “to the extent possible, and power that with open-source software.”

Synaptics Astra is now open source and publicly available on GitHub, and Google is well known for its commitment to creating open ecosystems.

“So this current engagement for Synaptics with Google is an engineering and a research collaboration that is built and based on open source software and standards that is targeted at bringing a bit more of an order to how we do things in IoT and especially targeted at a lot of the different classes of devices, whether it is consumer, enterprise, industrial, some segments within the IoT. So, we are really excited about working with Google to sort of define and clean up a lot of these best practices so that a lot of innovative solutions can be brought out with AI built in for this space. So, it is a research collaboration, and we’re very excited about moving forward with a partner like Google on this.”

Ward-Foxton then asked the logical question: “So, why is it a collaboration and not a license? I think you mentioned a research partnership. You’re going to be working together on this, right?”

Philips responded, “That is right. As I was saying, there is a need to plug a lot of these gaps about best practices all the way from build frameworks to integrating the right kind of accelerators in a way that is scalable from low-, mid-, and high-performance tiers across the vendor portfolios. Through this research partnership and collaboration, we want to integrate these learnings into the Synaptics Astra portfolio. Coming up in our roadmap, that will clear the way for very clean ways of doing software and AI-native software development for IoT. There are a lot of things that we still need to build on and define. That’s why we’re calling it a research/engineering collaboration.”

At Ward-Foxton’s request, Rutledge explains, “Open Se Cura is an effort that we started in 2022 into 2023, and it comes on the heels of an earlier research program that we launched called coral.ai, which was a brand that centers on hardware components and software tools to experiment with bringing AI to edge devices for the first time. The goal of that project was to learn what people might do with this technology if it was affordable and easy to use. Coral is still there today and it’s still selling, and we’re excited about what it’s doing for the IoT markets.”

Coral is stable and no longer a research project for Google, so ASUS, which worked with Google on the project, manages the solution today.

One of the takeaways from the Coral solution is “that ambient sensing was one of the key features for IoT devices. So, basically, giving devices on the edge the ability to see and hear at human level sensing, using the edge chip ASIC that we had in that portfolio. Some of the feedback that we heard in addition to that was, the device ecosystem and the neural processing units that people are talking about are very fragmented. For developers, it’s quite difficult to build a model and compile it down through the various tool chain components to land something on an architecture. Even when you’re able to do that, the model might behave in different ways with different performances or different accuracy.”
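To make that pain point concrete, here is a minimal sketch of the kind of compile-down pipeline Rutledge is describing, using TensorFlow’s TFLite converter. The toy model, the file name, and the vendor compile step at the end are my own illustrative assumptions, not details from the interview.

import tensorflow as tf

# A toy stand-in model; in practice this would be the developer's own network.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(96, 96, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2),
])

# Step 1: compile the model down to TFLite with post-training quantization
# (dynamic-range here; a full-int8 build would also need a representative dataset).
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
with open("model.tflite", "wb") as f:
    f.write(converter.convert())

# Step 2 is vendor-specific: model.tflite must still go through the target
# NPU's own compiler (for Coral, the edgetpu_compiler CLI). Each NPU has its
# own tool, supported-op list, and quantization quirks, so the same model can
# land with different performance or accuracy on each chip.

Step 1 is broadly standardized; it is step 2, where every accelerator vendor swaps in its own closed tooling, that is the de-fragmentation target of the Google/Synaptics work.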

Rutledge continued, noting that Google Research “wanted to try to affect the broader ecosystem by providing a new version of our work, which is soft IP, to try to unlock this ambient sensing use case specifically, but do it in a way that’s open source, using open hardware as well as open software, and packaging it to be commercial ready, so that any silicon company can pick it up and use it as a front end to what they might be building. So, the overall effort is really about just making it easier for developers to build on edge devices and, obviously, we would like to see them use Google services on the back end, which is important for our company. But there’s no lock-in. It’s completely open, and it can operate kind of across the board with a lot of open source technologies.

“Private ambient sensing is the theme of Open Se Cura, and the goal of that project was to release system designs that have not just the machine learning accelerator component, but also an architecture that provides the right level of security guarantees to build user trust. So, as devices are aware of your surroundings, they see what you see. They hear what you hear.”

Rutledge explains that, “with Synaptics, we’re talking about the Kelvin component – that’s the name of the open source machine [learning] accelerator project that we’re promoting, with Synaptics being the first commercial adopter.”

Ward-Foxton drills down, noting that Kelvin is “a RISC-V CPU that’s part of Open Se Cura,” and asking for “a few more details about Kelvin itself.”

Rutledge goes on to say that Google “decided to build Kelvin, again, as soft IP that’s open so that anyone could take it and extend it, really as an attempt to try to de-fragment the ecosystem of NPUs. We are members of the RISC-V consortium, and we try to participate in driving the ISA extensions for different parts of RISC-V. So, Kelvin is actually a way of implementing those extensions that we’ve helped contribute to and bringing them forward in a specific way for a specific purpose. Kelvin is a very small machine learning accelerator. With the Coral portfolio, we had an Edge TPU that was four teraOPS of performance at a low power budget at the time. Here we’re shrinking it down even further. This is in the range of 5 or 12 GOPS, so a fraction of that, and we’re starting really, really small, just to try to affect a market that we think is growing across the ecosystem, and that’s wearable devices, which are likely to be the front end for the new generative AI experiences that the world is excited about. But, Kelvin, as a design, is really open, and it can be adjusted in different ways to achieve bigger performance, higher scale. So, it’s an architecture that is quite flexible for other use cases – starting small and giving people the right information to customize it, extend it, to really tailor it for the industry that they might be targeting, whether it’s transportation all the way down to, say, medical devices or wearables for consumer interest.”
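To put those figures in perspective: 4 teraOPS is 4,000 GOPS, so at 5 to 12 GOPS Kelvin delivers somewhere between roughly 1/800th and 1/300th of the Coral Edge TPU’s compute – “starting really, really small” indeed.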

Ward-Foxton then asks Rutledge if Synaptics is the first implementer of Kelvin, and Rutledge explains that Google released version one in November of 2023.

“You can go to the GitHub repo, pull down the RTL, build it yourself, and run it on an FPGA board spec that we provided. We did do a test chip in real silicon with one of our silicon services providers as part of our commercial readiness program. So, it has been proven in real silicon. But, with Synaptics, they are the first to adopt it into a commercial line that will be taken to market.”

Rutledge adds that Google “selected Synaptics to be our flagship partner for a number of reasons. We have a long history with Synaptics and are excited about what we can do together as a team. Their specialization in low power and interest in IoT and wearable devices align with what we would like to pursue. But, we encourage everyone to take a look at it and our hope is that by partnering with all the silicon companies that are building NPUs in small form factors, we can help lower the barrier for developers to build models for these devices.”

Ward-Foxton brings Philips back into the conversation about the collaboration, and he explains, “To Billy’s point earlier, I think there was quite a good overlap in the market focus and the device and the form factor focus for both companies, from Billy’s team at Google and for Synaptics. We’ve been in the IoT market for a decade. We understand the space quite well and we’ve had AI accelerators built into our silicon since 2018. So, that’s been supporting some of our vertical markets. What we have done as part of making Astra a broader platform is to take on the learnings from how we have done that for certain targeted vertical markets and open it up to a broader base, supplemented with open-source AI frameworks. Again, to Billy’s point earlier, if you look at the whole software stack, the language around models and frameworks, driven by companies like Google, is becoming a lot more common. You can talk about the TensorFlow ecosystem. You can talk about PyTorch or ONNX. That language is becoming a bit more common. But, the moment you step into the world of models, into how you compile them, it becomes very fragmented. That experience sort of breaks with every inherited license, and nothing is that open about these compilers just yet. Even a lot of silicon vendors have their own tooling. So, that is an area that we addressed very early on. It aligned very well with priorities for Google, and we want to see how open and standards-based we could make the entire software stack.”
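As a quick illustration of that “common language,” here is a minimal sketch of exporting a model from PyTorch to the ONNX exchange format; the stand-in model and the file name are my own assumptions. The export is the easy, standardized part – the still-fragmented step is compiling the resulting graph onto a particular NPU with each vendor’s own tooling.

import torch
import torch.nn as nn

# A stand-in model, sized like something you might aim at a small wearable NPU.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3),   # 3x32x32 input -> 8x30x30 feature map
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 30 * 30, 10),       # 10-way classifier head
)
model.eval()

# ONNX is the shared exchange format: any major framework can emit it and many
# runtimes can load it. What remains vendor-specific is the next step --
# compiling this graph down onto a particular accelerator.
dummy_input = torch.randn(1, 3, 32, 32)
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=17)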

I cannot recommend EE Times enough, and I particularly appreciate the insights I gain from reading Ward-Foxton’s work. My take on this effort: it is a noble aspiration, and medical wearables are certainly on the rise, but as a market in itself, it is as fragmented as trying to find an effective treatment for dysautonomia (look it up).




Edited by Erik Linask
