The challenge – and benefits – of porting legacy code to a multicore platform

Interviews | By admin

Is anyone really doing multicore in embedded?
Multicore has come upon the embedded world like a wave approaching a rocky coast – it hits some parts before others. We see it dividing into perhaps four groups.

· There are markets that have demanded multicore for a long time, chiefly communications and packet processing. Because they’ve been so far ahead of the others, they’ve developed specialized architectures and techniques that are difficult but work.

· Leading-edge systems-on-chip (SoCs) also use many different cores to achieve complex functionality at lower power, resulting in complex heterogeneous architectures. But designing these architectures and programming them can be very complicated, as there are no tools really dedicated to solving that problem.

· At the opposite end of the spectrum are the guys that still use 8-bit microcontrollers. They’re not going to need multicore for a long time, if ever.

· But in the middle are a group of users who have resisted the switchover simply because staying with a single core hasn’t been life-threatening (so far), and multicore has rightly been seen as a difficult transition. Some of them have seen the wave approaching, but may have seen it as something that Intel would somehow solve for them. Others already use what’s essentially multicore – combining a processor with a DSP or even an FPGA – but have dealt with each piece separately instead of treating the whole system holistically.

These latter users and SoC designers will benefit from new tools like vfEmbedded that make multicore implementation much more like what they’re used to. Yes, it’s helpful to be more knowledgeable about how multicore programming works, but if a lot of that detail can be abstracted away (in the same manner as compiler optimizations happen mostly under the hood), then these users can take advantage of the newly emerging dual- and quad-core embedded processors without the pain they’ve been dreading. And there’s no avoiding it in the long run: Moore’s law, constrained by the realities of power consumption, guarantees that multicore has to happen since clock speeds can’t keep rising.

Aren’t there already multicore tools out there to help with this sort of stuff?
Prior to Vector Fabrics, there have been no tools that we’re aware of that actually lead a developer to a correct-by-construction solution using an automated, intuitive approach that will work even on code the developer doesn’t know intimately. Most tools completely avoid the difficult partitioning and mapping problem. Those that get near it only supplement a completely manual process, and you get no guidance as to where to go, only information on what might happen if you try this or that.

vfEmbedded, by contrast, provides explicit options to the user that are guaranteed to work. Because there’s always more than one way of doing things, the tool still lets the user make the decisions, but the user has avoided all the dead-ends and suboptimal solutions that would result from trying to do the whole thing manually. That gets things done much more quickly and gives the developer confidence both in his/her schedule and in the quality of the resulting product.

If a developer is targeting a multi-processing system, why would he or she write a sequential program first?
First and foremost, people think sequentially. We can force people to think in parallel, or indoctrinate them in school using new parallel paradigms, but we’re sequential beings at heart (and recent studies have shown that we’re not as good at multi-tasking as we like to think).

From a practical standpoint, there are two other considerations. First, many engineers end up having to implement someone else’s code on a multicore platform. It might be old legacy code written by someone who has since moved on, or it might be open-source code being used as a starting point. These are typically sequential.

The second practical matter is the fact that you will want to parallelize code differently for different platforms. So there’s one “algorithm” – most conveniently articulated as a sequential program – that can have many multicore implementations. A tool like vfEmbedded is particularly important for porting a program onto a number of widely differing platforms that may address different users or price points.

What’s the payoff to a company for doing a good job on a parallel implementation?
Anyone can do a sloppy job on parallelization. Actually, even that’s not true – it’s hard to do any parallelization that doesn’t have hidden bugs. But it’s much harder to really tune the implementation to make best use of the platform. You really need a tool like vfEmbedded to do a good job. Without it, you end up requiring more processing power just to add some “slop” for a suboptimal implementation.

If you can really tune your implementation, then you can do a much better job of fitting the program and the underlying platform together, minimizing waste, reducing power, and making the bill of materials as low as possible. In addition, if the process that gets you to an optimal solution is guided, with correct-by-construction results, you get to market faster and your schedules are more predictable.

Why did you choose a SaaS model?
The traditional way of delivering tools is to ship your program to your customers, originally on CDs and now via download. But managing tools this way is cumbersome, and it limits how responsive we can be to users’ needs. With a SaaS model, all that management goes away; we take care of it. So we can be far more responsive to our customers’ needs, turning around fixes and features much more quickly.

SaaS tools also are more accessible to small and medium-sized companies, who have a hard time paying six digits for a single seat of a tool. They’re willing to pay for value, but with a SaaS setup you can tailor the business model to be friendlier to the little guy. These companies also are unlikely to be able to manage server farms if the tool capabilities start to exceed what a single server can reasonably do. The cloud gives them access to resources that typically only big companies can afford.

While it’s taking the industry some time to get used to the idea of using the cloud to offer tools, we see it as inevitable that things will go this way. It will be too hard for companies sticking with the traditional model to deliver the kind of value that’s possible using the cloud. vfEmbedded will have been one of the pioneering tools.

