Startup claims 100x more efficient processor than current CPUs, secures $16M in funding

zohaibahd

Forward-looking: Power efficiency is the name of the game these days, and every chip designer strives to squeeze more performance out of less energy. A secretive startup called Efficient Computer Corp. claims to have taken that to a whole new level with their newly announced processor architecture.

Efficient Computer emerged from stealth mode this week, revealing their "Fabric" chip design. According to the company, this novel CPU can provide performance on par with current offerings, while using a tiny fraction of the electricity. They claim to be 100 times more efficient than "the best embedded von Neumann processor," and 1,000 times more economical than power-intensive GPUs. If true, those would be mind-blowing improvements.

In a press release, Efficient Computer announced that it has already implemented the Fabric processor architecture in its 'Monza' test system on a chip (SoC).

So what's their secret sauce? The details get pretty technical, but the key idea behind Fabric is optimizing for parallelism from the ground up. General-purpose CPUs are incredibly complex, carrying around tons of legacy cruft to maintain backwards compatibility. Efficient Computer strips all that away and uses a simplified, reconfigurable dataflow architecture to execute code across numerous parallel computing elements.

The innovative design stems from over seven years of research at Carnegie Mellon University. It exploits spatial parallelism by executing different instructions simultaneously across the physical layout of the chip. An efficient on-chip network links these parallel computing elements.
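
To make the idea a bit more concrete, here is a minimal sketch in plain C of the kind of loop a spatial dataflow compiler could lay out across a chip. This is not Efficient Computer's toolchain or any published Fabric API; the kernel, its constants, and the element assignments in the comments are invented purely for illustration. On a conventional CPU the multiply, add, and clamp execute one after another through a shared pipeline; on a spatial dataflow design as described, each operation could in principle be assigned to its own compute element, with samples streaming between them so all three work at once.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical sensor-processing kernel: scale, offset, and clamp a
 * stream of ADC samples. Ordinary C that any compiler can build.
 *
 * Von Neumann CPU: every iteration fetches instructions and walks the
 * multiply -> add -> clamp chain sequentially through one pipeline.
 *
 * Spatial dataflow (as Fabric is described): a compiler could map the
 * multiply, the add, and the clamp to three separate compute elements
 * on the chip, with each sample flowing from one element to the next
 * so all three operate concurrently. The mapping below is assumed. */
void process_samples(const int16_t *in, int16_t *out, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        int32_t scaled  = (int32_t)in[i] * 3;      /* could map to element A: multiply */
        int32_t shifted = scaled + 100;            /* could map to element B: add      */
        if (shifted >  32767) shifted =  32767;    /* could map to element C: clamp    */
        if (shifted < -32768) shifted = -32768;
        out[i] = (int16_t)shifted;
    }
}
```

The point of the sketch is only the shape of the computation: the same source compiles anywhere, which matches the company's claim that developers recompile existing C/C++ rather than rewrite it.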

Efficient's software stack supports the major embedded languages C and C++, along with TensorFlow models and some Rust applications, enabling developers to quickly recompile existing code for the Fabric architecture. The startup indicates that Fabric's compiler was "designed alongside the hardware from day one."

Nevertheless, this need for software recompilation could limit mainstream adoption. Recompiling every application is challenging for more conventional consumer devices. So, at least initially, the target markets are specialized sectors such as health devices, civil infrastructure monitoring, satellites, defense, and security – areas where the power advantages would be most valuable.

Under the hood, Fabric presents a unique approach to parallel computing. While Efficient Computer is tight-lipped about the specifics of the processing architecture, their descriptions hint at a flexible, reconfigurable processor that can optimize itself for different workloads through software-defined instructions.

The company recently locked down $16 million in seed funding from Eclipse Ventures, likely aimed at launching its first Fabric chips. Efficient Computer reports having already inked deals with unnamed partners and is targeting production silicon shipping in early 2025.


 
With limited information, this doesn't sound like anything special. It's been well known for decades that CPUs are pretty bad for parallel workloads. GPUs are better but still quite bad. The fact that this kind of parallelism maximizer is compared against CPUs and GPUs pretty much tells you everything.

Against GPUs this may be OK. Against ASICs, its only advantages are probably the ability to use standard compilers and/or to serve multiple purposes.

From the limited information, this sounds like a programmable, ASIC-like chip. It will probably have some uses, but because ASICs will still be more efficient, there is too much hype right now.
 
Where are the tests? Where's the technical information? Show at least one valid comparison. It's not even worth writing an article about such a vague announcement.

Anyone can say anything and start their own leeching startup to live off the necks of investors.
PS: $16M won't even pay for the development of a 14nm chip. lol
 
Nowadays parallel code is fairly easy to process. That's why many programs offload to GPUs. Let's see the stats on these for branch-heavy code, and if it's still 100x then I'll be impressed. Until that happens, yawn.
 
Where are the tests? Where's the technical information? Show at least one valid comparison. It's not even worth writing an article about such a vague announcement.

Anyone can say anything and start their own leeching startup to live off the necks of investors.
PS: $16M won't even pay for the development of a 14nm chip. lol


Yeah, I thought the $16M was a joke as well. Maybe the people responsible are looking for a large payout.

That said, I think if this is even approximately true, it has great potential in autonomous devices with huge inputs.
Given that, even a whole new language would be fine.

Surely the US Department of Defense would be interested. Given current conflicts, drones etc. need to be protected against jamming, GPS hacking, and so on.

I think people forget how many tiny CPUs/controllers there are in the world, in everyday things.
Even conservation/monitoring.

Musk could use this in a rat to power its Neuralink, if it really draws so little power.

Flip side, "1984": small, hard-to-find, long-life (or permanent, with a solar/sound/compression (e.g. by cars)/wifi power source) monitoring devices.
 
Yeah, I thought the $16M was a joke as well. Maybe the people responsible are looking for a large payout.
This is normal for first-round funding; it's just enough seed capital to pay salaries and keep the doors open, while they put together something solid enough to attract Round Two investors. I imagine they're working on a software simulator now to verify performance, as well as an actual recompile of some existing software.
 
This is normal for first-round funding; it's just enough seed capital to pay salaries and keep the doors open, while they put together something solid enough to attract Round Two investors. I imagine they're working on a software simulator now to verify performance, as well as an actual recompile of some existing software.
So glad someone here understands funding rounds. Cheers.
 