Israeli startup Speedata came out of stealth today, unveiling its big data analytics processing chip and a $55 million funding round from VCs who clearly think it’s a good bet.
The big bet here is that data analytics workloads can run one to two orders of magnitude faster on Speedata’s analytics processing unit (APU) chip than on x86 processors. The APU does for analytics what the GPU does for graphics processing.
A statement from Speedata’s co-founder and CEO, Jonathan Friedmann, said: “Analytics and database processing represents an even bigger workload than AI with regard to dollars spent. That’s why industries are anxiously seeking solutions for accelerating database analytics, which can serve as a huge competitive advantage for cloud providers, enterprises, datacentres, and more. …
“Our amazing team of academic and industry leaders has built a dedicated accelerator that will change the way datacenter analytics are processed — transforming the way we utilise data for years to come.”
Pedal to the metal
Speedata has designed a dedicated accelerator chip for these workloads and claims a server with its APU will replace multiple racks of CPUs, dramatically reducing costs and electricity usage, and saving space.
The chip is said to address the three main bottlenecks of analytics, accelerating I/O, compute, and memory alike. It is compatible with legacy software, so workloads can execute on it with no code changes.
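To make the compute bottleneck concrete, the hot path in most analytics engines is a scan-filter-aggregate kernel over columnar data, which is the kind of operation a dedicated analytics accelerator targets. The sketch below is purely illustrative plain Python, not Speedata code, and the function name and data are invented for the example.

```python
# Illustrative only: the columnar scan-filter-aggregate pattern that
# dominates analytics workloads. An accelerator would run this kind of
# kernel in hardware; the query on top stays unchanged.

def scan_filter_aggregate(prices, quantities, min_price):
    """Sum revenue (price * qty) for rows whose price exceeds min_price."""
    total = 0.0
    for price, qty in zip(prices, quantities):
        if price > min_price:      # predicate evaluation (filter)
            total += price * qty   # projection + aggregation
    return total

prices = [5.0, 12.5, 30.0, 7.5]
quantities = [10, 4, 2, 8]
print(scan_filter_aggregate(prices, quantities, 10.0))  # 110.0
```

The "no code changes" claim amounts to saying that the query or dataframe code a user writes stays the same, while kernels like this one are dispatched to the APU instead of the CPU.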
The illustration shows the chip fastened to a board. How does it get its data? How is it linked to a host system? We asked Friedmann some questions to find out more.
Blocks & Files: How does it get the data it needs to work?
Friedmann: Speedata’s APU connects via PCIe to a NIC or SmartNIC and/or to local storage. The data flows from local storage over the PCIe bus, and/or from remote storage over Ethernet to the NIC and the connected APU.
Does it have its own pool of DRAM with a PCIe bus linking it to storage resources?
Yes. The APU does have its own pool of DRAM and a PCIe bus linking it to storage resources. Since the APU will do all the data processing, this will dramatically reduce the amount of DRAM needed next to the CPU.
How does it hook up to normal servers?
The APU sits on a standard PCIe card that is inserted into a standard server. It is thus hooked up within a normal server in the same way that a GPU PCIe card is.
Any information on its size and number of processing elements and types?
The size and number of processing elements is similar to a GPU. It is important to note that Speedata’s APU elements are optimised for analytics and databases, while GPU elements are optimised for graphics and AI.
How would the performance of an on-premises APU system running database analytics software compare to the same software running in AWS, Azure, etc.?
Speedata’s APU will be 20x to 100x more powerful than a CPU when running database analytics. The APU will work equally well on-premises and in public cloud systems.
How does it compare to Snowflake and similar public cloud data warehouses?
The boost in performance can be exploited by many types of analytics tools. Snowflake and other similar public cloud data warehouses can use Speedata’s APU to gain the same improvement in their own data warehouses.
Funding and founding
Speedata has pulled in $55 million of A-round funding from VCs led by Walden Catalyst Ventures, 83North, and Koch Disruptive Technologies (KDT), with participation from existing investors Pitango First, Viola Ventures and prominent individual investors including Eyal Waldman, Co-Founder and former CEO of Mellanox Technologies. Waldman has joined Speedata’s board.
There was a previously undisclosed seed round of $15 million from a group of investors led by Viola and Pitango. This took place in 2019, the year Speedata was founded, bringing Speedata’s total funding to $70 million.
There were six founders: Friedmann, CTO Yoav Etsion, Chief Architect Rafi Shalom, VP System Engineering Dani Voitsechov, Chairman Dan Charash, and Chief Software Architect Itai Incze.
Friedmann was CEO and founder of processor developer Centipede Semi, and before that COO at Provigent, a fabless developer of broadband wireless SoCs. Charash was the CEO of Provigent. Etsion is an associate professor at Technion, the Israel Institute of Technology. Shalom was a chief architect at storage networking NIC supplier QLogic. Voitsechov was a post-doctoral researcher in massively parallel computer systems architecture at Technion. Interestingly, Voitsechov and Etsion have partnered to write research papers, for example “Inter-thread Communication in Multithreaded, Reconfigurable Coarse-grain Arrays”.
We think Speedata will get the APU chip into test use with sample customers, and we’ll hear more about it next year.