Toshiba has announced the -P version of its XG6 M.2 gumstick flash drive with doubled capacity and faster streaming write performance than its XG5-P predecessor.
The XG6-P has 2TB of capacity and performs sequential reading and writing at 3.18 and 2.92GB/sec compared to the XG5-P’s 3.0 and 2.2GB/sec. Random read and write IOPS numbers are up to 355,000 and 365,000 – compared with 320,000 and 265,000 for the XG5-P.
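A quick back-of-the-envelope check of the relative gains, using the figures quoted above:

```python
# Percentage improvements of the XG6-P over the XG5-P, using the
# figures quoted above (GB/sec for sequential, IOPS for random).
xg6p = {"seq_read": 3.18, "seq_write": 2.92,
        "rand_read": 355_000, "rand_write": 365_000}
xg5p = {"seq_read": 3.0, "seq_write": 2.2,
        "rand_read": 320_000, "rand_write": 265_000}

gains = {k: round((xg6p[k] / xg5p[k] - 1) * 100) for k in xg6p}
# Sequential write shows the biggest jump: roughly a third faster.
assert gains["seq_write"] == 33
```

The sequential write improvement (about 33 per cent) is the headline change; sequential reads move only about six per cent.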
Like the XG6 on which it is based, the XG6-P uses 96-layer TLC (3bits/cell) flash and has a fast SLC (1bit/cell) cache.
This -P version of the XG6 has the same random read and write IOPS numbers, the same sequential read value but is marginally slower, at 2.92GB/sec, than the XG6’s 2.96GB/sec sequential write bandwidth.
Both XG6 models use a single-sided 2280 M.2 card format and consume less than five watts when operating. The XG6-P has optional TCG Pyrite Version 1.0 support for non-SED (self-encrypting drive) configurations and TCG Opal Version 2.01 support for SED configurations.
Toshiba is sampling this client workstation drive to OEM clients from mid-June. No datasheet is available yet.
By buying Cray, HPE acquires a set of supercomputer, storage and interconnect hardware and software products that it will integrate with existing product lines.
We asked HPE about its HPC product positioning post-Cray and a spokesperson said: “We’ll make those decisions as we move through the integration process after the deal closes [expected by the first quarter of HPE’s fiscal 2020]. What I can confirm now is that the entire Cray business and all of its functions will become part of HPE and will be combined with our current HPC business.”
So what will that mean for HPE’s HPC storage line-up when Cray’s ClusterStor is under its wing?
Post acquisition, HPE can offer its customers four scale-out, parallel file system product choices for high-performance computing (HPC). Or perhaps one or more product lines will face the chop.
Today HPE sells three scale-out, parallel access file storage product ranges: its own Scalable Storage for Lustre and third-party offerings from Qumulo and WekaIO. Here is a brief overview of each product.
HPE HPC Filesystem products pre-Cray.
Scalable Storage for Lustre
This is the Lustre file system software, with ZFS RAID software, based on Object Storage Servers and Metadata Servers. The software runs on HPE ProLiant DL360 and DL380 Gen10 servers, with D3710 Enclosure, MSA 2050 SAN Storage for metadata storage, and D6020 Disk Enclosure for capacity storage.
HPE provides its own version of community Lustre, featuring the ZFS file system for data integrity, data compression, and file system snapshots. Lustre is also POSIX-compliant.
Qumulo Core
Qumulo provides three kinds of nodes to deliver its Core software: high-performance, hybrid capacity, and nearline archive. Core is also available on AWS and the Google Cloud Platform.
Core supports trillions of files, mainstream NFS and SMB file access protocols, and supports S3 via Minio.
A CIO in the life sciences industry told us: “We have been talking to Qumulo since they were still in stealth. Honestly I think it is a good platform for scale-out, and if we weren’t committed to Isilon we would be really looking hard at them, but I don’t see enough of a difference between this and Isilon to justify the migration.
“I am not sure if they would be picked up by HPE as an Isilon competitor or if HPE would just see that scale-out market as too limited. I do know Dell Technologies has been very successful selling Isilon in a variety of use cases; not sure how much bigger that market is.”
WekaIO Matrix
This is a high-performance product with SPEC SFS 2014 benchmark wins and near-Summit supercomputer storage performance. It runs in AWS and has an S3 gateway. Like Qumulo, WekaIO supports NFS and SMB, and adds HDFS; it is also POSIX-compliant.
That means it supports APIs, command line shells and utility interfaces found in Unix-style operating systems. It also utilises NVMe-oF for fast access to storage nodes.
Our CIO source said: “Weka [is] the most technically advanced storage solution. We have tested it pretty extensively and it is very performant as it scales; definitely a new generation of filesystem that is equally at home in the cloud and on-premises.”
He added: “I would not be surprised to see HPE gobble them up as it seems that is their strongest large partner, but one never knows if AWS or Azure might have interest. We have not purchased any of this storage at this stage but they are definitely on the short list of technologies.”
ClusterStor
These three products all overlap to some extent with Cray’s ClusterStor systems which have been sold in Cray HPC and supercomputing sites. ClusterStor is based on Lustre and comes with L300N disk and L300F all-flash arrays. It does not support NFS, SMB or the HDFS protocol, and is POSIX-compliant. The software runs in Azure.
Our CIO said: “ClusterStor [is a] good Lustre solution, great for niche HPC application, limited utility in the enterprise, not a real growth market as Lustre is still seen as esoteric by most enterprise shops.”
Our read
Blocks & Files thinks that ClusterStor will, over time, replace Scalable Storage for Lustre. The ZFS features of Scalable Storage for Lustre could possibly be imported into ClusterStor.
Four overlapping HPE HPC products with ClusterStor.
We also think that HPE will likely lead with ClusterStor – rather than Qumulo – in HPC opportunities needing hardware and software, as it can capture more of the customer spend for itself.
ClusterStor will also win out over Qumulo when Azure compatibility is needed but lose when AWS, S3 or GCP support is needed. Naturally ClusterStor will be preferred when the storage is part of a Cray compute bid.
However Qumulo has tiered hardware and that could swing bids in its favour. It may be that ClusterStor outperforms Qumulo but that will need testing.
We think Qumulo and ClusterStor performance will be roughly similar whereas WekaIO’s Matrix is faster, given the benchmark evidence. Unless there is a clear need for WekaIO’s performance and/or AWS support, its opportunities within the HPE market could well be limited by ClusterStor.
To conclude, ClusterStor is bad news for Scalable Storage for Lustre and a cuckoo in the HPE nest for Qumulo and WekaIO.
Formulus Black, the company formerly known as Symbolic IO, said its Forsa software pushes Microsoft SQL Server to run 129 times faster than standard operating systems.
In a recent benchmark test, the company executed 22.9 million SQL transactions per minute and achieved sub-millisecond latency on a single commodity server.
It modestly dubbed the results as “Platinum CPU class database performance on Silver-CPU class servers”.
The company commissioned End Point Corporation, a software consultancy, to benchmark the performance of Microsoft SQL Server on a fairly ordinary X86 server.
The test machine spec was a Dell PowerEdge R740xd with dual Intel Silver 4114 CPUs (20 physical cores and 40 hyper-threaded), 768GB of memory, two 800GB PCIe SSDs and a 200GB SSD Boot Drive. Ubuntu Linux and Forsa were installed.
The SSD drives were not used and the databases were stored in memory on “LEMs” (Logical Extensions of Memory) created using Forsa. 650GB out of 768GB was set aside for LEMs, leaving the system with 118GB of memory. Results were obtained using HammerDB v3.1 with the TPC-C benchmark.
End Point benchmarks
Bench press
In detail: “Test results include achieving over 22.9 million aggregate transactions per minute in host mode and over 5.1 million aggregate transactions per minute in a virtualized environment. Latency was 0.25ms, on average.”
End Point compared the results with publicly available benchmarks, including an October 2018 report by Principled Technologies that also tested a Dell PowerEdge R740xd. TPM and latency results achieved on Forsa are about 129x better (22.9 million TPM vs. 177,000 TPM).
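The “about 129x” figure is straight arithmetic on the two transactions-per-minute results:

```python
# Ratio of Forsa's in-memory TPM result to the Principled Technologies
# SATA-SSD baseline, both quoted in the article.
forsa_tpm = 22_900_000
baseline_tpm = 177_000

speedup = forsa_tpm / baseline_tpm
assert round(speedup) == 129
```

As End Point notes below, the two tests used different benchmarks (HammerDB TPC-C versus DVD Store 2), so the ratio is indicative rather than a like-for-like comparison.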
The Dell server tested by Principled ran Red Hat Enterprise Linux 7.5 and Microsoft SQL Server.
End Point wrote: “While the Principled Technologies report used the DVD Store 2 benchmark and this report used HammerDB TPC-C benchmark, the key difference is of course attributed to running the database on SATA SSD’s vs. in memory on LEMs with Forsa.”
Josh Williams, a database specialist at End Point, said: “Because Forsa allows all data to persist in-memory all the time, applications, especially databases, can take advantage of the massively parallel processing capabilities of the memory channel and deliver levels of performance that cannot be achieved when data is stored in traditional mediums like SSDs, HDDs or over a SAN.”
Wayne Rickard, Formulus Black’s chief strategy and marketing officer, was cock-a-hoop: “Testing Forsa’s ability to achieve significant performance levels in SQL environments on a single mid-range server without tuning, shows that it is unmatchable by any SSD or other I/O-bound technology today.”
Download the performance report here (registration required). It contains an email address for Josh Williams if you have follow-up questions.
Interview As Pure was to Flash and Nutanix to HCI, Datera wants to be the main independent enterprise software-defined block-plus storage player.
Datera CEO Guy Churchward revealed the company’s ambitions in an interview with Blocks & Files. The company agreed a partnership deal with HPE earlier this year and we wanted to better understand the company’s situation and strategy.
We conducted the interview by email and I sent the questions in before Churchward’s recent heart attack. He sent us his answers while in post-op recovery – testament to his intent to return to directing Datera.
Guy Churchward
Blocks & Files: What markets does Datera serve?
Guy Churchward: Geographic – worldwide but we’re concentrating on US domestic and Europe right now. Vertical – usual suspects where data volume, diversity, and criticality to competitive advantage matters.
Storage Swim-Lane – SDS hyper-scale ‘Cloud Like’ value set with Tier1 performance and availability for Block, Container & high performance Object.
Blocks & Files: What are the main IT trends affecting customers in the markets you serve?
Guy Churchward: We see a number of key trends right now:
Reducing technical debt, peeling dollars from an existing budget to drive innovation around data intelligence and data agility giving competitive edge. (#throwbackthursday Do more with less mantra)
Run as a cloud either hosted or private.
Preserve optionality, key terms heard regularly – multi vendor, open paths, reduce fork lift upgrades, no lock-ins (vendor or configuration), open formats, avoid ‘sticky value add’, data freedom, flexibility, choice.
Blocks & Files: How would you describe Datera’s situation and state as a business?
Guy Churchward: You once described Datera as a Minnow and I concur, based on the bigger picture.
We’re an early stage startup with a high IP quota, we have 45+ production deployments and our Minnow is power lifting at the gym with HPE as a main spotter. Fundamentally, enterprise accounts listen to trusted advisors during transitions and companies like Datera need a big brother to help showcase their innovations.
So early days for us with our new business relationship, but HPE has already won multiple enterprise accounts against Dell using Datera as the secret sauce.
Blocks & Files: If I say that, in essence, the company is a high-performance scale-out software SAN supplier with object storage on the side, would that be fair?
Guy Churchward: Yup, if you lean in harder, you see the ‘data services platform’, not so much and you see the SW SAN with the added benefits of storage lifecycle management and perhaps adding ‘distributed’ between scale-out and SW if I were wordsmithing but still keep on your theme.
Blocks & Files: What is the role of the object storage element in Datera’s product? Can block storage data be tiered off to object storage?
Guy Churchward: Some customers are looking to utilise object storage protocols for portions of their data so we’ve added support for performant objects so they can manipulate that data or metadata to enable data consolidation for those applications.
Once those objects become static or the data is less in demand, the object can be moved to purpose-built private or public object store. More commonly we enable block snapshots to be tiered off to object – Datera to Datera, or Datera to another on-prem or public cloud object store.
Blocks & Files: Datera can provide sub-200uS data access latency. The sub-200uS latencies are for block storage, I suppose. What are the latencies for object storage?
Guy Churchward: There can be additional latency but can be as little as 100us, though it can be more complicated. Because objects can vary a lot in size, we would typically think of latency in terms of time-to-first-bit as opposed to time for the entire object.
The media and network latency components don’t change by the protocol, but the software [has] only the additional Minio code for converting from an object namespace to the underlying blocks that represent the object and its metadata…Yup I cribbed this sentence off an email from our CTO to a client 🙂
Blocks & Files: What is the company’s strategy?
Guy Churchward: Silos don’t help drive innovation and you can’t have flexibility if you are bound by custom tin wrappers. So, our focus is on delivering optionality with better economics to our customers through distributed data freedom – lofty, I know, but the entire storage market will transition down this road over time, so either we take companies there or we push the industry to step up and offer competitive alternatives, which will keep us on our toes and that’s also a good thing.
Ideally, I’d like to establish Datera as the Tier1 go-to independent SDS Block++ player, rather like Pure was to Flash or Nutanix was to HCI which then moves us nicely toward being a key technology supplier in a ubiquitous multi-cloud data service proxy platform layer.
Blocks & Files: What is or are the main product strong points it can build on?
Guy Churchward: The Datera architecture is superb. The team thought through how storage ‘should’ operate at scale and worked backwards, which means automation and telemetry are second to none. These points can be easily built on as we careen towards multi-cloud and the inevitable cloud/OS or proxy layers (per the above question).
Blocks & Files: What is the product development strategy? Does file storage have a role in it?
Guy Churchward: Our product team is staying focused on turning automation and telemetry into time for our customers to focus more on using the data and less on the data infrastructure.
For file, there is a role, but I don’t see us getting into the Weka space as one product can’t deliver everything. But similar to object, we do have conversations with customers that want to sweep data from say a NetApp into our data service platform, so general purpose file can help in a consolidation play. But our foundation remains block.
Blocks & Files: Will Fibre Channel be supported? Ditto NVMe/TCP and NVMe/FC?
Guy Churchward: Our customers have standardised on Ethernet switching going forward for their interconnect needs and we have no plans to support Fibre Channel as a legacy interconnect. We already support NVMe devices and NVMeoF/TCP support is in development and will be available when the technology is sufficiently mature to meet the requirements for Tier 1 applications deployed against Enterprise Software Defined Storage.
The customers that choose us are mainly looking to move from hardware-resiliency to software-resiliency.
Blocks & Files: What role does the public cloud play in this?
Guy Churchward: Per our back and forth on the role of object above, we’ve been asked to provide object to cloud support, replication to S3, etc. In the next release in the coming quarter, the code will be provided as a ‘sandbox’ and will go GA shortly after.
Blocks & Files: Is there a SaaS element in your strategy?
Guy Churchward: We provide our platform to enable SaaS vendors rather than us providing our own SaaS. In fact, our largest current deployment is with a European SaaS company.
Net:net
CEOs need to encapsulate a startup’s core vision and direct and drive the company forward to realise it. Datera has a lofty goal – to replicate the success of Pure and Nutanix in the independent enterprise SDS block-plus storage space. It has focused on this goal and has not gotten distracted by file storage or legacy Fibre Channel.
Either could be ‘bolted on’ later if needed.
The market is wide open for such a player and Datera has as good a chance as anyone else.
Veeam, the virtualized server backup business, said yesterday it has achieved a billion dollar annual run rate and is adding 4,000 customers a month.
The company wants to spread its wings beyond data protection and move into cloud data management. It revealed its ambitions at VeeamON 2019, its conference in Miami, along with news of a bunch of storage partnerships and a DR orchestrator.
Progressing to a billion dollar run rate
Cloud data management
Ratmir Timashev, co-founder and EVP sales and marketing, said in a scripted quote: “Veeam created the VMware backup market and has dominated it as the leader for the last decade. This was Veeam’s Act I and I am delighted that we have surpassed the $1 billion mark; in 2013 I predicted we’d achieve this in less than six years and we have.”
Veeam has passed the 350,000 customer count and is set on becoming a hybrid cloud data management company.
Timashev said: “The market is now changing. Backup is still critical, but customers are now building hybrid clouds with AWS, Azure, IBM and Google, and they need more than just backup. To succeed in this changing environment, Veeam has had to adapt.”
He thinks data management will be Act II for the company – and that Veeam is well positioned for this business. Certainly it has a good foundation on which to build, as backup is a prime generator of data managed in the cloud.
Storage supplier partnerships
As a start, Veeam is setting up “with Veeam” partnerships with enterprise storage and hyperconverged infrastructure (HCI) vendors to provide customers with secondary storage systems. These combine Veeam software with storage and HCI hardware and management stacks.
The functionality will include secondary storage and copy data management. Veeam APIs play a role here and the company said it has a data management stack.
Veeam has already announced deals with ExaGrid and Nutanix. Timashev expects others to materialise and said they can include a single SKU from infrastructure vendors’ price lists and single support points of contact.
Some “with Veeam” systems will be sold and serviced directly from an infrastructure partner. Others will be sourced by distribution and/or global system integrators, depending on geography or partner network.
Western Digital has ensured its storage systems are compatible with Veeam, including:
ActiveScale Cloud Object Storage System, supporting Veeam Cloud Tier via its S3-compatible interface,
Ultrastar Data60 and Data102 JBOD/JBOF storage,
IntelliFlash NVMe N-series.
Net:net
Blocks & Files thinks backup is inherently straightforward: backup my data and virtual machines fast and restore them fast when I need it.
There is no such simple rubric for data management, which entails indexing, search, analytics, copy provision and management, compliance and governance. Nevertheless, backup is a highly sticky application, and twinning data management with backup will give Veeam a good start.
Somewhat surprisingly it claims to be “the unequivocal leader in Cloud Data Management” already. Blocks & Files will wait for independent analysis from the likes of Gartner and IDC before taking a view on that claim.
DR Orchestrator
And let’s not forget this week’s product launch, the Veeam Availability Orchestrator v2.0. VAO v2 adds disaster recovery, operational recovery of production virtual machines and platform migrations to any organisation that uses Veeam’s backup and replication capabilities.
Veeam protection data can be used to prove recoverability, SLA adherence, and regulatory compliance. VAO v2 can automatically test, document and recover entire sites down to individual workloads from backups.
Backup and replication data can be used through VAO v2 by DevOps, by patch and upgrade testers, and for analytics. This extension of VAO to supplying data for re-use takes Veeam into competition with secondary storage players offering copy data supply and management – Actifio, Cohesity and Delphix.
Datera CEO Guy Churchward has suffered two heart attacks and is recovering after an operation to place three stents in his arteries.
The events are described in Tweets and a LinkedIn article he published on Monday, May 20. The article is no longer available.
Guy Churchward
Churchward experienced what he thought was a mild heart attack a couple of weeks ago and went to hospital for a check-up. The doctors diagnosed a blocked artery and inserted a stent in an artery that was 98 per cent closed.
Afterwards, and while a dye was being injected into his blood stream to aid a scan of his chest arteries, he collapsed with what appeared to be another heart attack.
He was resuscitated with chest paddles, while still conscious, and underwent further surgery, with two more stents inserted into partially blocked arteries.
His tweet says the attack was due to an inherited condition and not the common lifestyle causes of heart attacks.
Churchward, who became Datera’s CEO in December last year, is now recovering. We wish him well.
SoftIron has introduced the Accepherator, an FPGA accelerator that speeds up erasure coding for Ceph storage workloads.
This saves host CPU cycles for application work and reduces storage capacity requirements.
Ceph provides file, block and object storage from the same distributed storage pool and ensures data reliability by keeping three copies (replicas) of the data spread across many drives. This keeps data available if a drive fails but uses three times more storage than a single copy of the base data.
Erasure coding splits data into chunks, adds parity bits to each chunk, and requires less overhead – 1.33 times the original data in SoftIron’s scheme. If a drive fails, the parity bits on the remaining chunks are used to rebuild the lost data. The upshot is less storage capacity for the same data.
Normally a host CPU computes erasure coding parity bits and this can slow storage writes. SoftIron has programmed an FPGA to do this work and put it inside a 10GbitE SFP network interface card.
Accepherator NIC with FPGA for erasure coding
SoftIron CTO Phil Straw said: “We’ve managed to cut the overall total cost of ownership in half by virtue of the fact you now only require 1.33x replication, and you don’t need expensive CPUs for the erasure coding.”
The Accepherator is on sale as an option for SoftIron’s HyperDrive Ceph storage products.
Flash chip builder Toshiba Memory Corp. is buying out its US corporate shareholders, all customers who helped the company out last year.
In June 2018 financially-troubled Toshiba sold its Toshiba Memory Corporation NAND chip business for $18bn to a consortium led by Bain and including SK Hynix, Apple, Dell Technologies, Seagate Technology and Kingston Technology.
Toshiba Memory Corporation operates a joint venture with Western Digital that makes flash chips.
As part of the Bain deal Toshiba repurchased 40 per cent of the unit, leaving 60 per cent in the Bain consortium’s hands.
Now Toshiba Memory Holdings Corp. (TMC) is to buy back the shares from Apple, Dell, Kingston, and Seagate, with Japanese banks lending it ¥1.3 trillion ($11.8bn) to refinance the company. It also intends to get a Tokyo Stock Exchange listing.
That could provide Bain with a profitable exit for its TMC holdings.
The four US companies are customers for TMC’s flash chips. They joined the consortium to help TMC stay independent from Western Digital, which had tried to buy the company. Because Western Digital is Toshiba’s partner in the flash fab joint venture, a purchase would have reduced price competition for NAND chips.
The Wall Street Journal reports that the four will receive $4bn to $4.5bn for their TMC shares. That will give them a profit of a few hundred million dollars – not bad for an investment that lasted for less than a year.
FalconStor is an object lesson in the difficulty of turning around a company when it falls behind the competition.
Latest earnings for this small enterprise storage player show how much ground there is to catch up. At its peak in 2009, the company turned over $89m. Fast forward to 2018 and revenues were $17.8m and net loss was $906,000.
FalconStor last week posted $4.5m revenues and a net loss of $500,000 for Q1 2019. The company posted $5m revenues and a similar net loss in the year-ago quarter.
In the earnings call CEO Todd Brooks, in cheerleader mode, said: “We do believe the company’s performance over the last several quarters has set up a fantastic opportunity for FalconStor and its shareholders.”
FalconStor achieved non-GAAP operating income of $0.4m, the seventh consecutive quarter of non-GAAP operating profitability. It noted 112 per cent year-over-year sales growth in the Americas, although total 2019 billings through the end of April increased only six per cent. The US success was not replicated elsewhere.
Early software-defined storage company
Founded in March 2000, FalconStor was an early software-defined storage startup with its IPStor software which virtualized server storage into an iSCSI-accessed SAN. The company went public in 2001 and grew rapidly.
FalconStor’s virtual tape library (VTL) software was used by EMC in its Clariion Disk Library in 2004. That was in pre-Data Domain days; EMC acquired Data Domain, the deduplicating disk array backup target company, in 2009.
Early failing software-defined storage company
FalconStor grew revenues nicely until 2009, when sales started falling. It experienced turmoil in September 2011 when founder and CEO ReiJane Huai committed suicide in the face of lawsuits about improper customer payments. These were settled for $6m in 2012.
The rollercoaster curve
Three subsequent CEOs failed to stem the revenue decline as all-flash arrays came along and newer software-defined storage companies such as Nexenta emerged. Revenues fell from 2009’s $89.5m to 2017’s $25.2m, as FalconStor failed to find new customers. The company appears to be largely dependent on its legacy customer base.
Now Brooks, who arrived in August 2017 from ESW Capital, is trying to turn the company around. He obtained fresh funding from ESW Capital and squeezed a profit of $1m for fiscal 2017 and since then, has ploughed a hard furrow in stony ground.
Brooks’s initiatives include product branding and scope. The company changed IPStor’s name to FreeStor Storage Server and is rebranding the software again as the Data Mastery Platform. FalconStor has added public cloud archiving and recovery to VTL and is integrating the software with deduplication under the Data Mastery banner.
Recovering business technology angst
VTL has become vital to FalconStor’s survival as it is the central and highest revenue-generating part of the Data Mastery Platform.
This product has legs: FalconStor commissioned the Evaluator Group to cast its peepers at VTL. In a paper published in April 2019, the research group wrote that VTL delivered up to 6x better price/performance than its leading competitor, understood to be Dell EMC’s Data Domain.
The Evaluator Group tested FalconStor software running on a 24-core X86 server with a dual 16Gbits/s Fibre Channel link to a 35TB Dell SCv3020 all-flash array target. A 39.24TB/hr backup rate was achieved.
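For scale, that backup rate converts to a per-second ingest figure like so (decimal units assumed):

```python
# Convert the quoted 39.24TB/hr backup rate to GB/sec (decimal units).
tb_per_hour = 39.24
gb_per_sec = tb_per_hour * 1000 / 3600
assert round(gb_per_sec, 1) == 10.9
```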
Moving forward?
In the earnings call Brooks set out three points of focus: “First, on continued delivery of operating profitability; second, on generating year-over-year billings growth; and then third, on key product expansion.”
The product expansion areas include disaster recovery and high-availability, and FalconStor aims to be hardware- and cloud-agnostic. It has to pay for product development from revenue and the last fund raise.
There is not a lot of money sloshing around: the cash balance at the end of Q1 2019 was $2.4m and net working capital was $0.9m.
Can FalconStor succeed and get back from being a footnote in storage history to higher revenues and renewed profitability? It has to demonstrate relevance in today’s IT world by providing data storage and management services for a hybrid, multi-public-cloud world.
In essence the company is a high-performance virtual tape library data protection business extending into data management.
Preserving the VTL data-moving niche is vital, as is building cloud-based services on top. But, at some stage FalconStor has to move beyond the VTL niche and expand cloud-based services.
It faces competition from big backup appliance companies, such as Cohesity, Commvault and Rubrik, and from backup target companies such as ExaGrid.
Reaching revenue growth and profitability will be a huge step forward and that could happen in a quarter or two…if things go well.
Reduxio today announced the Magellan Cloud Data Platform via a jargon-drenched press release.
This is a “native cloud storage and data management platform with its breakthrough microservices architecture that provides enterprises deploying stateful container applications never-before-available capability and flexibility for Kubernetes-based private, hybrid, and multi-cloud infrastructure”. Try saying that in one breath.
Reduxio said Magellan delivers instant application mobility and rich data management capabilities. The platform will allow enterprises to unify multiple infrastructure islands into a single data cloud, the company claimed.
Cloud storage will be big
“Cloud storage will be a $97 billion market worldwide in just three years, driven in large part by the shift to cloud-native applications, but deployments are still far too burdened by the technologies of the past,” said Ori Bendori, Reduxio CEO.
Reduxio cites Gartner estimates that “by 2022, more than 75 per cent of global organisations will be running containerised workloads in production, which is a significant increase from fewer than 30 per cent today.”
But many containerised applications today run on storage systems created for on-premises workloads and retrofitted to support containers and clouds. This creates siloed and inflexible infrastructure, according to Reduxio.
Magellan fixes this because it delivers cloud-native container portability and scalability, the company said.
Magellan will support the analysis, storing and sharing of large amounts of data better and more cost-effectively than any other product, Reduxio said. But the company has released no numbers to support its big claim and has not published a data sheet to describe the technology.
Magellan voyage
Reduxio is working on Magellan evaluation and development with various customers and partners. It has joined the Linux Foundation and Cloud Native Computing Foundation.
According to Reduxio, Magellan can accelerate GPUs for AI/ML workloads, and it has joined the NVIDIA Inception virtual accelerator program intended to help startups in AI and data sciences.
Pivot
Magellan goes live in autumn 2019 and its launch marks something of a pivot for Reduxio.
The Israeli startup began life in 2012 as the developer of primary storage, an HX/TimeOS array that could be rolled back to any point in time. It stored data in unique indexed, tagged and timestamped chunks.
However, Reduxio went through a re-organisation in May 2018 with a new CEO joining, executives leaving and fresh funding sought. The company changed tack to focus on Kubernetes containers and the public cloud.
This involved the notion of metadata describing storage objects, as slides presented at an October 2017 Reduxio briefing in Tel Aviv illustrated.
The upshot is that Reduxio has applied its metadata-driven technology, developed originally for primary storage, to stateful containers instead.
HPE is buying the supercomputer company Cray for $1.3bn cash.
The deal is expected to be accretive to HPE non-GAAP operating profit and earnings in the first full year following the close. HPE expects to incur one-time integration costs that will be absorbed within its FY20 free cash flow outlook of $1.9bn to $2.1bn. The transaction is expected to close by the first quarter of HPE’s fiscal year 2020.
Cray, headquartered in Seattle, WA, has a strong position in the top 100 worldwide supercomputer installations. In March 2019, together with Intel, it won a $500m contract from the US Department of Energy to build the Aurora exascale supercomputer.
Cray has 1,300 employees worldwide and reported revenue of $456m for its most recent fiscal year, up 16 per cent on the previous year. It builds XC and CS supercomputers, with Shasta representing the next generation.
Cray-Intel Aurora exascale system mock-up.
HPE is an established player in high performance computing and bought Cray’s smaller rival SGI in 2016. The Cray acquisition gives it the full spectrum of HPC compute, storage, system interconnects, software and services to layer on top of its existing capabilities. HPE will deliver future HPC-as-a-service and AI/ML analytics offerings through its GreenLake subscription scheme.
Exascale future
The HPC market is growing steadily as organisations adopt more AI and machine learning applications, which need faster processing and access to larger amounts of data.
HPE estimates the HPC-supercomputing market will grow from $28bn in 2018 to $35bn in 2021, a nine per cent compound annual growth rate. It anticipates more than $4bn of exascale opportunities will be awarded over the next five years.
HPE reckons it can deliver higher revenue growth by selling into the HPC/supercomputing market as it heads towards exascale requirements, and also accelerate commercial supercomputer adoption. It hopes to realise cost savings by using Cray technologies such as the Slingshot interconnect, as well as other synergies from bringing the two companies together.
CEO quotes
This is CEO Antonio Neri’s big bet to leapfrog Dell Technologies and IBM and step into the forefront of the high-performance computing market.
In a canned quote he said: “Cray is a global technology leader in supercomputing and shares our deep commitment to innovation. By combining our world-class teams and technology, we will have the opportunity to drive the next generation of high performance computing and play an important part in advancing the way people live and work.”
Another prepared quote, from Cray President and CEO Peter Ungaro, said: “This is an amazing opportunity to bring together Cray’s leading-edge technology and HPE’s wide reach and deep product portfolio, providing customers of all sizes with integrated solutions and unique supercomputing technology to address the full spectrum of their data-intensive needs.”
Mellanox has pumped an undisclosed sum of money into Excelero, its second announced investment in a storage startup in a week. The amount is likely to be small, as with its investment in WekaIO, announced yesterday.
Excelero provides NVMesh block storage accessed over an NVMe fabric, while WekaIO provides file access that also uses an NVMe link between its front-end software and the target storage drives.
Both companies’ products are likely to be used where Mellanox’s Ethernet and InfiniBand switches are deployed. Excelero supports RoCE (Remote Direct Memory Access over Converged Ethernet) as well as InfiniBand.
In a scripted quote, Nimrod Gindi, head of investments at Mellanox Technologies, said: “Strategic partnerships with storage leaders such as Excelero and WekaIO are critical to develop the high-performance storage ecosystem and enable our customers to achieve efficient and scalable data processing and analytics capabilities to drive their businesses forward.”
Excelero CEO Lior Gal was equally effusive: “Mellanox’s low-latency networking and leading support for RDMA on both Ethernet and InfiniBand help us deliver the fastest distributed block storage solutions to our customers. We have partnered closely with Mellanox and welcome their investment in Excelero.”
Israel-based Excelero was founded in 2014 and took in a $3.5m A-round and a $12.5m B-round in 2015. In 2017 it received strategic investments from Micron, Qualcomm and one other unnamed business.
The company’s last reported funding round was in 2018 and involved Western Digital Capital. Total funding is $35m.
Update: Investment timing
Mellanox did not say when it invested in Excelero or the amount. We asked Excelero and Gal told Blocks & Files: “Mellanox’ investment with Excelero was an extension to Round B that was already announced last year. I can’t share the exact time or amount but it’s the fourth strategic investor we didn’t name last year.
“Mellanox, now with announcing us and WekaIO, decided to change strategy and be public about their bets in the future data center SDS technologies.”