We already knew Liqid’s PCIe 4.0 SSD was fast – and now Liqid, with Dell’s help, has proved it.
Liqid makes composable systems that use a PCIe fabric to link CPU, GPU, storage and networking server elements. It stuffed its LQD4500 PCIe gen 4 add-in-card SSD inside a Dell PowerEdge R7515 rackmount server and produced a wicked fast box.
The R7515 is a single-socket server in a 2U cabinet. It uses an AMD EPYC 7742 CPU with 64 cores, 128 PCIe gen 4 lanes and a base 2.25GHz clock rate boostable to 3.4GHz, and it supports up to 4TB of DRAM.
Dell PowerEdge R7515
Dell’s R7515 also has a couple of PCIe gen 4 slots, one of which can be occupied by a Liqid LQD4500 add-in-card SSD. The card has an NVMe interface and 30TB of capacity.
Liqid ran an R7515/LQD4500 system through its paces at its Colorado headquarters, with performance tests for random and sequential IO. The test system delivered 3.14 million random read IOPS (4K block size) and 961,663 random write IOPS. With 8K blocks, the figures were 1.8 million random read and 479,889 random write IOPS.
Dell R7515 + Liqid LQ4500 IOPS results
Sequential bandwidth with 128K blocks was 24.471GB/sec reads and 10.052GB/sec writes. It was about the same with 256K blocks – 24.481GB/sec reads and 10.028GB/sec writes.
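As a sanity check, IOPS and bandwidth figures are linked by block size, so one can be derived from the other. Here is a quick Python sketch using the figures above, assuming the usual conventions of 4K = 4,096 bytes, 128K = 131,072 bytes and decimal gigabytes:

```python
# Back-of-the-envelope check: bandwidth = IOPS x block size.
# Assumes 4K = 4,096 bytes, 128K = 131,072 bytes and decimal GB (10^9 bytes).

def iops_to_gbps(iops: float, block_bytes: int) -> float:
    """Convert an IOPS figure at a given block size into GB/sec."""
    return iops * block_bytes / 1e9

def gbps_to_iops(gbps: float, block_bytes: int) -> float:
    """Convert a GB/sec figure at a given block size into IOPS."""
    return gbps * 1e9 / block_bytes

# 3.14 million 4K random reads imply roughly 12.9GB/sec of throughput...
print(f"{iops_to_gbps(3_140_000, 4096):.1f} GB/sec from 4K random reads")

# ...while 24.471GB/sec of 128K sequential reads works out at about 187,000 IOPS.
print(f"{gbps_to_iops(24.471, 131072):,.0f} IOPS behind the 128K sequential read figure")
```

The two workloads are not directly comparable, but the arithmetic shows why small-block random IOPS and large-block sequential bandwidth are quoted separately.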
Socket to me
Liqid claims the R7515, “powered by AMD and Liqid, enables users to deploy server systems with the highest compute and storage performance on the market. This leading edge Gen4 PCIe platform is enabling next levels of performance not previously available from a single server, let alone a single socket server.”
You can check out the case study for more details. PCIe gen 4 systems are going to reset everyone’s PCIe gen 3-based performance clock.
Enterprise SSD sales are rising sharply, but so too are nearline disk drive sales, according to a TrendFocus SSD industry review of the first 2020 quarter. Nearline disks cost about ten per cent as much per terabyte as SSDs – which is the very simple reason for the format’s robust sales performance.
About 12.7 million enterprise SSDs shipped in the quarter, totalling 20EB in capacity – a 134 per cent capacity jump year over year (Y/Y). Even so, enterprise SSD capacity shipments amount to around 10 per cent of the HDD capacity shipped in the quarter, and HDD capacity shipped was up 94 per cent Y/Y.
The chart above, by Wells Fargo analyst Aaron Rakers, shows enterprise HDD capacity shipped rising more steeply than enterprise SSD capacity from 2012 onwards, with the HDD curve steepening further in 2019. Enterprise SSD capacity shipments also accelerated in 2019, but at a slower rate than HDDs.
Rakers tells subscribers: “On a $/TB basis, we estimate that enterprise SSDs are approximately 10x more expensive than nearline enterprise HDDs.”
Another Rakers-produced chart, shown below, plots the $/TB price differential between enterprise SSDs and nearline disk drives. It plummeted in 2013, fell at a slower rate in 2014, slower still in 2015-2018, and levelled off in 2019. The blue bars in the chart show SSDs at nine to ten times the nearline HDD price in 2019 and 2020’s first quarter.
More layers for 3D NAND and energy-assisted recording for HDDs will soon hit the market. These capacity advancements should help maintain the cost differentials between the two media types.
SSDs are about 2.2x more expensive on a $/TB basis than mission-critical HDDs – the 10,000rpm 2.5-inch disk drives. This helps explain why all-flash arrays are supplanting disk-based arrays for storing primary enterprise data.
The TrendFocus study shows that the category of enterprise SSDs using the PCIe/NVMe interface accounted for just over 11EB of capacity, nearly tripling Y/Y, and 55 per cent of total capacity shipped.
Rakers comments: “We continue to believe enterprise PCIe/NVMe SSDs will grow to account for the majority of the enterprise SSD capacity shipped over the next few years.”
The approximate supplier enterprise SSD capacity shipped shares in the quarter were:
Samsung – 42%
Intel – 23%
Micron – 10%
Western Digital – 7%
Those shares are different if we just look at PCIe/NVMe enterprise SSDs:
Samsung – 42%
Intel – 28%
SK hynix – 11%
Western Digital – 9%
Micron – 3%
Kioxia – 2%
There’s more competition and Micron needs to up its game.
PCIe 4.0 SSDs, which double PCIe gen 3’s speed, are coming. These will help spread the NVMe message and widen the access speed gap over mission-critical disk drives. Blocks & Files thinks PCIe gen 4 could be the kiss of death for this class of disk drive.
The faster PCIe interface could also make QLC (quad-level cell) SSDs more attractive for fast-access archive data. The future is bright for NAND foundries and SSD manufacturers. But in the absence of technology development disasters, the future for nearline disk drives is also bright – because SSD technology cannot overtake disk drive technology on cost per terabyte.
Veeam, the backup software vendor, has announced a blow-out first 2020 quarter, with customer count passing 375,000 and annual recurring revenue (ARR) growing 21 per cent.
Veeam CEO Bill Largent said in a statement today: “The first few months of the year have been unusual to say the least for all of us. We are having to conduct business in a different way, ensuring that the Veeam community – employees, partners and customers – stay safe during this period. I’m very proud of our 4,300 employees as we have continued to grow during this challenging time while not missing a beat on customer support.”
Veeam reports 97 per cent year-on-year (Y/Y) growth in Veeam Universal License subscriptions. And it pointed to the IDC Semi-Annual Software Tracker, 2019H2, which records the fastest Y/Y revenue growth for Veeam – at 20.5 per cent – among the top five data protection vendors.
This matches the company’s performance in the previous edition of the IDC tracker, when it had 22.4 per cent Y/Y growth for the first half of 2019 and estimated revenues of $427m. The company said in May 2019 that it had hit a billion dollar revenue run rate.
Veeam was bought by the private equity firm Insight Partners for about $5bn earlier this year.
StorONE CEO Gal Naor today challenged storage array vendors to reveal their prices in public. “Buying an enterprise storage system shouldn’t be as hard as buying a car, but should be as easy as buying a smartphone,” he said.
The software-defined storage startup has built an online configurator with openly available pricing for its TRU (Total Resource Utilisation) S1 software, which it has engineered for efficiency and speed. The software supports block (Fibre Channel, iSCSI), file (NFS, SMB) and object (S3) use cases and runs on industry-standard servers from Dell, HPE and Supermicro.
“S1:TRUprice simplifies the storage procurement process,” Naor said. “No more back and forth haggling over solution costs. No more wondering if you would have gotten a better deal at the end of the quarter. S1:TRUprice allows enterprises to get the best pricing, upfront, while also getting the best performance and data resiliency features they need.”
StorONE licenses S1 software on 1, 3 or 5-year subscriptions. When a user selects a system and confirms purchase, StorONE ships the whole system to the customer for contactless, remote installation and training. And it thinks it has a big price advantage over its rivals.
George Crump, StorONE’s chief marketing officer, said customers “can come to our site, price out their exact solution and compare it to what they are paying now. In most cases, we are confident they will find that the 3-year TCO on our solution will be less than what they are paying for one year of maintenance with their current vendor.”
Playing around with the StorONE pricing configurator.
Interested people can trek over to the StorONE website’s pricing section where they can use a three-year TCO pricing configurator without registering their details, and with instant prices as they configure a system’s server, availability, media, capacity and networking cards.
We tried it out. A 736TB high-availability all-flash HPE-based system had a $346,318.14 three-year TCO. Changing to a Supermicro server moved the price to $341,860.78. A 1PB standard (not HA) all-disk Supermicro server system would cost $138,36.00. It’s fun playing around with the options and easier to use than the Nimbus Data ExaFlash One pricing configurator, which we also tried out this week.
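To put those configurations on a comparable footing, dividing the quoted three-year TCO by raw capacity gives a rough cost per terabyte. A quick sketch using the two all-flash quotes above:

```python
# Rough $/TB comparison of the quoted configurations: 3-year TCO / raw capacity.
configs = {
    "HPE, 736TB all-flash HA": (346_318.14, 736),
    "Supermicro, 736TB all-flash HA": (341_860.78, 736),
}

for name, (tco_usd, capacity_tb) in configs.items():
    print(f"{name}: ${tco_usd / capacity_tb:,.2f} per TB over three years")
```

Both work out to roughly $465 to $470 per raw terabyte over three years, before any data reduction is taken into account.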
Will Dell EMC, HPE, Hitachi Vantara, IBM, NetApp and Pure Storage publish their prices? We all know the answer to that.
Update: Media share information and an explanation of how StorageSphere differs from DataSphere have been added.
Disk drives will store 54 per cent of the world’s data in 2024, down from 65 per cent in 2019. So says IDC, which has trimmed its 59 per cent forecast from 2018.
The tech research firm thinks core data centres (on-premises and public cloud) will hold 60 per cent of the world’s data in 2024, edge devices will account for less than 10 per cent and endpoint devices will hold about 30 per cent.
The 2024 data storage capacity forecast is revealed in IDC’s latest Global StorageSphere report. IDC analysts estimate the installed base of storage capacity worldwide will grow 16.6 per cent this year to 6.8 zettabytes (ZB). And they reckon the installed base will grow at a 17.8 per cent CAGR between 2019 and 2024. Our maths says this means an installed base of 11.23ZB in 2024, of which 6.06ZB will be stored on disk drives.
John Rydning, IDC’s Research VP, issued a quote: “The volume of data stored in the Global StorageSphere is doubling approximately every four years. While the covid-19 pandemic will hamper economic growth and IT spending, it will have little impact on the expansion of the Global StorageSphere as consumers and organisations are likely to extend the useful life of existing storage capacity to keep up with the demand for storing more data, especially near term.”
Data storage types
IDC looks at storage across six media types: HDDs, SSDs, tape, non-volatile memory – NAND, non-volatile memory – Other, and optical media. A chart from its 2018 study, sponsored by Seagate, shows the share trends from 2010 to 2025 across the six kinds of media:
Update: Rydning supplied us with media share numbers for 2018 and 2024 to provide an indication of how they will change:
HDD – 2018 – 65%, 2024 – 54%
Tape – 2018 – 14%, 2024 – 18%
Other (NAND, et al) – 2018 – 21%, 2024 – 28%
We charted them:
Why is tape increasing its share of the installed base? Rydning told us: “Tape is increasingly being used for archiving data rather than for backups, which increases the length of time tape cartridges are retained before being replaced, thus it stays in the installed base longer.”
StorageSphere and DataSphere
The Global StorageSphere measures the worldwide installed base of storage capacity, and how much of this capacity is utilised or available each year. IDC expects the utilised share of the Global StorageSphere installed base to climb seven per cent from 2019 to reach 67.5 per cent in 2024.
Update: We should be aware that IDC’s DataSphere and StorageSphere are different. Rydning clarified the difference: “IDC’s Global DataSphere is a measure of the amount of data created each year. The Global StorageSphere is a measure of the installed base of storage capacity, and the amount of data stored each year. Some folks tend to use DataSphere metrics as an indicator of demand for storage. The two metrics are definitely related, but it can be misleading to show the growth of the DataSphere as a proxy for the demand for storage capacity.”
READER SURVEY Relational database systems are in many ways the workhorses of IT. But how are they holding up in the modern age?
There are newer types of data stores, such as NoSQL and graph databases, waiting in the wings, capable of handling large and complex datasets your business may accumulate.
Take those, and add in technologies designed to run seamlessly across a hybrid infrastructure – think containers, serverless applications, and cloud-native software – and fast-changing programs built on modern Agile, DevOps and CI/CD principles, and it’s possible your IT infrastructure, and in particular your storage, will be stressed in ways it was never designed for.
The question, then, is what are the best ways to deal with all of this?
For instance, what capabilities are missing from your infrastructure, and what would appropriately modern storage look like to you? How well do others in your organization understand the challenges and the opportunities? In this Register and Blocks & Files reader survey, we want to find what’s really going on.
Click here to take part. Once we have analysed the results, we will report back to you on the state of play.
As usual, your responses will be anonymous and your privacy assured.
Samsung has introduced its first ‘ruler’ SSD and says the new format opens the door to large increases in server flash capacity.
The Samsung PM9A3 SSD uses the E1.S enlarged gumstick format. In a flash-optimised 1U server reference design, 32 x 7.68TB drives fit in the front bays, giving 245.76TB of capacity.
Jongyoul Lee, Samsung SVP for the memory software development team, who presented the drive at the OCP Virtual Global Summit on May 12, said: “Offering the most 1U server-optimised form-factor, the PM9A3 will improve space utilisation, add PCIe Gen4 speeds, enable increased capacity and more. We see it eventually becoming the most sought-after storage solution on the market for tier one and tier two cloud data centre servers, and one of the more cost-effective.”
Specifications
The PM9A3 has a PCIe gen 4 interface and uses Samsung’s sixth generation V-NAND with 100+ layers and TLC (3bits/cell) format. Capacities range from 960GB to 7.68TB.
SNIA E1.S diagram
E1.S – S for ‘short’ – is an SNIA-approved variant of the EDSFF or ruler form factor and measures 111.49mm x 31.5mm. The current M.2 gumstick format is 110.0mm long by 22mm wide. According to Intel, the E1.S design is three times more thermally efficient than U.2 (2.5-inch) form factor SSDs. It is also hot-pluggable.
The E1.L – L for ‘long’ – form factor measures 318.75mm x 38.4mm. The greater surface area can hold three to four times as many flash chips – potentially 30.7TB per drive and 983TB in a 1U server. Intel says the E1.L is twice as thermally efficient as U.2 drives.
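The 1U capacity figures are simple drive-count arithmetic. A quick sketch, assuming 32 front bays per 1U chassis for both form factors, as in the E1.S reference design:

```python
# 1U capacity arithmetic for the two ruler variants, assuming 32 front bays
# per 1U chassis (as in the E1.S reference design described above).
bays_per_1u = 32

e1s_drive_tb = 7.68   # PM9A3 top capacity
e1l_drive_tb = 30.7   # potential E1.L capacity quoted above

print(f"E1.S: {bays_per_1u * e1s_drive_tb:.2f}TB per 1U server")   # 245.76TB
print(f"E1.L: {bays_per_1u * e1l_drive_tb:.1f}TB per 1U server")   # 982.4TB, quoted above as 983TB
```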
Supermicro U.2, M.2, E1.S and E1.L diagram
Samsung is making the PM9A3 available in E1.S, M.2 and U.2 formats. E1.S and U.2 performance is: 900,000/180,000 random read/write IOPS, up to 6.5GB/sec sequential reads and 3.5GB/sec sequential writes.
It is slower in the M.2 form factor, reflecting the PCIe gen 3 and gen 4 speed difference: 500,000/70,000 random read/write IOPS, up to 3.5GB/sec sequential reads and 1.75GB/sec sequential writes.
E1.S reference design
Samsung is open sourcing an E1.S platform reference design to help data centre managers adopt and deploy the E1.S-based storage system. An Inspur-built reference system is available.
Blocks & Files thinks Intel, Kioxia, Micron, SK hynix and Western Digital will have E1.S designs in their pockets and bring them to market this year and next. The E1.L format will enable a bigger jump in 1U server capacity than E1.S, but adoption may be held back by concerns about the effects of drive failure. A drive crash would put more data at risk, increasing the so-called failure blast radius.
Commvault, the data management vendor, missed analyst revenue estimates for the fourth quarter ended March 31, citing the effects of covid-19.
Revenues fell nine per cent to $164.7m. Full fy2020 revenues were down six per cent to $670.9m and net loss was $5.6m (2019: $3.6m net income).
CEO Sanjay Mirchandani expressed confidence in the “long-term opportunities for the company, our strategy, and our return to profitable growth. Our products are mission critical; our large enterprise customer base remains strong; and our employees are resilient.
“All of this, when combined with our financial stability, will enable us to weather these challenges and continue to deliver industry-leading solutions to our customers.”
Steep Q4 fy2020 revenue decline due to Coronavirus pandemic.
Software and products revenue in Q4 was $66.4m, down 18 per cent. Services revenue in the quarter declined two per cent to $98.3m. For the full year, software and products revenue was $275.3m, a decrease of 11 per cent, and services revenue eased one per cent to $395.6m.
The quarter’s operating cash flow was $32.5m, down a tad from the year-ago $36.6m. Full year operating cash flow was $88.5m, down a tad more from fy2019’s $110.2m. Total cash, restricted cash and short-term investments were $339.7 million at quarter end.
In the earnings call Mirchandani said the company experienced a “decline in the volume of smaller portfolio transactions, likely due to SMB customers that may be disproportionately challenged. Additionally, we believe customers may defer routine capacity add-ons until economic conditions … begin to stabilise. Even with the mission critical nature of our products, we expect new customer signings to remain challenged, because they require a higher touch sales process.”
Growing pains
Mirchandani joined Commvault just over a year ago with the remit to return the company to growth. In the earnings call he said: “I joined Commvault with a commitment to return the company to responsible growth and this continues to be our number one priority.” He is convinced Commvault will weather the storm and succeed in the long run.
However, the company has reported three declining quarters in a row and it has the additional complication of having to deal with the activist investor Starboard Value, which sank its claws into the company last month.
Mirchandani said: “We’ve had a number of constructive conversations with [Starboard] … And I think … their expectations and their wants are the same as ours. We’re aligned in terms of the long-term shareholder value and a balanced growth profile for the business, and we’re excited.”
Comment on subscriptions
Commvault is experiencing a triple whammy: a prolonged switch from perpetual licenses to subscriptions; a switch from on-premises to SaaS-delivered services; and intensified competition, led by Veeam, Cohesity and Rubrik. Earlier this month it sued Cohesity and Rubrik for patent infringement.
Mirchandani is bullish about the switch to recurring revenue from subscriptions: “They are [a] growth driver for us in fiscal year 2021. In Q4, we added approximately 150 subscription customers and revenues now represent over 40 per cent of our software and product revenue. With fiscal year 2021 as our first full renewal cycle, we are focused on this opportunity.”
Commvault started its move to subscriptions in 2018, with three-year contracts. These come up for renewal in the current fiscal year, fy2021. Wells Fargo analyst Aaron Rakers told subscribers: “Commvault’s shift to subscription (now around 40 per cent plus of software revenue) has been a headwind on top-line revenue growth given the pricing difference vs. traditional perpetual licenses.”
He added: “The company estimates $50m of software renewal opportunity for fy2021 – mostly weighted to 2H fy2021; company noting conservatism at this point on upsell opportunity.”
In other words the next quarter and the one after that may still show revenue decline but then, at long last, revenues could increase and go on increasing: “We think Commvault’s ability to sustain an approximate 90 per cent subscription renewal rate, coupled with upsell opportunities … should be viewed as positive growth drivers into 2021 plus.”
CFO Brian Carolan said next quarter’s revenue outlook is $150m to $155m, a six per cent decline from a year ago at the $152.5m mid-point. He said: “Our revenue outlook is underpinned by the successful renewal of two of our largest subscription customers. These renewals were signed in Q4 before their contract expirations and will represent combined software revenue of approximately $10m in Q1, FY 2021. We are working diligently to exceed our guidance and to deliver year-over-year software revenue growth.”
Saganworks, a US startup, aims to revitalise file:folder user interfaces with virtual reality rooms that create a memory or mindmap of user files.
The thinking is that storing files in a 3D space makes finding them easier than storing them in lists in a 2D space such as on a computer desktop. This is analogous to mind mapping, as Shanley Carlton, Saganworks QA testing and customer support manager, explains.
Users navigate the VR rooms with the arrow keys; file and folder objects sit on make-believe tables, bookshelves or walls, and are double-clicked to open. Files – documents, pictures, spreadsheets, etc. – and folders are dragged into the rooms and dropped wherever users want them placed.
CEO Donald Hicks issued a canned quote: “We don’t live in a 2D world, yet that’s how we’ve been retaining knowledge and memories from the time of early cave dwellers until now. Human interaction is due for innovation. We can do that by giving people tools to store their memories, work and interests in a 3D space that they can create and relate to.”
Sagan stands for Spatially Accessible Gallery of Archived kNowledge and is a tribute to Carl Sagan, the celebrated American astronomer. User rooms – or ‘Sagans’ – are stored in Azure and delivered as a service. The video below shows the software in action:
There are free, individual and various advanced pricing plans with a range of features. Individual pricing is $59.99 per year and includes unlimited rooms, 80GB of storage, sharing, room furniture, and unlimited community access. Family access is coming shortly. Business plans exist but pricing details are not public. Saganworks apps run on iOS and Android smartphones and tablets.
Saganworks was founded in Ann Arbor, Michigan by Hicks, the co-founder and CEO at LLamasoft, a venture-backed supply chain software startup, which was sold in 2017 to a private equity consortium. LLamasoft had $55m revenue in 2016 and more than 700 customers.
Micron has updated its NVMe SSD product range with two 2TB devices – one using TLC flash memory and the other using QLC.
The 2210 uses 96-layer QLC flash and the 2300 has 96-layer TLC NAND. They sit above the 1300 and 2200 client SSDs in Micron’s portfolio and the company is pitching the drives as desktop and notebook disk drive replacements. It claims they are up to 15 times more power efficient than similar capacity disk drives.
Both devices are single-sided M.2 format and have SLC write caches (which Micron brands ‘Dynamic Write Acceleration’). The SLC caching means the random write IOPS performance is better than the read performance.
The 2210 is effectively a QLC makeover for the 2200, which uses 64-layer TLC flash and tops out at 1TB in its single-sided M.2 form factor.
The 2210 handles up to 265,000/320,000 random read and write IOPS and has up to 2.2 GB/sec sequential read and 1.8 GB/sec write bandwidth.
The 2300 benefits from TLC flash’s faster access speed to deliver up to 430,000/500,000 random read/write IOPS and up to 3.3/2.7GB/sec sequential read/write bandwidth. TLC flash also has greater endurance – a longer working life – than QLC. At the 2TB capacity level the 2210 has a 720TB-written rating and the 2300 is rated at 1,200TB written.
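Those TB-written ratings are easier to compare as drive writes per day (DWPD), which requires assuming a warranty period. Five years is typical for client SSDs, though the figures above don’t state one, so treat this as a rough sketch:

```python
# Endurance as drive writes per day (DWPD): TB written / (capacity x days).
# Assumes a five-year warranty period, which is typical for client SSDs but
# is not stated in the figures above.
def dwpd(tb_written: float, capacity_tb: float, warranty_years: int = 5) -> float:
    return tb_written / (capacity_tb * warranty_years * 365)

print(f"2TB 2210 (QLC): {dwpd(720, 2):.2f} DWPD")    # about 0.20 DWPD
print(f"2TB 2300 (TLC): {dwpd(1200, 2):.2f} DWPD")   # about 0.33 DWPD
```

Both are light-duty ratings, consistent with the drives’ positioning as desktop and notebook disk replacements rather than data centre devices.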
Both drives have a two million hours MTBF rating and support TCG Opal 2.0 and Pyrite 2.0 for security. You can check out the 2210 and 2300 product briefs. Both products are available now but we don’t have pricing information. Micron’s announcement does talk about offering flash capabilities at hard disk drive-like price points.
Scale Computing, the hyperconverged infrastructure supplier, has launched the HC3250D, an all-NVMe data centre system for performance-centric database, analytics and VDI work.
Dave Demlow, Scale VP of product management, said in a statement: “Both persistent and non-persistent VDI workloads thrive on the newly redesigned underlying storage layer, which is created to maximise performance. With the new pressure on IT to enable remote work whenever necessary, this appliance is an excellent foundation for VDI deployments from several hundred users to a few thousand.”
Steve McDowell, an analyst at Moor Insights & Strategy, supplied a canned quote: “Enterprise and SMB customers have a need for an HCI solution that can service larger databases, mid-range VDI, and other performance-intensive use cases.”
The HC3250D is the first model in the HC3000 Series and operates in enterprise and SMB data centres and also at rack-equipped remote and branch office and edge sites. Its NVMe storage architecture requires no manual configuration and consumes less system RAM, according to Scale, which means more RAM is made available to virtual machines and their applications.
The HC3250D has more CPU power, faster networking and speedier storage than its HCxxxx sisters. This HC3000 Series sits between the HC1000 entry-level data centre appliances and the high-end HC5000 appliances.
A comparison with the HC1000 and HC5000 shows the HC3250D is the only product in Scale’s HCxxxx range going above 10 Gigabit Ethernet, with its support for 2 x 25GbitE links. It has the same 128GB to 1,536GB memory range as the HC5250D. Also, the HC3250D is the only Scale data centre product with all-NVMe SSD storage – up to 76.8TB raw capacity in its 1U cabinet. Other models use slower-access SSDs and/or nearline disk drives.
HC3250D in minimum cluster configuration of 3 appliances.
All Scale systems run HyperCore OS, with applications running in KVM virtual machines and accessing ‘Scribe’ storage resources. They feature intelligent automation for self-healing and high availability to keep clusters running through component and appliance failures. Scale supplies integrated disaster recovery capabilities that protect data and workloads by replicating to remote sites for fast failover and recovery.
Recently HPE boosted its SimpliVity 325 HCI system for VDI use by giving it dual AMD EPYC 7002 processors. The SimpliVity 325, 1U in size, has more memory than Scale’s HC3250D, at 2,048GB, but slower networking, with only 1 and 10GbitE. It also has less storage: up to 6 x 1.92TB SSDs, understood to be SAS interface drives.
Scale had not released HC3250D pricing and availability details at the time of publication.
StorageOS, a UK startup, has released version 2 of its eponymous software, which effectively reinvents virtual SANs for containers.
Alex Chircop, CEO, said StorageOS 2 delivers production-grade storage for increasingly complex, large clustered Kubernetes deployments.
According to StorageOS, traditional storage arrays cannot handle the complexity of clustered deployments at scale. “Kubernetes users working with increasingly complex deployments require storage that delivers predictability for replication and failover,” Chircop said in a statement. “Users are also deploying more mature Kubernetes environments resulting in a need for production-grade storage.”
StorageOS was founded by three storage-skilled execs at major financial institutions and a fourth exec who was involved in enterprise storage. Such institutions are customers for mainstream enterprise storage arrays, virtually all of which have CSI plug-ins for Kubernetes – meaning they can provide storage for containers. They will also have much experience of VMware and knowledge of vSAN.
The company said paying customers include financial services and life sciences firms and service providers, and that there have been more than 3,000 cluster installs since the first version of the product was released.
StorageOS architecture
StorageOS aggregates the local disk storage in a cluster of servers (nodes) into one or more virtual block storage pools. The cluster can be a traditional cluster, a hyperconverged system or a set of several clusters. The multi-cluster scenarios we can envisage include topologies such as a centralised storage cluster with satellites consuming the storage, and data replicated between the clusters for high availability and disaster recovery.
There must be a minimum of three nodes in the cluster. Storage in the pool is carved out into thinly provisioned virtual volumes. Each node has a storage container in which StorageOS is deployed. Application containers in the nodes mount and access these virtual volumes via the storage container.
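To illustrate the idea, here is a toy Python model – illustrative only, not StorageOS code – of aggregating node-local capacity into a pool and carving it into thinly provisioned volumes:

```python
# Toy model of the pooling concept described above - illustrative only,
# not StorageOS code.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    local_capacity_gb: int

@dataclass
class VirtualVolume:
    name: str
    provisioned_gb: int   # what the application sees
    written_gb: int = 0   # what actually consumes pool space (thin provisioning)

@dataclass
class StoragePool:
    nodes: list
    volumes: list = field(default_factory=list)

    def raw_capacity_gb(self) -> int:
        return sum(node.local_capacity_gb for node in self.nodes)

    def create_volume(self, name: str, size_gb: int) -> VirtualVolume:
        if len(self.nodes) < 3:
            raise RuntimeError("a minimum of three nodes is required")
        volume = VirtualVolume(name, size_gb)   # thin: consumes nothing until written
        self.volumes.append(volume)
        return volume

pool = StoragePool(nodes=[Node("node1", 2000), Node("node2", 2000), Node("node3", 2000)])
pool.create_volume("pg-data", size_gb=500)
print(f"{pool.raw_capacity_gb()}GB raw, 500GB provisioned, 0GB written so far")
```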
Blocks & Files diagram of StorageOS’ architecture. Disk drives can also be SSDs, and the software is optimised for NVMe SSDs.
Any container can mount a StorageOS virtual volume on any node, regardless of whether the container and volume are colocated on the same node or the volume is remote. Applications may be started or restarted on any node and access volumes transparently, even when the data sits on another node’s physical storage. Volumes are cached in DRAM to improve read performance and compressed to reduce network traffic.
This is, in effect, a virtual SAN set up expressly for containers. A control plane handles setup and ongoing operation, and the containers perform their storage IO through a data plane.
StorageOS runs on physical servers, virtualised servers or in the public cloud. It interoperates with Kubernetes, OpenShift, Amazon EKS, Azure AKS, Rancher and Docker. A StorageOS cluster will typically map one-to-one to a Kubernetes or similar orchestrator cluster.
The StorageOS container runs on all nodes in the cluster where app containers need to consume storage. The etcd distributed key:value store is used to maintain state and manage distributed consensus between nodes. This helps in recovery from node failure.
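For readers unfamiliar with the pattern, etcd typically holds small pieces of cluster state under leased keys, so peers notice when a node stops renewing its lease. A minimal, illustrative sketch with the Python etcd3 client follows; the key names are invented and are not StorageOS’s actual schema:

```python
# Illustrative etcd usage only - the key names are invented, not StorageOS's schema.
# A node registers its state under a short-lived lease; if it dies and stops
# refreshing the lease, the key expires and peers see it disappear.
import etcd3

client = etcd3.client(host="127.0.0.1", port=2379)

lease = client.lease(ttl=5)                                 # expires unless refreshed
client.put("/cluster/nodes/node1", "healthy", lease=lease)

# Any peer can read the current membership state.
for value, metadata in client.get_prefix("/cluster/nodes/"):
    print(metadata.key.decode(), "->", value.decode())

lease.refresh()                                             # a healthy node keeps renewing
```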
StorageOS V2.0
StorageOS V2.0 improves four aspects of the software. General performance has been accelerated and recovery time from a failure has been reduced. Synchronous replication has been added for high-availability, supporting up to five replicas of a primary volume.
Resiliency has been strengthened in large clustered environments that may experience more transient failures. A Delta Sync feature reduces the time to recovery on transient failures, allowing faster cluster convergence by only replicating the missed data to the node. It is designed to cope with unpredictable failure scenarios.
Also, the StorageOS container runs like any other application, with no dependencies on proprietary kernels, storage protocols or other layered services, meaning customers don’t suffer from these kinds of lock-in.
V2.0 is designed to enable security at every layer of the stack, adding encryption in transit: traffic between nodes is encrypted and authenticated.
A free developer edition of StorageOS V2.0 with 5TB is available and users can upgrade to Project and Platform editions for enterprise capabilities and comprehensive product support.
Competition
Blocks & Files asked StorageOS about competing products from Portworx, Kasten, Diamanti and mainstream storage array providers.
B&F: How is StorageOS better than/different from Portworx?
StorageOS: StorageOS differentiates itself in the market through a number of features:
Disaggregated Consensus for volume management – each StorageOS volume has a “mini-brain” capable of managing recovery and placement independently, effectively reducing the blast radius of any component failure. This enables the highest levels of reliability for clusters at scale by distributing redundancy whilst improving the bandwidth and lowering latency for deterministic performance.
Delta Sync – reduces the time to recovery on transient failures, allowing rapid cluster convergence by only replicating the missed data to the node.
Runs Anywhere – deployed as a container, all data services are optimized and integrated inline to the StorageOS data plane ensuring the lowest resource overhead and low latency performance. The StorageOS container runs like any other application, with no dependencies on proprietary kernels, storage protocols or other layered services.
Secure by default – StorageOS enables security at every layer of the stack with automated certificate management, secure endpoints and encryption of data between nodes.
B&F: How is StorageOS better than/different from Kasten?
StorageOS: StorageOS and Kasten are complementary and provide different services.
StorageOS is a software defined, cloud native storage platform, providing automation and self-service via a control plane and data services (such as replication, compression, encryption) through a data plane. Kasten is a data management product, providing, for example, backup and restore services that layer on top of other storage solutions.
B&F: How is StorageOS better than/different from Diamanti?
StorageOS: StorageOS is a software-only solution and has no dependencies on hardware. Diamanti has a special focus on providing Kubernetes solutions in a hardware appliance form factor.
B&F: If I have a storage array with a CSI plugin (e.g. Infinidat) or strong Kubernetes support (NetApp) where is the benefit in using StorageOS?
StorageOS: StorageOS provides a number of benefits compared to a traditional storage system:
As a software defined solution, deployed as a container, StorageOS runs anywhere a container can – on-prem, in the cloud or any hybrid.
StorageOS is Application Centric – storage volumes are provisioned to an application not a server/node, allowing storage to be able to follow an application as it scales, grows and moves between platforms.
Cloud Native Scalability – Cloud Native Workloads are very demanding, supporting thousands of containers, advanced workflows and dynamic scaling. StorageOS performs provisioning and cluster operations in milliseconds.
Comment
StorageOS performs provisioning and cluster operations in milliseconds, which does not seem that fast when compared with the sub-millisecond latencies that all-flash NVMe arrays deliver.
Wouldn’t any all-flash array with a CSI plug-in be as fast?
Dell EMC VP for Technology (PowerStore, XtremIO, VxFlexOS) Itzik Reich says: “Of course [but] it has nothing to do with the SSD drives, it’s all about the control path. What matters is how fast the mapping of a volume (and in some cases, the time it takes to format the FS).”
We asked storage consultant Chris Evans for the reasons an enterprise storage customer would use StorageOS software when they can get all-flash arrays with CSI plugins or VMware/Tanzu.
He said: “I guess the question is why use any SDS solution. Flexibility, cost, lock-in, these are the most obvious, however, I think with container integrated solutions the benefits are a bit more nuanced.
“I can integrate provisioning and other functionality into my application workflow in a way that operates way better than using a fixed array. I can make that infrastructure truly portable – move it to the cloud, another platform, all abstracted with nothing more than Kubernetes and some local disks.
“It means I can build on-demand test environments and tear them down five minutes later etc., etc. I doubt any storage array could cope with creating and destroying hundreds of volumes per hour (or more), whereas on StorageOS, those constructs are mainly in software on the same node as the application, so can be created/destroyed in milliseconds.”
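As a rough sketch of the on-demand pattern Evans describes, a test pipeline can create a claim, run its workload and tear the claim down again, all programmatically. This example uses the Kubernetes Python client; the “storageos” storage class name is a placeholder for whatever class the cluster exposes:

```python
# Sketch of an on-demand test volume: create a PersistentVolumeClaim, use it,
# then delete it. The "storageos" storage class name is a placeholder.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="test-env-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="storageos",
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
# ... run the test workload against the claim here ...
core.delete_namespaced_persistent_volume_claim(name="test-env-data", namespace="default")
```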