Dell EMC PowerProtect arrays (formerly branded Data Domain) have gained copy data management functionality via Actifio, through a deal prompted by an unnamed customer.
Actifio CMO Brian Reagan told us: “The primary driver here was one of our largest joint customers, who was looking to extend the useful life of their DD estate and also capitalize on the installed Actifio data management platform. They needed an integrated solution that could scale to support their global environment. Actifio and Dell share a commitment to customer success, so it was a natural conclusion.”
Actifio supplies copy data management software that makes virtual copies of databases and other data. It provides them on-demand to copy data users, such as test and dev, and compliance officers. It has used its OnVault software to store archival copies of this data on public clouds – AWS, Azure, GCP, IBM COS and others – or on-premises object storage systems – Dell EMC ECS, Hitachi Vantara HCP, NetApp StorageGRID and Scality.
The deal with Dell sees Actifio step outside the ranks of the public clouds and on-premises object storage systems for the first time, adding PowerProtect to that list. The work entails Actifio integrating PowerProtect interface functionality into OnVault, enabling the software to act as a source for data stored on PowerProtect arrays. According to Actifio, users get high-performance, scalable, instant access to data stores of any size.
OnVault can capitalise on a 7:1 fan-in ratio for cross-appliance deduplication and better space efficiency. That is, seven Actifio appliances can push data to a single PowerProtect system, which deduplicates data across those seven source appliances.
Actifio said it does not intend to sign similar deals with other suppliers of backup-to-disk (B2D) target systems, such as Commvault, Exagrid and Quantum: “We do not have plans to support other B2D systems,” Reagan said. “Our primary interoperability strategy with OnVault is object storage platforms on-premises or in the cloud, including EMC ECS, IBM cloud object storage, HDS content platform, Scality, and all hyperscalers.”
Deepening partnership
In a recent Silicon Angle webcast sponsored by Actifio, Reagan said: “We’re fortunate to have partnered with Dell EMC as one of our focus infrastructure partners.
Actifio CMO Brian Reagan appearing on Silicon Angle’s theCube
“We have reference architectures for converged infrastructures using the rail and the rack designs [with] the VxFlex OS underneath and really going after the database cloning market opportunities. So, bringing an, essentially, data centre pod architecture with Actifio software running inside to power these databases as a service opportunities that exist in the large enterprises.”
Separately, Reagan told us: “OnVault will be included as part of our OEM relationship with Dell, so all Database Cloning appliances will support this functionality.”
We understand the Actifio partnership with Dell EMC will be extended in the coming months.
V2.0 of Nutanix’s Karbon front-end wrapper for Kubernetes will enable Kubernetes clusters to run in a network-free isolation zone.
At time of writing, Nutanix had not officially announced Karbon 2.0, but Alexander Ervik Johnsen, Nutanix senior system engineer, discussed some features of the upgrade last week in a blog – currently 404ing: “Nutanix Karbon 2.0 introduces the availability of the Karbon Air Gap and Kubernetes upgrades. You can upgrade the Kubernetes version of your cluster using … karbonctl and use the Karbon Air Gap to manage your Kubernetes clusters off-line.”
Nutanix Karbon, introduced last April, is a wrapper around Kubernetes that makes it simpler to use, and so simpler to set up and run cloud-native applications.
A defence against infection from computer malware and ransomware is to store data offline in tape cartridges, separated from network access by a physical air gap. Another, called logical air-gapping, is to run applications with no network access, where that is feasible.
Until now that was not possible with Kubernetes-orchestrated containerised workloads. In operation, Kubernetes requires access to registries on the internet to download various containers.
Mind the Air Gap
“The Air Gap uses a local Docker registry, hosted on a separate VM, to provide Karbon services,” Ervik Johnsen wrote. “Deploying the Air Gap requires Internet access to download the deployment package from the Nutanix Support Portal and transfer it to a local web server. For [further] deployment steps, refer to ‘Deploying the Karbon Air Gap’ in the Nutanix Karbon Guide.”
With Karbon 2.0, users can download a bundle of containers from the Nutanix Support Portal and upload it to an air-gapped Nutanix environment.
Either Nutanix, a system integrator or the customer puts the downloaded deployment package binaries on a mobile storage device, takes it to the site, and installs the binaries there.
With Karbon 2.0, Nutanix has enabled access to Karbon through the Prism management utility, allowing administrators to use their Prism Central + Active Directory (AD) setup to add ‘read-only’ Prism users.
Nutanix Karbon 2.0 also allows IT admins to initiate one-click upgrades to upgrade Kubernetes on their clusters. Until now a Kubernetes upgrade could mean redeploying clusters or applications.
IBM has launched faster FlashSystem arrays, in a move that also sees it simplify its block access storage array line-up.
The company now goes to the non-mainframe market with a single storage array family under the FlashSystem name, which uses more or less the same hardware chassis and the same software stack across all models. The company is retiring the Storwize brand.
Eric Herzog, IBM’s chief storage marketing officer, produced a slide in the announcement webcast yesterday to illustrate the product line complexity of some of the company’s competitors. He said IBM needed only one product family to cover entry, mid-range and high-end requirements. By contrast Dell EMC has five product families, HPE has four, Hitachi Vantara has three, and NetApp and Pure Storage have two each.
Photo taken from IBM webcast.
The constituent models of the FlashSystem family are the 5000, 7200, 9200 and 9200R.
Blocks & Files old and new FlashSystem family positioning diagram.
They replace the FlashSystem 900, V9000, 9100 and A9000R systems and the Storwize hybrid flash/disk arrays.
The 9200R – the R stands for ‘Rack’ – has 2, 3 or 4 x 9200 controller chassis in its 42U rack and is a pre-configured, scale-out, clustered 9200. The other systems come in a dual-controller, 2U rack shelf with 24 front-mounted 2.5-inch drive bays, and the ability to increase capacity using expansion chassis.
New for old
IBM has not specifically said which new FlashSystem replaces which old FlashSystem. But we understand the 9200R replaces the A9000R, while the 9200 replaces the 9100. The 7200, based on the Storwize V7000, appears to replace the V9000.
The old 900 is retired and the rebranded Storwize V5000 becomes the new entry-level FlashSystem.
The 5000 is in effect a renamed Storwize V5000 array and can use SAS disk drives as well as standard SAS SSDs. It delivers around 130,000 IOPS. There is also a FlashSystem 5100 in the 5000 sub-family – a renaming of the Storwize V5100F and V5100 models – which supports SAS SSDs, NVMe SSDs, IBM’s proprietary Flash Core Module drives, and storage class memory (SCM) drives.
The new FlashSystems are positioned in IBM’s overall storage product family in the slide below.
The Storwize arrays do not appear on this slide. Herzog confirmed that the Storwize line is being replaced, telling me via a tweet yesterday that there is “One platform for all our Spectrum Virtualize products.”
The upshot is IBM can claim single system block array bragging rights before Dell EMC, which is yet to launch its MidRange.next system. This is set to replace Dell EMC’s SC, Unity and XtremIO products.
Performance
IBM said the new FlashSystems can deliver up to 18 million IOPS, with latency down to 70μs and bandwidth up to 45GB/sec from a single 9200. That scales out to 180GB/sec. The 7200 and 9200s can deliver up to 10 million IOPS and 136GB/sec, IBM said in a Redbook.
IBM claims the numbers beat the competition and specifically cites Pure Storage’s X90 with SCM cache, which it said has a longer latency of 150μs and slower 18GB/sec bandwidth.
It also calls out Dell EMC PowerMax’s 100μs latency and 15 million IOPS from 80U of enclosures. IBM said the 9200R can deliver its 18 million IOPS from just 8U.
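For a rough sense of what that density claim means, here is a back-of-the-envelope comparison using only the vendor-quoted figures above – these are marketing numbers, not independent benchmark results:

```python
# Back-of-the-envelope IOPS density from the figures quoted above.
systems = {
    "IBM FlashSystem 9200R": {"iops": 18_000_000, "rack_units": 8},
    "Dell EMC PowerMax":     {"iops": 15_000_000, "rack_units": 80},
}

for name, s in systems.items():
    density = s["iops"] / s["rack_units"]
    print(f"{name}: {density:,.0f} IOPS per rack unit")

# IBM FlashSystem 9200R: 2,250,000 IOPS per rack unit
# Dell EMC PowerMax:       187,500 IOPS per rack unit - roughly a 12x density gap
```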
This is a big improvement on the 9200R’s immediate predecessor, the FlashSystem A9000R, which delivers up to 2.4 million IOPS with 250μs latency and up to 26GB/sec. The gain is largely due to the new flash drives.
According to an IBM Spectrum Virtualize FAQ document, the all-flash 9200 has 4 x 16-core processors while the all-flash or hybrid 7200 has 4 x 8-core CPUs, making it less powerful.
Drives and SCM
The all-flash 7200 and 9200 use gen 3 Flash Core Module drives, built from 96-layer Micron NAND, with a dynamic SLC cache. The modules have smart data placement, with hotter read data placed on lower latency pages. The new modules have more IOPS and higher bandwidth than the gen 2 drives. For example, read latency is reduced almost 40 per cent and IOPS increased by 10 to 20 per cent.
All-flash 9200 system with 4 x 16-core processors.
These modules are fitted with FPGAs, NVMe interfaces, and hardware compression and encryption. There is system-wide FlashSystem encryption to the FIPS 140-2 standard with USB-based or key server management.
The capacities are 4.8TB, 9.6TB, 19.2TB, and 38.4TB – the latter is twice the limit of the previous Flash Core Modules. That enables up to 4PB of effective capacity in a 2U chassis. The 9200 can scale up to 32PB of effective capacity and out to 128PB with a 4-way cluster.
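Those capacity claims can be sanity-checked from the quoted drive and bay counts; the data reduction ratio below is simply implied by IBM’s own 4PB-in-2U figure, not a measured number:

```python
# Raw versus effective capacity, derived from the figures quoted above.
drive_tb = 38.4            # largest Flash Core Module
bays = 24                  # 2U chassis
raw_tb = drive_tb * bays   # 921.6TB raw per 2U chassis

effective_pb = 4.0         # IBM's quoted effective capacity per 2U
implied_reduction = (effective_pb * 1000) / raw_tb

cluster_pb = 32 * 4        # 32PB effective scale-up, times a 4-way cluster

print(f"Raw per 2U: {raw_tb:.1f}TB, implied data reduction ~{implied_reduction:.1f}:1")
print(f"4-way cluster effective capacity: {cluster_pb}PB")
# ~4.3:1 reduction is implied by the 4PB-in-2U claim; actual ratios depend on the data.
```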
Hybrid 7200 with 4 x 8-core processors.
The 7200 and 9200 support up to 4 Intel Optane and Samsung Z-SSD storage-class memory drives in capacities of 375GB and 750GB for Optane, and 800GB and 1.6TB for the Z-SSD. They are supported as an additional tier of storage, not as a cache.
A pre-existing EasyTier function, which includes AI technology, automatically identifies the coolest blocks of data and moves them to the most economical storage tier, working across SCM, SSD and disk tiers.
Software and more
IBM FlashSystems run Spectrum Virtualize software, as before, and inherit capabilities such as EasyTier and HyperSwap business continuity and disaster recovery. There is 2- and 3-site replication, with replicated volumes and stretched logical volumes available. Data can be sent to the cloud for business continuity and DR, DevOps, analytics and so on.
The software supports VMware VASA, vVol and SRM – making the vSphere admin’s life easier – and also Kubernetes, Red Hat OpenShift and the Container Storage Interface, facilitating use in cloud-native environments. Red Hat’s Ansible playbook orchestration is also supported for the automation of common storage admin tasks.
IBM’s Storage Insights service monitors the FlashSystem arrays, receiving 23 million telemetry readings a day from each array, according to Herzog, and fixes up to 66 per cent of problems automatically.
The arrays can be monitored by an AI-based system for malware and ransomware infections, and data stored on air-gapped tape or logical air-gapped cloud volumes for extra security. Alternatively Spectrum Control can manage and monitor the arrays on-premises.
Consumption model
The systems can be bought, leased, paid for quarterly on a utility model, or obtained on a subscription basis. There are so-called Flash Momentum upgrade arrangements so users can upgrade controllers and capacity with no upfront payments.
Users can get configuration best practices for target databases, etc, from an IBM Advisor website.
Availability and pricing
The Storwize V5100 and V7000, and FlashSystem 9100 and A9000 will continue shipping until the end of the year. IBM says migration to the new FlashSystems should be straightforward as they share the Spectrum Virtualize operating system.
Pricing starts at $16,000 for the FlashSystem 5000 and rises to a million bucks or more for a 9200R.
Fungible Inc, a US composable systems startup, wants to front-end every system resource with its DPU microprocessors, offloading security and storage functions from server CPUs.
The company has not announced product yet, but a set of videos and white papers on its website provide a fairly detailed picture. We think product launch is likely in mid-to-late 2020.
Fungible’s DPU concept
Fungible is building a fully programmable microprocessor – it calls this a data processing unit (DPU) – that interlinks all resource elements in a composable system infrastructure.
According to Fungible, server CPUs are compute-centric and are ill suited for data-centric workloads. The company notes general-purpose server CPUs were developed to process as many instructions per second as possible. However, Moore’s Law is plateauing and CPUs can no longer keep up with the IO burden of today’s data-intensive environments.
Fungible DPU diagram
Many workloads are already offloaded to specialised GPUs and FPGAs which are more efficient at processing certain kinds of tasks. But Fungible claims server CPUs and other specialised compute resources can be used much more efficiently by linking them to storage resources across an Ethernet network. The network and its DPU edge devices perform the data-centric work, freeing server CPUs for application processing.
The server CPUs, GPUs, FPGAs, ASICs are thereby made fungible. (A fungible resource is one whose function can be carried out by any other unit of that resource, just like a single dollar bill can be replaced by any other dollar bill, or a gallon of gas by another gallon of gas.)
The server processors can be dynamically composed into processing systems sized for particular workloads, and their elements returned to the pool for re-use when the job is finished.
Fungible’s DPUs enable this composability of the varied compute resources, plus persistent memory (Optane), SSDs and disk drives. They offload certain storage and security-related functions from the server CPUs, such as encryption and decryption, compression and decompression and firewall services.
Fungible fact file
Fungible was founded in 2017 by CEO Pradeep Sindhu and Bertrand Serlet, and is based in Santa Clara, California.
LinkedIn lists 181 employees and the startup has taken in $292.5m in three rounds of funding, including a $200m C-round led by SoftBank’s Vision Fund in 2019.
Pradeep Sindhu (left), Bertrand Serlet.
Sindhu and Serlet both worked at Xerox PARC back in the day.
Sindhu was the founding CEO and chairman of Juniper Networks, then Vice Chairman, CTO, and Chief Scientist. He left the company in February 2017 to found Fungible.
Serlet founded Upthere, a consumer cloud storage business acquired by Western Digital in 2017. Prior to that he was SVP of Software Engineering at Apple.
Composable competition
Fungible will face competition from five composable vendors.
At time of writing, only Liqid can compose a set of resources as extensive as Fungible aims to provide. It is funded to the tune of $50m – considerably less than Fungible – but has live product in the market.
Fungible says its DPUs offload storage and security services from server CPUs. On that basis, a Fungible setup with the same set of server resources as a Liqid system should outperform the latter in terms of application workload. Let’s wait for launch and performance benchmarks.
Four years ago, Pure Storage pioneered fast object storage with the launch of its FlashBlade system. Today fast object storage is ready to go mainstream, with six vendors touting the technology.
Object storage has been stuck in a low performance, mass data store limbo since the first content-addressed system (CAS) was devised by Paul Carpentier and Jan van Riel at FilePool in 1998. EMC bought FilePool in 2001 and based its Centera object storage system on the technology it acquired.
Various startups, including Amplidata, Bycast, CleverSafe, Cloudian and Scality, developed object storage systems. Some were bought by mainstream suppliers as the technology gained traction. For instance, HGST bought Amplidata, NetApp bought Bycast and IBM bought CleverSafe.
Object storage became the third pillar of data storage, alongside block and file. It was seen as ideal for unstructured data that didn’t fit the highly structured database world of block storage or the less structured world of files. Object storage strengths include scalability, the ability to deal with variably-sized lumps of data, and metadata tagging.
Object storage systems typically used disk storage and scale-out nodes. They did not take all-flash hardware on board until Pure Storage rewrote the rules with FlashBlade in 2016. Since then only one other major object storage supplier – NetApp with its StorageGRID – has focused on all-flash object storage. This is a conservative side of the storage industry.
Common sense is one reason for industry caution. Disk storage is cheaper than flash and object storage data typically does not require low latency, high-performance access. But this is changing, with applications such as machine learning requiring fast access to millions of pieces of data. Object storage can now be used for this kind of application because of:
Standardisation on the S3 object interface across object storage vendors (see the short access sketch after this list),
Addition of file access gateways to object storage,
Emergence of machine learning at edge computing locations.
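To illustrate the first point, here is a minimal sketch of reading training data over S3 using the widely deployed boto3 library; the same calls work against MinIO, StorageGRID, FlashBlade or a public cloud bucket. The endpoint, credentials, bucket and key names are placeholders, not real values:

```python
# Minimal sketch: the same S3 calls work against any S3-compatible object store.
# Endpoint, credentials, bucket and key are placeholders, not real values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.internal:9000",  # on-prem or cloud endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# List the objects in a training-data bucket...
for obj in s3.list_objects_v2(Bucket="training-data").get("Contents", []):
    print(obj["Key"], obj["Size"])

# ...and fetch one of them for a machine learning job.
body = s3.get_object(Bucket="training-data", Key="images/batch-0001.tar")["Body"].read()
```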
A look at products from MinIO, OpenIO, NetApp, Pure Storage, Scality and Stellus shows how object storage technology is changing.
MinIO
MinIO develops open source object storage software that executes very quickly. It has run numerous benchmarks, as we have covered in a number of articles. For instance, MinIO has demonstrated its software running in the AWS cloud, delivering more than 1.4Tbit/s of read bandwidth using NVMe SSDs. It has added a NAS gateway that is used by suppliers such as Infinidat. Other suppliers view MinIO in a gateway sense too: VMware is considering using MinIO software to provision storage to containers in Kubernetes pods, and Nutanix’s Buckets object storage uses a MinIO S3 adapter.
All this amounts to MinIO object storage being widely used because it is fast, readily available, and has effective S3, NFS and SMB protocol converters.
OpenIO
OpenIO was the first object storage supplier to demonstrate it could write data faster than 1Tbit/sec. It reached 1.372Tbit/s (171.5GB/sec) from an object store implemented across 350 servers. This is faster than Hitachi Vantara’s high-end VSP 5500‘s 148GB/sec but slower than Dell EMC’s PowerMax 8000 with its 350GB/sec.
The OpenIO system used an SSD per server for metadata and disk drives for ordinary object data, with a 10Gbit/s Ethernet network. It says its data layer, metadata layer and S3 access layer all scale linearly and it has workload balancing technology to pre-empt hot spots – choke points – occurring.
Laurent Denel, CEO and co-founder of OpenIO, said: “We designed an efficient solution, capable of being used as primary storage for video streaming… or to serve increasingly large datasets for big data use cases.”
NetApp StorageGRID
NetApp launched the all-flash StorageGRID SGF6024 in October 2019. The system is designed for workloads that need high concurrent access rates to many small objects.
It stores 368.6TB of raw data in its 3U chassis and there is a lot of CPU horsepower, with a 1U compute controller and 2U dual-controller storage shelf (E-Series EF570 array).
NetApp SGF6024
Duncan Moore, head of NetApp’s StorageGRID software group, said the software stack has been tweaked and there is scope for more improvement. Such efficiency was not needed before as the software had the luxury of operating in disk seek time periods.
Pure Storage FlashBlade
FlashBlade was a groundbreaking system when it launched in 2016 and it still is. The distributed object store system uses proprietary hardware and flash drives and was given file access support from the get-go, with NFS v3. It now supports CIFS and S3, and offers up to 85GB/sec performance.
Pure Storage markets FlashBlade for AI, machine learning and real-time analytics applications. The company also touts the system as the means to handle unstructured data in network-attached storage (NAS), with FlashBlade wrapping a NAS access layer around its object heart.
Pure Storage AIRI system
The AIRI AI system from Pure, with Nvidia GPUs, uses FlashBlade as its storage layer component.
Scality
Scality is a classic object storage supplier which has seen an opening in edge computing locations.
The company thinks object storage on flash will be selected for edge applications that capture large data streams from mobile, IoT and other connected devices: logs, sensor and device streaming data, vehicle drive data, and image and video media data.
Stellus
Stellus Technologies, which came out of stealth last week, provides a scale-out, high-performance file storage system wrapped around an all-flash, key:value storage (KV store) software scheme. Key:value stores are object storage without any metadata apart from the object’s key (identifier).
An object store contains an object, its identifier (content address or key) and metadata describing the object data’s attributes and aspects of its content. Object stores can be indexed and searched using this metadata. KV stores can only be searched on the key.
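The difference can be sketched in a few lines of Python. This is a conceptual illustration only, not Stellus code:

```python
# Conceptual sketch only - not Stellus code.
# A key:value store maps a key straight to data; it can only be looked up by key.
kv_store = {
    "0x9f3a...": b"<binary data>",
}

# An object store adds searchable metadata alongside the data and its key.
object_store = {
    "0x9f3a...": {
        "data": b"<binary data>",
        "metadata": {"owner": "lab-42", "type": "genome", "created": "2020-02-10"},
    },
}

# Object stores can be searched on metadata; KV stores cannot.
genomes = [k for k, o in object_store.items() if o["metadata"]["type"] == "genome"]
```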
Typically, KV stores contain small amounts of data while object stores contain petabytes. Stellus gets over this limitation by having many KV stores – up to four per SSD, many SSDs and many nodes.
The multiple KV stores per drive, plus an internal NVMe over Fabrics access scheme, provide high performance using RDMA and parallel access. This is at least as fast as all-flash filers and certainly faster than disk-based filers, Stellus claims.
Net:net
There are two main ways of accelerating object storage. One is to use flash hardware with a tuned software stack, as exemplified by NetApp and Pure Storage. The other is to use tuned software, with MinIO and OpenIO following this path.
Stellus combines the two approaches, using flash hardware and a new software stack based on key:value stores rather than full-blown object storage.
Scality sees an opening for all-flash object storage but has no specific version of its RING software to take advantage of it – yet. Blocks & Files suggests that Scality will develop a cut-down and tuned version for edge flash object opportunities, in conjunction with an edge hardware system supplier.
We think that other object storage suppliers, such as Cloudian, Dell EMC (ECS), Hitachi Vantara, IBM and Quantum, will conclude they need to develop flash object stores with tuned software. They can see the possibilities of QLC flash lowering all-flash costs and the object software speed advances made by MinIO, OpenIO and Stellus.
Autonomous and near-autonomous vehicles will need black box facilities to help with accident cause analysis in the case of a crash. They may also need a ‘reverse’ content delivery network (CDN) to handle generated data upload to manufacturers and fleet operators.
These are some of the findings from our interviews with three self-driving car experts on autonomous and near-autonomous vehicle (AV, NAV) data storage. We have previously looked at AV, NAV storage in general, taken a first look at the specifics, and zoomed into self-driving car data generation.
In this article we take a closer look at the requirements for in-vehicle storage and data transmission to remote operational sites.
To recap, autonomous and near-autonomous vehicles need on-board data storage to drive along the highway, find their way using maps, and enable tactical avoidance of other vehicles, street objects and pedestrians. They also need to communicate with manufacturers and fleet operators to upload and download information.
Consumer AV, NAV on-board data generation per day ranges from 1TB to 15TB, and a robo-taxi will create anything from 60TB to 450TB a day.
Our panel
Our expert panel members are Christian Renaud, an analyst at 451 Research; Robert Bielby, senior director of Automotive System Architecture at Micron; and Thaddeus Fortenberry, who spent four years at Tesla working on Autopilot architecture.
Left to right: Christian Renaud, Robert Bielby, Thaddeus Fortenberry.
Blocks & Files: Will the AV-generated data have to be stored in the vehicle and, if so, for how long?
Christian Renaud, 451: Some of it is ephemeral for real-time decision making, so relevant within a 10-15 second window only, and a subset of summarized data will be sent off vehicle for training/inferencing to the OEM. That data storage interval is an unknown to us right now as no one is shipping a full AV yet.
Robert Bielby, Micron: Except for when the vehicle is in a training phase, or data collection phase, a nominal amount of AV-generated data will be collected and stored in the vehicle.
Nominal amounts of data that capture and log overall vehicle system performance, usage statistics and the like will be maintained, similar to the data logging that exists in today’s vehicles.
One exception to this is the black box, which is an emerging requirement for all vehicles with level 3 and above capabilities. In this case, a 30-second snapshot of all the relevant system data prior to an accident, or an event that caused the automatic emergency brakes to be applied, must be saved to memory. In some instances the 30 seconds after the event must also be stored, for a total of one minute.
Additionally, it may be required that up to eight instances of different incidents need to be stored. While this application area is still evolving, there are different strategies and philosophies regarding the use of compression to manage down the number of bits that get stored.
With system data rates in the range of 3 Gbit/s to 40 Gbit/s, a considerable amount of storage could be required for the black box if 8 incidents of 1 minute at 40 Gbit/s of data needs to be stored. The period that this data is required to be retained is not as great as other applications as it is expected that the contents of the black box, when of interest, will be written to another storage medium for more in-depth analysis.
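Taking Bielby’s worst case at face value – eight retained incidents, one minute each, captured at the top 40 Gbit/s system data rate – the black box alone would need a few terabytes before any compression. A quick check:

```python
# Worst-case black box sizing from the figures Bielby quotes above.
data_rate_gbit_s = 40            # top of the quoted 3-40 Gbit/s system data rate
seconds_per_incident = 60        # 30s before plus 30s after the event
incidents = 8                    # maximum number of incidents retained

bytes_per_incident = data_rate_gbit_s / 8 * 1e9 * seconds_per_incident
total_tb = bytes_per_incident * incidents / 1e12

print(f"Per incident: {bytes_per_incident / 1e9:.0f}GB, total: {total_tb:.1f}TB")
# Per incident: 300GB, total: 2.4TB - before any compression is applied.
```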
Thaddeus Fortenberry: Connectivity will continue to be the main constraint for getting data off customers’ vehicles. However, getting data and learnings off cars is critical to support the rapid publication of relevant data. We will see cars with increasing onboard storage to support optimal transfers, pre-processing, and intelligent network utilisation (low-orbit, Wi-Fi, 5G, etc.).
Blocks & Files: How will the data be uploaded to a cloud data centre? How often?
Bielby: The two primary times when data will be uploaded to the data centre will be in the creation and updating of real-time maps – which is a relatively low bandwidth operation – and in instances where there is a disparity detected in the results of the [on-board] AI algorithm. Additionally, general vehicle health and maintenance information will be transmitted to the cloud in addition to driver profile data.
Uploading will occur through cellular connections for updating of maps and detected algorithm disparity. For other, non-time-critical data, local Wi-Fi connections will be used for over-the-air uploads and downloads when the car is parked and not in use.
Fortenberry: The best way to get the fleet driving more accurately is to build a data portfolio of localisation, routes and environmental conditions. Obviously, there will be locations and events which some cars will care about and many they will not. Therefore we will see a quality of service (QoS) parameter with data.
My belief is that a well-designed data ingest infrastructure is both crucial and key to a successful autonomous vehicle solution. Storage vendors should realise that creating a policy-managed gateway/accessor solution with storage caching is a huge opportunity. [More on this below].
Blocks & Files: What is the maximum amount of storage capacity that will be needed in an AV to cope with the data generation load and the worst case data transmission capability?
Renaud: Excellent question. Honest answer is it’s too soon to tell. If you were to take the average duty cycle I said before (2 hours), take the average data generated during that time and break it down into that 10-15 second relevancy window, that would answer the on-board storage question. Less than 500GB certainly, possibly less than 50GB.
Bielby: Looking at Mobileye’s REM technology, which provides the basis for creating real-time mapping, data rates associated with REM are on the order of 10KB per kilometre. Other real-time mapping technologies are also based on this type of sparse data generation.
Assuming a cellular shadow region that lasts for one kilometre or even several kilometres, the amount of data that would need to be buffered until connectivity is restored is modest at best, on the order of tens of KB. In addition to the real-time mapping information, an HD maps database is permanently stored in the car, with densities up to 160GB. Those maps are usually updated every couple of months, either when the car is connected to a Wi-Fi station or over the air.
Another event that creates heavy data traffic to the cloud is when the AI inference process suspects that a certain road situation needs to be retrained. In that case sensors’ data are captured in local storage, typically for one minute around the event, and uploaded to the cloud for AI retraining. Such an event requires up to 300GB of local storage, which will be uploaded to the cloud.
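Putting Bielby’s figures together gives a feel for the on-board budget; the cellular shadow distance below is an assumption based on his one-to-several-kilometre example:

```python
# Rough on-board storage budget from Bielby's figures above.
# The cellular shadow distance is an assumption (his example is 1km to several km).
rem_kb_per_km = 10                    # REM real-time mapping data rate
for shadow_km in (1, 5):
    buffer_kb = rem_kb_per_km * shadow_km
    print(f"{shadow_km}km shadow region -> buffer {buffer_kb}KB")

hd_maps_gb = 160                      # HD map database held permanently in the car
retrain_capture_gb = 300              # one-minute sensor capture around a suspect event
print(f"Dominant on-board budget: ~{hd_maps_gb + retrain_capture_gb}GB")
# The buffering requirement is tens of KB at most; maps and retraining captures dominate.
```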
Fortenberry: The only real way to answer this is to establish the value of data for the car company. This is currently a tough one to come up with because the ML training process is too disconnected from incoming data. In fact, almost all companies are leveraging engineering vehicle data for their development.
The answer would also be dependent on the type and regularity of networks the vehicle connects to. We will be able to use quite a lot of local storage, but the vehicle BOM pressure will be substantial for some years.
Blocks & Files: Will disk drives or flash storage be used or a combination?
Renaud: Flash storage.
Bielby: While there are platforms today that are based on HDD, the continued decline in cost per bit of solid state-based storage and the inherent robustness that SSDs offer over rotating storage mediums ultimately are driving an aggressive trajectory to displace HDDs.
Last year, Micron announced a 1TB BGA SSD which provides requisite storage capacity for today and tomorrow’s projected storage requirements in an area of only 16x20mm. It’s clear that the lower power, lower area, and higher reliability of the SSD provides significant compelling benefits that are today driving out the design of traditional HDDs.
Fortenberry: I see no scenario where disk storage will be leveraged except for archiving in the data center. Currently disk makes sense for object storage as a final tier before archiving, but I see this as short-lived. Performance in handling vast amounts of data is more important.
Blocks & Files: Assuming flash storage is used will the workload be a mixed read/write one? If so, how much endurance should the flash have? (AVs could have a 15+ year working life.)
Renaud: Yes, mixed read/write. There is a spec for this – I haven’t been able to find the name of it right away – that dictates performance, operating conditions, and endurance.
[Blocks & Files: We understand this is an AEC-Q100 document, where AEC is the Automotive Electronics Council. It publishes Q100 and Q200 series documents relating to automotive components.]
Bielby: This value is highly dependent upon the application space where the flash memory is used, as well as other factors including the efficiency of the flash file system. In the extreme case of a black box, depending upon the architecture, endurance requirements can reach double-digit petabytes, whereas for applications with more modest workloads, 30 to 400 terabytes written (TBW) represents a reasonable endurance requirement over the lifetime of an autonomous vehicle.
Fortenberry: Everything in the vehicle should be automotive-grade and allow for fairly high endurance. It is likely we will see design choices being made based on usage (personal vehicle vs Fleet vehicles), but collectively the value is in data.
Blocks & Files: Will the flash have to be ruggedised to cope with the AV environment with its vibrations and temperature/moisture variations?
Renaud: Yes, same spec.
Bielby: Basically yes. All Micron memory devices that are designed into an automobile are designed, tested, and qualified to operate within the harsh, demanding environment of the automobile. Micron has over 28 years of experience and market leadership in supporting and delivering the richest portfolio of memory solutions to the automotive market, supporting the following:
ISO 9001/IATF 16949 Quality Management Systems
AEC-Q100 qualification methodology
Zero defect target approach
-40 to 105°C: NAND, SSD, UFS, and e.MMC, Multi-Chip
In addition to providing products that are compliant with industry requirements, Micron also has multiple labs worldwide to help automotive customers design their applications successfully and ensure the lowest risk getting into production.
Fortenberry: Probably need to comply with automotive-grade components (AEC-Q100). Certainly, some companies will use non-AEC grade, but I think in the long run it will be cheaper for manufacturers to avoid service events.
Reverse Akamai
Fortenberry offered this thought: “I would imagine a substantial storage opportunity of what amounts to a reverse content delivery network for ADAS (Advanced Driver Assistance System) data.”
A content delivery network (CDN) provides video and other content produced by relatively few producers to thousands, if not tens of thousands of end systems that consume the content. A reverse CDN would provide content upload facilities from tens of thousands of endpoints to a relatively small number of manufacturers and fleet operators.
The reverse CDN will provide an effective pipeline for vast amounts of incoming data that is:
Policy compliant. Support international policy, contract policies, QoS enabled, etc.
Optimize transfers. Perform routines to reduce and optimize all data.
Caching. Focus on last mile optimisation to vehicle by enabling gateway caching.
Auditable. Reproducibility will become increasingly important as ADAS matures.
Encrypted. All ADAS data should be considered sensitive.
Operationally manageable.
Net:Net
The amount of in-car storage for AVs/NAVs ranges between 50GB and 500GB, according to Christian Renaud. Bielby suggests around 461GB, made up from 160GB of map data, 300GB for sensor-generated data and a few kilobytes (<10KB) to cover cellular coverage gaps.
Fortenberry doesn’t yet feel there is enough data to reliably estimate a realistic number. However, a 500GB on-board storage capacity does not sound particularly onerous.
All agree it will be based on automotive-grade (AEC-Q100 spec) NAND, with disk ruled out completely. Some of this storage will form an aircraft-equivalent black box to be examined in case of accidents. It will need to be stored in a way that is resistant to high impact pressures, explosions, fire, flooding and extreme cold.
Lastly, Fortenberry is emphatic that a reverse content-delivery gateway service will be needed for AV/NAV data upload to cloud-based AV/NAV manufacturers and fleet operators. This is a good shout.
Snowflake Computing has taken in $479m in a surprise G-series funding with Salesforce as co-lead investor in the cloud data warehousing startup. The monster $12.4bn valuation is more than three times higher than Snowflake’s previous funding round in October 2018.
Frank Slootman, Snowflake CEO, told The Financial Times that the company is poised to turn cashflow positive this year. He said revenue grew by 174 per cent last year and would soon top $1bn. The company has raised $1.4bn to date.
In an interview with San Francisco Business Times, Slootman said the company was “not in need of capital at all. This is not a traditional fundraise. It is part of a strategic alliance with Salesforce that we initiated. We wanted to advance our content strategy. We need core data assets, or content, put onto the Snowflake platform and that is why we are doing this.”
Snowflake is preparing an IPO with the earliest possible date this summer.
Slootman said the current $12.4bn valuation will likely be a lot higher following an IPO. “The reason is that our growth trajectory is so fierce and our addressable market is so large,” he told San Francisco Business Times. “When companies grow so fast, as Snowflake has, the valuation may seem like a big number now but not later. When I was with ServiceNow (as CEO), the valuation was $2.5bn when we went out and now it has a $65bn valuation.”
Slim pickings for this week’s enterprise storage round-up. But let’s kick off with Couchbase making its NoSQL database available as a cloud-based service.
DBaaS for Couchbase surfers
NoSQL database supplier Couchbase has introduced Couchbase Cloud, a fully-managed Database-as-a-Service (DBaaS). It launches this summer on Amazon Web Services and Microsoft Azure, followed by Google Cloud Platform. Couchbase Cloud decouples DBaaS from the underlying cloud infrastructure, which enables customers to purchase IaaS directly from their Cloud Service Provider and leverage reserved instance pricing to optimize total cost of ownership.
Couchbase claims its DBaaS has higher performance, lower latency, and stronger security than unnamed competing offerings. It is able to distribute across global infrastructure regions in an active-active configuration while providing data locality, workload isolation, and disaster recovery.
There are separate data and control planes, with a single pane of glass control plane for multi-cloud orchestration (where users can manage and deploy clusters across multiple clouds from one single view), user management, cluster management, monitoring, and billing.
A customer’s data is hosted entirely within their own Virtual Private Cloud (VPC). Couchbase Cloud has flexible packaging options: one is on-demand hourly pricing, and another is the ability to purchase Couchbase Credits that can be used over a one-year period. Use of credits is fully flexible, with no usage limitations during the one-year period.
DataCore has integrated its SANsymphony software with Veeam, using Veeam’s Universal Storage API Plugin. Veeam Backup & Replication users can take snapshots and backups of VMware data stores residing on SANsymphony virtual storage pools with minimum impact on production workloads. Separate SANsymphony nodes, ideally in a different location, can also perform the role of a Veeam Ready Repository where backup copies can be stored. From there, users may offload older backup files onto lower-cost, elastic object storage through Veeam Cloud Tier as part of the Scale-out Backup Repository in the Veeam Availability Suite.
File sync and share supplier FileCloud has announced FileCloud Community Edition for small teams and individuals. The price is $10 per year for up to five full users and an unlimited number of web accounts. All proceeds will be donated to charity.
IBM has released a new IC922 AI inferencing server, already part of a supercomputing deployment at the US Department of Defense (DoD) that was built in a shipping container. It has an amazing 1.3 petabytes of SSD storage capacity and is a 6 PetaFLOP system used for AI training and inferencing.
Igneous, a supplier of file management services for multiple tens of billions of files, now supports Wasabi cloud storage as a target for NAS backup and archive. This enables the movement of unstructured data from any primary NAS system to Wasabi storage at a predictable and cost-effective price point. The two companies say Wasabi provides an affordable cloud storage pricing model with no egress fees or API requests, unlike other cloud storage providers.
Data protector NAKIVO said its 2019 revenues grew 40 per cent over 2018, with EMEA revenue growing 48 per cent. It has more than 14,000 customers in 137 countries worldwide, and expanded into Fiji, Botswana and Qatar in 2019. The largest software order in Q4 2019 was $465K.
European cloud service provider Scaleway has announced a Block Storage offering with guaranteed 5,000 IOPS, scaling up to 10TB and a 99.99 per cent availability SLA. It costs from €0.08/GB/month for 5,000 IOPS with no transfer fees. A second offering, at €0.12/GB/month for 10,000 IOPS, is expected.
Supermicro has announced Q2 fy2020 results, with revenues of $870.9m vs $931.5m a year ago. Net income of $23.7m was down from the year-ago $26.3m.
SSD supplier Transcend has announced its MTE662T M.2 SSD featuring NVMe v1.3 running across a PCIe gen 3 x4 link. It uses 96-layer TLC 3D NAND at 512GB and 1TB capacity points, has an SLC cache for faster IO, and boasts endurance of 3,000 P/E cycles. Random read/write performance is 340,000/355,000 IOPS. Sequential read/write bandwidth is 3,400/2,300 MB/sec.
Object storage supplier Scality has announced one month paid leave for the second parent after a first child’s birth, regardless of their country of residence and their gender or status. It’s also announced total carbon offset to compensate for its air travel carbon footprint.
Snowflake, supplying a data warehouse in the cloud, announced general availability of its platform on Amazon Web Services (AWS) in Tokyo, following the opening of its first Japan office in Shibuya City, Tokyo.
SK hynix is preparing a contingency plan to deal with supply chain disruption from the coronavirus outbreak centred in China, according to a Reuters report. It has a NAND chip foundry at Wuxi in eastern China. There is no outbreak-caused disruption there, but component supplies could be disrupted by outbreak control restrictions elsewhere in China.
TrendForce has reported no impact on production at Chinese memory fabs so far from the virus. It is predicting a limited impact on NAND flash contract pricing in the first 2020 calendar quarter.
Replicator WANdisco says its revenues for the year to December 2019 are expected to be circa $16m, against previously downgraded expectations of $24m and the prior year’s $17m. It attributes this to several significant deals at the end of 2019 being delayed. WANdisco said its 2020 revenue guidance was $32.2m, with the delayed 2019 deals expected to close in 2020.
People
Nathan Owen, former co-founder and CEO at Blue Medora, has joined backup supplier HYCU as COO, assuming day-to-day responsibilities for operations, corporate development and strategic alliances.
Scale out filer Qumulo has appointed Barry Russell as its SVP and GM for Cloud. Russell was previously the global VP for cloud business at F5 and before that spent many years at AWS in the Marketplace and AWS Service Catalog businesses.
The flash assault is progressing to the point where disk drives could disappear from most data centres within a few years. That’s because flash affordability and performance will overcome disk’s advantages, and recovering from high-capacity disk failures will take far too long.
These are the views of GigaOm analyst Enrico Signoretti, responding to Western Digital disk capacity increases, and Wells Fargo managing director in equity research, Aaron Rakers, reacting to Western Digital’s latest 3D NAND news.
GigaOM guru
According to Signoretti the hard disk “will disappear from the small datacenter. … No more hard drives in the houses, no more hard drives in small business organisations, no more hard drives in medium enterprises as well. These kinds of organisations will rely on flash and the cloud, or only the cloud probably!”
He said upcoming 18 and 20TB capacity disk drives, going on up to 50TB, will increasingly be designed and built for hyperscalers such as Facebook and eBay, and public cloud service providers like AWS and Azure.
Signoretti thinks it will take weeks to recover data held on a failed high-capacity disk drive: “Large capacity will also mean longer rebuilding times in case of a failure. It can take a week to rebuild a 14TB HDD today, think about rebuilding a 50TB one!”
New disk technologies, such as host-managed shingling, multiple actuators and zoning, will require code changes in applications that use current disk drives. SSDs, which are faster plug-in replacements for disk drives, will look more attractive as they become cheaper.
Signoretti argues that “all the added complexity will make hard disk drives impractical for small organisations without enough data to store in them (and we will easily pass the 1PB mark in this case). Think about that: 1PB equals 20 hard drives, with parity and spare drives it will be 24. It provides you throughput, but very few IOPS and the risk of data loss is high due to rebuilding times. Good luck with that!”
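Scaling Signoretti’s own numbers shows why rebuild time is the sticking point; the one-week figure for a 14TB drive is his, and the 50TB projection below simply scales it linearly:

```python
# Scaling Signoretti's own figures: rebuild time grows roughly linearly with capacity.
rebuild_days_14tb = 7                       # his quoted worst case for a 14TB drive today
rebuild_days_50tb = rebuild_days_14tb * 50 / 14
print(f"50TB rebuild at the same rate: ~{rebuild_days_50tb:.0f} days")   # ~25 days

# His 1PB example: 20 data drives of 50TB each, rising to 24 with parity and spares.
data_drives = 1000 / 50
print(f"1PB needs {data_drives:.0f} x 50TB data drives, ~24 including parity and spares")
```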
He concludes: “Hard disk will disappear from your data centre if you don’t need several petabytes of cold storage installed locally on your premises… The all-flash data centre will become a reality, and you will store more and more of your cold data in the cloud.”
NAND layer cake
Western Digital and Kioxia’s 112-layer 3D NAND announcement last week has prompted Rakers to dive into process cost analysis. In a mail to subscribers he said this 112L NAND (the BiCS5 process) is a cost-optimised process which is up to 30 per cent cheaper per bit than the existing 96L product, at mature wafer yield rates.
Rakers notes the 112L product is basically two 56L dies stacked together, whereas Samsung’s 128L die is a single-stack design. The WD-Kioxia 112L wafer has “a widened bit density advantage vs Samsung” – about 7.8GB per square millimetre vs Samsung’s 6.8GB per square millimetre. He calculates WD-Kioxia has a 10-15 per cent bit density advantage over Samsung’s 128L technology, and a resulting cost advantage.
This means WD-Kioxia has substantially advanced the affordability of NAND especially when the 112L is formatted with QLC flash.
Most of today’s NAND is TLC (3bits/cell). QLC has a shorter endurance, at around 1,000 cycles, than TLC with its 3,000 cycles, and a read time of 100μs vs TLC’s 25μs. But it is still far faster than disk, which is hobbled by seek times in the 8-12ms area.
Today’s high-capacity disk drives are increasingly used for nearline storage, which requires medium to low write access rates. QLC 112L NAND can compete better in this market than TLC 96L, using wear-levelling and over-provisioning techniques.
This leads Rakers to conclude that 100+ layer 3D NAND will become a dominant technology in terms of capacity shipped.
He writes: “The continued advancement in controller technology and software enhancements that improve upon raw NAND’s characteristics, is one of the main reasons we believe QLC adoption will accelerate and broaden its adoption over the coming years.”
Rakers has reproduced this 3D NAND roadmap:
SSD technology will be awash with 200-300 layer 3D NAND SSDs in 2022 and 2023, and consequently even lower cost per bit. He concludes: “We have… continued to see the price gap ($/GB) between enterprise SSDs and nearline HDDs shrink… In 3Q19 enterprise SSD $/GB was approximately 8x higher than high-capacity/nearline HDDs, or down from about 9x and 15x in the prior and year-ago quarters.
“Our discussions with industry contacts have suggested that a decline into the 5x premium range could make a strong competitive case for the beginning of enterprise SSD replacement of nearline HDDs in select workloads.”
Further: “This could be the beginning of an inflexion in high-capacity enterprise SSD adoption.”
Background
Blocks & Files has covered the topic of HDDs vs SSDs in enterprise data centres in several articles.
In common with Signoretti and Rakers, we think enterprise QLC SSDs with >100 layers will start to replace nearline disk drives in data centres outside the hyperscalers and public cloud operators. 200L and >300L NAND flash will reinforce this trend.
But hyperscalers will stick with HDDs because they can use massive numbers with requisite software and system architectures to render them cost-effective against the flash assault – a point that Seagate and Infinidat forcefully make.
Nexsan today announced a dense flash addition to its mid-range E-Series array and also added fast RoCE access and blockchain security to the Assureon archive line.
Mihir Shah, CEO of StorCentric, which owns Nexsan, issued a quote saying the new E-Series has the “best cost/performance ratio in the market. [It] is one of the many new products we plan to announce in the next few months as we execute on our 2020 strategy.”
Surya Varanasi, CTO of StorCentric, said: “With the release of Assureon 8.3, we have implemented RoCE to provide over a 2x performance improvement and private blockchain technology for a secure, immutable data structure.”
There are three Nexsan array lines – Unity, E-Series and Assureon, all designed for medium and large enterprise use. Unity is a high-performance file and block array. The E-Series is for capacity-optimised storage and Assureon is for longer-term, archive-class storage. Let’s take a closer look at the announcements.
E-Series and QLC
Nexsan’s new E-Series E18F array is intended for faster-than-disk access in big data, business intelligence, decision support, AI and machine learning (ML), and content delivery (video on demand and content streaming) environments.
The array is designed to use QLC (4bits/cell) SSDs. This type of flash has slower speed and a shorter working life than the TLC (3bits/cell) flash used in most SSDs currently. It is also cheaper on a $/TB basis. That means it can be used for applications that need faster-than-disk performance – 10x faster, Nexsan claims – but not expensive high-speed all-flash array performance.
The E18F accompanies Nexsan’s E-Series 18P, 48P and 60P hybrid flash/disk array systems. These can use TLC flash and have X-variant expansion units: E18X, E48X, E60X.
According to Nexsan the E18F delivers up to 70,000 IOPS. It comes as a 2U rack shelf with 18 drive bays for 2.5-inch SATA QLC SSDs with 1.92TB, 3.84TB or 7.68TB capacities.
Raw chassis capacity scales from 34TB to 138TB, and then again to 1.05PB, with expansion units adding 120 more drives. Host access is via 16Gbit/s FC and 1 and 10Gbit/s iSCSI. ‘Active Drawer Technology’ allows drives to remain active when the drawer is open for hot-swap drive management.
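Those capacity points follow directly from the quoted drive sizes and bay counts; the fully expanded figure below assumes all 7.68TB drives:

```python
# E18F capacity points derived from the drive sizes quoted above.
bays = 18
min_raw_tb = bays * 1.92       # 34.6TB with the smallest QLC SSDs
max_raw_tb = bays * 7.68       # 138.2TB fully populated with 7.68TB SSDs

expansion_drives = 120         # additional drive bays via expansion units
max_expanded_pb = (bays + expansion_drives) * 7.68 / 1000

print(f"Chassis raw capacity: {min_raw_tb:.1f}TB to {max_raw_tb:.1f}TB")
print(f"With expansion units: ~{max_expanded_pb:.2f}PB")   # ~1.06PB, in line with Nexsan's 1.05PB
```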
The E18F can be mixed with hybrid E-Series systems with up to two (E18X, E48X or E60X) treated as E18F expansion systems.
Assureon, RoCE and blockchain
V8.3 Assureon software adds Private Blockchain and end-to-end RDMA (Remote Direct Memory Access) over Converged Ethernet (RoCE). Blockchain means data can be stored in an immutable data structure, and uses cryptography to secure transactions. There is an automated integrity audit at redundant sites to maintain data integrity and transparency.
Assureon software already fingerprints data with a unique hash and supplies a serial number to files to help ensure that data is stored correctly and verifiably. Blockchain is another layer atop this data integrity stack.
Nexsan Assureon blockchain graphic.
Nexsan has also adopted high-performance, low-latency RoCE to eliminate the performance-robbing layers of the network stack. This runs across 40Gbit/s Ethernet to Assureon Edge servers.
The new software offers virtual shortcuts to data that require zero disk space and reside purely in memory as reference points to physical files in the Assureon archive, Nexsan said. Data is retrieved directly from user-space with minimal involvement of the operating system and CPU. So-called zero-copy applications can retrieve data from the Assureon archive without involving the network stack.
QLC Flash support
Nexsan is the third storage array vendor to use QLC flash. The others are startup VAST Data and Pure Storage with its FlashArray//C. NetApp says it will adopt QLC flash later this year.
Blocks & Files expects Dell EMC, Hitachi Vantara, HPE, IBM and Kaminario to all add QLC flash support later this year.
Why the need for speed?
Overall Assureon is now a RoCE rocketship in data access terms. It is possibly the fastest archive array in the industry.
But at first glance, adding RoCE to Assureon is odd. Why would an archive array need lightning-fast RDMA over Ethernet? RoCE is normally used for tier 0 primary data access to all-flash arrays; for the hottest data and not the low-access-rate cold stuff in archives. It is already used in StorCentric’s Vexata all-flash arrays and its addition would be appropriate for Nexsan’s Unity arrays.
Since most on-line archives are stored on disk, superfast (microseconds level) access to disks with their millisecond-level response times is generally thought inappropriate – like building a motorway-class on-ramp to a country lane. It would make more sense if RoCE access was added to an all-flash archive array.
Nexsan answers
We asked Nexsan: “Why is there RoCE support on Assureon?”
A spokesperson said: “Some of our customers want to place data in a secure location such as Assureon for live data as well as older, less frequently used data. Assureon with RoCE gives customers a way to achieve real time access to their live data and thus consolidates the typical deployment of a primary storage along with a backup/archive solution.
“The driver for this solution was to provide enough performance from a data vault to eliminate the need for a dedicated primary storage system coupled with an archive device.”
Blocks & Files asked a second question: “What about an all-flash Assureon?”
The spokesperson replied: “Yes – StorCentric has a multi-pronged approach to flash media on the Assureon product line. First – with the Assureon 8.3 release, SSDs have been made standard for all the metadata and other control path storage.
“Second – the data can be stored in an E18F using QLC flash. The use case for this solution is a faster data vault that eliminates or significantly reduces the need for a large primary storage device.
“Please watch for further announcements from StorCentric on this topic through the first half of this year.” We will, Nexsan, we will, because this fast-access archive strategy is interesting.
Compute-in-storage pioneer NGD has raised $20m in a C-series round to develop production of its technology and invest in sales and marketing.
The investment was led by MIG Capital with participation from Western Digital Capital Global, and existing investors including Orange Digital Ventures, Partech, BGV and Plug-N-Play.
NGD Systems was founded in 2013 and raised $6.3m in 2016, $4m in 2017 and $12.4m in 2018. This round takes total funding to $45m.
Dan Flynn, president of Western Digital Capital, said: “As applications such as AI and ML continue to accelerate the proliferation of edge computing, new storage solutions are needed to address their evolving and dynamic workloads. Leveraging the power of NVMe, NGD Systems’ innovative Computational Storage Drives (CSDs) are purpose-built to support the volume, velocity and variety of data at the edge.”
U.2 format Newport drive
NGD builds in-situ processing Newport SSDs and its latest product is a 4TB or 8TB M.2 format SSD with an on-board ARM Cortex-A53 core running a 64-bit version of Ubuntu Linux. It has an NVMeoF link to the host server. This followed on from a 16TB 2.5-inch format product launched in March 2019.
NGD characterises its focus in general as localised analytics/processing of mass data sets. Nader Salessi, NGD Systems CEO, said: “NGD is solving issues that no other legacy storage architecture can address.”
It has two main markets: the developing data-intensive edge server market and the AI/ML scene. Others include content delivery networks and hyperscalers. The company says they all need to process large amounts of data quickly. Processing some of this data at drive level, without moving it to the host server CPU or GPU, gets the work done faster and offloads the host CPU.
Customers could be organisations with this processing need or public clouds wanting to offer such services. Booking.com is one such customer. Peter Buschman, product owner, storage at Booking.com, quoted in NGD’s funding announcement, said: “Two of our key datacenter metrics are Watts/TB and write-latency. At only 12W of power for a 32TB NVMe SSD, we found the NGD Systems drives to be best in class with respect to this combination of characteristics.
“The latency, in particular, was consistently low for a device with such a small power draw. With power, not space, being our greatest constraint, and environmental impact a growing concern, this technology holds great promise for use in next-generation datacenter environments.”
Our take
We might envisage an edge computing environment with streaming data fed to Newport drives which automatically process it in some way and alert a host server CPU to the results. The host server might also command the NGD drives to tune on-drive processing jobs in parallel and then deliver the results for higher-level processing. That could lower overall latency considerably.
Blocks & Files also thinks intelligent SSDs, like Newport, could provide on-drive object storage and/or key:value storage. The on-drive CPU is just another general-purpose processor, not specifically engineered to do a particular job, like an ASIC or FPGA.
These drives are application-specific drives in effect, with the application coded for the drive’s CPU and interacting with a host server’s application suite.
Dell EMC is not yet publicly talking about the launch of its much anticipated mid-range storage system, MidRange.next, but planning is very much underway.
Because of Dell EMC’s market position, the industry is watching closely. MidRange.next replaces Dell EMC’s Unity, XtremIO and SC arrays, and is due to be completed by the end of February. Jeff Clarke, Dell EMC vice chairman for products and operations, said in November last year: “We will have the product completed by the end of the fiscal year and it will be released.”
MidRange.next represents the consolidation of Dell EMC’s three mid-range arrays into a single product line. The announcement was originally set to take place in September last year but was then delayed.
We expect MidRange.next will have a Power-something moniker to fit in with Dell EMC’s Power-based branding scheme – eg, PowerMax, PowerSwitch, PowerEdge and PowerProtect. The PowerStore trademark is currently owned by Dell Technologies.
Earlier this month a Dell EMC spokesperson told Blocks & Files: “I’m happy to share we now have a number of early access customers, and feedback to date has been very strong. Customers are excited with what they’re seeing. I’ll be back in touch soon with more details on our broader launch plans.”
Now we have received an update. Dell EMC emailed us this statement: “Customers have taken part in Midrange.next’s early access program since Q4FY20 and feedback to date has been very positive.”
Good. So when is the launch?
“Based on the opportunity to address feedback from our early access program and the fiscal year-end selling cycle on existing storage deals, we made the decision to launch Midrange.next general availability this spring. Ahead of the launch, we’re gearing-up by educating our sales force at several, annual global and regional events over the next few months.”
That means the March-April-May period. There is a Dell Technologies World event in Las Vegas, May 4-11. We think that Midrange.next aka PowerStore will be announced then.
A look at the agenda shows nothing indicating any new mid-range system. It includes breakout sessions on the to-be-replaced SC, Unity XT and XtremIO arrays:
The lack of any mention of the MidRange.next system in the agenda could be strategic.
Dell EMC’s spokesperson said: “Dell Technologies remains the undisputed storage leader, and we’re continuing to take share and drive momentum in key storage categories over the past eight quarters. In Q3, our storage business grew 7 per cent and, over the past two years, we’ve taken 375 basis points of storage share.
“We expect to continue this momentum with the launch of Midrange.next. It’s an important moment in the industry, and we’re excited by what the product will offer our customers. When they are ready to migrate to the platform, we have designed the offering to allow them to do so quickly and seamlessly from Dell EMC storage products including VNX, SC Series, PS Series, Unity and XtremIO.”