
Arcserve goes head-on against Commvault, Rubrik, Unitrends and Veritas

Watch out Commvault, Rubrik, Unitrends and Veritas; Arcserve is coming right at you.

It has updated its UDP appliance with 9000 series models that integrate backup, disaster recovery and backend cloud storage, enabling it to compete more strongly with other unified data protection vendors.

They replace the previous, second generation UDP 8000 series, which were purpose-built backup appliances similar to those of Data Domain.

Arcserve claims this is the market’s first appliance purpose-built for DR and backup. Compared to the UDP 8000s this all-in-one option for onsite and offsite backup and DR boasts:

  • Cloud services that enable companies to spin up copies of physical and virtual systems directly on the appliance, and in private or public clouds
  • Twice the effective capacity of previous models (in-field expansion up to 504TB of data per appliance and up to 6PB of managed backups through a single interface)
  • A new hardware vendor that enables Arcserve to deliver onsite hardware support in as little as four hours, and high redundancy with dual CPUs, SSDs, power supplies, HDDs and RAM.

Who is the HW supplier? Arcserve says it cannot say, but it is the #1 hardware vendor in the world and is U.S.-based. [Who said Dell?]

Arcserve schematically shows the 9000 appliance’s deployment.

The 9000 series features:

  • Up to:
    • 20 x86 cores
    • 768GB DDR4-2400MHz RAM
    • 504TB effective capacity
    • 6PB of managed backups
  • 20:1 dedupe ratio with global dedupe
  • SAS disk drives and SSDs
  • 12Gbit/s RAID cards with 2GB non-volatile cache
  • Expansion kits to bulk up base capacity up to 4x
  • High-availability add-on
  • Cloud DRaaS add-on
  • Real-time replication with failover and failback
  • Ability to pump data offsite to tape libraries

There are 11 models:

Arcserve 9012, 9024 and 9048 appliances deliver up to 20 TB/hour throughput based on global source-side deduplication with a 98 per cent deduplication ratio. The 9072DR, 9096DR, 9144DR, 9192DR, 9240DR, 9288DR, 9360DR and 9504DR deliver up to 76 TB/hour throughput based on global source-side deduplication with a 98 per cent deduplication ratio. 
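For anyone wanting to sanity-check what such deduplication claims imply, here is a minimal sketch of the arithmetic (generic conversions only, not Arcserve's sizing method; the raw-capacity figure in the example is an illustrative assumption):

```python
# Minimal sketch: converting a deduplication ratio (e.g. 20:1) into the
# equivalent data-reduction percentage, and estimating effective capacity.
# Generic arithmetic only -- not an Arcserve sizing tool.

def reduction_percent(dedupe_ratio: float) -> float:
    """A 20:1 ratio stores 1 unit per 20 units ingested -> 95% reduction."""
    return (1 - 1 / dedupe_ratio) * 100

def effective_capacity_tb(raw_tb: float, dedupe_ratio: float) -> float:
    """Logical data that fits into raw_tb of physical space at a given ratio."""
    return raw_tb * dedupe_ratio

if __name__ == "__main__":
    print(f"20:1 ratio -> {reduction_percent(20):.0f}% reduction")    # 95%
    print(f"50:1 ratio -> {reduction_percent(50):.0f}% reduction")    # 98%
    # Illustrative only: ~25.2TB of physical space at 20:1 holds ~504TB of logical data
    print(f"25.2TB raw at 20:1 -> {effective_capacity_tb(25.2, 20):.0f}TB effective")
```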

The appliances can protect VMware, Hyper-V, RHEV, KVM, Nutanix AHV, Citrix and Xen VMs with agentless and agent-based backup. They can back up and recover Office 365, UNIX, FreeBSD, AIX, HP-UX, Solaris, Oracle Database, SAP HANA and more.  

Supported back-end clouds include the Arcserve Cloud, AWS, Azure, Eucalyptus and Rackspace. 

A dozen appliances and 6PB of backups can be managed through one interface.

Arcserve says it can take as little as 15 minutes to install and configure the appliance, and it comes with four-hour and next-business-day on-site support options.

Competition

Arcserve says its biggest competitors with this announcement are Unitrends, Rubrik and Veritas in the US, and Veritas, Commvault and Rubrik outside the US. It adds that Unitrends, Rubrik and Veritas use a different, less preferred hardware vendor outside the US.

It makes the following competitive claims:

Unitrends

  • Unitrends disaster recovery is very limited and there are no expansion capabilities
  • On-appliance recovery is only supported for Windows, and only if backup is done with Windows-based imaging
  • Host-based backup of VMware/Hyper-V VMs cannot be combined with on-appliance recovery
  • Unitrends’ Instant Recovery rehydrates the entire VM, which is why it is very slow (at least 20 minutes per TB)
  • You must allocate 100 per cent of a Windows machine’s used storage for IR, removing it from backup storage
  • No support for hardware snapshots – Arcserve supports NetApp, HPE 3Par and Nimble
  • No UEFI on-appliance

Veritas:

  • No on-appliance DR or HA – appliances do not include storage and require storage shelves, driving costs and complexity
  • Veritas does not claim dedupe ratios.
  • Very high cost of units, software, maintenance, expansion
  • Complex NetBackup software at the core, consuming IT pro time, lowering IT productivity

Rubrik:

  • The Rubrik r6410 model has 80TB of capacity; its 400TB effective capacity is based on an advertised dedupe efficiency of 5:1
  • High list price, and targeted primarily at enterprise
  • No on-appliance DR or HA – separate infrastructure required, driving up cost and complexity
  • No 4-hour, or even NBD support commitment – best effort only

Commvault

  • HyperScale cluster can scale infinitely; the 262TB figure is for 3x Commvault HyperScale 3300 appliances with 8TB drives. Commvault does not claim deduplication ratios.
  • Requires a minimum of three appliances to operate.
  • Very high cost of units, software, maintenance, expansion
  • Complex Simpana software consumes IT pro time, lowering IT productivity
  • No on-appliance DR or HA – separate infrastructure required, driving up cost and complexity

Development plans

New focus areas for the next generation of Arcserve Appliances will centre on expansion enhancements. 

Arcserve will launch a new version of its UDP software featuring Nutanix and OneDrive support, as well as a next generation of its RHA (replication and high availability) software, which will include new support for Linux HA to AWS, Azure, VMware and Hyper-V, Windows Server 2019, and more.

Arcserve Appliance customers will get a free upgrade to the new versions with all these features as part of their maintenance benefits. Currently general availability for these new products is targeted at late spring / early summer 2019.

Availability and pricing

All new trial and licensed Arcserve UDP Appliance customers will get the new Arcserve 9000 series, and will be able to use it within the next month.

The new Arcserve Appliance series is available now worldwide through Arcserve Accelerate partners and direct.

The starting list price for the backup only (DR not included) Appliance is $11,995. The starting list price with DR included is $59,995.

WekaIO goes higher than Summit

Well, well; remember back in November when WekaIO, with its Matrix filesystem, took second place in the IO-500 10 Node Challenge, with the Summit supercomputer taking first place?

About turn, because the Virtual Institute for I/O (VI4IO), which maintains and documents the IO-500 10 Node Challenge List, has recalculated its results and awarded WekaIO first place.

Why 10 nodes?

By limiting the IO-500 10 Node Challenge benchmark to 10 nodes, the test challenges single client performance from the storage system. Each system is evaluated using the IO-500 benchmark that measures the storage performance using read/write bandwidth for large files and read/write/listing performance for small files.
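As a rough illustration, the overall IO-500 number is formed as the geometric mean of a bandwidth score (GiB/s) and a metadata score (kIOPS); the sketch below assumes that published scheme and uses placeholder inputs, not measured results:

```python
# Sketch of how an overall IO-500 score is formed, assuming the published
# scheme: the geometric mean of a bandwidth score (GiB/s) and a metadata
# score (kIOPS). The inputs below are placeholders, not measured results.
from math import sqrt

def io500_score(bandwidth_gibps: float, metadata_kiops: float) -> float:
    return sqrt(bandwidth_gibps * metadata_kiops)

print(f"{io500_score(27.0, 150.0):.2f}")  # hypothetical inputs -> ~63.64
```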

With the November scoring, WekaIO served up 95 per cent of the Summit supercomputer’s 40 storage racks’ IO using half a rack’s worth of its Matrix scale-out fast filer software. Matrix ran on eight Supermicro BigTwin enclosures and scored 67.79, coming within 5 per cent of the Oak Ridge IBM Summit supercomputer’s 70.63 score. Summit ran its system on a 40-rack Spectrum Scale storage cluster.

Bug detection alert

The VI4IO people found a bug in their tests and state: “we fixed the computation of the mdtest score that had a bug by computing the rate using the external measured timing.”

The new IO-500 10 Node Challenge List ranking results.

The new score for WekaIO is 58.35 while the now second-placed Summit system scores 44.30. A Bancholab DDN/Lustre system is third with 31.50.

That means Matrix is 31 per cent faster than IBM Spectrum Scale and 85 per cent faster than DDN Lustre.
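Those percentages follow directly from the recomputed scores; a quick check:

```python
# Relative performance from the recomputed IO-500 10 Node Challenge scores.
weka, spectrum_scale, ddn_lustre = 58.35, 44.30, 31.50
print(f"WekaIO vs Spectrum Scale: {(weka / spectrum_scale - 1) * 100:.1f}% faster")  # ~31.7%
print(f"WekaIO vs DDN Lustre:     {(weka / ddn_lustre - 1) * 100:.1f}% faster")      # ~85.2%
```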

So it’s official; WekaIO’s Matrix is the fastest file system in the world, based on the IO-500 10 Node Challenge List, beating Spectrum Scale running on the world’s fastest supercomputer, Summit. That means WekaIO is higher than Summit.

Seven Pillars of IBM Storage wisdom

You would think IBM was a storage chemist; there are that many IBM Storage Solutions floating around. And now we have three more, plus another four point-product announcements, in a Herzog blog blast.

IBM Storage Solutions are blueprinted bundles of product which are pre-tested and validated for specific application areas.

Eric Herzog is IBM’s Storage Division CMO and its VP for world-wide channels. His latest blog says:

  • IBM Storage Solution for Blockchain runs on either NVMe FlashSystem 9100 infrastructure or LinuxONE Rockhopper II. It includes Spectrum Virtualise, Spectrum Copy Data Management, and Spectrum Protect Plus. IBM claims it increases blockchain security with 100 per cent application and data encryption support and reduces test, development and deployment time for both on/off-chain solutions. Get this; Big Blue also claims it improves time to new profits from days to hours.
    • Perhaps you could suggest to IBM that you pay for it by giving IBM a percentage of these new profits? That would lower your CAPEX and OPEX needs.
  • IBM Storage Solution for Analytics is based on IBM Cloud Private for Data, which is supported on the NVMe-based FlashSystem 9100. It is said to accelerate data collection, orchestration and analytics, and simplify Docker and Kubernetes container utilisation for analytics apps.
  • IBM Storage Solution for IBM Cloud Private, with IBM Cloud Private being a Kubernetes-based platform for running container workloads on premises. This has Spectrum Scale parallel access file storage added to the existing block and object storage.
  • FlashSystem A9000 gets AI added to its deduplication capability to help work out where best to place data for maximal deduplication. It analyses metadata in real time to produce deduplication and capacity estimates without, IBM claims, any performance impact.
  • IBM Cloud Object Storage gets added NFS and SMB file access to its object storage.
  • IBM Spectrum Protect gets retention sets to simplify data management and reduce data ingest amounts.
  • IBM Spectrum Protect Plus has added data offload to public cloud services: IBM COS, AWS, Azure, IBM COS on-premises and Spectrum Protect. This is for data archiving and/or disaster recovery. Spectrum Protect Plus has enlarged its protection capabilities to include MongoDB (a NoSQL database) and Microsoft Exchange Server.

Here we have a worthy set of seven incremental software announcements to broaden and extend the appeal of existing products. 

Commvault has a new puppet-master pulling its strings

Commvault has appointed Sanjay Mirchandani, who ran Puppet, as its new President and CEO, and the man is fresh to enterprise data protection and management but full of energy.

Ex-CEO, President and Chairman Bob Hammer becomes Chairman Emeritus after running Commvault for 20 years, while Nick Adamo becomes board chairman. Adamo became a board member in August last year, a new broom brought in following activist investor Elliott Management’s involvement with Commvault.

It was Elliott’s influence that caused Commvault to initiate its Advance restructuring project and Hammer’s agreed resignation last May.

Al Bunte, who has served alongside Hammer for more than two decades, is stepping down from his role as COO while maintaining his board position. Both Hammer and Bunte will remain with the company through a transitionary period, with Hammer stepping away from the transition effective March 31, 2019.

In a Forbes article Mirchandani wrote; “Once-dominant stalwarts are being disrupted by well-financed software startups that, rather than building a better mousetrap, decided to engineer a better mouse.”

That sounds catchy if we think of the mouse as software and mousetrap as hardware, with Mirchandani as the mouse-man.

Puppet products automate the production, delivery and configuration of software, and Commvault reckons it needs a modern software-focussed CEO.

Sanjay Mirchandani

Mirchandani has senior exec experience, including CIO roles, at Microsoft, Dell EMC and VMware on his CV, and ran Puppet for almost three years.

Kevin Compton, co-founder and partner at Radar Partners and Puppet board member, said: “We are truly indebted to Sanjay for the incredible impact he’s had on Puppet. Under his leadership, Puppet acquired two companies and opened five new offices in Seattle, Singapore, Sydney, Timisoara, and Tokyo. Sanjay also oversaw a $42 million fundraise and took Puppet from a single product company to a multi-product portfolio company. We’re incredibly grateful for his leadership.”

Yvonne Wassenaar replaces him as CEO at Puppet.

Why Commvault?

In a briefing Mirchandani told us it was time for a change at Puppet, and he wanted to move back to the East Coast of the USA; he has family in the New Jersey area, where Commvault is headquartered.

Commvault is attractive to Mirchandani because it is a bigger company and in good shape. We asked if it could become a billion dollar company: “The space is growing really rapidly. I’m not going to put a number on it just yet. I’m very excited about this space. Infrastructure and applications are coming together” and “data is paramount. We’re in a great position to define that to our customers.”

He likes where Commvault is located in the market: “I think we’re in a great place. The amount of data we manage in the cloud is approaching an exabyte and growing rapidly.”

What about the well-funded upstarts such as Cohesity, Rubrik and Veeam?

“There’s something to be said for having been around for a while. Their funding is validation of the space being important.” Also, in comparison to Cohesity and Rubrik: “Commvault took less and does more. … A [customer] CIO needs someone to trust and one throat to choke. We have an unrivalled capability that spans from mainframes to containers.”

He said Commvault is investing in its partner eco-system, but said nothing about products and strategies.

Comment

Blocks & Files thinks Mirchandani has a learning curve ahead of him but already sees the need to nurture Commvault’s installed base. Apart from that his time at  Puppet shows he is willing to acquire needed technology and grow a product portfolio. 

Cohesity, Rubrik and Veeam now have a fight on their hands, as Mirchandani won’t want to let the stalwart Commvault be disrupted by these cash-rich upstarts. He could be a quick study, taking Hammer’s legacy and getting Commvault growing to the billion dollar revenue level and beyond. We expect him to hit the ground running – fast.


Acronis’ hyper-converged backup appliance

Acronis’ SDI Appliance is the first appearance of the company’s hyper-converged product line. It is a purpose-built backup system intended to be a storage target for its Backup and Backup Cloud offerings. 

This software product combines hyper-converged infrastructure (HCI) and cyber protection, and is based on updated, pre-configured Acronis Software-Defined Infrastructure (SDI) software. This was formerly called Acronis Storage and turns customers’ x86 server-based hardware into a hyper-converged system. It supports block, file, and object storage workloads and delivers cyber protection by incorporating Acronis’ CloudRAID and Notary products. The latter uses blockchain technology.

It addresses five aspects of cyber protection — safety, accessibility, privacy, authenticity, and security.  Other features include virtualisation, high-availability, AWS S3 compatibility, software-defined networking, and monitoring.

The appliance is delivered pre-installed on hardware that has been developed, built and shipped by German-based RNT RAUSCH GmbH, a manufacturing and logistics company. 

It comes in a 3U rack mount form-factor, carrying five nodes, each fitted with an Intel 16-core processor, 32GB RAM (up to 256GB) and 3x Seagate 4/8/10/12 TB SATA disk drives; up to 180TB capacity in total.

Acronis SDI Appliance is currently available in the U.S., Canada, U.K., Germany, Switzerland, Austria, and North European countries.

Comment

Earlier this month Acronis said it was going to launch a software-defined data centre product. This purpose-designed backup appliance is it.

Add some flash storage and more CPU horsepower to this appliance and it becomes a potential general-purpose hyper-converged system which Acronis has said it’s developing. Virtuozzo, another company part-owned by Acronis co-founder and CEO Serguei Beloussov, has such an HCI system coming that’s destined for service providers selling virtual private clouds.

Blocks & Files wouldn’t be at all surprised if there wasn’t some collaborative software development taking place between Acronis and Virtuozzo.

NVMe/TCP needs good TCP network design

Poorly-designed NVMe/TCP networks can get clogged with NVMe traffic and fail to deliver the low latency that NVMe/TCP is designed to deliver in the first place.

The SNIA has an hour-long presentation explaining how NVMe/TCP storage networking works, how it relates to other NVMe-oF technologies, and potential problem areas.

NVMe over TCP is interesting because it makes the fast NVMe fabric available over an Ethernet network without having to use lossless data centre class Ethernet components which can carry RDMA over Converged Ethernet (RoCE) transmissions.

Such Ethernet components are more expensive than traditional Ethernet. NVMe/TCP uses ordinary, lossy Ethernet and so offers an easier, more practical way to advance into faster storage networking than either Fibre Channel or iSCSI.

The webcast presenters are Sagi Grimberg from Lightbits, J Metz from Cisco, and Tom Reu from Chelsio, and the talk is vendor-neutral.

This talk makes clear some interesting gotchas with TCP and NVMe. First of all every NVMe queue is mapped to a TCP connection. There can be 64,000 such queues, and each one can hold up to 64,000 commands. That means there could be up to 64,000 additional TCP connections hitting your existing TCP network if you add NVMe/TCP to it.

If you currently use iSCSI, over Ethernet of course, and move to NVME/TCP using the same Ethernet cabling and switching, you could find that the existing Ethernet is not up to the task of carrying the extra connections.
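A back-of-the-envelope estimate of that extra connection load is sketched below; the host, subsystem and queue-count figures are hypothetical assumptions for illustration, not numbers from the SNIA webcast:

```python
# Rough estimator of the extra TCP connections NVMe/TCP adds to a network:
# each NVMe queue (admin and I/O) maps to its own TCP connection.
# All counts below are hypothetical examples, not figures from the webcast.

def nvme_tcp_connections(hosts: int, subsystems_per_host: int, io_queues: int) -> int:
    # +1 per controller association for the admin queue, which also rides on TCP
    return hosts * subsystems_per_host * (io_queues + 1)

# e.g. 100 hosts, each attached to 4 NVMe/TCP subsystems, 16 I/O queues per controller
print(nvme_tcp_connections(hosts=100, subsystems_per_host=4, io_queues=16))  # 6800
```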

Potential NVMe/TCP problems

NVMe/TCP has more potential problem areas: higher latency than RDMA, head-of-line blocking adding latency, incast adding latency, and a lack of hardware acceleration.

RDMA is the NVMe-oF gold standard and NVMe/TCP could add a few microseconds of extra latency to it. But, in comparison to the larger iSCSI latency, the extra few microseconds are irrelevant, and won’t be noticed by iSCSI migratees.

The added latency might be noticed by some latency-sensitive workloads, which wouldn’t have been using iSCSI in the first place, and for which NVMe/TCP might not be suitable.

Head-of-line blocking can occur in a connection when a large transfer holds up smaller ones while it waits to complete. This may happen even when the protocol breaks large transfers up into a group of smaller ones. Network admins can institute separate read and write queues so that, for example, large writes do not delay small reads. NVMe also provides a priority scheme for queue arbitration which can be used to mitigate any problem here.

Incast

Think of incast as the opposite of broadcast: many synchronised transmissions arrive at a single point, forming a congestion bottleneck. The receiving port’s buffer overflows, affected packets are dropped and sessions back off, causing retransmissions and added latency.

It could be a problem and might be fixed by switch and NIC (Network Interface Card) vendors upgrading their products, and possibly by TCP developers with technologies like Data Centre TCP. The idea would be to tell the sender somehow, by explicit congestion detection and notification, to slow down before the buffer overflow happens. The slowing itself would add latency, but not as much as an incast buffer overflow. Watch this space.
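As a toy illustration of why incast hurts, the sketch below compares a synchronised burst from many senders against a shared switch-port buffer; the buffer and burst sizes are invented for illustration:

```python
# Toy incast check: many synchronised responses converging on one switch port
# can exceed the port's buffer before the senders can react, forcing drops
# and retransmissions. Buffer and burst sizes are illustrative assumptions.

def incast_overflow(senders: int, burst_kb: float, port_buffer_mb: float) -> bool:
    """True if the combined synchronised burst exceeds the port buffer."""
    return senders * burst_kb > port_buffer_mb * 1024

# e.g. 64 storage nodes each answering with a 256KB burst into a 12MB port buffer
print(incast_overflow(senders=64, burst_kb=256, port_buffer_mb=12))  # True -> drops likely
```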

HW-accelerated offload devices could reduce NVMe/TCP latency below that of software NVMe/TCP transmissions. Suppliers like Chelsio and others could introduce NVMe/TOEs (NVMe TCP Offload Engine cards), complementing existing TCP Offload Engine cards.

The takeaway here is that networks should be designed to carry the NVMe/TCP traffic and that needs a good estimate of the added network load from NVMe. 

This SNIA webcast goes into this in more detail and is well worth watching by storage networking and general networking people considering NVMe/TCP.


Look to the visible Horizon and witness the future of HDD vs SSD

Disk drive capacity shipped and NVMe SSD sales will both boom over the next few years because more data needs to be stored and accessed faster when processed.

So says Stephen Buckler, chief operating officer at Horizon Technology, a multinational IT asset disposal company.

Stephen Buckler, Horizon COO

He sent us his HDD vs SSD market brief and we are republishing extracts with permission. Buckler expects enterprise HDD sales in 2019-2020 and beyond to remain robust for capacity drive storage, but customers will turn increasingly to SSDs for performance data storage requirements.

In the consumer segment SSD adoption for local storage will pick up. Users are increasingly using the public cloud for capacity data and SSDs are getting less expensive relative to HDDs for local performance data, Buckler says.

In particular, he notes:

  • Tom Coughlin, writing for Forbes in December, forecast HDD shipments to grow from 869 exabytes for 2018 to 2.6 zettabytes by 2023.
  • NVMe and NVMe-oF dramatically increase the performance and usefulness of SSD, while the price erosion in 2018 makes SSD much more accessible…NVMe-based SSD sales eclipsed SATA/SAS based sales in 2018.
  • With supply continuing to outpace demand and the roll-out of 96 layer and quad cell NAND, we expect to see continued SSD price erosion into 2020. Combined with increasing appetite for low latency storage driven by AI, MI and IoT applications, SSD will cut further into enterprise HDD sales.
  • Looking into Q1 2019, we see a hangover from Q4, with manufacturers working down supply from the previous quarter, having reportedly built 14 to 15 million nearline drives against a demand of 12 million. The hyperscaler market is on pause, and absent a visible catalyst to jumpstart sales we do not see that correcting itself until the third quarter of 2019.
  • WDC, in its recent earnings call, announced it will ship a 15TB energy-assisted drive later in the year.
  • Pricing will remain soft regardless. Although we may see exabytes growing, we feel there will be downward pricing as manufacturers work off excess stocks from Q4 and the market digests its last build cycle.
  • On the consumer side, the most significant change is the cloud. With more and more consumers utilising SaaS services and opting to share and store files in the cloud, the storage performance value equation has radically changed. Size is less important than performance, since consumers only require a limited amount of personal storage, with most interactions now stored in the cloud.
  • It is no longer an apples-to-apples comparison when comparing HDD to SSD. Since end users no longer need a lot of storage, HDD’s primary cost advantage has been marginalised.
  • We are seeing over 50 per cent of notebooks shipping with SSD. Further price drops only make SSD that much more attractive. Both WDC and Seagate see the writing on the wall, with WDC limiting the roll-out of new generations and Seagate pivoting away from sub 1TB capacities.
  • Looking at the near term, it is a question whether Windows 10 upgrades will offer enough of a tailwind to offset the seasonal slowdown after the holiday build. Surveillance, a sector requiring low cost storage that is optimised to hard drives, remains an active spot for HDD in the client space.

Buckler’s snapshot view confirms the main disk and flash storage media trends. Disk, either on-premises or in the cloud, is the capacity-focused storage choice du jour with flash the favourite for fast access data. 

Samsung squeezes notebook-scale storage into mobile phones

Samsung has announced terabyte-size mobile phone flash memory.

Mobile phones with this 1TB drive should be able to shoot video at 960 frames/sec, turning them into mini-camcorders. Users can also ditch buying additional memory cards to bulk out phone storage. 

Never mind notebook-class – this article was written on a 2013 desktop with 1TB of storage. It’s taken just five years for mobile phones to match desktop levels of compute and storage. Where next?

Record capacity

Samsung is introducing 512GB and 1TB eUFS drives for smartphones, beating Toshiba’s UFS drives with their 128GB, 256GB and 512GB capacities.

Samsung expects strong demand for the drive and is expanding production of 512Gbit V-NAND at its Pyeongtaek plant in Korea throughout the first half of 2019.

Unlike Toshiba, Samsung is happy to reveal detailed, juicy performance numbers.

  • Sequential reads at up to 1,000MB/sec
  • Sequential writes up to 260MB/sec
  • Random reads are up to 58,000 IOPS, a 38 per cent increase over Samsung’s prior 512GB product
  • Random writes are up to 50,000 IOPS, 500 times faster than a microSD memory card

Toshiba uses 96-layer 3D NAND. Samsung does not want to reveal actual layer count, but is in the same area with its gen 5 VNAND tech, specified as having 9x (90 to 99) layers.

Sixteen stacked layers of this 512Gbit VNAND memory make up the 1TB capacity, with a Samsung controller inside the 11.5mm x 13.0mm package.
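A quick check of that arithmetic, plus a naive fill-time figure derived from the quoted 260MB/sec sequential write rate (an illustration, not a Samsung specification):

```python
# Sixteen 512Gbit V-NAND dies per package -> a nominal 1TB of capacity.
dies, die_gbit = 16, 512
total_gb = dies * die_gbit / 8        # 8,192 Gbit -> 1,024 GB
print(f"{total_gb:.0f} GB")

# Naive time to fill the package at the quoted 260MB/sec sequential write rate.
write_mb_per_sec = 260
print(f"{(total_gb * 1000) / write_mb_per_sec / 3600:.1f} hours to fill")  # ~1.1 hours
```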

Samsung’s drive is compatible with the UFS v2.1 interface whereas Toshiba’s is UFS v3.0 compatible.


Hyperconverged infrastructure heads closer to the edge

Interview: Get hyperconverged infrastructure (HCI) product development and marketing wrong and your business becomes a turkey. Witness Maxta, which crashed into the buffers this week, as an example of when it goes wrong.

With heavyweights like Nutanix and Dell/EMC/VMware dominating the market, mis-steps by smaller players can lead to disaster. 

So how to survive and thrive? Blocks & Files spoke to Jeff Ready, CEO of Scale Computing, about his views on storage-class memory, NVMe-over-Fabrics and file support in HCI, and where he thinks the HCI market is going.

Overall, Ready reckons the edge is key, and automated admin and orchestration are key attributes in making HCI suitable for deployment outside data centres.

Blocks & Files: Does storage class memory have a role to play in HCI nodes?

Jeff Ready: It will, once there is sufficient demand for applications supporting it.  With the high cost this is still a niche play, but that will be rapidly changing as the price improves.  

Blocks & Files: In scaling out, does NVMe over Fabrics have a role to play in interconnecting HCI nodes?

Jeff Ready: Absolutely… today, Hyper-Core Direct utilises NVMeoF, and our near term plan is to use NVMeoF in HC3 as the storage protocol underpinning scribe, performing all of the node to node block operations for data protection, regardless of underlying devices (even IOs destined for HDD will utilise NVMeoF). The simplicity and efficiency of the protocol is the reasoning behind this.

Blocks & Files: Does file access – thinking of Isilon, Qumulo and NetApp filers – have any role in HCI? If we view HCI as a disaggregated SAN can we think about a similarly disaggregated filer? Does this make any kind of sense?

Jeff Ready: Sure, it can make sense. Often depends on the applications that are running and what makes it easy for them. We’ve incorporated file-level functionality into our HCI stack as it makes it easy to (among other things) do file-level backup and restore from within the platform itself. Not every deployment needs it, but it can be handy. The key is offering the functionality without complicating the administration, otherwise you’ve missed a huge area of time savings with HCI.

Blocks & Files: Is software-defined HCI inherently better suited to hybrid cloud IT?

Jeff Ready: Yes, assuming that there is an architecture that exists in both places.  For example with our Cloud Unity product partnership with Google, the resources of the cloud appear as part of the same pool of resources you have available locally. Thus, the applications can run in either location without being changed or otherwise aware of where they are running. HCI implemented in this way creates one unified platform.  

Blocks & Files: Does Scale support containerisation? What are its thoughts on containerisation?

Jeff Ready: Yes. We find most organisations are still in the very early stages of considering containers. Some applications and deployments make great sense for containers, others less so. Our philosophy is to provide a single platform that can run both VMs and containers, so that the architecture does not stand in the way of the right deployment method for a particular app.

Blocks & Files: How does Scale view HCI with separate storage and compute nodes, such as architectures from Datrium and NetApp? Are there pros and cons for classic HCI and disaggregated HCI?

Jeff Ready: Having storage nodes available to be added to a mixed compute/storage node cluster is something that we and others have done for some time. 

So long as it all pools together transparently to the applications and administrators, then this makes sense. If the disaggregated resources then require separate administration or application configuration, you’ve sort of missed the entire point of HCI. 

Our approach is that the apps should feel like they are running on a single server, and that the virtualization, clustering, and storage take care of themselves. If you are back to managing a hypervisor (VMware or otherwise), or managing a storage stack (using protocols, carving out LUNs and shares, etc.), or piping compute nodes and storage nodes together as an admin, then you really don’t have a hyperconverged stack, in my opinion.

Ready, Steady, Go!


Blocks & Files: How do you see the HCI market evolving?

Jeff Ready: We’re seeing different features for HCI emerging in different markets.  

Edge computing is focused on cost, footprint, and manageability. Other areas are focused on streamlined performance and optimisation. Some are general purpose deployments that need a bit of each.  

For us, the emergence of edge computing is very exciting, where automation and orchestration is an absolute must-have, and this is where our own tech really shines: deploying into places where there are often zero IT admin resources – stuff needs to work without humans touching it when you are trying to manage thousands of locations. 

The edge is a natural complement to cloud computing and I expect this to be the fastest growing area in IT for some time to come. HCI is a technology that applies at the edge, but the automation and orchestration are really key drivers.

Comment

Smaller HCI suppliers need to keep a sharp eye on new technology, adopt it promptly, and understand where HCI fits in the IT market.

OK so this is easy to say, much harder to put into practice. But the importance of doing the simple things well is illustrated by developments this week, with HCI startup Maxta running into the buffers and a big beast, Cisco, flexing its HCI muscles. (See my report on Cisco strengthening its HyperFlex HCI offering with all-NVMe systems, Optane caching, and edge systems.)

In edge locations with no local IT admin, the HCI system has to do all the IT things needed to keep the branch, office, shop, whatever, operating and hooked up to the data centre and, probably, public cloud. Forget that and the HCI vendor is heading nowhere.

Pliops gets funding to build GPU-like Storage Processing Unit

Pliops has secured a $30m investment to bring its Storage Processing Unit into production and is targeting a launch in mid-2019.

The Israeli company aims to accelerate storage stack processing for hyperscale companies that use applications such as MySQL and Cassandra cloud databases.

“Storage stack” encompasses storage work by the application as well as the storage IO processing steps by the Linux OS on the host server. Pliops’ hardware accelerator will speed both aspects.

We envisage an add-in PCIe card containing specialised semiconductor hardware and firmware – ASICs or FPGAs, for example.  Pliops says its technology can interface to local storage on the host or external flash storage using NVMe over Fabrics.

Blocks & Files view of Pliops scheme

Pliops says its product will enable more efficient scaling, via a 90 per cent reduction in compute load, a 20x reduction in network traffic, a 50x improvement to latency, and over 10x application throughput. 

Pliops emerged from stealth late last year. The new funding follows seed and A-round funding of $10m.

Macronix: we can build 3D NAND cheaper than all the rest

Macronix, a Taiwanese semiconductor firm, is to build 48- and 96-layer 3D NAND – and it thinks it can cut industry prices by up to a third.

Macronix is a small player in the NAND market and also makes NOR flash and ROM products. It will start production of 96-layer 3D NAND at the end of 2020, and will use 12-inch wafers and output 128Gbit and 256Gbit product, according to a Digitimes report.

Macronix has researched 3D NAND technology for some time. Company founder Miin Wu thinks Macronix could cut 3D NAND prices by up to a third, an EE Times report in September 2018 said.

This involves making 3D NAND with a new architecture that offers a 30 per cent lower cost/bit. Macronix will initially make ordinary 3D NAND at the rate of 50,000 wafers/month. 

When that business is established it will introduce the new architecture, a Single-Gate, Vertical Channel (SGVC) scheme. Existing 3D NAND uses a Gate All Around architecture. 

SGVC features a smaller cell size and pitch scaling capability which enables fewer stacking layers to reach a specific density. That means a lower cost/bit.

SK Hynix, the semiconductor giant, is looking at 512Gbit and 1Tbit densities in its 3D NAND products. Macronix’s focus is on lower density products and lower prices.

Restructuring pays off as Commvault delivers record quarter

Commvault has posted record quarter revenues and its biggest net income in five years.

The data protector and manager’s financial third quarter, ended December 31, 2018, saw revenues of $184.3m, up two per cent y-o-y, and $13.4m net income. This contrasts starkly with the $58.9m loss recorded a year ago, but that reflected US government corporation tax accounting changes.

The company is restructuring in response to the overtures of activist investor Elliott Management.  One initiative sees Bob Hammer, the CEO, agreeing to retire.

Bob Hammer

William Blair analyst Jason Ader commented: “The top-line improvement was driven mainly by a pickup in large enterprise deals (up 18 per cent).”

He said: “This was the first clean quarter from Commvault in a while and suggests real progress with Commvault Advance, the company’s multi-year strategic initiative to simplify products, licensing models, and pricing, and to shift field resources toward supporting channel and alliance partners.” 

The chart shows how the quarter stacks up.

Subscription farming

Commvault has begun moving to subscription revenues and total repeatable revenue was $121m, an increase of 15 per cent y-o-y. Software and products revenue was $84.5m, an increase of 4 per cent y-o-y. Subscription and utility annual contract value more than doubled year-over-year to approximately $90m.

Services revenue in the quarter was $99.8m, up 1 per cent y-o-y. Operating cash flow totalled $31.1m, compared to $31.2m a year ago. Total cash and short-term investments were $457.9m at the end of the quarter.

Hammer said the company’s revenue performance, “coupled with our continued successful efforts to right-size the business, allowed us to continue to deliver significant year-over-year earnings growth … We believe Commvault is well positioned for both our fiscal Q4 and fiscal 2020.”

Commvault's fourth quarter guidance was c.$189m. That would produce fiscal 2019 revenues of $718.6m, a record.

The company is looking for a new CEO, and Ader commented: “With respect to the CEO search, this appears to be the swan song earnings call for long-time CEO Bob Hammer, who commented that an announcement on a new CEO would come soon. … Mr. Hammer can now walk off into the sunset with a measure of gratification that his team’s business transformation efforts are bearing fruit.”