
The three Cs of Actifio 10c: Clouds, containers and copy data

Actifio has released a major update of its eponymous copy data manager software.

Actifio 10c focuses on three ‘Cs’: cloud, containers and copy data. The company emphasises backup as a means of getting data into its orbit and using it for archive, disaster recovery, migration and copy data provisioning within and between public clouds and on-premises data centres.

CEO Ash Ashutosh said in a press briefing in Silicon Valley last week: “Copy data begins with backup. And goes all the way to archive.” He said cloud backup is traditionally a low-cost data graveyard but Actifio “provides instant access and re-use.”

Actifio CEO Ash Ashutosh

Warming to his theme, Ashutosh added: “The disaster recovery workload is just metadata,” claiming 10c offers single-click DR. In this worldview, virtual machine recovery, migration and disaster recovery are just other forms of copy data management.

Peter Levine, general partner at Andreessen Horowitz, the venture capital firm which has invested in Actifio, said in the same briefing that “where software eats the world, data eats software.” Actifio is “in the exact right place – hybrid cloud,” he said. “Hybrid and multi-cloud have to work together… to move data seamlessly between all these repositories… It was ahead of its time but the time has now grown into Actifio – [which is] right on the cusp of cracking open a new layer in the software stack.”

Mostly cloudy

10c backs up on-premises data to object storage in the cloud and supports seven public clouds: Alibaba, AWS, Azure, GCP, IBM COS, Oracle and VMware.

Data movements in an idealised picture of Actifio’s multi-cloud hybrid universe

Actifio’s objects can be used to instantly recover virtual machines. However, at this point, only AWS, Azure, GCP and IBM COS are supported for direct-to-cloud backups of on-premises VMware virtual machines.

The 10c product stores database backups in the cloud using their native format and can clone them. Actifio positions this as a facility for in-cloud test and development. Developers and testers will use Jenkins, Git, Maven, Chef or Ansible and request fresh clones through them, via a 10c API. The Actifio software sends database clones from these backups to the testers’ containers running in AWS, Azure, GCP and the IBM COS clouds.
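Actifio has not published the API schema, but a clone request from a CI tool could look something like this minimal sketch (the endpoint field names and identifiers here are hypothetical, not Actifio’s actual API):

```python
import json

# Hypothetical sketch of requesting a database clone via a 10c-style REST API.
# Field names and values are illustrative only, not Actifio's real schema.
def build_clone_request(source_backup_id, target_cloud, target_container):
    """Build the JSON payload a CI tool (e.g. a Jenkins job) might send."""
    return json.dumps({
        "backup": source_backup_id,       # the cloud-resident database backup
        "cloud": target_cloud,            # e.g. "aws", "azure", "gcp"
        "mount_target": target_container  # the tester's container receiving the clone
    })

payload = build_clone_request("ora-prod-20191210", "aws", "test-runner-42")
print(json.loads(payload)["cloud"])  # -> aws
```

A Jenkins pipeline step would POST this payload to the 10c endpoint and mount the returned clone inside the test container.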

10c also brings simple wizards for SAP HANA, SAP ASE, Db2, MySQL database backup and recovery and external snapshot support for Pure Storage and IBM Storwize arrays.

Actifio’s objects are self-describing – which aids their movement between clouds as they carry their metadata within them. Ashutosh said: “You can’t scale with a separate object metadata database.” He noted object storage supplier Cloudian, for example, uses a Cassandra database for metadata.

10c speed

The 10c speed angle is strengthened by Actifio’s ability to create and provision 50TB clones of an Oracle database from a 17TB object, and deliver them to five test developers as virtual object copies in eight minutes. It can deliver five production copies, in block format, in 13 minutes. (An IBM/ESG document describes this test.) Actifio said Oracle RAC’s own procedures would take 90 minutes at best and possibly days to produce five block-based copies.

An on-premises cache can be used to speed self-service on-premises recoveries and lower cloud egress charges. The device uses SSD storage and can cache reads from and writes to cloud object storage to increase overall IO speed.

Actifio partners

Actifio 10c is generally available in the first quarter of 2020. The enhancements in Actifio 10c will also be available in deployments of Actifio GO, the company’s multi-cloud copy data management SaaS offering, as well as Actifio’s Sky and CDX products.

Actifio has more than 3,600 enterprise customers in 38 countries. Hitachi and NEC are big resellers in Japan and Lenovo is also a reseller. IBM resells Actifio’s software as its Virtual Data Pipeline, and this competes somewhat with IBM’s own Recover software. There is no partnership with HPE or NetApp but, Ashutosh said: “We’re friendly with Dell EMC.”

Actifio will sell software in the SMB market through resellers.

HAMR or MAMR? Western Digital’s disk drive roadmap could go either way

Blocks & Files met Siva Sivaram, President for Technology and Strategy at Western Digital, in San Jose. We discussed various aspects of disk drive technology, NAND and storage-class memory, and a fascinating discussion it was too, touching on energy-assisted drives, multi-actuators and penta-level cell flash.

Siva Sivaram, President for Technology and Strategy at Western Digital.

We summarise the disk drive technology discussion in this article and will follow it with one on solid state technology.

Energy-assist technologies

Microwave-assisted magnetic recording (MAMR) has been Western Digital’s public path to capacities greater than 16TB, along with shingled magnetic recording (SMR) to provide a further boost. The company’s recent 18TB and 20TB drive announcement reflected this.

Sivaram said: “Current PMR (perpendicular magnetic recording) doesn’t scale.” The jump from 14TB drives to the next-generation 18TB conventional and 20TB shingled drives was accomplished by adding an extra platter, taking the count to nine. WD also used some aspects of its MAMR technology – but not to the extent of beaming microwaves at the bit recording areas on the aluminium platters from a spin torque oscillator in the write head.

The read/write head has a ‘MAMR-like structure’ but the areal density improvements derive from WD’s more accurately positioned micro-actuator heads. This and the addition of the extra platter took the capacity to the 18TB-20TB level without full energy-assist technology.

By enhancing this current technology, WD can see its way to the next capacity level, which we assume is 22TB – 25TB, although Sivaram did not specify numbers. After that, WD will need to use a full energy-assist technology.

WD has not yet decided if MAMR or HAMR will produce the best mix of areal density, performance, reliability, and total cost of ownership. The company has spent $500m on HAMR research and development, according to Sivaram, with many filed patents, more than for MAMR technology.

WD patents in different technology areas. Note the high HAMR patent count

Our take is that WD has the luxury of being able to defer the decision for a couple of years when the relative merits of each technology will be clearer.

Multi-actuator read/write heads and costs

Existing drives have a single actuator device moving the read/write heads across the disk’s platter surfaces and providing a single IO channel. Seagate is openly developing dual actuator technology to enable two IO channels and a consequent increase in IO capacity.

There may be a need for dual actuators in the disk drives that follow the 18TB – 20TB disk drive generation, according to Sivaram. But “up to the 18TB capacity level we didn’t need dual actuators as customers are not saying they need them.” It is pointless to add technology and increase cost when there is no customer demand, he added. “WD focuses on the total cost of ownership.”

Value-based pricing

This led Sivaram to discuss pricing: “Disk capacities are scaling beautifully. We want to move towards value pricing.” Customers are getting substantial added value from the linear scaling in disk drive capacity to the current levels, according to Sivaram. They would get even more from future capacities.

With larger capacity drives the manufacturing cost in $/TB terms comes down. He suggested that, perhaps, it is time to think about moving from manufacturing cost-based pricing centred on $/TB towards value-based pricing: “$/TB is no longer the right metric.”

A customer gets more value from a larger capacity drive, he said. “They can monetise more data.” This extra value could be larger than the sheer savings of the lower $/TB cost that WD can pass on. In that case WD could pass on only a proportion of that cost-saving to the customer and so charge for some of the extra value.

Host-managed shingled drives

Shingled magnetic recording (SMR) increases disk capacity by overlapping write tracks so that more can be crammed onto a disk platter without affecting the narrower read tracks. Why is a disk’s write track wider than a read track?

Sivaram said it was due to atomic vibrations in the recording media. A bit’s atomic vibrations can affect a neighbouring track bit, changing its value. Consequently, bits have to have a moat around them and along the tracks to absorb the vibration effects. This stops them affecting bits in neighbouring tracks, and makes the write tracks wider. Once written, tracks can be read by scanning a narrower central section. They can also be partially over-written by the next track – the shingling effect – without affecting a track’s readability.

Shingling write tracks

If disk drive tracks did not have to cope with atomic vibration, write tracks would be the same width as read tracks and shingling would not be necessary. At absolute zero there is no atomic vibration and the read and write tracks could have the same width. As the temperature rises above absolute zero, atomic vibration starts and its amplitude increases as the temperature rises. Hence write tracks have to widen to cope with this.

Shingled drives are built with the shingled tracks in circular zones separated by empty areas. This means that, when data is rewritten, only the group of overlapping tracks in a zone has to be changed and not the whole drive. This process of rewriting data in a zone of overlapping tracks can be handled by the drive controller or by the host server system.
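As a toy illustration of why rewrites happen at zone granularity, a shingled zone can be modelled as an append-only group of tracks that must be read back and rewritten in full when any one track changes. This is purely illustrative, not WD’s firmware logic:

```python
# Toy model of SMR zone rewrites: overlapped tracks mean a track can't be
# rewritten in place, so the whole zone is read, patched and re-written.
class ShingledZone:
    def __init__(self, num_tracks):
        self.tracks = [None] * num_tracks
        self.write_pointer = 0          # shingled zones are append-only

    def append(self, data):
        self.tracks[self.write_pointer] = data
        self.write_pointer += 1

    def rewrite_track(self, index, data):
        # Overwriting track N would damage the shingled track N+1, so the
        # zone is read back, patched in memory, reset and re-appended in full.
        snapshot = self.tracks[:self.write_pointer]
        snapshot[index] = data
        self.tracks = [None] * len(self.tracks)
        self.write_pointer = 0
        for track in snapshot:
            self.append(track)

zone = ShingledZone(4)
zone.append("a"); zone.append("b"); zone.append("c")
zone.rewrite_track(1, "B")              # forces a full-zone rewrite
print(zone.tracks[:3])  # -> ['a', 'B', 'c']
```

In a drive-managed SMR drive this read-modify-write happens inside the controller; in a host-managed drive the host software must do it, which is the software change limiting adoption.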

When shingling is managed by the disk drive controller no changes to the host system’s software are needed. The shingled drive is a drop-in replacement for a conventional, non-shingled drive. Host-managed shingled drives, on the other hand, do require the host system’s software to change so that it manages the shingled drive write process. This has limited their adoption.

In Sivaram’s view the adoption is starting to happen: “The benefits of the additional terabytes tempt customers. Adoption is starting to happen. Lots more customers are comfortable with it.”

TL;DR

Western Digital sees no need to rush to delivering new technology when it is not required by customers and when it can match and surpass competitors with current technology. It regards itself as the disk industry’s areal density leader and will closely watch drive technology costs, and customer needs and benefits, to decide when to move ahead.


Kioxia has cunning plan to make Penta-level flash production more feasible

Kioxia, formerly known as Toshiba Memory, has devised a way to make 3D NAND cells smaller and so reduce the layer count needed to reach a capacity level.

A floating gate is the part of a Kioxia NAND cell that holds the charge designating a binary one or zero. Read circuitry senses whether current flows through the cell, which depends on the charge stored in the gate, and so determines the binary value. The structure is called a floating gate because it is electrically isolated, ‘floating’ between insulating layers.

In Kioxia’s 3D NAND technology, known as BiCS (bit column stacked), the floating gate has a circular form and the cells are built in layers with the layer count and cell bit count increasing to make higher capacity NAND chips. 64-layer 3D NAND is being succeeded by 96-layers and TLC (3bits/cell) is progressing to QLC (4bits/cell). Penta-level cell technology (PLC or 5bits/cell) is on the horizon.

However, QLC flash needs to support 16 different voltage levels per cell instead of TLC’s eight, and progressing beyond 96 layers to 100-plus is creating manufacturing difficulties. It is harder to etch holes precisely in the structures, craft cells with a uniform size and get good yields from NAND wafers. These problems are worse still with PLC NAND, which must distinguish 32 levels.

Simulated voltage distributions in QLC and PLC NAND cells.
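The level counts follow directly from bits per cell: each additional bit doubles the number of voltage states a cell must distinguish, which is why PLC is so much harder to manufacture reliably than QLC. A quick sketch:

```python
# Voltage levels a NAND cell must distinguish grow exponentially with
# bits per cell: levels = 2 ** bits.
def voltage_levels(bits_per_cell):
    return 2 ** bits_per_cell

for name, bits in [("TLC", 3), ("QLC", 4), ("PLC", 5)]:
    print(name, voltage_levels(bits))
# -> TLC 8, QLC 16, PLC 32
```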

Kioxia’s engineers have lowered the time to write data to cells by splitting the circular gate into two semicircular halves. This design results in lower electron leakage, and the cells can be made smaller too. In turn, more cells will fit on a wafer and fewer layers are needed to reach a capacity level. Kioxia calls this ‘Twin BiCS’ and says the technology makes PLC NAND manufacturing more feasible.

If this technology enables a significantly lower number of layers and the use of PLC NAND at any capacity point, such as 1TB, rival NAND manufacturers will have to develop comparable technology to keep their costs and prices in line with Kioxia.

Kioxia announced Twin BiCS at the IEEE International Electron Devices Meeting (IEDM) held in San Francisco, CA on December 11.

Lexar demos world’s fastest consumer SSD

Lexar is demonstrating a prototype 7.5GB/sec NVMe PCIe 4.0 SSD which looks set to be the world’s fastest consumer SSD when it launches next year.

PCIe 4.0 doubles the current PCIe 3.0 bus speed from 1GB/sec per lane to 2GB/sec. Current 4-lane PCIe 3.0 SSDs deliver up to 3.5GB/sec and PCIe 4.0 SSDs such as Gigabyte’s Aorus PCIe 4.0 SSD for gaming systems deliver 5GB/sec. Lexar’s prototype drive goes 50 per cent faster.
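As a back-of-envelope check on those headline figures, peak bandwidth is simply per-lane throughput times lane count, ignoring protocol overhead:

```python
# Approximate usable throughput per PCIe lane, in GB/sec, as quoted above.
PER_LANE_GBPS = {"pcie3": 1.0, "pcie4": 2.0}

def max_bandwidth(gen, lanes):
    """Theoretical ceiling; real drives land somewhat below it."""
    return PER_LANE_GBPS[gen] * lanes

print(max_bandwidth("pcie3", 4))  # -> 4.0 (real drives top out near 3.5GB/sec)
print(max_bandwidth("pcie4", 4))  # -> 8.0 (Lexar's 7.5GB/sec sits just under this)
```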

The as yet un-named demo drive, as reported by The SSD Review, uses 96-layer 3D NAND organised into TLC (3bits/cell) format and has a 1TB capacity in its M.2 2280 gumstick form factor. Lexar has not revealed controller details except that it is built with a 12nm process and 4 PCIe 4.0 lanes.

LDPC (Low Density Parity Check) error correction code is being used. This maximises information transfer across a noisy channel by separating the signal from the noise when reading data from NAND cells.

The 7.5GB/sec bandwidth was tested using the IOMeter benchmark. A Crystal DiskMark run delivered sequential reads at 6.2GB/sec and writes at 4.2GB/sec.

Notebooks, desktops and servers using PCIe 4.0 should run significantly faster than today’s PCIe 3.0 systems. Enterprise-class PCIe 4.0 SSDs should go faster still. For example, Liqid’s LQD4500 is an AIC format enterprise SSD using 16 PCIe 4.0 lanes to deliver a massive 24GB/sec.

The Lexar consumer SSD business was spun off by Micron in 2017 and is now owned by Chinese vendor Longsys. Lexar is expected to announce its PCIe 4.0 drive with 512GB, 1TB and 2TB capacity points in Q2 2020. Blocks & Files expects Samsung will have announced its line of PCIe 4.0 class drives by then.

Scale Computing takes HCI to the thin edge

Last week Scale Computing launched a cigarette box-sized HCI edge appliance, the HE 150. You can read our story on HE 150 specs and availability here. In this article we cover the company’s rationale for launching the industry’s first thin edge HCI appliance.

Fat edge systems are IT systems outside the core data centre that still require IT staff. Thin edge systems have few or no IT staff and range from remote and branch offices to kiosks and telegraph pole installations.

Scale co-founder Scott Loughmiller said: “Scale is hyper-focused on the edge and can manage 1,000 site edge computing deployments.”

Scale co-founder Scott Loughmiller

In a press briefing in San Francisco last week, he said thin edge locations often have several computer systems running retail tills, video surveillance and various control systems. Each system has separate management and support services, and the site may require a staff member to become a part-time IT person.

Loughmiller said: “Edge sites have no IT staff and often up to eight specific IT silo systems. Scale shrinks footprint and eases management … bringing high-availability and remote management and security to remote sites and replacing the part-time IT person.”

Pre-production HE150 with iPhone 6 for size comparison.

With the HE 150, Scale has designed a product that doesn’t need a server closet, networking switch or air conditioning. In many ways it’s a classic embedded system which runs classic X86 software and is customisable, so that it can manage other computer-based systems at the edge sites and become an all-in-one box. 

Out of the HCI trenches

Scale’s entry into thin-edge computing gives the company a foothold in virgin HCI territory, far from the madding crowd of vendors jockeying for position below the dominant players VMware and Nutanix.

General hyperconverged infrastructure (HCI) market suppliers focus on enterprise data centres and smaller sites where there is a compute closet and, generally, an IT person.

With Dell EMC/VMware and Nutanix supplying over half the HCI market and growing, other vendors are duking it out for leftovers. Competition is intense, with Cisco, DataCore, Datrium, HPE, NetApp, Pivot3 and Scale battling for revenues.

Scale Computing pulled in a $34.8m funding round late last year. The intended move into thin edge computing systems presumably excited the VCs enough to cough up the cash.

Panasas mulls ‘Ludicrous Mode’ latency-killer for PanFS

GPU-using workloads such as artificial intelligence and machine learning require faster access to data and metadata to keep them busy. And this presents a challenge for high performance storage vendors such as Panasas.

The company’s PanFS high-performance parallel file system supports mixed IO and high bandwidth but GPU-heavy workloads require lower latency access than the system currently provides.

Panasas is exploring a ‘Ludicrous Mode’ speed accelerator addition to PanFS and software architect Curtis Anderson recently briefed Blocks & Files about this. 

He said Panasas engineers are investigating ways of adding lower latency access such as NVMe tier insertion. However, there is no formal commitment to delivering these technologies, which are internally dubbed ‘Ludicrous Mode’, in homage to Tesla’s vehicle acceleration. Blocks & Files thinks that they could come to fruition in the second half of 2020 – subject, of course, to the engineering efforts panning out.

In today’s PanFS setup, front-end client systems access back-end Object Storage Devices (OSDs) using Panasas DirectFlow. The clients use the DirectFlow protocol to send IO requests to out-of-band metadata handling devices called ActiveStor Directors (ASDs). These set up direct links between the client systems and the OSDs. High bandwidth comes from many OSDs sending data to the clients at once.

Anderson said: “One way of adding lower latency would be to make the OSDs all-flash systems and use faster links to them.” But this would be highly expensive when HPC systems use hundreds of terabytes of capacity. Also, not all HPC workloads require very low latency access.

Ludicrous Mode inserts an NVMe-over-Fabrics (NVMe-oF) all-flash storage array between the clients and the OSDs. We think it could provide sub-half millisecond access to data in the NVMe-oF array for the client systems.

Diagram key: OSD = Object Storage Device, ASD = ActiveStor Director, DFC = DirectFlow Client.

They would use NVMe-oF-capable NICs (network interface cards) and RDMA (Remote Direct Memory Access) to access the data. Access by these clients and others to the back-end OSDs would take place in the standard PanFS way.

The NVMe array would use commodity hardware to save cost. Users would manage the NVMe-oF storage using so-called Extended Attributes of PanFS to control data set placement between the backend high bandwidth OSDs and the low latency NVMe-oF store.
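Panasas has not detailed the attribute schema, but the idea can be sketched with a hypothetical placement attribute steering a data set to one tier or the other. The key name and values below are invented for illustration, not PanFS’s actual Extended Attributes:

```python
# Illustrative sketch of attribute-driven tier placement.
# "panfs.placement" and its values are hypothetical, not PanFS's real schema.
PLACEMENT_ATTR = "panfs.placement"

def choose_tier(attrs):
    """Route a data set to the low-latency NVMe-oF tier or the OSD back end."""
    if attrs.get(PLACEMENT_ATTR) == "low-latency":
        return "nvme-of-array"
    return "osd-backend"

print(choose_tier({PLACEMENT_ATTR: "low-latency"}))  # -> nvme-of-array
print(choose_tier({}))                               # -> osd-backend
```

The point of the design is that placement stays a user-visible policy knob rather than a separate storage silo.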

The OSDs have been upgraded with multi-tiered data placement:

  • NVDIMM for transaction logs, DRAM for data caching
  • NVMe for metadata, SSDs for small files, HDDs for large files
  • Low latency for small files and metadata, high bandwidth for large files


Hyperconverged stretching lead over converged as VMware extends lead over Nutanix

A wide gap has opened up between hyperconverged system sales and converged infrastructure (CI) in IDC’s latest Worldwide Quarterly Converged Systems Tracker, as VMware stretches out its lead over Nutanix in the HCI market.

In the third quarter of this year the overall converged systems market revenue grew 3.5 per cent year over year to $3.75bn. All this growth was due to the hyperconverged infrastructure (HCI) segment, as Certified Reference Systems and Integrated Infrastructure (CI) and Integrated Platforms lost ground.

  • Certified Reference Systems and Integrated Infrastructure: $1.26bn, 33.7 per cent revenue market share and an 8.4 per cent decline on the year
  • Integrated Platforms: $475m, 12.6 per cent share, and a decline of 13.9 per cent
  • Hyperconverged Systems: $2.02bn, 53.7 per cent share, and 18.7 per cent growth
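The three segment figures can be sanity-checked against IDC’s $3.75bn total and the full market share:

```python
# Roll up the three IDC segment figures quoted above.
segments = {  # name: (revenue in $bn, share in per cent)
    "Certified Reference Systems & Integrated Infrastructure": (1.26, 33.7),
    "Integrated Platforms": (0.475, 12.6),
    "Hyperconverged Systems": (2.02, 53.7),
}
total_rev = sum(rev for rev, _ in segments.values())
total_share = sum(share for _, share in segments.values())
print(round(total_rev, 3), round(total_share, 1))  # -> 3.755 100.0
```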

Citing a challenging datacentre infrastructure environment, Sebastian Lagana, IDC research manager, infrastructure platforms and technologies, said: “Hyperconverged solutions remain in demand as vendors do an excellent job positioning the solutions as an ideal framework for hybrid, multi-cloud environments due to their software-defined nature and ease of integration into premises-agnostic environments.”

Trends since the second 2017 quarter show how HCI revenues first overtook Integrated Platforms and then, a year later, CI systems. The HCI – CI gap is increasing. 

IDC issues numbers for the top five HCI system suppliers for the quarter, based on brand and, separately, the HCI software supplier. Dell Technologies is the branded products leader and Nutanix is second:

The software supplier-focused numbers show VMware, with Nutanix in second place again but on a higher number that reflects sales through other suppliers’ channels.

The chart below of the eight most recent quarters shows VMware overtaking Nutanix in Q2 2018, and it has maintained its revenue lead at a fairly constant level since.

IDC’s HCI numbers include compute, networking and storage components. Storage revenues are not revealed but are obviously smaller than the overall HCI number.

Stratoscale crashes to earth

Hyperconverged software startup Stratoscale is closing down.

The Israeli company was founded in 2012 by Ariel Maislos and Eyal Bogner and has raised $70m in funding to date. Investors include the venture capital arms of Cisco, Intel and Western Digital.

Stratoscale developed a unique type of HCI system, with deployed systems comprising an AWS-compatible region supporting EC2, S3, EBS, RDS and Kubernetes. 

In 2017 it bought Tesora, a privately-owned database-as-a-service company, and developed a fully-managed relational database service compatible with Amazon’s offering.

According to a Calcalist report Stratoscale talked to several other companies about a merger after finding that sales were growing too slowly.

Western Digital cranks up the speed with WD Blue SN550 NVMe SSD

Western Digital has upped the speed of its gumstick format PC+notebook boot drive by adding two more PCIe lanes.

The SN520 is a 250GB, 500GB or 1TB M.2 2280 drive using 64-layer TLC (3bits/cell) flash and NVMe running across 2 PCIe gen 3 lanes. The new Blue SN550 has the same capacities but uses 96-layer TLC flash and 4 PCIe lanes.

This gives it a notable performance improvement. The SN520 does up to 270,000/280,000 random read/write IOPS, whereas the SN550 cranks out 410,000/405,000. The SN520’s sequential read/write bandwidth of up to 1.7/1.4GB/sec increases to 2.4/1.95GB/sec with the SN550.

The endurance is 0.3 drive writes a day for five years – that translates to 600TB written at the 1TB capacity point. The SN520 maxed out at 300TB written.
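As a rough cross-check, DWPD converts to total bytes written as capacity × DWPD × warranty days. The naive product lands near, though not exactly on, the rated 600TB figure, since vendors quote TBW as a rating rather than a straight calculation:

```python
# Rough conversion between DWPD (drive writes per day) and total bytes written:
# TBW ≈ capacity * DWPD * warranty days. Vendors rate TBW directly, so the
# arithmetic lands in the ballpark of, not exactly on, the quoted number.
def tbw(capacity_tb, dwpd, years):
    return capacity_tb * dwpd * years * 365

print(tbw(1, 0.3, 5))  # -> 547.5, in the same ballpark as WD's rated 600TB
```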

WD said content creators will get faster content access, meaning death by PowerPoint slide decks can be devised faster.

The WD Blue SN550 NVMe SSD is available in the U.S. at select Western Digital retailers, e-tailers, resellers, system integrators, and the WD Store. Prices in WD’s UK store are £50 for the 250GB, £70 for the 500GB and £125 for the 1TB.

DDN Enterprise adds AIOps twist to Tintri roadmap

DDN is organising itself into two divisions: DDN At Scale is its traditional high performance storage business; and DDN Enterprise is home to three recent acquisitions – Tintri, Nexenta and IntelliFlash. They retain their individual brands: Tintri by DDN, Nexenta by DDN and IntelliFlash by DDN.

The Enterprise division, run by GM Tom Ellery, is shaping up to be quite different from At Scale, in technology, approach and culture, as enterprise storage array supply differs from academic HPC purchasing.

In a press briefing this month, Ellery and CMO Mario Blandini ran us through the technology roadmaps. Ellery’s division is developing an AIOps focus to give it a distinctive identity. This should help it make progress against the strong competition in the enterprise storage market.

DDN said the overall enterprise storage market is looking for faster access to more data that is stored in arrays which are simpler to manage and operate more efficiently than today’s products. The data will be stored in on-premises arrays and in the public cloud.

Picking up the pieces

But first a bit of scene setting. DDN’s acquisitions are a decidedly mixed bag.

Tintri went into Chapter 11 in July 2018, shortly after a disastrous IPO. DDN bought the company for $60m in a competitive auction. Last month, it said Tintri had booked $80m revs in the 12 months since acquisition, is profitable, growing and already accretive to DDN revenues.

Nexenta, a software-defined storage startup, raised $145m in 12 funding rounds since its inception in 2005. In October 2017 then CEO Tarkan Maner (now at Nutanix) said Nexenta had accumulated lifetime revenues of $100m and added he was gunning for $100m a year revenues by 2020. DDN scooped up the company in May 2019 on undisclosed terms, so we can infer Nexenta was nowhere near the $100m per year trajectory.

DDN bought Western Digital’s IntelliFlash business in September 2019, again on undisclosed terms, when the disk drive maker decided it was not suited, after all, to run the data centre systems business it had assembled through the acquisition of Tegile in August 2017.

Now to the roadmaps.

Database-aware Tintri

Tintri’s storage is VM-aware and the company will add a SQL database-aware feature to enable the management and provisioning of database storage elements, in the same fashion as users can handle a group of VMs. This is due for release this month.

This database-aware function will be extended. As a database’s IO needs change over time Tintri OS will reconfigure resources without admin involvement to provide what the database needs within set policy limits. The admins can get real-time and historic analytics reports.

Blandini said existing array management services are based on telemetry – or log data – that is examined and analysed, using fault-fix and predictive analytics, and then given to customers. The company aims to offer real-time analysis using AI operations (AIOps) techniques to produce intelligent storage infrastructure that integrates with an overall data centre.

DDN’s view of AIOps

AIOps in theory vs. AIOps in reality

Blandini said AIOps ideally means telemetry from data centre components feeds into an analytics intelligence function which munches the data and produces insights and actions to modify component layer items in the data centre.

The reality falls short of the ideal, with incomplete component layer log data going to different places and getting analysed in different ways. DDN aims to improve on this inadequate AIOps reality by introducing autonomous in-array functionality. (See presentation slide below.)

The AIOps software will execute in the array to provide the real-time capability and there will also be predictive analytics based on historical telemetry. Array log data is received and analysed by the array’s Tintri OS to see how the array is operating against its planned state and changes are made autonomously in real time to optimise performance.

Nexenta and IntelliFlash

The IntelliFlash arrays are positioned as high performance flash arrays while the Tintri products are for virtual server and, in the future, database environments. Nexenta’s main product, NexentaStor, is software-only and runs on commodity hardware.

DDN’s focus products in its At Scale and enterprise divisions.

NexentaStor and IntelliFlash software are both based on ZFS. Our understanding is that DDN will work to combine the two into a single environment.

This will be available as software-only to run on the IntelliFlash appliance hardware as that develops, with higher-capacity SSDs supported for example.

Some limited cross-fertilisation is happening already: there is a Nexenta VSA available for Tintri arrays, for example. Tintri is developing AIOps management functionality, while the IntelliFlash arrays come with IntelliCare cloud-based predictive analytics. This uses telemetry from the array.

We did not hear if DDN is to develop in-array AIOps capability for the IntelliFlash appliance. But Blocks & Files thinks IntelliCare will also gain a real-time AIOps function.


Virtana fleshes out AIOps story with VirtualWisdom 6.3

Virtana has extended its IT infrastructure monitoring software with global storage capacity management, wider system support and best practice sharing.

VirtualWisdom 6.3 introduces Capacity Auditor which provides a storage array usage report. The tool counts overall usage across multiple vendors’ arrays in the customer’s data centres.

Tim Van Ash, Virtana’s SVP for products, told us in a telephone briefing that Capacity Auditor is a “marquee analytics capability. Vendor storage resource management tools are not meeting customer needs today.”

An independent cross-vendor tool is needed that can be used in an AIOps system to predict capacity and increase it if required, he said. Capacity Auditor is a single pane of glass view of capacity usage rolled up across multiple vendors’ systems, presented in a form suitable for decision makers. Today these are people. Tomorrow it could be an AIOps decision-making function, Van Ash said.

He cited a telco customer who had one person spend a week every month manually preparing a similar report covering myriad filers. Now it’s a screen button press.

Virtana was formerly called Virtual Instruments and is building an AIOps capability so that IT infrastructure monitoring can become more autonomous.

VirtualWisdom 6.3 adds support for Red Hat KVM, Oracle Solaris and Pure Storage’s FlashArray. FlashBlade support will come next year.

The new release provides customisable 24-hour snapshot reports for servers, storage, and networks, called Dynamic Entity Insights. It also has portable reports and dashboards for sharing best practice across an authorised group of people, such as a customer’s business units.


Scale Computing launches mini-HCI edge box

Scale Computing has announced a miniature hyperconverged system for space-constrained edge computing sites.

The HE150 slots in below the HE500 edge HCI box, is the size of a small hi-fi system, and uses Intel’s NUC micro-PC processing hardware. HC3 Edge Fabric technology eliminates the need for a backplane network switch and reduces the HE500’s 4 ports per node to two. Scale said this lowers the total cost of ownership and makes network connectivity simpler. 

HE150.

Scale Computing CEO Jeff Ready said: “Our ability to deliver HCI technology in a smaller form factor and lower price point is making edge computing capabilities and resources more accessible to many organisations.”

HC3 Edge Fabric diagram.

The HE150 uses NVMe flash storage and needs little electricity. It doesn’t require installation in a server closet and is robust enough for industrial locations. The unit incorporates disaster recovery, high availability clustering, rolling upgrades and integrated data protection features.

Scale is introducing HC3 Edge general support for Intel NUC-based systems and will support Lenovo’s Smart Edge portfolio of fan-less, small form factor PCs.

NUC dimensions are 116.6mm x 112.0mm x 39.0mm. The HE150 is available now.