
MemSQL joins cloud database gang

MemSQL has released Helios, a cloud-native version of its eponymous in-memory database.

Helios provides access to MemSQL, running in memory, on AWS and the Google Cloud Platform. Azure support is on the way.

MemSQL is a distributed, in-memory relational database accessed through SQL. It is said to be much faster than other relational databases as it runs in memory, avoiding time-sapping IO to local or networked storage.

MemSQL co-CEO Nikita Shamgunov said in a statement: “With the majority of growth in the database market moving to the cloud, the time is right to release MemSQL Helios for enterprises looking for a viable alternative to legacy on-premises vendors like Oracle and SAP.”

Oracle has its own cloud version of its database and SAP’s in-memory database HANA is available in the cloud too. So MemSQL is following them…

Helios is freely available via MemSQL’s website for preview. Pricing for the on-demand service has not been revealed, but expect it to be cheaper than running MemSQL on-premises on your own servers.

MemSQL Helios diagram.

SingleStore

MemSQL yesterday also released a beta version of MemSQL 7. The updated database manages data in a new way, called SingleStore. According to the company this reduces the pain of choosing between a rowstore or a columnstore for workloads. A columnstore approach is read-optimised for large data sets but works less well with lots of individual record queries. Rowstores are optimised for indexed reads and writes of individual records but are slower at mass data reads.

The company said MemSQL 7 narrows the differences between the two approaches, and claims the result is the fastest database in real-world conditions, with the lowest cost for performance.
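For illustration, here is a minimal Python sketch of how the two table types have traditionally been declared in MemSQL, which speaks the MySQL wire protocol. The host, credentials and table names are hypothetical, and the columnstore syntax shown is the pre-SingleStore way of committing to one format up front:

```python
# Minimal sketch: declaring a rowstore and a columnstore table in MemSQL,
# which is MySQL wire-protocol compatible. Host, credentials and table
# names are hypothetical.
import pymysql

conn = pymysql.connect(host="memsql-host", port=3306,
                       user="root", password="", database="demo")
with conn.cursor() as cur:
    # Rowstore (MemSQL's in-memory default): fast point reads and writes.
    cur.execute("""
        CREATE TABLE orders_row (
            id BIGINT PRIMARY KEY,
            customer_id BIGINT,
            amount DECIMAL(10, 2)
        )""")
    # Columnstore: optimised for scans and aggregations over large data sets.
    cur.execute("""
        CREATE TABLE orders_col (
            id BIGINT,
            customer_id BIGINT,
            amount DECIMAL(10, 2),
            KEY (id) USING CLUSTERED COLUMNSTORE
        )""")
conn.close()
```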

MemSQL has system-of-record features – it can act as the authoritative source when multiple copies of the same data are held inside an enterprise. Accordingly, it offers incremental backup, and synchronous replication with little performance penalty, to ensure data is secure and available.

MemSQL 7.0 beta is available for download now, and will be generally available in the cloud and for download later this year.

Data Domain joins Dell’s Power gang, debuts bigger faster backup arrays

Dell EMC has changed the name of its Data Domain product line to PowerProtect, to coincide with the launch of bigger, faster systems.

Data Domain is Dell EMC’s deduplicating backup to disk array target and is the market leader in purpose-built backup arrays, according to IDC.

It has a back-end cloud storage facility spanning multiple public clouds, and works with backup software from various vendors to deduplicate backups before they are sent to the arrays. This accelerates data ingest.
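The principle behind that acceleration is straightforward: fingerprint data at the source and ship only chunks the target has not already stored. Here is a generic Python sketch of the idea (an illustration, not Dell EMC’s DD Boost code):

```python
# Illustrative sketch of source-side deduplication: split data into chunks,
# fingerprint each one, and send only chunks the target has not seen.
# This is a generic illustration, not Dell EMC's DD Boost protocol.
import hashlib

CHUNK_SIZE = 64 * 1024  # fixed-size chunks for simplicity

def backup(path, target_index, send_chunk):
    """target_index: set of fingerprints already stored on the array.
    send_chunk: callable that ships a chunk over the wire."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            fp = hashlib.sha256(chunk).hexdigest()
            if fp not in target_index:  # only unseen data crosses the network
                send_chunk(fp, chunk)
                target_index.add(fp)
```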

Speeds and feeds

The old product line starts with the software-only DD Virtual Edition and includes various scale-up appliances from the mid-range DD3300 through DD6300, DD6800, and DD9300 to the top-end DD9800.

The DD6300 and DD6800 are replaced by the new DD6900, the DD9300 by the DD9400 and the DD9900 replaces the DD9800. That’s four products replaced by three.

The new systems are faster, using PowerEdge servers, and store more data in less rack space; they use 8TB disk drives instead of the 4TB ones mainly used before. They store metadata in SSDs.

PowerProtect enclosure and bezel

Why not use still larger drives, as 14TB and 16TB ones are available? Beth Phalen, president of Dell EMC’s data protection division, told us that the 8TB drives provide the ideal mix of cost, reliability and performance.

Dell EMC said it has improved logical capacity by up to 30 per cent and data reduction by up to 65x. Backups are up to 38 per cent faster and restores up to 36 per cent quicker.

The product can provide instant access and instant restore of up to 60,000 IOPS for up to 64 virtual machines; it was 40,000 IOPS before. Phalen said this was achieved with the help of a larger cache and improvements to the file system inside the system’s software.

The new systems also support 25GbE and 100GbE network speeds. 

Here is a speeds and feeds table:

Logical capacity is based on deduplication effectively increasing the raw capacity. Active Tier is the main storage tier. It can be extended to locally-attached storage (Extended Retention) and to the public cloud (Cloud Tier).
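A quick worked example, using illustrative figures of our own, shows how a data reduction ratio turns raw capacity into logical capacity:

```python
# Illustrative arithmetic only: how a dedupe ratio scales raw into logical capacity.
raw_tb = 1000         # 1PB of physical Active Tier capacity
dedupe_ratio = 65     # Dell EMC's claimed best-case 65x data reduction
logical_tb = raw_tb * dedupe_ratio
print(f"{raw_tb}TB raw -> up to {logical_tb}TB logical")  # 1000TB -> 65000TB
```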

We have positioned the old and new model ranges, using usable capacity and ingest speed, in the chart below.

In addition, PowerProtect Software has stronger integration with vSphere, enabling self-service recovery. It supports cloud disaster recovery for automated fail-over and failback of VMware workloads in the public cloud. The software integrates with PowerProtect Cyber Recovery to protect against malware. This provides automated data recovery from a secure, isolated vault to ensure clean data.

Phalen told us Dell EMC is now on a quarterly release cycle for software updates to the PowerProtect systems, and the company will release new features more quickly.

PowerProtect DD VE and DD3300 products are available now, as are the PowerProtect Software and Cyber Recovery. The DD6900, 9400 and 9900 are globally available from September 30. 

Dell EMC is standardising on PowerX branding across its product range; witness PowerMax arrays, PowerEdge servers, and PowerSwitch networking products. Now we have PowerProtect backup boxes and software.

Igneous builds cloud-native data protection service atop Amazon S3

Igneous has built a software-only version of its DataProtect backup and archive product, using Amazon’s four S3 stores as file backup targets. According to the company this removes the need for on-premises backup hardware and for offline tape storage of archived data.

Announcing the data protection-as-a-service, Igneous yesterday described AWS Glacier Deep Archive as “an economic game-changer, allowing customers options never before considered – such as moving on-premises backup and restore services to public cloud”.

The four AWS S3 targets are S3 Standard, S3 Standard-IA, Glacier, and Glacier Deep Archive. Igneous said these cloud targets are vastly more scalable than legacy on-premises target stores.

The cloud is also cheaper, according to Igneous’s numbers for 1PB of capacity for five years:

On-premises appliance input costs:

  • Hardware (usable TB) – $128 – $280
  • Data centre space (RU) – $210 – $275
  • Power and cooling (TB/Yr) – $5.36 – $7.92
  • Networking (per port) – $136 – $512
  • Rack and stack time ($317/RU) – $317
  • Time to implementation – 4 weeks

The total is $178,469 to $354,492.

Backup to cloud input costs:

  • Cloud Object (usable TB/Yr) – $12.70 – $49.16
  • Request Costs – $0.10 – $0.50
  • Networking (per port) – $136 – $512
  • 10GbitE Direct Connect (Yr) – $19,710
  • Restore Cost (3%/TB/Yr) – $52 – $62
  • Time to implementation – 4 hours

The total is $77,303 to $260,910.

At the mid-points, the on-premises appliance costs $266,481, compared with $169,107 for cloud backup.
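For the record, here is the mid-point arithmetic behind those figures:

```python
# Mid-points of Igneous's five-year cost ranges quoted above.
on_prem_mid = (178_469 + 354_492) / 2  # 266,480.5 -> ~$266,481
cloud_mid = (77_303 + 260_910) / 2     # 169,106.5 -> ~$169,107
saving = 1 - cloud_mid / on_prem_mid
print(f"cloud is {saving:.0%} cheaper at the mid-point")  # ~37%
```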

The house that Igneous built

Igneous Inc. is a venture-backed startup. Based in Seattle, the company first came to market with an unstructured data management-as-a-service. Its software is cloud-native and can handle billions of files.

With its new DataProtect-as-a-Backup-Service Direct to AWS, on-premises primary file storage is backed up using Igneous agent software running in a virtual machine. The company has built the service using three home-grown technologies: DataDiscover, DataProtect and DataFlow.

DataDiscover provides the file discovery and indexing service, building an index and metadata store in an Igneous cloud. Scans run at up to 1.6 billion files per hour. DataProtect kicks off policy-driven backups from on-premises sources. Data transfer via DataFlow uses parallel ingest streams to speed things up.
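As a flavour of why parallel streams matter for scan and ingest rates, here is a generic Python sketch (not Igneous’s DataFlow code) that fans file fingerprinting out across worker threads:

```python
# Generic illustration of parallel file scanning; not Igneous's implementation.
# Fanning IO-bound work across many streams is what pushes scan rates far
# beyond what a single sequential stream can sustain.
import hashlib
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def fingerprint(path):
    """Hash one file; in a real scanner this would feed a metadata index."""
    return str(path), hashlib.sha256(path.read_bytes()).hexdigest()

def scan_tree(root, streams=32):
    files = [p for p in Path(root).rglob("*") if p.is_file()]
    with ThreadPoolExecutor(max_workers=streams) as pool:
        return dict(pool.map(fingerprint, files))
```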

DataDiscover has API access to popular NAS targets such as NetApp, Isilon, Qumulo and Pure Storage filers, which accelerates file discovery on these systems. The software handles petabytes of data in multiple locations, Igneous said.

DataFlow copies, moves and syncs files automatically, and this enables Igneous’s DPaaS to migrate files – for instance from site-to-site via replication or from site-to-cloud. If local storage is needed the target can be an on-premises version of S3.

DataDiscover GUI showing file systems’ capacity and files grouped by age.

Comment

Providing data protection as a service is the coming trend in backup. Many on-premises backup suppliers can send backups to the cloud – Veeam, Rubrik and Cohesity, for instance. Like Druva and Clumio, Igneous says its software is cloud-native and can therefore operate more efficiently. It also relieves customers of the need for an on-premises backup hardware appliance.

Cloud-native DPaaS providers will almost inevitably offer protection against cyber-threats and layer data analytics and copy data management services on top. This is the new frontier for backup.

Seagate: our disk drives are safe from SSDs for at least 15 years

A Seagate analysts’ day last week showed the company thinks the $/TB cost advantage of disk over SSDs will last for 15 years.

As reported by Wells Fargo senior analyst Aaron Rakers, Seagate said it can increase disk area density to maintain the $/TB cost differential with SSDs. The company is also introducing twin read/write heads, known as multi-actuator technology, to increase data IO speed.

Some technology analysts argue buyers will move from disk drives to SSDs when the $/TB cost of SSDs falls to five times or less that of disk. If disk drive makers can put more bits on a platter their $/TB cost goes down.

Seagate is introducing HAMR (Heat-Assisted Magnetic Recording) to cram more bits on a disk’s surface, and anticipates 20 per cent compound annual growth in areal density out to 2034. At the analyst briefing it provided a chart that shows the growth out to 2026:
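Compounding makes that a large multiplier; our arithmetic, for illustration:

```python
# What 20 per cent compound annual areal-density growth implies (our sums).
cagr, years = 0.20, 2034 - 2019
print(f"{(1 + cagr) ** years:.1f}x areal density by 2034")  # ~15.4x over 15 years
```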

Accordingly, Seagate thinks it can match the reduction in $/TB of SSDs coming from QLC flash (4 bits/cell) and increased 3D NAND layer counts. The NAND industry is introducing 96-layer NAND and plans to move to 128 layers after that. It is also looking at PLC (penta-level cell, 5 bits/cell) technology.

Seagate does not see the disk and enterprise SSD curves in the chart getting closer.

All of which means Seagate opposes the idea that SSDs will kill disk drives.

Hammering down on details

Seagate CTO Dr John Morris told analysts that Seagate has built 55,000 HAMR drives and aims to get disks ready for customer sampling by the end of 2020. Some customers are already testing early samples.

A few HAMR tech details emerged at the briefing. The drives use glass platters with iron platinum (FePt) media, the heads use a near-field transducer design, and the bits are oriented perpendicular to the platter surface. The writing process completes in about two nanoseconds and involves heating the bit area using a laser and then cooling it (the cooling is a passive part of the process).

Seagate’s disk drive roadmap sees nearline 7,200rpm 18TB conventional HAMR drives and 20TB shingled magnetic recording (SMR) HAMR drives ramping to mass production in the first half of 2020. These will be helium-filled and have 9 platters. Capacities will grow to 30TB+ in 2023 and 50TB+ in 2025.

OEMs are qualifying the multi-actuator heads which are shipping to customers now in 14TB and 16TB drives. The 14TB drive’s sequential bandwidth is up to 520MB/sec with twin heads.

Single head drives operate at up to 250MB/sec. SSDs operate up to 2.5GB/sec with four PCIe v3 lanes and well beyond that with more lanes. With PCIe 4.0 we can expect the SSD speed advantage over disk to increase.
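Rough per-lane arithmetic shows why. These are our approximations, using the usual figure of just under 1GB/sec of usable bandwidth per PCIe 3.0 lane, doubled for PCIe 4.0:

```python
# Approximate usable bandwidth per PCIe lane after 128b/130b encoding overhead.
GB_PER_LANE = {"pcie3": 0.985, "pcie4": 1.969}

def link_ceiling(gen, lanes):
    return GB_PER_LANE[gen] * lanes

print(link_ceiling("pcie3", 4))  # ~3.9GB/sec ceiling for a gen 3 x4 SSD
print(link_ceiling("pcie4", 4))  # ~7.9GB/sec ceiling for a gen 4 x4 SSD
```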

Market size

In his presentation Seagate CEO Dave Mosley categorised IT history into four phases, with the number of connected devices increasing in each phase.

  • IT 1.0: Centralized architecture with mainframes, 1960-1980; ~10 million connected devices
  • IT 2.0: Distributed client-server compute, 1980-2010; ~2 billion connected devices
  • IT 3.0: Centralized mobile-cloud phase, 2005-2025; ~7 billion connected devices
  • IT 4.0: Distributed edge compute, 2020+; trending to ~42 billion connected devices

These connected devices will need storage. The total capacity needed in 2025 will be around 17 zettabytes – 17 million PB. Generally speaking, half of that data will be stored in the public cloud and half on-premises.

Mosley reckons the total addressable market for disk drives will grow from $21.8bn in 2019 to $24bn in 2025. Some 80 per cent of storage will be mass capacity, which is where nearline disk drives slot in.

Comment

Seagate’s management is betting the company’s future on disk drives, unlike its rivals Toshiba and Western Digital, which make SSDs too.

Seagate has no NAND chip-making interests and only a small SSD business. Any migration away from disk drives will hit it severely.


Huawei builds world’s biggest all-flash storage array

Huawei used its annual shindig HUAWEI CONNECT in Shanghai last week to show off its latest all-flash storage array, the OceanStor Dorado V6. And it’s a monster – with the biggest capacity that Blocks & Files has seen in a storage array.

The company has not announced availability but prior to launch it has pumped out some big numbers via a press release and a marketing page.

The Dorado V6 performs up to 20 million I/O operations per second (IOPS) – twice as many as the next-best player according to Huawei, which did not name the rival. We think it is referring to Dell EMC’s PowerMax 8000, which delivers up to 10 million IOPS. Read latency for the Huawei system is as low as 0.1ms.

Speeds and feeds

The five Dorado V6 models support Huawei’s Hi1812E NVMe SSDs and NVMe-oF access. They scale by IO port counts and the maximum number of SSDs supported, as the table below shows.

Basic maths says the maximum raw capacities are:

  • 3000 – 36.8 PB
  • 5000 – 49.2 PB
  • 6000 – 73.73 PB
  • 8000 – 98.3 PB
  • 18000 – 196.61 PB
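Those figures are consistent with each model’s maximum SSD count multiplied by a 30.72TB top drive capacity. The drive counts below are our inference, reverse-engineered from the published capacities rather than taken from Huawei’s table:

```python
# Our inference: listed raw capacities = max SSD count x 30.72TB drives.
# Drive counts are reverse-engineered from the capacities, not from Huawei.
SSD_TB = 30.72
MAX_SSDS = {"3000": 1200, "5000": 1600, "6000": 2400,
            "8000": 3200, "18000": 6400}

for model, count in MAX_SSDS.items():
    print(f"Dorado {model}: {count * SSD_TB / 1000:.2f}PB raw")
# -> 36.86, 49.15, 73.73, 98.30 and 196.61 PB, matching the list above
```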

As it is 2019, the Dorado V6 of course has an AI processor – a first for a storage array, Huawei claims. The system uses this for performance tuning and management. The Dorado V6 is also packed with fault-tolerance features and Huawei claims a one-second switchover with uninterrupted links in the event of controller failure.

The array features inline deduplication and compression, thin provisioning, remote replication, continuous data protection, quality of service, cloning and snapshots and cloud backup.

Huawei has not revealed pricing yet but the V6 will probably cost $2.5m or more, judging by SPC-1 benchmark results for Dorado V3 and V5 arrays. You can apply for pricing on Huawei’s website by providing project details and a budget range into a general pricing inquiry pop-up.

Samsung heralds the era of super-smart SSDs

Samsung has started shipping its smart PM1733 and PM1735 SSDs, with capacities up to 30.72TB, PCIe gen 4 connectivity for faster data transfer, and features that enable them to operate for longer and serve more users.

Speeds and feeds first and then we’ll look at the extra features.

Both SSDs use NVMe over PCIe gen 4. They come in U.2 (2.5-inch) and half-height, half-length add-in-card (AIC) formats. The capacity ranges are:

  • PM1733 U.2 – 0.96TB – 30.72TB
  • PM1733 AIC – 1.92TB – 15.36TB
  • PM1735 U.2 – 0.8TB – 12.8TB
  • PM1735 AIC – 1.6TB – 12.8TB

The PM1733 is optimised for read performance, with endurance of one drive write per day (DWPD) for five years. The PM1735 is optimised for mixed read/write use, with endurance of three DWPD for five years.

Samsung PM1733 SSD exploded view.

Sequential write bandwidth is up to 3.8GB/sec for both SSDs in both formats. Sequential read bandwidth is up to 8GB/sec in AIC format and 6.4GB/sec in U.2 format. These speeds are at least twice as fast as current PCIe gen 3 drives.

Random write IOPS are up to 260,000 for the PM1735 U.2, which also delivers an amazing 1.45m random read IOPS. Samsung has not supplied IOPS numbers for the PM1735 AIC or the PM1733 in U.2 or AIC formats.

Feature fun

The performance numbers are great but the standouts are three extra controller features.

Fail-in-place. If a fault is identified in any NAND chip inside these drives – there are 512 inside the 30.72TB model – FIP software activates error-handling algorithms automatically to effectively bypass the chip. This is conceptually similar to disk bad block handling and enables the SSD to keep working if NAND chips inside it fail. 

Virtual SSDs. The SSD is presented as up to 64 virtual and smaller SSDs, providing independent, virtual workspaces for multiple users. Server CPU tasks such as Single-Root I/O Virtualization (SR-IOV) can be done by the SSD controller, offloading the server host.

V-NAND machine learning technology. This detects any variation among circuit patterns through big data analytics. It looks at and verifies cell characteristics and predicts how cell behaviour will develop. As we understand it, the SSD controller monitors telemetry from the chips and runs it through machine learning models to track cell performance.

Samsung said this means its SSDs have higher levels of performance, capacity and reliability because they can sweat their cell assets better. As SSD chips move from TLC to QLC, with more voltage levels in the cells, precision management of cell characteristics becomes more important.

Western Digital’s zoning concept is another example of extra drive features. Blocks & Files expects this approach to spread quickly to other enterprise SSD suppliers. We are heading towards an era of PCIe 4 speed-accelerated and much smarter SSDs.

Western Digital exits data centre systems – sells IntelliFlash to DDN, puts ActiveScale on the block

Western Digital is abandoning the storage systems business and is selling the IntelliFlash array unit to DDN. It has also put its ActiveScale archival storage array business up for sale.

This is an abrupt and unexpected about-turn for Western Digital, which acquired IntelliFlash when it bought Tegile in August 2017 for an undisclosed sum. As recently as July this year WD extended IntelliFlash capabilities with entry-level NVMe models, a higher-capacity SAS array, live dataset migration and an S3 connector.

The sale to DDN suggests to Blocks & Files that the Tegile acquisition was a WD mistake. This excursion into enterprise storage arrays and archive systems reflects poorly on Western Digital leadership.

The conclusion many will draw is that selling data centre storage systems to the highly competitive enterprise market was a step too far for WD. The company gets more than 80 per cent of its revenues from selling disk drives and SSDs to OEMs and consumers, where it faces limited competition.

A WD statement described “Western Digital’s strategic intention to exit Storage Systems, which consists of the IntelliFlash and ActiveScale businesses”, adding: “The company is exploring strategic options for ActiveScale. These actions will allow Western Digital to optimize its Data Center Systems portfolio around its core Storage Platforms business, which includes the OpenFlex platform and fabric-attached storage technologies.”

Mike Cordano, WD COO, said in a prepared statement: “Scaling and accelerating growth opportunities for IntelliFlash and ActiveScale will require additional management focus and investment to ensure long-term success.”

Alex Bouzari, CEO and co-founder of DDN, said in a canned quote: “We are delighted to add Western Digital’s high-performance enterprise hybrid, all flash and NVMe solutions to DDN’s… data management at scale product portfolio.”

IntelliFlash inside DDN

The joint DDN-WD announcement said IntelliFlash customers will benefit from DDN’s focus on storage and data management challenges, deep expertise in service and support, and a rich, broad technology portfolio. DDN has a set of capabilities that WD lacks, as well as a willingness to invest.

IntelliFlash staff will join DDN, which now has more than 10,000 customers and 500 partners worldwide. WD and DDN will work to deliver a seamless transition for customers and partners, with ongoing product availability and support continuity. DDN is to invest in an accelerated roadmap for the IntelliFlash line.

The deal includes a mutual global sourcing agreement in which Western Digital will become a customer of IntelliFlash from DDN and a preferred HDD and SSD supplier to DDN.

DDN bought the crashed Tintri business for $60m in September 2018 and Nexenta for an undisclosed sum in May this year. DDN now has three newly-acquired product lines to integrate and locate in its product space and marketing messages: Tintri, Nexenta and Tegile. This is already starting to happen with Nexenta file capabilities added last month to Tintri systems.

Broadly speaking, Tintri and IntelliFlash compete: both are enterprise arrays. IntelliFlash has hybrid and all-flash models but lacks Tintri’s capabilities in virtualised server integration. This software capability could be grafted onto the IntelliFlash OS, and we might also expect Tintri and IntelliFlash to evolve towards a common hardware chassis.

The two lines should probably merge – unless DDN can convincingly differentiate them.

And let’s not forget ActiveScale

ActiveScale is an archival storage array; WD got into the archive vault business when it bought HGST. As well as making disk drives, HGST sold the ActiveArchive archive system, the basis for which was HGST’s acquisition of Amplidata in 2015.

ActiveScale arrived in late 2016 and it became the lead archival product. Now it is an unwanted product. It is not an acquisition target for DDN which has its own WOS object storage line and Nexenta object storage software. It does not need a third object storage technology.

The DDN-WD transaction is expected to close later this year, subject to closing conditions. WD’s storage systems business exit is expected to generate an annual non-GAAP EPS benefit for WD of at least $0.20, starting in the fiscal 2020 third quarter ending April 3, 2020. WD will incur as-yet-unquantified restructuring and other charges.


Liqid adds memory extension tech to composable system

Liqid’s composable systems can now compose up to 16TB of Liqid Memory, enabling larger applications to reduce IO and execute much faster.

A composable system dynamically sets up a virtual server to run application workloads by pulling compute, memory, FPGA, storage and networking capabilities from a resource pool. When the server’s job is finished component resources are returned to the pool for re-use.

Liqid Memory is a DRAM and multiple-NVMe-SSD combination turned into virtual memory by ScaleMP’s vSMP MemoryONE software.

This produces software-defined memory that can transparently replace or expand DRAM for memory-intensive applications, according to ScaleMP. The company has written its own memory management unit to map application and system DRAM accesses to the virtual memory space, which includes capacity from the NVMe SSDs. The technology works without operating system or application modification.

The upshot is that more of an application’s working set executes in virtual memory and runs faster than when operating in a smaller pure DRAM memory pool with data fetched from SSDs or disk. Prefetch algorithms help achieve near-DRAM performance.

Liqid said Liqid Memory is suited for in-memory databases, including Oracle TimesTen and In-Memory Column Store, SAP HANA, DB2 BLU, Apache Spark, Aerospike DBS, and other memory-intensive applications.

Liqid Memory is available as a PCIe Add-In Card (Element LQD3900), U.2 (Element LQD3925x) drive, or within a Dell appliance with two Xeon Scalable processors and up to 12TB system memory.

Element LQD3900 PCIe add-in-card.

The LQD3900 uses an Optane SSD – as detailed in the spec sheet.

High-end, multi-socket servers can support 12TB or so of memory but low-end and mid-range ones do not. Liqid Memory is a way of giving them access to a larger pool of memory.

Data ingest and backup vendors flock to Pure Storage platform

Pure Storage has teamed up with Cloudian, Komprise and Veritas to offer customers ways of protecting FlashArray data and migrating data to/from FlashBlade.

FlashArray is Pure’s block access flash array for primary and structured data. FlashBlade is its file/object array for unstructured data. 

Cloudian

Cloudian has integrated its HyperStore object storage system with the CloudSnap feature in FlashArray’s Purity OS. CloudSnap takes a FlashArray snapshot and sends it to AWS S3-compliant storage; HyperStore in this case. 

CloudSnap connects with HyperStore as a backup, archive or DR target via an S3 API, providing policy-based data transfer, including moving portable snapshots to and from HyperStore.

From there, Cloudian can send the snapshots on to the public cloud for longer-term, cheaper retention.

A FlashArray would be linked to Cloudian to back up snapshot data in cases where the fast restore capability of Pure’s FlashBlade file/object flash array is not needed.
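At the S3 level, a snapshot offload is just an object upload to a bucket. Here is a minimal boto3 sketch of addressing an S3-compatible target such as HyperStore; the endpoint, credentials and bucket/object names are hypothetical:

```python
# Minimal sketch of writing to an S3-compatible store such as HyperStore.
# Endpoint URL, credentials and bucket/object names are hypothetical.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://hyperstore.example.local",  # on-prem S3 endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)
# Offloading a snapshot is, at this level, an object PUT to a bucket.
s3.upload_file("flasharray_snap_0001.bin", "pure-snapshots",
               "flasharray/snap_0001.bin")
```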

Komprise

Komprise’s file information lifecycle management software can be used to ingest data into FlashBlade. The software is included in Pure’s FastStart, a customer on-boarding program for FlashBlade. The program runs for a set period, after which FlashBlade customers can become direct Komprise customers and continue using the product.

Krishna Subramanian, Komprise co-founder and COO, told Blocks & Files this week at the Pure Accelerate event in Austin, TX, that this is up to twice as fast as other ways of achieving FlashBlade data ingest, such as RoboCopy. That’s because Komprise has optimised its parallel IO software to take advantage of FlashBlade’s parallel IO.

Komprise supports NFS and CIFS/SMB file sources, with S3 object source support coming, possibly in 2020 and in time for Amazon’s re:Invent shindig.

The Komprise software could also tier data off FlashBlade to cheap-and-deep backend targets for long-term retention.

Veritas

There is an existing though limited partnership between Veritas and Pure. Veritas’s NetBackup supports snapshot-based protection for FlashArray and FlashBlade in AI Data Hub form. FlashBlade can also be a storage target for Veritas software.

Veritas has added three more integrations:

  • InfoScale for mission-critical levels of performance and 24-hour availability
  • APTARE IT Analytics to optimize capacity, utilisation, performance and compliance with data regulations
  • The Veritas Access Appliance can be a long-term retention storage target for FlashArray data snapshots

Altogether, these integrations mean the entire Veritas Enterprise Data Services platform links to FlashArray and FlashBlade.

NetApp takes top billing in Gartner primary array magic quadrant

Gartner’s first combined hybrid and all-flash primary array magic quadrant gives higher than expected rankings to NetApp, Pure and Infinidat, placing them alongside traditional leaders Dell EMC, HPE and IBM.

The Blocks & Files standard MQ explainer says the magic quadrant is defined by axes labelled ‘ability to execute’ and ‘completeness of vision’, and split into four squares tagged ‘visionaries’, ‘niche players’, ‘challengers’ and ‘leaders’.

Here’s the quick look primary array diagram every MQ fan wants to see:

And it’s a doozy. We have added a diagonal green line indicating balanced progress towards the top-right high point for ability to execute and completeness of vision. We’ve also indicated two separate groups in the leaders’ box.

Group 1 includes top-ranked NetApp, then Dell EMC, with Pure unexpectedly in this group and in, we think, third place, with a strong vision component alongside its ability to execute. HPE is in fourth place.

A weaker and more closely positioned group of leaders consists of Hitachi Vantara and IBM, then Huawei and – surprise, surprise – startup Infinidat, entering alongside publicly-owned and more mature competitors.

The challengers’ box includes Western Digital – another surprise (thank you, Tegile); Fujitsu; DDN – strengthened through the Tintri acquisition; and Lenovo, another unexpected entrant, boosted by an OEM deal with NetApp.

The niche players are China’s Inspur, Oracle, NEC, Infortrend and Synology. Playing a solo role in the visionaries’ box is Kaminario, quite close to the leaders’ box. That’s 18 suppliers in total.

Get a copy of the report from Gartner, at a price, or, hopefully, from a vendor licensed to distribute it for no charge.

Boudreau replaces Clarke as head of Dell ISG, in the tale of two Jeffs

Jeff Clarke, Dell Technologies’ vice chairman and overall product and operations boss, has stepped away from day-to-day control of the server-to-storage-to-networking-to-protection Infrastructure Solutions Group (ISG). He passes the baton to EMC veteran Jeff Boudreau, GM for storage inside ISG, who will report to him.

Dan Inbar, SVP in charge of Israel R&D, becomes head of storage and reports to Boudreau. John Roese, CTO in charge of cross-product operation, becomes CTO for all ISG products. Clarke retains his titles and overall responsibilities following the management shuffle.

Clarke spent two years wrestling the conglomeration of Dell servers, switches and storage products and the acquired EMC businesses into a single operation.

He swept away EMC’s notion of a multi-tiered product range with partially overlapping and competing products. In its place he imposed the Dell way of operating: lead brands with co-ordinated product ranges and branding across ISG.

The management moves constitute the completion of Dell’s integration with EMC storage operations. The public outcomes so far are PowerEdge servers, and PowerMax, PowerVault, and PowerProtect storage systems along with PowerSwitch network products.

Blocks & Files understands that Dell will have announced the full Power portfolio by or before Dell Technologies World in spring 2020. This will include the as-yet-unannounced MidRange.Next storage line, which will combine the SC, Unity and XtremIO storage array products.

We might expect the Isilon scale-out filer business to adopt PowerFile or similar branding. Shift Data Domain to Power-something branding too and it would be a clean Power sweep.

Hitachi Vantara and Hitachi Consulting to merge

Hitachi is merging the Hitachi Vantara and Hitachi Consulting businesses under the Hitachi Vantara brand, with effect from January 2020.

Hitachi Vantara supplies VSP storage arrays, servers and business analytics software based around its 2015 Pentaho acquisition and other purchases.

The combined business will “strengthen front line and delivery capabilities to increase alignment and unlock the synergies between Hitachi Vantara and its vertical business units.” The company will also focus on building Hitachi’s Lumada Internet of Things business.

The new entity is to be led by chairman and CEO Toshiaki Tokunaga, who chairs Hitachi Global Digital Holdings, the holding company that oversees Hitachi Vantara and Hitachi Consulting.

Brian Householder, current Hitachi Vantara CEO, and Hicham Abdessamad, CEO of Hitachi Consulting, will remain at Hitachi in executive leadership positions.

This looks like the Japanese parent corporation returning two American-led subsidiaries to Japanese control. Tokunaga is board chairman of Hitachi Global Digital Holdings and it is unlikely he will remain as the CEO exerting day-to-day control of the new Hitachi Vantara. Either Brian Householder or Hicham Abdessamad could become the new CEO; either would signify continuity for North America and EMEA customers. A Japanese CEO could signal a break from that continuity.

Leadership and organisational structure details will be announced in January 2020.