
Toshiba nudges direct-to-Ethernet SSD towards the light


Toshiba and Samsung are pushing the idea of SSDs directly accessed over Ethernet as a way of simplifying the storage access stack.

This idea of directly accessing storage drives across Ethernet first surfaced with Seagate and its Kinetic disk drives. Kinetic drives implement a key:value store instead of the traditional file, block or object storage mediated through a controller manipulating the drive’s data blocks.

Samsung supports the key:value drive store idea but Toshiba opposes it.

Kinetic disk drives

Seagate was a prominent champion of Kinetic drives but its technology appears to have fallen by the wayside in 2015 or 2016.

Kinetic disk drives had an on-board NIC plus a small processor implementing a key:value store. The drives were directly accessed over Ethernet, with the host server operating the drive as a key:value store – as opposed to a block or file storage device.
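The difference between the two access models can be sketched in a few lines of Python. This is purely illustrative: the class names and methods below are our own invention, not Seagate's actual Kinetic API.

```python
class BlockDevice:
    """Traditional drive: the host addresses fixed-size numbered blocks,
    and the host's file system decides which logical block address (LBA)
    holds which piece of a file."""
    BLOCK_SIZE = 4096

    def __init__(self, num_blocks):
        self.blocks = [bytes(self.BLOCK_SIZE)] * num_blocks

    def write_block(self, lba, data):
        # Pad to a full block; the drive knows nothing about file names.
        self.blocks[lba] = data.ljust(self.BLOCK_SIZE, b"\0")

    def read_block(self, lba):
        return self.blocks[lba]


class KineticStyleDrive:
    """Key:value drive: the host sends named objects over Ethernet and the
    drive's on-board processor decides where the bytes physically land."""

    def __init__(self):
        self.store = {}

    def put(self, key, value):
        self.store[key] = value

    def get(self, key):
        return self.store.get(key)
```

With put/get, the block-allocation layer of the host stack disappears, which is the simplification the key:value approach promises.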

Writing host software to handle the Kinetic drives involved complexity, and there was no significant benefit over the existing ways of accessing disk drives. The upshot was that customers had little appetite for Kinetic drives.

Direct Ethernet access SSDs

Toshiba proposed an Ethernet-addressed SSD at the Flash Memory Summit (FMS) 2018, with a drive supporting the NVMe-over-Fabrics (NVMe-oF) protocol.

NVMe-oF uses the NVMe protocol across an Ethernet or Fibre Channel network to move data to and from a storage target, addressed at drive level. Data is pumped back and forth at remote direct memory access (RDMA) speeds, meaning a few microseconds of latency.

Typically, a smart NIC intercepts the NVMe-oF data packets, analyses them and passes them on to a drive using the NVMe protocol. 

At FMS 2018, Toshiba put 24 of its SSDs inside an Aupera JBOF (Just a Bunch of Flash drives) chassis. They were interfaced to a server host via a Marvell 88SN2400 NVMe-oF SSD converter controller, with dual-port 25Gbit/s Ethernet connectivity.

The chassis achieved 16 million 4K random read IOPS from its 24 drives – claimed at the time to be the fastest random read IOPS rate recorded by an all-flash array. Each drive was rated at 666,666 IOPS.
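The per-drive figure is simply the aggregate spread across the chassis; a quick check:

```python
# 16 million aggregate 4K random read IOPS across 24 drives.
total_iops = 16_000_000
drives = 24
per_drive = total_iops // drives
print(per_drive)  # 666666, matching the quoted per-drive rating
```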

Samsung KV-SSD

In November 2018 Samsung revealed it was working on a similar Ethernet-addressed drive. The underlying device was a PM983 SSD with an NVMe connection. Unlike Toshiba's NVMe-oF SSD, it had an on-board key:value store, making it a KV-SSD.

Samsung said it would eliminate block storage inefficiency and reduce latency.

Toshiba’s NVMe-oF all-flash JBOF

At FMS 2019, Toshiba went one step further, giving its SSD a direct NVMe-oF connection. The demonstration SSD uses Toshiba's 96-layer 3D NAND and has an on-board Marvell 88SS5000 converter controller. This has eight NAND channels and up to 8GB of DRAM, and can talk to Marvell data centre Ethernet switches and so link to servers.

Toshiba said the 88SS5000-SSD combination delivers up to 3GB/sec of throughput and up to 650k random IOPS. This is a tad slower than the FMS 2018 system’s SSDs.

Marvell partners

In an FMS 2019 press release Marvell said the idea of a direct-to-Ethernet SSD is “being advanced by multiple storage end users, server and storage system OEMs and SSD makers.” That makes Toshiba the first to go public with what Marvell says is a market-ready product.

Marvell cited Alvaro Toledo, VP for SSD marketing at Toshiba Memory America, who talked of an SSD demonstration – with no commitment to launch product yet. Another Toshiba exec, Hiroo Ohta, technology executive at Toshiba Memory Corporation, was quoted: “The combination of our products will help illustrate the significant value proposition of NVMe-oF Ethernet SSDs for data centres.”

Blocks & Files thinks Toshiba and Marvell could usefully demonstrate some big name software product running faster and cheaper on their direct-to-Ethernet SSDs.

It remains to be seen if Samsung will pick up the Marvell 88SS5000 converter controller for its KV-SSD. It will have a tougher job marketing the KV-SSD than Toshiba with its NVMe-oF SSD, because the key:value store idea adds another dimension to the sale for customers to take on board.

The composable systems connection

Western Digital, Toshiba Memory Corp’s flash foundry joint-venture partner, has a flash JBOF product: the Ultrastar Serv24-HA.  

It also has an OpenFlex composable systems product, which uses NVMe-oF to interconnect the devices.

The obvious next step for Western Digital is to bring out its own direct-to-Ethernet SSD for the Serv24-HA chassis, and also to use it in the OpenFlex system.

Toshiba supports the DriveScale Composable Infrastructure, and another obvious possibility is for DriveScale to support Toshiba’s direct-to-Ethernet SSD.

Clumio clambers aboard SaaS backup bandwagon

Clumio, a data protection as a service startup, came out of stealth today. It is touting a cloud-native way of simplifying disaster recovery and contrasts this approach with on-premises rivals and their legacy baggage.

“It’s a huge market we are disrupting. This hasn’t been done before. It’s a hard problem to solve,” CEO Poojan Kumar told us in a telephone briefing.

Clumio was set up in 2017 and has raised $51m to date in two funding rounds. The company installed v1.0 product in the first customer sites in May 2019.

The founders are three veterans of PernixData, the developer of a hypervisor memory-based caching scheme, which Nutanix bought in August 2016. They are CEO Poojan Kumar, CTO Woon Ho Jung and engineering VP Kaustubh Patil.

Clumio co-founders. From left; Engineering VP Kaustubh Patil, CEO Poojan Kumar and
CTO Woon Ho Jung.

May the SaaS force be with you

The attraction of DPaaS, Clumio says, stems from the complexity of on-premises backup, which involves backup servers, software and storage products, along with replication and secondary backup storage for disaster recovery, plus coverage of on-premises and public cloud servers. These can include bare metal, virtualized, hyperconverged and containerised servers.

Sweep it all away and run a single DPaaS service that covers all the bases with central management, removing the need to provision, operate or manage your own hardware and software infrastructure.

Clumio said its service scales on demand, has predictable costs, is simpler to manage than the on-premises muddle and has policies set for security and compliance.

Cloud Data Fabric

Clumio is based on a Cloud Data Fabric hosted on Amazon S3 object storage, and backs up AWS and on-premises VMware virtual machines. No doubt it will extend coverage to Azure and Google, possibly Oracle too, and server environments beyond VMware, such as KVM.

Customers connect to clumio.com to activate the service. Payment is based on the number of protected virtual machines. Clumio says they can start backing up their first vSphere workload in less than 30 minutes. The customer deals with Clumio only.

An on-premises agent, running as a virtual appliance, selects, dedupes, compresses, and encrypts data before moving it up to AWS. The agent’s operation is controlled by an AWS-based scheduling policy. The Cloud Data Fabric holds the dedupe fingerprints and a data catalog in the AWS cloud. It also provides multi-tenant user and encryption key management.
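The dedupe-then-compress flow the agent performs can be sketched roughly as follows. The function name and the fixed-size chunking are our assumptions, Clumio has not published its implementation, and the encryption step is omitted for brevity:

```python
import hashlib
import zlib

CHUNK_SIZE = 4 * 1024 * 1024  # assume fixed-size 4MB chunking for simplicity

def backup_chunks(data: bytes, fingerprints: set) -> list:
    """Return the (fingerprint, compressed chunk) pairs still needing upload.

    `fingerprints` models the catalog of already-stored chunk hashes that,
    per the article, lives in the AWS-hosted Cloud Data Fabric."""
    to_upload = []
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest in fingerprints:
            continue  # deduped: an identical chunk is already stored in S3
        fingerprints.add(digest)
        to_upload.append((digest, zlib.compress(chunk)))
    return to_upload
```

A second backup of unchanged data would upload nothing, which is the bandwidth saving that makes agent-side dedupe attractive before the hop to AWS.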

Restoration is based on the customer running a Google-like search for backups, which looks at VM backup metadata. There is a calendar view and customers can select whole VMs or particular files, such as a financial spreadsheet, to restore.

Internally, Clumio uses just one tier of S3 storage. Shane Jackson told us: “We will add multiple tiers over time.”

Competition

Clumio sees scope for SaaS-ifying data protection (its initial focus), copy data management, analytics and log management.

Competitors such as Acronis, Cohesity, Rubrik and Veeam are all based on on-premises software and aim to move into the cloud. Jackson said the true competition is the status quo, i.e. the current way of backing up data with on-premises software.

Blocks & Files asked Kumar if Clumio saw Druva, also based in AWS, as competition. He said: “It still fits in the previous (on-premises) category. The focus was endpoint protection. Druva is taking all that legacy and trying to pivot into this [the public cloud]”.

W. Curtis Preston, Druva’s Chief Technologist, said: “Clumio’s information is years old, and the product to which they refer is no longer available. Druva has been a cloud-native and DPaaS offering for several years. We protect datacenters running VMware, Hyper-V, Linux, Windows, SQL Server and Oracle; cloud workloads like AWS and VMC; SaaS offerings like Salesforce, Office 365, and G Suite – as well as protecting laptops and mobile devices. As to focus, most of our company’s growth in the last few years has come from datacentre, cloud, and SaaS workloads. We wish Clumio the best of luck in a space we pioneered.”

There are other cloud-based data protection suppliers, such as Carbonite. Our take is that Clumio will focus on enterprises more than the small and medium business market where Carbonite operates.

These PernixData veterans have form. That and a $51m war chest could enable Clumio to hit the ground running. Let’s see how far and how fast they go.

Pavilion Data Systems bags $25m investment

Storage startup Pavilion Data Systems has completed a $25m C-round of funding.

CEO Gurpreet Singh said in a statement: “We are pleased with the Series C oversubscribed round of funding and our ongoing customer progress. In 2019, we have delivered a large number of systems to leading organisations worldwide. Many of these customers are now purchasing additional arrays as they continue to scale out.”

Pavilion ships a multiple-controller RF100 NVMe SSD array, with an internal NVMe fabric and external NVMe-over Fabrics data access. It’s a beast: performance is up to 120GB/sec read bandwidth, 60GB/sec write bandwidth, 20 million 4K Random Read IOPS. Average read latency is 117μs and Pavilion claims the $/IOPS rating is up to 25 times cheaper than competing all-flash arrays.

Pavilion array hardware scheme.
RF100 array.

Pavilion closed its second funding round in May 2018, with Singh in the CEO slot, when it announced its array. A year or so later and along comes this business expansion investment round, showing VCs have confidence in its progress.

Pavilion has now accumulated $58m in total funding. All existing investors participated in the round, along with new investors Taiwania Capital and RPS Ventures.

The company will use the cash to accelerate the delivery of NVMe-oF products, expand to new markets such as Asia, and hire staff to support more sales and customers.

NVMe-oF arrays can be accessed over Ethernet (RDMA), Fibre Channel and TCP/IP. Pavilion is betting that NVMe/TCP will be a natural upgrade for existing iSCSI arrays and bring users a huge rise in performance.

The general NVMe-oF array market has a lot of competition; suppliers include Dell, Excelero, EXTEN, HPE, IBM, Kaminario, Liqid, NetApp, Pure Storage, StorCentric, Western Digital, and even AWS, which has just bought E8.

Sony and Fujifilm tape media dispute must never happen again

Comment Sony and Fujifilm this month settled a festering legal dispute that crippled the global supply of LTO-8 tape media.

The terms of the settlement are secret and neither side is talking. But it will take months to fulfil backlogged LTO-8 tape media orders and customers could wait until next year to get their cartridges, according to our sources.

Here is our take: Sony and Fujifilm have behaved disgracefully and held an entire industry to ransom. This must not happen again.

The companies are the only two tape media manufacturers in town and they have been at each other’s legal throats with a succession of IP lawsuits and US LTO-8 media import bans.

Their lengthy dispute has held up LTO-8 tape media supply for months, rendering LTO-8 tape technology useless. In turn this has led many to doubt the reliability and trustworthiness of tape-based backup and archiving at a time when disk and flash-based alternatives are becoming more attractive.

The tape backstop

Tape for IT has two main uses: longer term mainframe data storage and general server backup and archiving.

The LTO tape market exists because customers store backup and archive data on tape. It is less expensive than disk or flash systems and it is reliable enough.

LTO tape formats are for the backup and archiving of general x86 servers. The latest tape storage standard is LTO-8, with 12TB raw capacity. It succeeded LTO-7, which had 6TB raw capacity.

However, disk-based backup and archive storage is becoming more viable as the cost/GB decreases with rising disk capacity. Retrieving data from disk is faster than retrieval from tape. 

Similarly with flash, where QLC (4 bits/cell) and the forthcoming PLC (5 bits/cell) technologies are lowering the cost/GB of flash to the point where fast-access archives on flash are possible.

So we have three technologies varying in cost and speed:

  • Tape – lowest cost and slowest speed
  • Disk – medium cost and medium speed
  • Flash – highest cost and highest speed

There is equilibrium so long as technology development in each medium does not upset the balance between the media.

If there is a significant tape media supply interruption then the equilibrium weakens and the slack is picked up by disk and flash alternatives.

The dispute

The general Fujifilm-Sony legal dispute dates back to 2016 when Fujifilm complained to the United States International Trade Commission (USITC) that Sony was infringing a dozen or so of its tape media patents. The USITC opened investigations and in March last year decided Fujifilm was right and banned Sony from importing tape media, including LTO-7 cartridges, into the USA.

Sony went to the Patent Trial and Appeal Board (PTAB) to counter the Fujifilm accusations and won. Fujifilm then appealed to the US Court of Appeals for the Federal Circuit against the PTAB decision.

In March this year a USITC order (PDF) prevented Fujifilm from importing certain LTO tape cartridges and in May the US Court of Appeals for the Federal Circuit decided Sony was in the right.

So, effectively, LTO-8 tape cartridge supply dwindled to a trickle globally and was halted in the USA.

Secret resolution

Now the dispute is over, but the terms of the settlement between Fujifilm and Sony are secret. There is no excuse for this. What is to stop the two companies doing this again?

The LTO (Linear Tape Open) organisation has three Technology Provider Companies: HPE, IBM and Quantum. They license tape manufacturers to make tape according to LTO standards, such as LTO-7 and LTO-8, which specify tape cartridge capacity and data transfer speeds.

There are only two licensed manufacturers – Fujifilm and Sony – and they form a duopoly. There were others, such as Ampex and Panasonic, but the tape media supply industry has contracted until just two are left.

The reason for having more than one licensed manufacturer is basic: it stops a sole supplier hiking prices and/or fouling up supply. Having more than one discourages this, and there are laws against manufacturers colluding to raise prices.

Even so, it’s easier for two manufacturers to collude than three, easier for three to collude than four, and so on. Two is the minimum you need. But, only having two manufacturers heightens the risk of the two competing in court to enforce their IP claims and trying to prevent each other from selling product based on the other’s patents.

And this is what has happened with the two media suppliers strangling the entire tape archiving ecosystem with their petty IP dispute.

The way forward

We can’t expect other tape media suppliers to spring into existence. Setting up a media manufacturing operation would cost hundreds of millions of dollars, and the tape recording media field is full of patents that would need to be licensed from the incumbents.

We are stuck with the duopoly of Fujifilm and Sony. How can the tape industry ensure that media supply is not cut off again?

Blocks & Files thinks the LTO organisation has the collective power, ability and responsibility to negotiate good prices and good behaviour from its tape media suppliers. That good behaviour should include cross-licensing of patents.

Sony, Fujifilm and the LTO organisation should be open about the terms of the settlement they have agreed, and commit to a non-repeat of this petty and ridiculous dispute that everyone in the LTO ecosystem has been forced to endure.

AWS makes it easier to build and populate data lakes

Amazon has gone live with AWS Lake Formation, a service that can cut data lake set-up from months to days.

Data lakes are large-scale collections of data used for analysis runs to discover fresh information and inter-relationships, making the organisation more efficient.

AWS said building and filling data lakes can take months because an organisation has to provision and configure storage, and then copy data into the storage, typically from different sources and with different data types (schemas). A data catalogue must be set up and organised so that analytic runs can be made against data subsets.

Raju Gulabani, AWS VP for databases, analytics, and machine learning, said in a statement: “AWS hosts more data lakes than anyone else – with tens of thousands and growing every day. They’ve also told us that they want it to be easier and faster to set up and manage their data lakes.”

Filling the data lake

AWS Lake Formation automates storage provision and provides templates for data ingest. It can automatically inspect data elements to extract schemas and metadata, build a catalogue for search, and partition the data. The service can transform the data into formats such as Apache Parquet and ORC that are good for analytics processes.

Lake Formation can enforce access control and security policies, and provide a central point of management. Data can be selected for analysis by Amazon Redshift, Athena, and AWS Glue. Amazon EMR, QuickSight, and SageMaker will be supported in the next few months.

AWS Lake Formation is available today in US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), and Europe (Ireland) with additional regions coming soon.

Azure IoT Edge runs inside an NGD SSD

Computational storage supplier NGD has embedded the Azure IoT Edge service within its SSDs.

NGD has announced the 32TB Newport SSD in the U.2 (2.5-inch) form factor, twice the capacity of the first Newport drive. This is NVMe-connected and uses less than 12W.

Azure IoT Edge facilitates access to the Azure cloud using on-premises hardware. Azure workloads can be run on-premises without having to send data to the Azure cloud for processing. 

By doing this, Azure IoT Edge devices can process data faster and react to local events in less time than it takes to send the data to the cloud.

Generally the Azure IoT Edge code runs in a server. The NGD pitch is that data comes in to such a server, gets stored on its local storage, and is then fetched into memory for processing by the server CPU.

Nader Salessi, NGD CEO, said the technology “brings compute directly to the data, so processing results at the edge, right where data is created, is possible, significantly improving latency of the system.”

The Ubuntu-running ARM CPU embedded in the Newport drive processes Azure IoT data in situ and faster than the host server. According to NGD there are “almost no changes to current code requirements”, although code that runs in an x86 Azure IoT Edge server will need changes so that it can run in the NGD device’s ARM-Ubuntu system.

The 32TB Newport and other NGD computational storage devices running Azure IoT Edge are available for deployment.

SK hynix supercharges memory bandwidth

SK hynix has developed the world’s fastest high bandwidth memory, intended for applications such as supercomputing, machine learning, 5G systems and other latency-sensitive processing.

All servers use memory (DRAM) with DIMMs containing memory chips, connecting to the CPU across the memory bus. DDR4 DIMM capacity is up to 256GB and the data rate is up to 50GB/sec.

High bandwidth memory (HBM) is a way of getting more bandwidth from a DRAM package by stacking memory dice one above the other. They are interconnected, using TSV (Through Silicon Via) technology, to a logic die which uses an interposer to connect them to a CPU or GPU.

HBM2 stacking and interposer scheme.

Think of HBM as a different way of combining memory chips, giving them closer and faster access to the CPU. The distance to the processor is only a few micrometres, which helps speed data transfers.

HBM has developed across generations: HBM1, then first and second generation HBM2, and now HBM2 Extended (HBM2E).

HBM2 generation table.

HBM2E is being developed. A Samsung Flashbolt HBM2E package offers 410GB/sec, 3.2Gbit/s per pin, and 16GB of capacity. SK hynix has gone beyond that with a 16GB package providing more than 460GB/sec.

The SK hynix package has a stack of 8 x 16Gbit chips and is rated at 460GB/sec.
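Both the bandwidth and capacity figures are consistent with HBM2's 1,024-bit interface; a quick sanity check of the arithmetic:

```python
# HBM2 packages expose a 1,024-bit interface, so package bandwidth is
# pins x per-pin rate / 8 (bits to bytes).
pins = 1024
samsung_gbps_per_pin = 3.2
samsung_bw = pins * samsung_gbps_per_pin / 8  # GB/sec
print(samsung_bw)  # 409.6, i.e. the quoted 410GB/sec Flashbolt figure

# Capacity of the SK hynix stack: eight 16Gbit dice.
capacity_GB = 8 * 16 / 8
print(capacity_GB)  # 16.0, matching the 16GB package
```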

Even this is slow compared with some other specialised forms of memory. Micron, for example, makes GDDR6 memory which delivers 616GB/sec of bandwidth to a GPU. Micron is expected to bring out its own HBM2 product this year.

SK hynix’s HBM2E package will be mass produced in 2020, which SK hynix expects to be a growth year for the HBM2E market.

Your occasional storage digest, featuring Exagrid, Broadcom, Lite-On, Toshiba and more

A sprinkling of storage news briefs to start the week.

Exagrid refresh deduplicates deduplicated files

Target deduplicating backup appliance supplier Exagrid has refreshed its appliance software to v5.2.2.

This release adds improved data deduplication for Veeam software; the combination of Veeam’s deduplication and “dedupe friendly” compression with ExaGrid’s deduplication can now achieve a combined deduplication ratio of up to 14:1 for VM backups.

A new deduplication algorithm improves deduplication ratios over its previous version for backup applications that use Changed Block Tracking (CBT) or incremental backups. Most backup applications use CBT.

Joint Commvault and Exagrid customers can have Commvault deduplication enabled and ExaGrid will further deduplicate the Commvault deduplicated data to improve the deduplication ratio by a factor of 3X up to a combined deduplication ratio of 20:1.
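Deduplication ratios multiply, so a further 3x reduction implies Commvault alone was achieving roughly 6.7:1 on this data. This is our arithmetic, assuming a simple multiplicative model:

```python
# Assumed model: a downstream appliance that shrinks already-deduplicated
# data by a further factor multiplies the overall ratio.
def combined_ratio(upstream_ratio: float, further_factor: float) -> float:
    return upstream_ratio * further_factor

# ~6.7:1 from Commvault, multiplied by ExaGrid's 3x, gives the quoted 20:1.
print(round(combined_ratio(20 / 3, 3), 1))  # 20.0
```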

Windows Active Directory domain credentials can now be used to control access to the ExaGrid management interface, providing authentication and authorisation to the web GUI.

Veritas’ NetBackup Accelerator technology shortens backup windows by sending only changes for both incremental and accelerated full backups, synthesizing the full backup from previous changes using the OST interface. ExaGrid can take in and deduplicate NetBackup Accelerator data and, in addition, reconstitute the accelerated backup into its fast restore Landing Zone.

ExaGrid points out that, with its Landing Zone technology, Veeam can boot a VM from ExaGrid in seconds to minutes, versus hours for deduplication appliances such as Dell EMC Data Domain, which store everything as deduplicated data and so require rehydration for each request.

Everspin results

MRAM developer Everspin reported revenues of $8.6m for the second 2019 quarter, down from $10.8m a year ago. It made a loss of $3.67m, better than the year-ago loss of $7.37m. 

The company is working to bring its Spin Transfer Torque MRAM into use by enterprise storage drive suppliers. MRAM is a byte-addressable, non-volatile memory that operates at DRAM-like speed.

Gross margin for the second quarter of 2019 was 46.5 per cent, and compares to 42.1 per cent in the second quarter of 2018 and 47.7 per cent in the previous quarter.

Operating expenses decreased sequentially by $1.3m to $7.6m as the 1Gb STT-MRAM device transitioned from R&D into production.

Everspin MRAM.

Everspin partners with Global Foundries for MRAM die production. Phison Electronics and Sage Microelectronics provide native support for Everspin’s 1Gb STT-MRAM in their enterprise SSD controller chips.

It expects total revenue in the range of $8.5m – $9.0m next quarter.

Broadcom goes up Glass Creek

Broadcom has announced Glass Creek, a universal NVMe over Fabrics storage adapter for bare metal and virtualized servers, using the Stingray SoC (System-on-Chip). RoCE and NVMe/TCP are supported.

Glass Creek is operating system independent and supports various flavours of Linux, Windows, VMware ESXi and Microsoft Hyper-V hypervisors. It has RAID HW acceleration, deduplication and security features, and offers 1 million IOPS across a 50Gbit/s link.

Lite-On could turn its SSD lights off

Taiwan media outlet Digitimes speculates that Lite-On Technology is thinking of selling its SSD operation. The in-house Lite-On SSD business was set up as the Lite-On Storage subsidiary in April 2019.

Digitimes said Toshiba Memory Corporation, which supplies NAND chips to Lite-On Storage, could buy the unit.

Lite-On announced a refreshed EP4 PCIe SSD with 960GB and 1.92TB capacities at the 2019 Flash Memory Summit. It uses 96-layer TLC (3bits/cell) Toshiba flash. The drive is up to 15 per cent faster than the prior version, with sequential read/write speeds of 2.2/1.1 GB/sec, and endurance of up to three drive writes a day.

The Lite-On Technology group is selling its Lite-On Semiconductor to US vendor Diodes for $428m.

Toshiba Memory Corp Results

Toshiba Memory Corp (TMC) reported F1Q20 (ended June 30) revenues of ¥214.2bn (c$1.95bn). 

There was a loss of ¥95.2bn (~$867m) following a loss of ¥19.3bn (~$176m) in the prior quarter. This quarter’s loss includes a ¥34.4bn (~$312m) cost for the Yokkaichi fab outage. 

TMC saw low single-digit (1-3) percentage bit growth in the quarter, which was better than the mid single-digit (4-6) percentage decline of the prior quarter.

Short items

Actifio has expanded Actifio GO SaaS on Google Cloud to full “Copy Data Management-as-a-Service.” Actifio claims this is the first SaaS offering that eliminates the need for on-premises storage or software license management. The software provides backup, near-instant recovery and database cloning on Google Cloud Platform. It’s for on-premises VMs, is app-aware for the major databases, and is integrated with Google BigQuery.

Azure Archive Storage prices have been cut by up to 50 per cent in some of its 29 regions. This is the Azure equivalent of Amazon’s Glacier cold data storage service and exists alongside Azure’s Hot and Cool access tiers.

DataStax, which supplies a database built on Apache Cassandra, said production support on VMware vSAN now includes hybrid and multi-cloud configurations.

GigaSpaces is offering its InsightEdge in-memory computing platform on the AWS Marketplace for big data analytics work. The software co-locates business logic, analytics, and data processing in the same memory space to speed performance. It runs analytics and machine learning models simultaneously on streaming, transactional and historical data, to improve the speed and quality of insights from big data.

IBM has published a Spectrum LSF guide to “Using LSF on NVIDIA DGX.” LSF is a workload management platform and job scheduler.

Lenovo and AMD have announced new single-socket servers, for edge and data-intensive workloads. Pivot3 says the new Lenovo servers combined with its software will provide an edge computing video security solution capable of supporting up to 33 per cent more video cameras per node than similar systems. 

Research house TrendFocus said 78.56 million disk drives were shipped in the 2019 second quarter, 1 per cent more than the first quarter. Seagate supplied 31.39 million (up 0.3 per cent Q/Q), Western Digital 27.74 million (down 0.4 per cent), and Toshiba 19.43 million (up 5.3 per cent).

People

Cloud backup and storage management service supplier Datto has recruited Robert Petrocelli as its CTO. He was a senior software architect at Oracle.

Backup-as-a-Service supplier Druva has hired Thomas Been as CMO. He was previously CMO at IT management and analytics software supplier Tibco.

WDC EVP and CTO Martin Fink is to retire and will move to an advisory role with the company. Siva Sivaram has been appointed President of Technology and Strategy; he was EVP for Silicon Technology and Manufacturing. Fink has been involved with the RISC-V processor design effort and was deeply involved with The Machine concept when previously at HPE.

Veeam has recruited Nigel Houghton as EMEA director of alliances. He comes from being HPE’s EMEA director of alliances and was an EMEA channel sales director at Scality before that.


Samsung readies launch of 250GB 136-layer SATA SSD

Samsung has made a 100+ layer SSD using sixth generation V-NAND technology, although this is not available for sale just yet.

V-NAND is Samsung’s term for its 3D NAND technology, and the company currently sells fifth generation 9x-layer product, i.e. between 90 and 99 layers. The company intends to sell high-speed and high-capacity SSDs made with its sixth generation technology.

V-NAND generations

Here is Samsung’s V-NAND generation layer progression and roadmap:

  • V1 – 24 layers July 2013 and 128Gb MLC (2 bits/cell) die August 2013
  • V2 – 32-layers August 2014 and 128Gb TLC (3 bits/cell) die
  • V3 – 48-layer August 2015 and 256Gb TLC die
  • V4 – 64 layers December 2016 and 256Gb TLC die then a 512Gb die
  • V5 – 9x layers May 2018 and 256Gb TLC die
  • V6 – 1xx layers June 2019 and 256Gb TLC die
  • V7 – 2xx layers and 512Gb die
  • V8 – 3xx layers from 3 stacks
  • V9 – 4xx layers
  • V10 – 5xx layers

The sixth generation is a 100+ layer design and the first product is a 250GB SATA III SSD intended for client computing. However, Samsung has not announced product branding, random IOPS, sequential bandwidth or endurance numbers, or availability information.

The company currently sells 850 Pro (32-layer) and 860 Pro (64-layer) client SSDs with a range of capacities, including 256GB. Perhaps we’ll see an 870 Pro based on V6 technology.

250GB V6 V-NAND SATA SSD

Samsung’s 250GB SATA SSD using its sixth generation V-NAND technology.

Specific details of the 250GB V6 SATA SSD:

  • 136 layers of charge trap flash cells
  • Single stack design
  • 512Gbit die
  • TLC (3bits/cell)
  • 45μs read latency
  • 450μs write latency
  • SATA 6Gbit/s interface

Samsung said it produced the 1xx die with 670 million channel holes, compared to 930 million-plus needed for the V5 9x generation. This improved manufacturing productivity by more than 20 per cent. The die is also 10 per cent faster at IO than the 9x product and draws 15 per cent less power.
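The channel hole numbers are broadly consistent with the claimed productivity gain. This is our arithmetic, not Samsung's published method:

```python
# Fewer channel holes per die means fewer etching steps per die.
v5_holes = 930_000_000  # the "930 million-plus" figure for the 9x generation
v6_holes = 670_000_000  # the 1xx-layer die
reduction = (v5_holes - v6_holes) / v5_holes
print(f"{reduction:.1%}")  # ~28% fewer holes, in line with the >20% claim
```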

Layer count transitions

Samsung said it has brought out the V6 1xx layer technology 13 months after introducing V5 9x. This is four months faster than the V4 64-layer to V5 9x layer transition.

Kye Hyun Kyung, EVP of solution product and development at Samsung Electronics, said: “With faster development cycles for next-generation V-NAND products, we plan to rapidly expand the markets for our high-speed, high-capacity 512Gb V-NAND-based solutions.”

That implies V7 200+ layer technology will debut in September 2020, V8 3xx technology in October 2021, V9 4xx in November 2022, and V10 5xx in December 2023, assuming the 13-month transition time is adhered to.
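The extrapolation is simple date arithmetic, starting from an assumed August 2019 V6 announcement and adding 13 months per generation:

```python
def add_months(year: int, month: int, n: int) -> tuple:
    """Return (year, month) after moving n months forward."""
    total = year * 12 + (month - 1) + n
    return total // 12, total % 12 + 1

year, month = 2019, 8  # V6 1xx-layer announcement (assumed)
for gen in ("V7", "V8", "V9", "V10"):
    year, month = add_months(year, month, 13)
    print(gen, year, month)
# V7 2020 9, V8 2021 10, V9 2022 11, V10 2023 12
```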

SK hynix has suggested it will introduce its 500-layer technology in 2025, which would give Samsung a one-to-two year advantage.

Liqid unleashes the Honey Badger PCIe 4 SSD. It’s fast. Very fast

Liqid has added the LQD4500, its first PCIe 4.0 SSD, to its composable systems line-up. This is possibly the world’s fastest SSD. Internally, Liqid calls the thing ‘Honey Badger’ – which we associate with crazy-aggressive rather than super-fast. But hey ho.

Liqid’s composable system uses NVMe-over Fabrics to connect pools of compute, FPGA, GPU and storage resources, from which dynamically configured servers are created to run applications. Resources are returned to the pools when the application completes.

The Liqid SSD range consists of:

  • LQD3000 – NAND – AIC format with PCIe gen 3 x 8, 16TB, 1.25m IOPS and 7GB/sec
  • LQD3250 – NAND – U.2 format with PCIe gen 3 x 4, 8TB, 850K IOPS and 3.6GB/sec
  • LQD3900 – Optane – AIC format with PCIe gen 3 x 8, 1.5TB, 1.6m IOPS, >7GB/sec
  • LQD4500 – NAND – AIC format with PCIe gen 4 x 16, to 32TB, 4m IOPS, 24GB/sec

The LQD4500 is Liqid’s first SSD to use PCIe gen 4, and it aggregates 16 lanes to achieve its IOPS and GB/sec performance numbers. It uses TLC (3bits/cell) NAND.

LQD4500

The Optane LQD3900’s latency is 10μs for reads and writes while the LQD4500 boasts 20μs read and 80μs write. That’s not too shabby at all for reading.

We can envisage Liqid introducing a PCIe gen 4 version of the LQD3900 Optane card. If that had 16 lanes too we might be looking at something that could deliver 8m IOPS and 30GB/sec, possibly more.
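For what it’s worth, naively scaling the LQD3900’s gen 3 x8 figures by lane count and PCIe generation gives a ceiling in the same ballpark as that guess. The scaling assumption is ours, not Liqid’s – real drives are usually controller-limited rather than bus-limited:

```python
# Naive extrapolation from the LQD3900 (PCIe gen 3 x8 Optane) to a
# hypothetical gen 4 x16 card. Assumes performance scales with bus
# bandwidth, which is an upper bound, not a product spec.
iops_gen3_x8 = 1.6e6   # LQD3900 IOPS, from the article
gbps_gen3_x8 = 7.0     # LQD3900 GB/sec, from the article

lane_factor = 16 / 8   # x8 -> x16 doubles lane count
gen_factor = 2.0       # PCIe gen 4 doubles per-lane bandwidth vs gen 3

iops_ceiling = iops_gen3_x8 * lane_factor * gen_factor
gbps_ceiling = gbps_gen3_x8 * lane_factor * gen_factor

print(f"{iops_ceiling / 1e6:.1f}m IOPS, {gbps_ceiling:.0f}GB/sec")
# -> 6.4m IOPS, 28GB/sec: close to the 8m IOPS / 30GB/sec hoped for above
```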

Check out an LQD4500 spec sheet. Liqid does not reveal the available capacities below the LQD4500’s 32TB maximum in this document. The Honey Badger is available from Liqid now.

It’s a Hard Drive gonna fail: HDD failure rates revealed

Cloud storage service supplier Backblaze has revealed the annualised failure rates for its installed hard disk drives and Seagate is the worst performing manufacturer.

The percentage failure rates are revealed in a Backblaze blog and presented in a table:

The drives are listed in descending order of capacity. Blocks & Files wondered how the manufacturers would be graded if we re-ordered the table entries by Annualised Failure Rate per cent and charted them.

The results starkly illustrate that Toshiba drives are excellent, with its 14TB and 4TB drives experiencing zero failures. Western Digital’s HGST has four of the next five positions, and a 10TB Seagate drive pops up at number 5.

But then it’s wall-to-wall Seagate in positions 9 to 13.

We averaged the AFR percentages per supplier as a crude way of highlighting the differences between them:

  • Toshiba – 0.00%
  • HGST – 0.62%
  • Seagate – 1.71%

All the numbers are low because in general hard disk drives fail infrequently. But Seagate drives have an AFR that is 2.76 times worse than that of HGST, while both are put to shame by Toshiba’s zero AFR.
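The averaging above is just an unweighted mean of each vendor’s per-model AFRs. A minimal sketch – the per-model figures below are placeholders we invented to reproduce the article’s vendor averages, not Backblaze’s actual table:

```python
# Crude per-supplier AFR averaging. The per-model numbers are illustrative
# placeholders chosen to reproduce the article's averages; the real per-model
# figures are in Backblaze's blog table.
from collections import defaultdict
from statistics import mean

drive_afr = [
    ("Toshiba", 0.00), ("Toshiba", 0.00),   # 14TB and 4TB models, zero failures
    ("HGST",    0.40), ("HGST",    0.84),
    ("Seagate", 1.20), ("Seagate", 2.22),
]

by_vendor = defaultdict(list)
for vendor, afr in drive_afr:
    by_vendor[vendor].append(afr)

# Sort best to worst by average AFR and print the unweighted means
for vendor, rates in sorted(by_vendor.items(), key=lambda kv: mean(kv[1])):
    print(f"{vendor}: {mean(rates):.2f}%")
```

Note this is an unweighted average across drive models: it ignores how many drives of each model Backblaze runs, which is why the article calls it crude.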

Now read our article ‘Why hard drives fail’.

Quantum emerges from accounting hell

Quantum has finally filed corrected financial statements along with its latest quarterly results, and is set to focus on the video and image storage market.

The exercise has cost $33m and taken more than 18 months, but the work is now done and the financial restatements filed. They show a business in decline: attempts to grow failed in the face of falling revenues from tape system-related products such as DXi deduplication.

The troubled tape systems and file management supplier experienced financial mis-management from 2015 to 2017, caused by premature revenue recognition.

Once discovered, this led to the company being booted off the New York Stock Exchange, and to a board-level accounting examination to recalculate the erroneous results and file corrected ones with the SEC.

Latest and restated results

Revenues in its first fiscal 2020 quarter, ended June 30, were $105.6m, down 1.75 per cent annually. The loss of $3.8m was an improvement on the $7.5m loss recorded a year ago.

Signs of progress included:

  • After excluding non-recurring charges, adjusted net income was $4.4m, compared to $2.3m a year ago
  • Total operating expenses in the quarter were $43.1m, compared to $50.7m a year ago
  • SG&A expenses declined 11 per cent to $34.4m, compared to $38.5m in the year-ago quarter
  • R&D expenses were $8.4m, up one per cent from $8.3m a year ago

Gross margins were flat year-on-year despite lower royalty revenue in the quarter, which was hit by LTO media supply issues. These were resolved in early August, and the tape market should return to growth, Quantum said.

Quantum filed amended results for its 2015, 2016, 2017, 2018 and 2019 fiscal years. It said the restatement re-cast the timing of revenue recognition, not the quality or accuracy of the revenue itself.

We can now see how the business has performed historically. Our chart showing revenues and profits since fiscal 2005:

Clearly it is a business in decline. Why should the decline stop?

Chairman and CEO Jamie Lerner said in a canned quote: “Today, Quantum is a leaner, more efficient company poised for growth based on a series of transformative steps we have taken.”

That means new execs, eliminated expenses and participation in growing markets.

New management

Almost three quarters of the senior management prior to January 2018 has been replaced. Quantum said it has adopted new business priorities, standards and governance practices focused on product innovation and profitable sales.

Costs have been cut, as Quantum has chopped nine facilities and offices worldwide, and eliminated $60m in annualised operating expenses that included a reduction of about 30 per cent of its employees.

Looking for a revenue upturn

Lerner said: “With the restatement behind us, we are focused on growing our business profitably and creating sustainable value for our shareholders.” Quantum wants to get its shares re-listed on a national stock exchange and hopes to accomplish that by the end of 2019.

It aims to grow revenues based on its lower expense base, a stronger tape market, and the growing video and image market.

The product line with growth potential is StorNext, a file lifecycle management system that has been successful in the entertainment and media industries. It combines fast file, long-term object, cloud and tape storage capabilities, integrated with application workflows.

Quantum thinks 80 per cent of the world’s data by 2025 will be video or video-like data, across industries – and not just in the entertainment and media world. It is specifically looking at the high-speed processing of video and long-term archiving of video and unstructured data.

The company forecasts revenues between $99m and $105m next quarter (Q2 fy20). That’s $102m at the mid-point, almost exactly equal to the $101.98m reported a year ago. It hopes for six to 10 per cent revenue growth in the remaining three quarters of fiscal 2020, compared to the prior year.

That, and getting relisted, would be a good result.