
HPE adds IBM Spectrum Scale to HPC storage line-up

HPE is to sell and support IBM’s venerable Spectrum Scale parallel file system for high performance computing (HPC) and AI workloads running on its ProLiant and Apollo servers.

The company has added IBM Spectrum Scale to its HPE Parallel File System Storage offering, which is part of its HPC portfolio and positioned under its ClusterStor array/Lustre parallel file system combination. HPE already has resale agreements in place with Qumulo for scale-out filers and WekaIO for parallel access file software.

HPE’s Parallel File System Storage is said to support from tens to a few hundred compute node clients, and scales out at the terabyte level. The high-end ClusterStor E1000 Lustre arrays support from several hundred to thousands of compute node clients, and scale out at the petabyte level.

Uli Plechschmidt.

The IBM tie-in was announced via a blog by Uli Plechschmidt, who runs product marketing for HPC storage at HPE’s Cray business unit. “This really is a four-way winning scenario for all stakeholders – customers, channel partners, IBM, and HPE.”

He said that HPE Parallel File System Storage is unique, providing:

  • The leading enterprise parallel file system according to Hyperion Research: IBM Spectrum Scale Erasure Code Edition (ECE)
  • Running on HPE ProLiant and Apollo x86 enterprise servers
  • Shipping fully integrated from HPE with HPE Operational Support Services for both hardware and software – one throat to choke
  • Without the need to license storage capacity separately by terabyte or by storage drive (SSD and/or HDD)

NFS is Not For Speed

Rather than making smaller ClusterStor systems, HPE is reselling Spectrum Scale on its servers to take on DDN (GRIDScaler), Dell EMC PowerScale (Isilon), Lenovo (DSS-G), NetApp AFF arrays and Pure FlashBlade in the HPC and AI markets.

Plechschmidt characterises the above lineup – except for DDN – as NFS systems: “NFS-based storage is great for classic enterprise file serving (e.g. home folders of employees on a shared file server). But when it comes to feeding modern CPU/GPU compute nodes with data at sufficient speeds to ensure a high utilisation of this expensive resource – then NFS no longer stands for Network File System but instead, it’s Not For Speed.”

VAST Data takes a different view, having shown it can deliver more than 140GB/sec to Nvidia DGX A100 GPU servers using ordinary NFS.

HPE takes a different competitive tack against DDN, which also sells Lustre-based systems. HPE says it offers end-to-end compute and storage support, whereas HPE compute customers using DDN Lustre storage have two companies to deal with for support.

HPE Parallel File System Storage is generally available next month and can be used as a service through HPE’s GreenLake. An HPE white paper, Spend Less on HPC/AI Storage and More on CPU/GPU Compute, discusses HPE’s two HPC parallel file system offerings in more detail.

Infinidat raises InfiniGuard’s ransomware defences with virtual airgap

Infinidat has added InfiniGuard CyberRecovery to its enterprise array feature set with near-instantaneous recovery from immutable snapshots to counter ransomware.

The array software provides write once, read many (WORM) snapshots of an entire environment, along with policy-based, point-in-time recovery. Infinidat said the snapshots cannot be corrupted by a ransomware attack. If an attack corrupts files, the latest snapshot taken just before the attack can be used to recover data in minutes.
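The recovery flow described here boils down to selecting the newest immutable snapshot that predates the compromise. Below is a minimal, hypothetical Python sketch of that selection logic; InfiniGuard’s actual interfaces are not public, so every name in it is illustrative.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class Snapshot:
    snapshot_id: str
    taken_at: datetime
    immutable: bool  # WORM: cannot be altered or deleted once written

def latest_clean_snapshot(snapshots, compromise_time):
    """Return the newest immutable snapshot taken before the attack."""
    candidates = [s for s in snapshots
                  if s.immutable and s.taken_at < compromise_time]
    return max(candidates, key=lambda s: s.taken_at, default=None)

# Example: nightly snapshots, attack detected mid-afternoon on March 2.
snaps = [
    Snapshot("snap-001", datetime(2021, 3, 1, 2, 0), True),
    Snapshot("snap-002", datetime(2021, 3, 2, 2, 0), True),
    Snapshot("snap-003", datetime(2021, 3, 3, 2, 0), True),
]
attack = datetime(2021, 3, 2, 14, 30)
print(latest_clean_snapshot(snaps, attack).snapshot_id)  # snap-002
```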

Infinidat CEO Phil Bullinger supplied the announcement statement: “We are enabling enterprise customers to establish a new line of defence for data backup that is critical in 2021 and beyond.”

InfiniGuard software runs on an InfiniBox array and can store backup files from Commvault, IBM Spectrum Protect, Veeam, Veritas and other data protection products as well as from databases such as IBM DB2, Oracle, SAP and SQL Server. The InfiniGuard software makes immutable snapshots of these backups, according to the customer’s policies, and these are used for recovery if the original backup data is suspect.

InfiniGuard diagram

In effect, Infinidat has created a virtual air-gap between the backup datasets stored on the InfiniBox array and the snapshots of the backups. The InfiniBox contents can be replicated to other local or remote systems to guard against InfiniBox failure.

Data is restored from all the disk spindles in the InfiniGuard InfiniBox array in parallel, contributing to the recovery speed. Data verification can be tested before recovery in an isolated or sandbox environment.

Greg Harrison, VP Global Accounts at Infinidat channel partner CBTS, supplied a statement: “The fast speed with which you can recover, with full data integrity, makes InfiniGuard the rock star of cyber recovery.”

Netlist bags $40m from SK hynix in patent cross-licensing deal

US SSD and memory module supplier Netlist has prevailed in a patent infringement lawsuit against SK hynix, winning a $40m settlement, a cross-licensing deal and a supply arrangement.

The dispute concerned Netlist LRDIMM and RDIMM patents, with Netlist alleging that SK hynix used elements of those patents in its own memory module products.

Netlist CEO C.K. Hong said in a statement: “We are delighted with the recognition of the value of Netlist’s intellectual property and very much look forward to partnering with SK hynix, a global leader in memory and storage technology.”

Netlist will receive a payment of $40m in connection with the entry into the License Agreement. The Supply Agreement entitles Netlist to purchase up to $600m of SK hynix memory products during its term. The companies also plan to collaborate on commercialising Netlist’s HD CXL technology, which refers to HybridDIMM modules that mix NAND and DRAM and are accessed as DRAM over Compute Express Link. Netlist supplies DIMM products to OEMs such as Dell, IBM, HP, and Apple, and has been in business for over 20 years.

Netlist DIMMs.

History

This agreement between Netlist and SK hynix means the entire legal dispute has been concluded. It’s been a long haul.

The dispute began in 2016 when Netlist filed an LRDIMM and RDIMM patent infringement suit against SK hynix with the U.S. International Trade Commission (ITC). It filed again in 2017. The initial ITC judgements in each case went against Netlist, but it persisted. Netlist also went to the Patent Trial and Appeal Board of the U.S. Patent and Trademark Office.

In October 2019 the ITC issued a Notice of Initial Determination finding that certain Netlist memory module patents were being infringed by SK hynix, and set a target date of April 7, 2020 for completion of the Investigation and issuance of a Final Initial Determination.

In March 2020, Netlist filed new legal proceedings for patent infringement against SK hynix, again citing RDIMM and LRDIMM memory products, in the U.S. District Court for the Western District of Texas. 

VAST Data exits hardware business, pivots to software subscriptions

VAST Data, the high performance storage startup, is to stop selling its own hardware. It will concentrate instead on selling software on a subscription basis, and will certify hardware appliances built by Avnet.

VAST Data raised $100m at a $1.2bn valuation in April 2020. This was a meteoric rise for the company, which just a year earlier made its public debut with the single tier Universal Storage platform. The hardware element was a high-end array incorporating QLC flash drives, Optane metadata and write-staging SSDs, NVMe over Fabrics access and a data-reducing software stack. The company claims the scale-out system is exabyte-capable, with per-terabyte costs closer to disk drive levels than to those of typical all-flash arrays.

In January this year, the company introduced a software subscription service called Gemini. Previously, it sold software coupled to its own hardware, on a perpetual license basis. The company said it has already delivered dozens of petabytes to customers under the scheme.

Renen Hallak

“Instead of purchasing hardware and software together and being caught up in an endless refresh cycle as we’ve seen for the past 30 years, Gemini offers the freedom, flexibility, and simplicity, all at an affordable cost, that organisations need to deploy an infinite storage lifecycle,” Renen Hallak, VAST Data founder and CEO, said today in a press statement.

According to VAST, Gemini enables customers to expand performance and capacity independently and license capacity according to their specific requirements. Customers purchase a single appliance-based system, with the hardware supplied at claimed cost prices and Gemini subscriptions covering the VAST Data software. This software is not tied to hardware generations. Gemini also includes a ‘Co-Pilot’ – a level 3 VAST engineer for each customer.

The Gemini scheme includes an unconditional 60-day right to return the hardware and software.

A Silken comparison

VAST is following in Silk’s footsteps. Silk, the rebranded Kaminario all-flash array business, exited the hardware business in January 2018, doing a deal with Tech Data to provide the K2 and K2.N hardware while Kaminario provided the Vision OS, Clarity analytics software and Flex automation and orchestration software.

Like VAST, Kaminario said at the time that customers would get the software-defined economics and flexible consumption typically seen only by hyperscale cloud providers. Shortly afterwards Kaminario supplied its Vision OS software on a subscription basis to run in the Amazon, Azure and Google public clouds.

Q & A

We asked VAST some questions about the Gemini scheme.

Blocks & Files: Why exit the HW business?

VAST: The change here is that we are now no longer in the purchasing path, which means we are also now no longer bound to the financial constraints of being measured on hardware margin contribution, and that allows us to align our business directly with the business of our customers. Our customers can:

  • Refresh as often or as little as they’d like without worrying about their previous hardware/software investments
  • Purchase as little as 100TB and scale as their data grows
  • Purchase for as little as one year
  • Over-purchase hardware to achieve a specific performance target (as many of our customers have already done) without licensing the capacity until they need it, making it very affordable to accelerate without paying the classic software tax

One VAST customer in the financial services industry had a fixed performance requirement for an additional 350GB/sec but did not require the full ~7PB capacity for this investment. With Gemini, they were able to asymmetrically add hardware (at cost) and license only the capacity they needed. In this case, they saved on hardware acquisition costs, software licensing costs, eventual data migration costs (as they grow), and added capacity on-premises ready to license as their needs require it. They immediately saved over 40 per cent versus a legacy acquisition model.

Blocks & Files: Which hardware?

VAST: The appliances we use are assembled by Avnet to the specification of our R&D team. This allows us to preserve the appliance experience that our customers love, and eliminate much of the complexity and integration hassle that is often synonymous with pure software-defined storage plays.

Blocks & Files: How is it specified?

VAST: For hardware, it’s the same as VAST systems have always been configured:

  • NVMe Enclosures hold the state of the system, with our mix of QLC Flash and Storage Class Memory
  • NVMe Switching (Ethernet or InfiniBand)
  • X86 Servers running VAST Containers

All of this is sourced by customers at cost, who benefit from the buying power of VAST’s customers as part of the pricing that our manufacturer locks in for our collective of customers.

In addition, Gemini software subscriptions are sold much more incrementally than in the past:

  • 100TB and full enclosure licenses
  • 1-10 year agreements

Blocks & Files: How is it certified?

VAST: Because Avnet provides the solution pre-configured, it arrives on a customer’s door as an integrated appliance and VAST handles all aspects of product support.

Blocks & Files: How does this differ from Silk?

VAST: We have not studied Silk with respect to this offering.

Blocks & Files: Don’t customers now have to make separate hardware and software purchases, meaning more work by them?

VAST: Actually, this is not the case as: 

  • We prescribe the configuration
  • One purchase order is provided to Avnet
  • Avnet sends only the software portion of the order to VAST, and we work together to ensure that the integrated solution is preconfigured and shipped ready to install.

What Gemini doesn’t change is the same industry-leading experience that comes with VAST’s Universal Storage platform – customers interact with VAST for any hardware or software issue. This experience is the reason why customers buy VAST and then they buy a lot more. On average, VAST customers seamlessly add up to 3X their initial purchase within their first year.

Blocks & Files: Why is this good for customers?

VAST: Gemini provides several key benefits:

  • Savings: Buy what you need when you need it. By not having to answer to some hardware-derived gross margin percentage, VAST can focus on what the customers need and not what we’d have to show to the board.  Once the business is all software, it’s easy to be much more incremental, but it’s also easier to go after extremely large (hyperscale) projects that would have otherwise been too challenging for us to reach.
  • Simplicity: CoPilots now handle every aspect of monitoring, upgrading, expanding, and refreshing customer infrastructure such that it’s a low-touch/no-touch experience for customers. We’ve got customers now scaling clusters to 50PB and beyond, and when you get to that level of scale, it just makes more sense for us to handle the heavy lifting because our team knows the product best.
  • Flexibility: Refresh whenever you like to get the best use of hardware in the way that’s right for your business. Stop endlessly migrating and just scale via an asymmetric architecture.  

Blocks & Files: Will VAST software run in the public cloud?

VAST: Stay tuned.

Dell EMC leans on Druva for PowerProtect Backup Service

Dell EMC today officially confirmed PowerProtect Backup Service, a backup-as-a-service offering built on Druva software and sold under the PowerProtect brand.

PowerProtect Backup Service supports SaaS applications like Microsoft 365, Google Workspace, Salesforce and other cloud-based workloads, as well as endpoints and hybrid workloads. The offering enables customers to “centrally protect, secure and recover diverse and distributed data while using features like eDiscovery, data security and compliance capabilities to reduce risks and meet governance requirements,” Laura DuBois, VP of data protection product management at Dell, wrote in a blog post.

Laura DuBois.

“IT organisations today must balance data growth across a growing number of clouds and edge locations while streamlining and centralising data management more broadly,” she said. According to Dell, PowerProtect Backup Service deploys in minutes and provides unlimited on-demand scaling along with centralised visibility and management from a web-based facility.

Blocks & Files exclusively revealed in January that Dell was in negotiations with Druva to offer the startup’s backup as a service. Druva supplies subscription-based backup-as-a-service that supports endpoints, on-premises server applications and public cloud workloads.

We asked DuBois why Dell chose Druva’s SaaS platform. “A primary reason,” she said, “is its market leadership. Druva is a fast growing cloud data protection provider with over 4,000 customers and over 200PB of data under management.”

A second question: what is Dell EMC’s long-term data protection-as-a-service strategy? DuBois said: “As mentioned at Dell Technologies World last year, our primary vehicle for delivering IT as a service, including data protection, is Project APEX. It will offer customers greater choice in acquiring and using IT with a simple, consistent cloud experience across the Dell Technologies portfolio. You’ll hear more details about Project APEX soon.”

The on-premises PowerProtect systems are the successors to the Data Domain deduplicating backup target products, comprising enterprise DD appliances and SMB Avamar-based IDPA DP series systems. Dell EMC sells NetWorker enterprise backup software for use on-premises that can write to DD appliances.

Dell EMC’s own PowerProtect Data Manager software protects hybrid and in-cloud workloads in Microsoft Azure, AWS and the Google Cloud. A VMware-certified Data Manager offering protects the VMware Cloud Foundation infrastructure layer.

Data Manager also provides agentless, app-consistent protection of open source databases such as PostgreSQL and Apache Cassandra, in Kubernetes environments, and protects Amazon Elastic Kubernetes Service (EKS) and Azure Kubernetes Service (AKS).

Blocks & Files is interested to see how the PowerProtect Data Manager and PowerProtect Backup Service offerings evolve in the APEX future. Could they converge? Is Druva just a stopgap offering?

The limits of enterprise IT composability

Analysis: Fungible’s composable data centre announcement last week excited much discussion about the limits of the enterprise composability market. We look at views from storage architect Chris Evans, IDC, MinIO founder and CEO Anand Babu Periasamy, Panasas product marketeer Robert Murphy, and others.

Start point

Evans tweeted: “The biggest question is really, who are these technologies aimed at? If I was a mid-market co-lo or bare metal provider, I see good use for this technology.  But in the enterprise, how many IT shops really reconfigure their hardware every day?”

He continued: “Change is risk and if you’re not 100 per cent certain you can dynamically rebuild your infrastructure – then you treat it like a pet and leave it alone. I would posit that many enterprises just aren’t geared up to dynamically rebuild infrastructure.

“So, if you gave them a Liqid/Fungible solution, the concept would scare them to death.  It’s likely savvy IT organisations will realise the benefits of gaining more flexibility with “exotic” components and be the leaders here. Perhaps a slower adoption could indicate that many IT positions have been dumbed down and we don’t have enough internal visionaries to make projects like composability a reality.”

Evans foresees “an interesting philosophical debate arising around the choice of bare-metal composable hardware and composable software VMs.”

IDC research

Andrew Buss

Andrew Buss, IDC Research Director – European Enterprise Infrastructure and European Edge Strategies, thinks three quarters of enterprises are not ready for data centre composability: “Only 23 per cent in Europe have sufficiently capable automation/orchestration and other advanced management and application development and deployment capabilities to adjust their IT service delivery dynamically,” he tweeted.

IDC digital leader slide.

He wrote: “We did a study last year looking at this across Europe. Only 23 per cent gained Digital Leader status where they adopted 10 aspects of a transformed cloud-native Digital Platform extensively. The 46 per cent of the Digital Mainstream may do 1 or 2 aspects extensively but typically only adopt these to a limited extent. Fully 31 per cent don’t adopt these at all.”

IDC chart with our red-lined emphasis.

Evans responded: “That’s interesting as it sets the limit of adoption right there.  Both Liqid and Fungible need to find ways to justify the wider investment in tools/skills, otherwise this is a very limited market for them – other than hoping/praying for hyper-scaler acquisition.”

We suggested to Buss that “it doesn’t look good for large-scale adoption of composability.” He replied: “Our view is that it can hopefully tip that mainstream 46 per cent towards a more automated and agile future – can be helped by moving towards things like consumption-based approaches with managed infrastructure as a service, etc.

“So the future is there, but many customers need technological and cultural transformation to reach it.”

MinIO and Panasas views

Anand Babu Periasamy.

Periasamy contributed to the Twitter discussion: “Customers prefer disposable servers to reconfigurable ones in a scale-out environment.” 

He explained further: “The idea of the disposable server is all about commodity off-the-shelf servers and not about how expensive they are. Supermicro NVMe JBOFs and GPU boxes would also qualify. Customers do not want to be stuck with proprietary architectures that they do not understand or control.”

The notion here is that enterprises, particularly with scale-out systems, prefer to dispose of cheap servers rather than dynamically configure them but will reconfigure expensive server boxes.

That view resonates with Panasas’s Robert Murphy, who thinks that the main composability market is in high-performance computing (HPC): “The primary opportunity for Fungible is the hyper-scalers, which sure is enough of an opportunity but have shown an aversion to sourcing this kind of IP by purchasing product, when they have the chops to do it themselves.”

“The most realistic opportunity are the tier 2 hypers, Telcos, Banks, Retail, that would need to purchase Fungible kit since they couldn’t develop it on their own. Fungible needs to … invest HEAVILY in finding customers here.” [His emphasis.]

“HPC could be considered a tier 2 hyper and it should be no surprise (even though it is) that both Fungible and Liqid have seen their biggest adoption in HPC to date. HPC, the king of disposable servers, is starting to look at composable, because those gold plated GPU DGX servers are not so disposable any more.”

Liqid US west sales lead Triet Phamh concurred: “Agreed! Servers with 8 x A100’s and 5k NVMe drives are not so disposable anymore. We are seeing a massive push for composable simply because of the astronomical costs of PCIe devices and customers simply don’t want those devices in a “static” state anymore.”

Evans agreed: “Yes, it makes sense. Certainly as a first market. Hyper-scalers will self-build or acquire and kill off, like they did with E8 Storage.”

The Co-lo perspective

Glenn Dekhayser, a Principal Architect at Equinix, chimed in: “The bet is that enterprises are giving up their data centres completely, and will use composable colocation where cloud doesn’t make sense.” He cautions us it’s not necessarily the bet of his employer.

But: “Don’t discount the edge as a huge motivator. CSPs will require partners to provide compute/storage at the metro and far edge to provide services at low latencies to consumers and on-premises use cases; i.e. robotics, gaming, autonomous driving… composability provides advantages here.”

Comment

Enterprises using expensive general compute and GPU servers, and facing a need to support widely different workloads, will see server utilisation rise and fall as workloads change, and will have an incentive to drive overall utilisation up.

In particular this applies to HPC data centres and also to co-location/managed service suppliers facing workload variability as tenant workloads come and go. All three types of installation will likely have hundreds if not more servers, both general compute and GPU, and associated storage and networking infrastructure.

If they can drive utilisation higher through composability, then their costs are less likely to eat into profit margins (co-los, MSPs) or run up against budget limits (HPC).

Western Digital and Micron weigh up Kioxia bids

Micron and Western Digital are mulling rival bids for Kioxia, which would value their Japanese competitor at $30bn.

The Wall Street Journal, which broke this news, describes Kioxia as a “must-have for both Western Digital and Micron”.

The NAND business is a game where high volume chip output wins, with cost per bit reducing as manufacturing volume rises. Wells Fargo analyst Aaron Rakers has told subscribers: “We think any move toward NAND industry consolidation would be viewed positively.”  In other words, fewer competitors means less glut and bust in a notoriously volatile market.

According to Rakers, Micron agrees that consolidation is needed but does not want to be the consolidator.

How ambitious is Western Digital CEO David Goeckeler? He’s been in the post for 12 months and a Kioxia acquisition would be an enormous deal. Does he want to make Western Digital the NAND market boss? Even if he was willing to push the boat out that far, would the Japanese government approve such a deal?

Western Digital previously made a ¥2 trillion ($18.2bn) bid for Kioxia when it was being spun-off from Toshiba in September 2017. The offer was rejected.

Western Digital has a joint venture with Kioxia, so buying the firm would give it sole ownership with no technology integration problems. From Micron’s point of view, the Kioxia/Western Digital BiCS 6 162-layer 3D NAND technology differs from its own 176-layer technology.

That said, SK hynix faces similar integration problems bringing Intel’s 3D NAND layering tech in-house and it is still buying Intel’s NAND business.

Kioxia is owned by Bain Capital, with SK hynix holding a minority stake. Kioxia and Western Digital jointly operate NAND fabrication plants in Japan, whose chip output both companies use to make and sell SSDs.

Kioxia planned to IPO in August 2020, at a $20.1 bn valuation. This was called off a month later due to the pandemic and US-China trade restrictions.

The Winner Takes it All?

DRAMeXchange’s 2Q’ 2020 figures show Samsung is the leading NAND supplier, with 31 per cent revenue share, followed by Kioxia (17 per cent), Western Digital (16 per cent), Micron (13.7 per cent), SK hynix (12 per cent) and then Intel (11 per cent). SK hynix is currently buying Intel’s NAND business which will raise its market share to 23 per cent.

A Micron-Kioxia combo would total 30.7 per cent revenue share, and WD, with Kioxia under its belt would overtake Samsung, to take 33 per cent.
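As a sanity check on that arithmetic, here is the combination maths in a few lines of Python, using the DRAMeXchange 2Q 2020 revenue shares quoted above:

```python
# DRAMeXchange 2Q 2020 NAND revenue shares (per cent), as quoted above.
shares = {"Samsung": 31.0, "Kioxia": 17.0, "Western Digital": 16.0,
          "Micron": 13.7, "SK hynix": 12.0, "Intel": 11.0}

print(shares["Micron"] + shares["Kioxia"])           # 30.7 - a Micron-Kioxia combo
print(shares["Western Digital"] + shares["Kioxia"])  # 33.0 - a WD-Kioxia combo
print(shares["SK hynix"] + shares["Intel"])          # 23.0 - SK hynix after the Intel deal
```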

Thanks for the memory as Micron revenues get big DRAM bump

Micron delivered a 28 per cent revenue jump and a 49 per cent increase in net income in its second fiscal 2021 quarter, as it benefited from DRAM price rises.

Revenue for the quarter ended March 4, 2021 was $6.24bn, up from $4.87bn a year ago, with net income of $603m compared with the year-ago $405m. DRAM accounted for 71 per cent of total revenues and was up 44 per cent Y/Y. NAND was 26 per cent of revenues and grew 9 per cent Y/Y, while the Storage business unit saw a 2 per cent decline in revenues Y/Y.
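For reference, the year-on-year growth rates implied by those figures can be checked directly:

```python
# Year-on-year growth implied by the quoted figures.
revenue_now, revenue_prior = 6.24e9, 4.87e9
income_now, income_prior = 603e6, 405e6

print(f"revenue growth: {revenue_now / revenue_prior - 1:.1%}")   # ~28.1%
print(f"net income growth: {income_now / income_prior - 1:.1%}")  # ~48.9%
```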

Sanjay Mehrotra.

In the earnings call President and CEO Sanjay Mehrotra said: “Micron delivered strong FQ2 results above our original projections, driven by solid execution and higher than expected demand across multiple end markets. The DRAM market is in severe shortage and the NAND market is showing signs of stabilisation in the near-term.”

Financial summary:

  • Gross margin – 32.9 per cent
  • Adjusted free cash flow – $174m – compared to year-ago $101m
  • Total liquidity – $11.1bn
  • Net cash – $1.95bn at quarter end vs $2.7bn a year ago

Business unit summary:

  • Compute and Networking BU – $2.6bn – up 34 per cent Y/Y
  • Mobile – $1.8bn – up 44 per cent
  • Embedded – $935m – up 34 per cent
  • Storage – $850m – down 2 per cent

An earthquake in Taiwan earlier this year caused little disruption to Micron and the company has secured extra supplies of water for its plants in central Taiwan, which is currently experiencing a drought.

Micron is four times blessed. First, the data centre market wants more SSDs and memory, with new server CPUs having more memory channels and new workloads generally needing more memory and storage. Second, the relatively new automotive market for DRAM and NAND is growing as vehicles need more computing for electric vehicles, driver assistance aids and vehicle management; Micron set revenue records for automotive products in the quarter. Third, mobile phone memory demand is strong, with revenue records set for mobile multi-chip package memory products.

And the fourth blessing is that pandemic-induced remote working is driving up desktop and laptop computer demand for both DRAM and NAND.

Nevertheless, Storage BU revenues declined due to some customers reducing higher-than-average inventory levels. Mehrotra said: “We are continuing to expand our data centre NVMe SSD portfolio with internally developed controllers and have new product introductions planned in the coming quarters.”

Micron expects storage revenues to grow when 176-layer client SSDs are introduced in the next two fiscal quarters. Micron is, like SK hynix, singing the disk drive replacement song. Mehrotra said the company is “driving an increased mix of QLC NAND, which helps to make SSDs more cost effective and accelerates the replacement of HDDs with SSDs.”

Micron anticipates $7.1bn revenue at the midpoint next quarter, 30.5 per cent higher than the year-ago $5.4bn. The company confirmed it expects to complete the sale of its Lehi 3D XPoint fab by the end of the year, and reiterated a commitment to developing CXL-facing memory products.

Startup TenaFe dumps DRAM for PCIe 4 SSD controller

Two former Micron execs have set up their second SSD controller company. CEO Mike Lee and Chief Scientist Cody (Yingquan) Wu founded Tidal Systems, a developer of NVMe SSD controllers that was bought by Micron in October 2015. They are joined by Chief Architect Priyanka Thakore, who also worked at Tidal. Lee and Thakore co-founded TenaFe in 2019, and the company emerged from stealth this week with the TC2200, a PCIe 4 SSD controller that has no DRAM and is designed to be fast and low-cost.

Greg Wong, President of Forward Insights, issued a statement: “PCIe Gen4 DRAMless SSD provides the best cost-performance efficiency and will become the prevailing client storage solution moving forward. TenaFe’s TC2200 SSD Controller, with its impressive power and thermal profile, is well positioned to capture the growth in this market.”

TC2200.

The TC2200 relies on the host system providing a slug of DRAM to operate the SSD. This HMB (Host Memory Buffer) approach is also used by Samsung with its 980 NVMe SSD. It lowers cost, electricity consumption and heat generation, and shrinks the controller footprint.
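HMB is a standard NVMe feature: the controller advertises preferred (HMPRE) and minimum (HMMIN) host buffer sizes in its Identify Controller data, and the host lends it that memory at initialisation. As a rough illustration, the sketch below shells out to the open-source nvme-cli tool to see whether a drive advertises HMB support; the JSON field names follow nvme-cli’s output and the device path is an assumption, so treat it as a sketch rather than a vendor procedure.

```python
import json
import subprocess

def hmb_support(device="/dev/nvme0"):
    # Read the Identify Controller data as JSON via nvme-cli.
    out = subprocess.run(["nvme", "id-ctrl", device, "-o", "json"],
                         capture_output=True, check=True, text=True)
    ctrl = json.loads(out.stdout)
    hmpre, hmmin = ctrl.get("hmpre", 0), ctrl.get("hmmin", 0)
    # HMPRE/HMMIN are reported in 4KiB units per the NVMe specification;
    # a non-zero HMPRE indicates an HMB-capable (typically DRAMless) design.
    return hmpre > 0, hmpre * 4096, hmmin * 4096

supported, preferred_bytes, minimum_bytes = hmb_support()
print(supported, preferred_bytes, minimum_bytes)
```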

There are four NAND channels in the TC2200 controller, which delivers up to 600,000 IOPS and 4.8GB/sec of throughput while drawing less than 4W. The controller uses ‘FlexLDPC’ error correction code, which provides best-in-class latency, quality of service and endurance for TLC and QLC flash, the company claims.

TenaFe said the TC2200 is suitable for small-format M.2 and BGA SSDs as used in edge devices, gaming consoles and ultra-portable laptops. The company is touting its wares on an OEM basis to SSD makers.

Lee said in a statement: “We will use [the TC2200] as a baseline to set an even higher bar for our next-generation data centre–focused controller, enabling our customers much faster time to market.”

TenaFe will compete with SSD controller companies such as Phison, as well as in-house controller operations at Kioxia, Micron, Samsung, and SK hynix. The firm has raised $29m in an A round of funding and has offices in California, China, and Taiwan.

TenaFe will begin sampling the TC2200 to customers in April 2021 in SDK and FTK formats.

Report: recommendation systems drive AI infrastructure initiatives

Promo: What AI initiatives are companies investing in and what’s their underlying infrastructure? According to a new survey, nine in ten companies are experimenting with AI, with Recommendation Systems emerging as the single biggest driver for such initiatives.

The figures come in a report commissioned by WekaIO, “The State of AI and Analytics Infrastructure 2021”, which surveyed hundreds of professionals on their AI and cloud adoption plans.

The most common AI initiatives

Over 86 per cent of companies surveyed have at least one AI initiative, the research showed. Most respondents reported multiple AI initiatives, with companies typically having two to three at any given time.

The top use case for AI initiatives was implementing Recommendation Systems, followed by Scientific Visualization, Image Recognition, and Compliance and Conversational AI. 

The report categorizes common AI initiatives by vertical, and we also see different strategies (build vs. buy), frameworks (TensorFlow, PyTorch, Caffe and others), and databases by vertical.

CPUs vs. GPUs

Over half of the respondents (52 per cent) mentioned using GPUs in production or pilot programs, yet 38 per cent said that they do not have immediate plans to deploy GPUs.

Adoption of GPUs is especially high in Automotive, where image recognition is used to power autonomous transportation. Indeed, only 14 per cent of Automotive respondents did not have plans to use GPUs.

Other industries where the adoption of GPUs is above 50 per cent include Oil and Gas/Energy, Retail, and Cloud/MSP, where GPU use is mainly for reselling. The report supplies details by vendor and vertical.

Some hard numbers on cloud adoption

Companies also reported that around 40 per cent of their data is in the cloud, with the ultimate goal of getting to 70 per cent of data being in the cloud. The report indicates that the two main challenges for increased adoption are the complexity of mobilizing data and security concerns.

Companies’ use of the cloud was split evenly among compute (29 per cent), storage of data (29 per cent), and for Software as a Service (SaaS) (26 per cent). Although data storage was a key cloud usage model, only 15 per cent of the respondents mentioned using the cloud for backup.

For more information on cloud adoption by vertical, and the different cloud platforms used, click here to download the full report.

Brought to you by WekaIO.

MackStor launches 200TB DoItRite spinning NAND hybrid drive

Storage veterans have devised a 3-tiered, 200TB hybrid SSD-HDD device, the DoItRite 2000, containing both spinning disk platters and a non-rotating platter of NAND chips, combining NVMe SSD speed with extraordinary capacity.

The engineers at MackStor, veterans from Maxtor and other now vanished disk drive manufacturers, have taken a 9-platter, helium-filled, disk drive base system and replaced the top platter with a non-rotating disk containing 30TB of NAND. This is divided into 25TB of QLC (4 bits/cell) flash and 5TB of much faster SLC (1 bit/cell) flash to provide a fast landing zone for write data as well as low latency read access.

Dr Theodore Hirate, CEO and co-founder of MackStor, said in a statement: “We have solved the dilemma of choosing between disk capacity and SSD speed with the DoItRite 2000, by combining both media in a single drive that’s perfectly aligned with the post-pandemic data storage challenges facing us all.” 

The eight disk platters hold 20TB of shingled magnetic recording (SMR) data. The drive controller’s software provides automatic and policy-driven tiering of data between the three tiers, with machine learning algorithms identifying data for up- and down-tiering.

The controller uses three Arm chips: one for the disk platters, a second for the SSD, and a third to provide management and data services, including deduplication and compression. The effective capacity after data reduction is increased fourfold – to a staggering 200TB. Only deduplicated and compressed data is written to disk.
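The 200TB figure is simple arithmetic on the stated media sizes and the claimed 4:1 reduction ratio:

```python
# Raw media per drive, as described above (TB).
qlc_tb, slc_tb, disk_tb = 25, 5, 20
raw_tb = qlc_tb + slc_tb + disk_tb   # 50TB of raw NAND plus disk
effective_tb = raw_tb * 4            # claimed 4:1 dedupe/compression ratio
print(raw_tb, effective_tb)          # 50 200
```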

The drive transfers data across an NVMe PCIe Gen 4 x8 interface at 5GB/sec read and write, and can deliver 2.6 million random read and 2.8 million random write IOPS. Latency is less than 500µs for 99.99 per cent of data requests. Power consumption is modest and heat generation is no more than that of a standard, legacy hard drive. The drive is warranted for 5 years or 5PB total data written, whichever comes first.

Jet Black Desiato, Chief Marketing Officer at MackStor, said: “This DoItRite 2000 drive will blow the socks off the legacy disk and flash storage drive technology laggards out there. Our radically lower cost per terabyte will lead to wholesale domination of all enterprise and hyperscaler storage markets.”

The DoItRite 2000 drive is sampling from today, April 1, 2021.

StorMagic branches out into digital asset management

StorMagic has bought the assets of SoleraTec, a small California video storage company – its second acquisition in 12 months.

Update: Hans O’Sullivan comments added. April 1, 2021.

The virtual SAN storage startup has rebranded SoleraTec’s digital asset data lifecycle management software as ARQvault. There are two offerings: ARQvault Video Surveillance and ARQvault Digital Evidence Management.

Brian Grainger, StorMagic CRO said in a statement: “The SoleraTec asset purchase marks our second major expansion in just twelve months. Now armed with virtual SAN, encryption key management and video solutions, StorMagic can truly deliver a forever data platform to address the needs of our edge customers.”

ARQvault provides a multi-tier, multi-location object storage facility with searchable contents. The software handles edge-generated data such as video surveillance and police-generated bodycam, car and interview room footage and stores it over the long term. 

ARQvault scales to thousands of sites and integrates with analytics packages. StorMagic says it costs half as much as all-disk storage while keeping all of its data available and searchable, like disk storage.

‘Forever’ refers to ARQvault’s policy-driven tiering for its object storage across direct-attached disk, SAN and NAS all-flash, hybrid and disk storage, LTO tape, Sony Optical disk and public cloud object storage. The archived data is distributed and there is no single point of failure.

ARQvault diagram

Video and other asset data is stored in one or more Vaults, which can be in different sites. Each Vault site has a server and storage. The servers can be x86 or Arm-based. ARQvault stores metadata, which is used in searches. Vaults respond to search requests in parallel.

Each server has its own database which catalogues the video it is storing. As a guideline, every 16TB of video storage requires a minimum of 10GB of disk space for the database. High-res videos have low-res proxies generated to speed search.
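That guideline scales linearly, so database sizing is a one-line calculation; the helper below is purely illustrative, and extending the quoted 16TB:10GB ratio to other capacities is our assumption.

```python
def arqvault_db_size_gb(video_tb: float) -> float:
    # Roughly 10GB of database per 16TB of stored video (minimum).
    return video_tb * (10 / 16)

print(arqvault_db_size_gb(16))   # 10.0
print(arqvault_db_size_gb(160))  # 100.0
```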

Second acquisition

CEO Hans O’Sullivan left StorMagic in March 2020, 14 years after starting the Bristol, UK-headquartered business. Shortly after his departure, StorMagic bought KeyNexus, an encryption key management startup. The company has yet to announce a replacement for O’Sullivan.

Hans O’Sullivan

A StorMagic spokesperson told us: “Hans O’Sullivan is no longer CEO, and stepped aside last year to let new leadership take over as the company continues to significantly grow through … last year’s acquisition of KeyNexus. I can confirm StorMagic is in the process of onboarding a new CEO and will be able to announce the appointment soon.”

O’Sullivan told B&F that his resignation was quite ordinary: “There was nothing sinister or unusual about my leaving StorMagic, it was in full agreement with the Board and myself.”

He said: “The acquisition of KeyNexus and its strong team was actually instigated and completed by me, the closure of which was one of my last actions, I felt it would add significant product and technology that fitted with the StorMagic ethos of software defined, automated and targeting the edge. It also helped round out the management team by adding a new CTO and engineering management.”

And: “I know StorMagic will continue to grow and do well and have full confidence in the Board and management team and in full agreement with their strategic direction.”

We understand that the search for a new CEO was not helped by the pandemic.

Asked who made the SoleraTec acquisition decision, Grainger told us: “We have a board of directors and an executive leadership team of which I’m part of both. Of course with these types of larger decisions it’s the board, the ELT as well as our shareholders that approve these types of decisions. … I was the lead of the asset purchase … but of course with the support of the board and shareholders.”