
Cisco + NetApp + Pure = CI as a service

Converged Infrastructure (CI) is being offered as a Cisco+ Hybrid Cloud service based on existing FlexPod and FlashStack deals with NetApp and Pure Storage.

Cisco+ is an as-a-service brand combining Cisco and partner products with unified subscriptions: either pay as you use or pay as you grow. It was introduced last year and started out with network-as-a-service (NaaS) products presented under a Cisco+ Hybrid Cloud rubric and led by Tier 1 partners. CI refers to combined rackscale systems made from Cisco networking gear and UCS servers and, originally, Dell EMC storage, as in Vblock and VxBlock.

It evolved to include the networking and servers with NetApp ONTAP storage in a FlexPod reference architecture scheme, and Pure Storage FlashArray or FlashBlade arrays in a FlashStack reference architecture. These were all bought with traditional perpetual license schemes.

So now we have FlexPod-as-a-service and FlashStack-as-a-service. Mike Arterbury, NetApp VP/GM for Hybrid Cloud Infrastructure & OEM Solutions, said: “FlexPod-as-a-service is now available as an integrated offer allowing customers to buy what they need, when they need it… Customers can now choose the FlexPod consumption model that matches their needs while leveraging our full portfolio of validated designs.”  

Cisco graphic

 AJ Kapase, VP Global Strategic Alliances at Pure Storage, added: “We are excited to combine our own Pure Storage as-a-service (STaaS)… with Cisco+ Hybrid Cloud to create FlashStack as-a-service (FSaaS)… Our joint customers can scale capacity up or down as needed, and only pay for the IT services they consume.”  

Both of these bulk up the basic Cisco+ Hybrid Cloud offering, which has Hyperflex HCI as its storage component:

Cisco graphic

Cisco says we can expect to see more of its storage partners jumping aboard, and we understand Dell and Hitachi Vantara are likely candidates. 

There is more information here.

Storage news ticker – May 31

HPC storage supplier Panasas has announced a new global strategy with added benefits and incentives to grow sales reach through channel and alliance partners. New features include a partner portal hub with marketing collaboration tools, learning pathways, and a library of resources, partner marketing and demand generation support, and deal registration. Panasas is showing at ISC High Performance in Hamburg, Germany, over May 29-June 2, booth #E507.

Data protector Asigra has announced a Tiger’s Den Channel Program with significant updates to tiered discounting, new product marketing materials, enhanced partner services and support, as well as a new engagement package for value-added distributors (VADs) designed to increase global product availability. Asigra’s Tigris Data Protection platform prevents ransomware 2.0 and other advanced forms of malware from accessing and affecting backup data. The platform includes the industry’s first zero-day Attack-Loop preventative technology using bi-directional malware detection, zero-day exploit protection, variable repository naming, and Deep MFA (multi-factor authentication) for a defensive suite against ransomware variants and other cyber-attacks on backup data.

DDN has released version 6.1 of its EXAScaler parallel file system, billed as delivering optimized AI integration and data security. Its Hot Pools and Hot Nodes features, both enabling flash performance for applications, can be combined with DDN’s end-to-end encryption. DDN has updated its A3I AI400X2 systems and customers can now deploy up to 900 disk drives and 16 petabytes of data in a single rack. DDN has a presence at ISC 2022.

HPE Frontier

HPE’s exascale Frontier supercomputer has Orion, a multi-tier Cray ClusterStor E1000 file storage system, plus in-system storage made of local SSDs connected to the compute nodes by PCIe Gen 4. Orion provides more than 700PB of capacity, peak write speeds of >35TB/sec, peak read speeds of >75TB/sec, and >15 billion random read IOPS. It uses Lustre and ZFS software, and is possibly the largest and fastest single POSIX namespace in the world. There are three Orion tiers: a 480 x NVMe flash drive metadata tier; a 5,400 x NVMe SSD performance tier with 11.5PB of capacity; and a 47,700 x HDD capacity tier with 679PB of capacity. There are 40 Lustre metadata server nodes and 450 Lustre object storage service (OSS) nodes. Each OSS node has one performance-optimized object storage target (OST) and two capacity-optimized OSTs. There are also 160 Orion nodes used for routing.

The containerized version of IBM Spectrum Scale includes IBM Spectrum Scale Data Access Services (DAS), which supports the S3 access protocol. This enables clients to access data stored in IBM Spectrum Scale file systems as objects. Spectrum Scale DAS requires a dedicated Red Hat OpenShift cluster that runs only Spectrum Scale CNSA and Spectrum Scale DAS. S3 applications run on colocated, separate servers using any operating system or Kubernetes platform. The Spectrum Scale container native cluster imports (remotely mounts) one Spectrum Scale file system, which is provided by a colocated Spectrum Scale storage cluster. More info is contained in this blog and the docs can be found here.
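
As a rough illustration of what S3-style access to a file system looks like from the application side, here is a minimal boto3 sketch; the endpoint URL, bucket name and credentials are hypothetical placeholders, not Spectrum Scale DAS specifics.

```python
# Illustrative sketch only: reading files exported by an S3-compatible service
# (such as Spectrum Scale DAS) as objects. Endpoint, bucket and credentials
# below are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://das.example.internal",  # assumed S3 endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# List the objects that correspond to files in the exported directory
for obj in s3.list_objects_v2(Bucket="scale-export").get("Contents", []):
    print(obj["Key"], obj["Size"])

# Read one file's contents as an object
body = s3.get_object(Bucket="scale-export", Key="results/run42.csv")["Body"].read()
```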

IBM Spectrum Scale storage cluster

With IBM’s ESS3500 Spectrum Scale Server, NFS/SMB services can now run in the box itself as a containerized workload. For larger NAS workloads, the classic way of running protocol nodes is still valid.

Cloud storage provider IDrive e2 has added a new edge location in Ireland, bringing fast object storage to EU customers. Launched last month, IDrive e2 is taking aim at Backblaze and Wasabi with no fees for ingress/egress, fast speeds, and pricing starting at $0.004/GB/month. Data can be accessed via the IDrive e2 web console or a third-party tool such as MSP360, Veeam, Cyberduck, Cloudflare, Fastly, iconik, Arq, QNAP, Synology, Arcserve, Duplicati, WinSCP, and S3 Browser.

IDrive e2 storage

Datacenter software supplier Liqid is demonstrating its Matrix composable disaggregated infrastructure (CDI) software at ISC High Performance 2022 (booth #B211) in Hamburg. 

Nvidia says HPC systems are using its BlueField-2 DPUs to increase overall system power and offload compute nodes, and identifies Los Alamos National Laboratory and TACC as examples. Another is Ohio State University, where researchers offloaded parts of the message passing interface (MPI) and accelerated P3DFFT, a library used in many large-scale HPC simulations. The resulting programming models run up to 26 percent faster. A consortium called the Unified Communication Framework is enabling heterogeneous computing for HPC apps; members include Arm, IBM, Nvidia, US national labs, and universities. It is helping to define OpenSNAPI, a general application interface for DPUs.

Rambus has announced the successful completion of the acquisition of Hardent. Hardent’s employees will provide skills and building blocks for the Rambus CXL Memory Initiative products.

Redgate Software announced updates to its SQL Data Catalog with maintenance advantages so users can reduce the amount of time spent on data classification and protection. It includes customization based on regulatory requirements and automated capabilities to better streamline data management processes. Redgate says its product can help ensure sensitive, personal data is protected before databases are made available for use in development, testing, and more.

Venture capitalists issue startup funding warning


Venture capitalists including Sequoia and Y Combinator are warning that an economic downturn threatens future fundraising, meaning start-ups should look to raise cash right now or conserve what they have.

The pair are responding to rising inflation, ever-climbing interest rates, the continued conflict in Ukraine, the pandemic and other geopolitical tensions.

Sequoia held a Zoom conference with founders of its portfolio companies, and the presentation, seen by The Information, talked about avoiding a commercial death spiral in any coming downturn, with fledgling businesses urged to start thinking about trimming costs and reining in spending.

Y Combinator, which backs hundreds of small businesses, sent out a letter to start-ups in its portfolio entitled “Economic Downturn” – reported by TechCrunch – saying they should focus on reaching a so-called Default Alive status. That means a startup business can reach profitability with its current funding. The opposite, Default Dead, is when a startup runs out of cash before profits arrive.
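
As a rough illustration of the Default Alive idea (not something from the YC letter itself), here is a toy Python sketch that simulates whether hypothetical revenue growth outruns the burn before the cash is gone:

```python
# Illustrative sketch only, with hypothetical figures: a crude "Default Alive"
# check. A startup is roughly Default Alive if, at its current growth rate and
# burn, it reaches break-even before its cash runs out.

def is_default_alive(cash, revenue, costs, monthly_revenue_growth):
    """Simulate month by month until revenue covers costs or cash runs out."""
    month = 0
    while cash > 0:
        if revenue >= costs:
            return True, month            # profitable before the money ran out
        cash -= (costs - revenue)         # this month's burn
        revenue *= 1 + monthly_revenue_growth
        month += 1
    return False, month                   # Default Dead: cash ran out first

alive, months = is_default_alive(cash=5_000_000, revenue=100_000,
                                 costs=400_000, monthly_revenue_growth=0.08)
print("Default Alive" if alive else "Default Dead", f"after {months} months")
```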

The letter urges start-ups to “plan for the worst” following a tumultuous time for tech stocks, with declines reported in the past seven weeks. It states:

  • No one can predict how bad the economy will get, but things don’t look good.
  • The safe move is to plan for the worst. If the current situation is as bad as the last two economic downturns, the best way to prepare is to cut costs and extend your runway within the next 30 days.  Your goal should be to get to Default Alive.
  • If your plan is to raise money in the next 6-12 months, you might be raising at the peak of the downturn. Remember that your chances of success are extremely low even if your company is doing well. We recommend you change your plan.

The YC letter finishes by saying: “Remember that many of your competitors will not plan well, maintain high burn, and only figure out they are screwed when they try to raise their next round.  You can often pick up significant market share in an economic downturn by just staying alive.”

Our thinking is that storage startups that could be feeling the heat include ones whose last round was in 2019/2020 and who are still burning cash.

Multiple technology vendors have filed financial results in recent weeks that highlight the challenges they are currently facing in the industry. Cisco, Nutanix, Arista, NetApp, Dell and more said underlying demand was strong but meeting demand in a troubled global supply chain is no easy task. Yet the apparent direction of travel in the stock markets is clearly causing investors some concern.

UALink

UALink – In May 2024 AMD, Broadcom, Cisco, Google, HPE, Intel, Meta, and Microsoft announced the formation of a group to develop a new industry standard, UALink (Ultra Accelerator Link), creating an ecosystem and providing an alternative to NVIDIA’s NVLink.

  • UALink creates an open ecosystem for scale-up connections of many AI accelerators
  • Effectively communicate between accelerators using an open protocol
  • Easily expand the number of accelerators in a pod
  • Provide the performance needed for compute-intensive workloads now and in the future

A scale-up memory semantic fabric has significant advantages. Scale-out is covered by Ultra Ethernet, and the industry is aligned behind the Ultra Ethernet Consortium (UEC) to connect and scale multiple pods.

    Upcoming Micron SSDs could have chip-level controller functions

    Micron spoke of 232-layer SSD tech and a “combination of external and optimized internal controllers” at its recent Investor Day Conference.

    Micron slide

    This was picked up by Jim “The SSD Guy” Handy, who heard technology and product veep Scott DeBoer say that Micron makes vertically integrated SSDs which include NAND, some DRAM, and the controller. DeBoer said Micron is using an in-house controller because a portion of the controller’s functions has been incorporated into the NAND flash chips themselves.

    The controller’s work is split between external and internal functions. Why? Because some controller functions, low-level ones internal to a chip, can be performed by the NAND chips inside an SSD. They can then occur in parallel, offload the SSD controller so it can do other work, and enable the SSD to work faster overall.

    Handy said an SSD typically has a controller inside it – Micron’s “external controller” phrase – which links to NAND chips across a relatively narrow bus. Each NAND chip is subdivided into blocks and these are linked by a wide internal bus, which is connected to the narrow bus going to/from the controller.

    In a process such as the recovery of partially valid blocks – garbage collection – valid data is read from blocks needing to be recovered and copied to an empty block. When all the valid data from those blocks has been copied into the new block, the old blocks are erased and become empty blocks available for reuse. Handy has a diagram showing this:

    Jim Handy diagram.

    Such garbage collection is directed by the controller, which reads then writes the data involved in the process. While this internal-to-the-SSD process is carried out, the controller is not available to perform its main work: reading and writing data to/from the SSD for its host server or single-user system.

    Handy suggests that this could be done by putting appropriate internal controller functions – simple ones – inside the NAND chips. For example, the external controller would decide that garbage collection was needed and, using its metadata map of empty and used NAND chip blocks and their data contents, calculate a series of block data reads and writes and subsequent block deletions as before. Then it would tell the NAND chips to do the work themselves, eliminating the internal SSD IO of data reads and writes to/from the controller. A second Handy diagram illustrates the concept:

    Jim Handy diagram.

    The controller, for example, tells a NAND chip’s internal controller function to copy valid data from block 0 to block 2, then valid data from block 1 to block 2, and then erase block 0 and block 1. The data is transferred across the NAND chip’s wide internal bus, hence it is faster, and the external controller is not directly involved in the IO at all.
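
To make the potential saving concrete, here is a toy Python model of the two approaches; it is purely illustrative and assumes nothing about Micron’s real command set, block sizes or bus widths:

```python
# Illustrative model only, not Micron's actual design. It contrasts
# controller-driven garbage collection, where every valid page crosses the
# narrow external bus twice, with the offloaded scheme Handy sketches, where
# the controller only issues "copy block X to block Y" and "erase block X"
# commands and the NAND die moves the data over its own wide internal bus.

class NandChip:
    def __init__(self, num_blocks=4, pages_per_block=64):
        self.blocks = [[None] * pages_per_block for _ in range(num_blocks)]
        self.external_bus_transfers = 0   # pages moved over the narrow bus

    def _erase(self, b):
        self.blocks[b] = [None] * len(self.blocks[b])

    def controller_gc(self, src_blocks, dst):
        """Conventional GC: the external controller reads, then rewrites, each page."""
        dst_page = 0
        for b in src_blocks:
            for page in self.blocks[b]:
                if page is not None:
                    self.external_bus_transfers += 2   # read out + write back
                    self.blocks[dst][dst_page] = page
                    dst_page += 1
            self._erase(b)

    def offloaded_gc(self, src_blocks, dst):
        """Offloaded GC: the chip copies valid pages internally; no external bus traffic."""
        dst_page = 0
        for b in src_blocks:
            for page in self.blocks[b]:
                if page is not None:
                    self.blocks[dst][dst_page] = page
                    dst_page += 1
            self._erase(b)

def demo(gc_method_name):
    chip = NandChip()
    chip.blocks[0][:3] = ["a", "b", "c"]   # valid pages left in block 0
    chip.blocks[1][:2] = ["d", "e"]        # valid pages left in block 1
    getattr(chip, gc_method_name)([0, 1], dst=2)
    return chip.external_bus_transfers

print("controller-driven GC bus transfers:", demo("controller_gc"))  # 10
print("offloaded GC bus transfers:", demo("offloaded_gc"))           # 0
```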

    What we have here looks like a potential variant of the processing-in-memory (PIM) idea (think Samsung’s Aquabolt AI processor). We could call it PIN – processing-in-NAND. It is literally compute in storage, albeit very simple compute.

    This, Handy says, is only a possibility. He writes: “Micron may be doing something altogether different.  The company’s engineers may have chosen more appropriate functions to pull into the NAND.  In the end, though, the new NAND functions, whatever they are, probably accelerate the SSD while reducing the controller’s complexity and cost. I would also assume that Micron plans to keep these functions confidential, so that only Micron SSDs can take advantage of them.  This would give the company a distinct competitive edge.”

    It seems a powerful idea and Handy suggests we should “expect to see this approach adopted by other NAND flash makers in the future.”

    We must wait and see what Micron announces as it brings out 232-layer 3D NAND SSDs. The signs we’ll be looking for include better-than-expected SSD performance.

    Hyperconverger Diamanti discusses Kubernetes lifecycle system

    Interview: Diamanti started out as a hyperconverged infrastructure appliance vendor but then switched to supplying Kubernetes lifecycle management software that runs on its Spektra all-flash and bare-metal HCI system as well as other systems.

    We spoke to CPO/CTO and EVP Engineering Jaganathan Jeyapaul about some of the issues, including customers being more “thoughtful in their Kubernetes choices”, and more.

    Jaganathan Jeyapaul (JJ)

    The proposition behind Diamanti’s Spektra environment is that businesses need a dedicated environment, on-premises or in the public cloud, within which to run Kubernetes; it is not just another app to run inside a virtual machine or on a general-purpose server.

    Jeyapaul told us Diamanti’s storage software is purpose built to provide high performance and security for cloud native applications running on Kubernetes, adding that through the use of storage accelerator cards, it achieves “about 1 million IOPS per Kubernetes node.”

    Blocks & Files: Kubernetes is becoming table stakes for any storage supplier, either on-premises or in the public cloud. What are your thoughts around Kubernetes and its management being a feature and not a product?

    JJ: Kubernetes has become the cloud’s default operating system. Provisioning & managing Kubernetes clusters is not a standalone product anymore, rather it is an expected feature within any higher-level cloud product that manages a group of cloud applications and services running flavors of Kubernetes across multi-clouds and hybrid clouds (the service mesh). Diamanti’s control plane and orchestration product has the core ability to monitor, manage and administer Kubernetes across multi-clouds and hybrid-clouds through telemetry data intelligence.

    Blocks & Files diagram of Diamanti’s Spektra

    Blocks & Files: Will Kubernetes become a cloud-native orchestration tool for IT workloads in general and then also IT infrastructure in general? What are the pros and cons of this?

    JJ: Heavy-duty, data-intensive IT workloads (homegrown stateful apps, 3rd party software ex: Analytics) are typically containerized already, and hence run well within Kubernetes. These heavy-duty IT workloads must rely upon a scale-out architecture to achieve performance at scale. Kubernetes serves as an excellent orchestration tool for the scaled-out IT workload nodes. IT infrastructure similarly benefits through Kubernetes adoption for configuration management, deployment, and lifecycle management of infrastructure components.

    However, if enterprises aren’t thoughtful in their Kubernetes choices, they could lock themselves inadvertently into different flavors of Kubernetes, which would reduce the portability of their workloads. Diamanti is a great equalizer in that our Kubernetes-based orchestration platform levels the playing field by providing accelerated performance and security while being 100 percent portable between on-premise and cloud clusters.

    Blocks & Files: Does Kubernetes have any relevance to composable infrastructure, and if so, what do you think it is?

    JJ: Kubernetes enhances a well-designed composable infrastructure and by sharing similar design principles, Kubernetes-based composable infrastructure provides a complete elastic, low-cost (no over-provisioning) and durable infrastructure for data-intensive, scaled-out stateful applications at scale.

    Blocks & Files: How is access to Kubernetes secured?

    JJ: Kubernetes distributions are generally very secure (specialist vendors like Diamanti thoroughly scan their distros and are certified). However, there are vulnerabilities in the ways a Kubernetes cluster is deployed & administered that require careful investigation. Kubernetes clusters must follow security best practices & standards for RBAC, secret protection, infrastructure as code, container security and end-point protection.

    Blocks & Files: What would you say about the idea that Kubernetes is too low-level for mass enterprise use and an abstraction layer with automated functions needs to be erected over it? 

    JJ: Kubernetes has been adopted by about 90 percent of enterprises already. It is seen as the default cloud OS and platform, and its plugin-based architecture allows for building customized infrastructures to meet all enterprise workload needs. Kubernetes itself must be treated somewhat as a low-level function and a well-designed control plane abstraction for the management of Kubernetes nodes, plugins, and IT workloads is needed in most cases, e.g., Diamanti Spektra. 

    Blocks & Files: What will the Kubernetes world and ecosystem look like in 5 years’ time?

    JJ: Kubernetes will become ubiquitous and will serve as the portable “runtime” for micro apps/services for the serverless, edge and ambient computing use cases. It will be to micro cloud workloads what Java is to on-premises legacy applications (write once, run anywhere, any cloud, any device). 

    Quantum positions ActiveScale as HPC secondary storage

    Interview: Quantum, best known for its flash, disk and tape storage, also sells into the data-intensive world of HPC storage. We spoke to Quantum’s Eric Bassier, senior director of Product and Technical Marketing, about tape, cold storage, HPC capacities, and more during its exhibition at ISC High Performance 2022 in Hamburg, Germany.

    Blocks & Files: What are the use cases Quantum sees in the high performance computing market?

    Eric Bassier

    Eric Bassier:  We don’t do the primary storage for high performance computing. But we do have a lot of customers, different research laboratories, different organisations in life sciences, research, different firms, where they do use Quantum for the secondary storage. And a lot of those use cases are our StorNext file system, with some kind of a disk cache in front of tape.

    Blocks & Files:  What does an average Quantum secondary storage HPC installation look like, in terms of the disk capacity range and the tape capacity range customers might have?

    Eric Bassier:  It does depend on the use case. But in general, it might be 10 to 20 percent on disk, and 80 or even 90 percent on tape.

    Blocks & Files:  Could you describe a couple of customers?

    Eric Bassier:  One is the Texas Advanced Computing Center (TACC). They’ve built a centralized archive for their research facility based on StorNext and tape. And then the other public case study – it would represent why we want to be at a show like ISC – is what we’ve done at Genomics England. We’ve actually partnered with Weka.

    Blocks & Files:  Why is that?

    Eric Bassier:  As a file system Weka is much more suited for a typical HPC type of workload. StorNext really excels for streaming data, which is why it’s so good for large video files, movie files. Genomics England have 3.6 petabytes of storage for their Weka file system on flash.

    That’s where they’re ingesting the data from the genomic sequencers. And they now have over 100 petabytes of our ActiveScale object store. It’s totally their secondary storage. In that case more than 90 percent of their data would be considered secondary storage.

    ActiveScale diagram

    Blocks & Files:  Does that include tape?

    Eric Bassier:  Although Genomics England is not using tape today, as part of their ActiveScale system we are talking to them about it.

    Blocks & Files:  I’d imagine that, at the rate they’re accumulating data, they’ll start thinking of parts of the disk-based ActiveScale archive as holding colder data, and maybe there’s so much of it that they could offload it to tape.

    Eric Bassier:   It really is an ideal use case in many ways. A reseller partner of ours in the federal government space does a lot with AI and machine learning, and the head there has said a lot of data is cold, or inactive. But it’s only inactive temporarily.

    And their customers … can’t predict all the time when they’re going to want to bring it back from cold data. In many of those use cases, they’re perfectly happy if it takes five minutes, 20 minutes to get data back from tape. The speed of tape is not a factor for that. And they like the low cost and reliability and also the low power – the green aspect.

    Blocks & Files: With you having a focus on secondary storage for HPC market, then I guess you’re thinking that we need to accept data from the primary storage systems quickly and straightforwardly and easily, and we need to ship cold data that’s now warmed up to those primary storage systems in the same way. Is there a workflow aspect to this?

    Eric Bassier:  Yes. Any type of research is going to have a workflow associated with it. They’re going to have a stage where you have scientists actively analysing [or] working on the data, and then they would move it to less expensive storage, to an archive. Now, I think one thing that Quantum has done, where we have a very, very unique offering, is the way that we’ve integrated tape with ActiveScale. I think that it’s the first time where it’s not a tape gateway. 

    In other words: we built an object store where you can have a single namespace across disk and tape. The way an HPC application would interact with it is to use standard S3 to read and write objects to disk, and then use either the S3 Glacier API set to put objects on tape and restore objects from tape, or use what are called AWS lifecycle policies, which are part of the standard S3 API set.

    There are other solutions out there. There are gateways to put data on tape. But now you’re talking about different namespaces, different user interfaces, and multiple key management points. What we’ve done with ActiveScale, we think, is unique, because it’s the only object store where you can create an object storage system on both disk and tape; you can take advantage of the economics of tape. I think we’ve abstracted the way that an application has to interact with tape in a way that’s better than what anyone’s done in the past.
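
For readers unfamiliar with the mechanism Bassier describes, the sketch below shows what a standard S3 lifecycle transition and a restore request look like from an application’s point of view; the endpoint, bucket name and 90-day rule are hypothetical placeholders, not ActiveScale specifics:

```python
# Sketch of the standard S3 lifecycle mechanism described above, assuming an
# S3-compatible object store that honours Glacier-style transitions. The
# endpoint, bucket and the 90-day rule are hypothetical, not ActiveScale details.
import boto3

s3 = boto3.client("s3", endpoint_url="https://objectstore.example.internal")

# Transition objects under the "raw-reads/" prefix to the GLACIER storage
# class (i.e. a colder tier such as tape) after 90 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="genomics-archive",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "cold-after-90-days",
            "Filter": {"Prefix": "raw-reads/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
        }]
    },
)

# Later, an application asks for a cold object to be staged back for reading
s3.restore_object(
    Bucket="genomics-archive",
    Key="raw-reads/sample-0001.bam",
    RestoreRequest={"Days": 7},
)
```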

    Blocks & Files: Are there any other advantages to ActiveScale for HPC users?

    Eric Bassier:  The second, really the key innovation for us, is the way that we do the erasure encoding of the objects on tape – that’s where we have patents. And why that matters is you get much better data durability on tape, and you get much better storage efficiency. Instead of making three copies of a single file, where you’ve tripled your tape capacity, we erasure code the object, create the parity bits, and stripe it over tape, which is more efficient. The other thing that it unlocks through the way we do erasure encoding is that, most of the time, we can recover an object from cold storage with just a single tape mount. And it turns out that’s been a really difficult technical challenge to enable this concept for many, many years.
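
A back-of-envelope comparison shows why erasure coding beats triple-copying on tape; the 16+4 geometry below is purely illustrative and is not Quantum’s actual scheme:

```python
# Back-of-envelope comparison using hypothetical parameters (not Quantum's
# actual erasure-code geometry): raw tape capacity needed to store 1 PB of
# objects with 3-way replication versus a 16+4 erasure code.

logical_pb = 1.0

replication_factor = 3
raw_replication = logical_pb * replication_factor        # 3.0 PB of tape

data_shards, parity_shards = 16, 4                        # illustrative 16+4 code
overhead = (data_shards + parity_shards) / data_shards    # 1.25x
raw_erasure = logical_pb * overhead                       # 1.25 PB of tape

print(f"3 copies: {raw_replication:.2f} PB, 16+4 erasure code: {raw_erasure:.2f} PB")
print(f"tape saved: {1 - raw_erasure / raw_replication:.0%}")   # ~58%
```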

    Blocks & Files: I’m thinking that you’ve got something here that is in its early days.

    Eric Bassier:  We view this as an area where we are going to grow. We actually think tape is going to be more relevant because of this type of a use case. But it’s important to put that in context. The overall tape storage market is still under $1 billion versus the disk market and the flash market, which are massive – many, many billions. 

    What I will say, though, is the tape business is growing. Our tape revenues are increasing. And the reason is because of the way that the largest hyperscalers are using tape. Effectively, they’re using it behind object stores with software that they’ve developed themselves. And here is the premise of our strategy. 

    The whole way we’ve developed the portfolio we have is our belief that HPC organizations have basically the same need at maybe a slightly smaller scale, and maybe a few years back, too. They’re not going to invest the four or five years of engineering time to develop their own object store stack code. So we’ve said: we’ve built this for you, we’ve put it in a box. If you’d like AWS Glacier but don’t want to put all your HPC data in the public cloud, we’ve built Glacier in a box for you. We can deploy it at your site, we can deploy it at multiple sites. And where we’re going is that maybe we might host part of that for you. That’s where our roadmap takes us.

    Blocks & Files: So that could be as a Quantum cloud. You could have, for example, some Quantum ActiveScale systems in colocation sites or an Equinix centre or something like that, and make that available to HPC customers?

    Eric Bassier:  That is something that we are considering. And just to make a comment on Genomics England. They take advantage of the capability of our ActiveScale object store software to do what we call geo-spreading. So they have object store systems that are deployed at three sites, and the ActiveScale software geo-spreads the objects across all three sites. So you actually have a single system, a single namespace, that’s spread across three sites. And we can do that either on disk or tape. So conceptually, you can have disk at three locations and a tape system at three locations. But it’s a single namespace, a single object store.

    But we have many customers that say, well, I might have two sites, but I don’t have three or, you know, I’ve only got one site. Would Quantum be willing to host the other two? And that’s where I think our customers are going to lead us in terms of what’s the right model.

    Blocks & Files: So you could provide a component of a customer’s private cloud? 

    Eric Bassier:  Correct. I think really, you know, private cloud is one of these things that means different things to different people. But that is where we’re getting a lot of the early customer engagements that we have. That is how they’re expressing their initiative. They’re saying, ‘hey, we want to build a private cloud for archival data’. And we say, ‘we can help you build a private cloud for your archival data’. So yes, we think we’re pretty excited about that. 

    Comment
    Blocks & Files thinks Quantum, with StorNext and ActiveScale using both disk and tape, is well positioned to pick up a number of HPC customers as they accumulate more data than they can store on their primary (and possibly flash) storage systems and need to tier older data off to nearline disk and then to tape. The single namespace and geo-spreading unify tape and disk into effectively a single object store, which potentially makes life easier for HPC admin staff.

    The possibilities of liquid memory

    Researchers at nano- and digital technology R&D center IMEC are suggesting memory could become liquid in the future.

    Engineering boffins at Belgium’s IMEC (Interuniversity Microelectronics Centre) presented a paper on liquid memory at the 2022 International Memory Workshop (IMW). Maarten Rosmeulen, program director of Storage Memory at IMEC, identifies DNA storage as a post-NAND high-density but slow archival technology in an IMEC article. He proposes that liquid memory could replace disks for nearline storage at some point in the future.

    Two types of memory, colloidal and electrolithic, are said to have the potential for ultra high-density nearline storage applications and might have a role between disk and tape from 2030 onwards “at significantly higher bit per volume but slower than 3D-NAND-Flash.” Rosmeulen says: “We anticipate that with these approaches, the bit storage density can be pushed towards the 1Tbit/mm2 range at a lower process cost per mm2 compared to 3D-NAND-Flash.” 

    Colloidal memory concept

    In colloidal memory, two types of nanoparticles are dissolved in water contained in a reservoir. They carry data symbols. The reservoir has an array of capillaries through which the nanoparticles flow one at a time. “Provided that the nanoparticles are only slightly smaller than the diameter of the capillaries, the sequence in which the particles (the bits) are entered into the capillaries can be preserved.” The bit sequences encode information and the nanoparticles can be sensed by CMOS peripheral circuit-controlled electrodes at the entrance to each capillary tube for writing and reading.

    Frequency-dependent dielectrophoresis is being investigated as a write mechanism. “A selective writing process can be created by choosing two particles that respond differently to the applied frequency (attractive versus repulsive).” R&D with polystyrene nanoparticles is ongoing “to fine tune the concept and provide the first proof of principle on a nanometer scale.”

    Electrolithic memory also has a fluid reservoir with an array of capillary tubes. Two kinds of metal ions – A and B – are dissolved in it and electro-deposition and electro-dissolution techniques are used for reading and writing information.

    Electrolithic memory concept

    There is a working electrode at the base of each capillary tube, which is made of an inert metal like ruthenium, and the reservoir also has a single counter electrode. A CMOS integrated circuit connects to the dense array of working electrodes. The common counter electrode plus the reservoir and a working electrode form an electrochemical cell for each capillary tube.

    The article says: “By applying a certain potential at the working electrode within the capillary, thin layers of metal A can be deposited on the electrode. Metal B will behave similarly but deposits at a different onset potential – determined by its chemical nature.” Information can be encoded in a stack of alternating layers on each working electrode. For example, “1nm of metal A can be used to encode binary 0, while 2nm thick layers of A encodes a binary 1. A layer of metal B of fixed thickness (e.g. 0.5nm) can be used to delineate subsequent layers of A.”
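
As a thought experiment (not IMEC’s implementation), the layer-thickness scheme described in the article can be modelled in a few lines:

```python
# Toy model of the layer-stack encoding described above: layers of metal A
# carry the bits (1 nm = 0, 2 nm = 1) and a fixed 0.5 nm layer of metal B
# separates them. Purely illustrative, not IMEC's implementation.

A_ZERO_NM, A_ONE_NM, B_DELIM_NM = 1.0, 2.0, 0.5

def encode(bits):
    """Return the layer stack (metal, thickness_nm) for a bit string, bottom-up."""
    stack = []
    for i, bit in enumerate(bits):
        stack.append(("A", A_ONE_NM if bit == "1" else A_ZERO_NM))
        if i < len(bits) - 1:
            stack.append(("B", B_DELIM_NM))   # delimiter between data layers
    return stack

def decode(stack):
    """Recover the bit string by reading back the thickness of each A layer."""
    return "".join("1" if t >= 1.5 else "0" for metal, t in stack if metal == "A")

layers = encode("1011")
print(layers)                                       # [('A', 2.0), ('B', 0.5), ...]
print(decode(layers))                               # "1011"
print("stack height:", sum(t for _, t in layers), "nm")
```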

    Rosmeulen says: “These new liquid-based memories are still in an exploratory research stage, with the electrolithic memory being the most advanced. Nevertheless, industry has already shown considerable interest in these concepts.” 

    He adds: “To become a viable storage solution for nearline applications, the technology must also have adequate response time, bandwidth (e.g. 20Gbit/s), cycling endurance (10³ write/read cycles), energy consumption (a few pJ to write a bit), and retention (over 10 years). These evaluations will be the subject of further research, building on IMEC’s 300mm liquid memory test platforms with both colloidal and electrolithic cells in different configurations.”

    Dell reports bumper Q1 results despite supply chain woes


    Dell grew Q1 2023 revenues to a record level on a year-over-year basis with commercial PCs up 22 percent, server revenues up 16 percent, and storage up 9 percent, while still grappling with a troublesome supply chain.

    In the quarter ended April 30, Dell revenues were $26.1 billion, up 16 percent: the Infrastructure Solutions Group – servers, networking and storage – delivered $9.3 billion, and the Client Solutions Group – PCs – contributed $15.6 billion. Profit was $1.1 billion, a rise of 17.3 percent on a year ago. This was Dell’s fifth consecutive quarter of growth.

    Vice chairman and co-COO Jeff Clarke said in a statement: “We followed a record FY22 with a record first quarter FY23… with growth across our business units.”

    Co-COO Chuck Whitten talked of the benefits of having “a strong, geographically and sector-diverse business covering the edge to the core data center to the cloud,” proclaiming: “We are positioned to pursue growth wherever it materializes in the IT market, given the predictability, durability and flexibility in our business.”

    Financial summary

    • Operating income: $1.6 billion, up 57 percent year-on-year
    • Operating cash flow: -$0.3 billion, primarily driven by annual bonus payout and seasonal revenue decline
    • Diluted EPS: $1.84, up 36 percent and a record
    • Recurring revenue: $5.3 billion, up 15 percent
    • Cash & Investments: $8.5 billion, up 15 percent

    Core debt, which was at $42.7 billion in fiscal 2019, has fallen to $16.5 billion as of Q1 2023. That’s down 41 percent year-on-year.

    Within ISG, servers and networking provided $5 billion in turnover, up 22 percent, with storage up 9 percent and generating $4.2 billion. This follows several quarters of slower storage sales.

    Analyst Aaron Rakers told subscribers: “Dell delivered what we think should be considered very strong F1Q23 ISG results.”

    CSG saw commercial PC revenues rise 22 percent to $12 billion and consumer revenues increase 3 percent to $3.6 billion.

    Rakers said: “Dell’s strong CSG (PC) results reflect the company’s ability to continue to leverage a leadership position in software + peripherals – something we expect to be continually driven by the need for enterprises to support hybrid work.”

    ISG is a key pillar of Dell’s business and the company believes it has ample room to grow. The ISG total addressable market was $165 billion in 2021, and is forecast to grow to $216 billion in 2025. Dell noted a shift in IT spend during Q1 from consumers and PCs to datacenter infrastructure, and expects CSG growth to moderate during fiscal 2023. It will moderate most in consumer PCs and ones running Chrome, less so in commercial PCs, which represent over 70 percent of Dell’s PC business.

    Earnings call

    In the earnings call, Clarke said: “We continue to execute quite well in a complex macro environment… We experienced a wide range of semiconductor shortages that impacted CSG and ISG in Q1. In addition, the COVID lockdowns in China caused temporary supply chain interruptions in the quarter. As a result, backlog levels were elevated across CSG and ISG exiting the quarter.

    “We expect backlog to remain elevated through at least Q2 due to current demand and industry-wide supply chain challenges.” But the supply chain problems are being worked through. “People are coming to Dell because there’s confidence in our supply chain to deliver.”

    Whitten said IT budgets remain healthy. “What we don’t see is an immediate move to go after a reduction in IT budgets. I mean, right now, it is a very healthy infrastructure environment.”

    Revenues in the next quarter are expected to be between $26.1 billion and $27.1 billion, up 10 percent at the mid-point. Full fiscal 2023 revenues are being guided to grow approximately 6 percent over fiscal 2022’s $101.2 billion. This is despite macroeconomic concerns such as the geopolitical environment, inflation, interest rates, slowing economic growth, currency, and continued disruption to supply chains and business activity.

    Dell expects its relationship with VMware to continue unchanged if Broadcom completes its purchase.

    Storage news ticker – May 26

    Object storage software supplier Scality says its S3 Object Storage is available on the HPE GreenLake Cloud Services platform for on-premises use with cloud-like data services that meet data sovereignty requirements. HPE and Scality have co-deployed over an exabyte of storage, with hundreds of joint customers in more than 40 countries. The two say they provide freedom from being beholden to Amazon or the public cloud – therefore, no expensive data access or egress fees. Read up on the background here.

    Amazon has added new file services to AWS. AWS Backup now allows you to protect FSx for NetApp ONTAP. Amazon EFS has increased the maximum number of file locks per NFS mount, enabling customers to use the service for a broader set of workloads that leverage high volumes of simultaneous locks, including message brokers and distributed analytics applications. Amazon FSx for NetApp ONTAP is now SAP-certified for workloads including S/4HANA, Business Suite on HANA, BW/4HANA, Business Warehouse on HANA, and Data Mart Solutions on HANA. A Jeff Barr blog provides more details.

    Data protector N-able announced N-hanced Services for its MSP partners with onboarding, support, custom solutions, and migration services to assist MSPs with integrating and consolidating remote monitoring and management platforms. Leo Sanchez, VP of support and services at N-able, said: “We know that our partners are struggling with challenges like labor shortages, security, and more and more devices in more increasingly complex environments. With this new offering, we’re putting our experts to work to help them get the most out of our solutions and optimize their current workforce so they can actually do more with less staff. They can leverage bespoke services that meet them where they’re at and get to a place where they see real value faster.”

    NetApp has closed the acquisition of Instaclustr, a database-as-a-service startup. A blog by EVP  Anthony Lye said: “We just closed our acquisition of Instaclustr, marking a huge step in the transformation of NetApp. Think about it, in seven years, an on-premises storage company has built unique relationships with the three biggest clouds. Today, we’re offering a rich set of data services and hold a leading position in Cloud Operations (CloudOps).” He claimed: “Instaclustr allows us to deliver on our promise of more cloud at less cost.”

    Nutanix has signed up the UK’s BUPA private health organization as a customer, providing a two-site IT platform (100 nodes in total) for BUPA’s 3,000-plus Citrix desktop-as-a-service (DaaS) users. BUPA has also begun using Nutanix Calm to automate management of its DaaS computing system and allow for deployment of this and other workloads to any public cloud. BUPA said: “Citrix modules that previously took half an hour or more to boot were now starting in seconds.” Read a case study here.

    High-performance NVMe array supplier Pavilion Data announced a partnership with Los Alamos National Laboratory (LANL) to co-develop and evaluate the acceleration of analytics by offloading analytics functions from storage servers to the storage array, minimizing data movement by enabling data reduction near the storage. LANL is moving its I/O from file-based to record- or column-based, to enable analytics using tools from the big data community. It has shown 1,000x speed-ups on analytics functions by using data reduction near the storage devices via its DeltaFS technology. We are told the data processing algorithms of Pavilion HyperOS, coupled with the performance density of its HyperParallel Flash Array, provide a fast computational storage array capability enabling analytics offloads at scale.

    Qumulo has introduced non-disruptive rolling Core software upgrades across a cluster of nodes. It’s available immediately for Network File System (NFS) v3.x, with a planned expansion to support NFS v4.x this summer. The rolling update process within Qumulo Core allows for node reboots with no downtime to the cluster. Stateful requests can be handed off to other processes on an individual node, as other processes update. Full platform upgrades update underlying component firmware and operating systems, giving the administrator an option to perform the upgrade as a parallel upgrade (with minimal downtime) or as a rolling, non-disruptive upgrade (with one node going offline and upgraded at a time). Non-disruptive rolling upgrades give storage administrators the freedom to run upgrades during normal operating hours instead of during off-hours, weekends or holidays.

    Data protector and security supplier Rubrik has hired Michael Mestrovich as CISO. At the Central Intelligence Agency, Mestrovich led cyber defense operations, developing and implementing cybersecurity regulations and standards, and directing the evaluation and engineering of cyber technologies. He also served as the Principal Deputy Chief Information Officer for the US Department of State where he was responsible for managing the department’s $2.6 billion, 2,500-person global IT enterprise.

    Snowflake’s revenue for its first quarter of fiscal 2023 ended April 30 was $422.4 million, up 85 percent year-on-year. Product revenues were up 84 percent to $394.4 million. The net revenue retention rate was 174 percent. It now has 6,322 total customers, up from 5,994 last quarter, and 206 customers with trailing 12-month product revenue greater than $1 million. Snowflake recorded a loss of $165.8 million. Chairman and CEO Frank Slootman said: “We closed the quarter with a record $181 million of non-GAAP adjusted free cash flow, pairing high growth with improving unit economics and operational efficiency. Snowflake’s strategic focus is to enable every single workload type that needs access to data.” Guidance for the next quarter is for revenues of $435-$440 million, compared to $272.2 million revenues in Q2 2022.

    Snowflake’s revenue growth is accelerating

    … 

    NAND industry research house TrendForce says NAND flash bit shipments and average selling prices fell by 0.5 percent and 2.3 percent respectively in Q1 2022, causing a 3 percent quarterly decrease in overall industry revenue to $17.92 billion. The market was oversupplied, resulting in a drop in contract prices in Q1, with the decline in consumer-grade products more pronounced. Although enterprise SSD purchase order volume has grown, demand for smartphone parts has weakened and inflation is rising. Looking to Q2, the same dynamics are expected to continue to slow the growth of consumption. However, the ongoing shift of large North American datacenters to high-capacity SSDs will drive enterprise SSD growth by 13 percent.

    Virtium has announced a new StorFly XE class M.2 NVMe SSD product portfolio with up to 10x more endurance than TLC SSDs – because they use pseudoSLC flash, a hybrid of 2-bit per cell MLC using firmware to emulate the storage states of 1-bit per cell SLC, with up to 30,000 PE cycles. This pSLC NAND has much better data retention performance at higher temperatures, yet at a fraction of the cost compared to SLC. The XE class M.2 NVMe SSDs are configurable and include vendor-specific commands to tune critical parameters including power and capacity. They also provide extremely steady performance over the full SSD capacity and over the full -40 to 85°C temperature range, meaning they won’t suffer the frequent and erratic performance drops often found in client and enterprise-class SSDs. 

    Toshiba edging nearer to 26TB capacity nearline disk drives?

    Japanese disk drive media manufacturer Showa Denko K.K. (SDK) is shipping disk drive platters for 26TB capacity drives. It has not revealed its customer, but Toshiba uses Showa Denko platters in its nearline 3.5-inch disk drives.

    Nearline disks are high-capacity (15TB-plus) drives spinning at 7,200rpm, typically with a SATA interface.

    SDK says its new disk media supports energy-assisted magnetic recording and shingled magnetic recording (SMR) because it has fine crystals of magnetic substance on the aluminum platter surface. It includes technology to improve the media’s rewrite-cycle endurance. SDK says it produces disk platter media by growing epitaxial crystals at the atomic level, forming more than ten layers of ultra-thin films with a total film thickness of no more than 0.1μm.

    SMR drives need host management of data writes to zones of overlapping write tracks – a data change in any part of the zone requires the whole zone to be copied, edited, and rewritten. This effectively limits their use to hyperscaler customers who can have their system software modified to manage SMR drives. Such drives are unlikely to appear in standard enterprise datacenters or video-surveillance farms or even desktop PCs, unless system software is altered to support them.
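
A toy model of host-managed SMR shows why a one-block change turns into a whole-zone rewrite; it is a simplification and does not reflect any specific drive’s ZBC/ZNS command set:

```python
# Simplified model of why host-managed SMR needs zone-level rewrites: tracks
# in a zone overlap, so the zone can only be written sequentially. Changing
# one block means read-modify-reset-rewrite of the whole zone. Toy model only.

class SmrZone:
    def __init__(self, blocks=256):
        self.data = [b"\x00"] * blocks
        self.write_pointer = 0            # sequential-write-only constraint

    def append(self, block):
        self.data[self.write_pointer] = block
        self.write_pointer += 1

    def reset(self):                      # "erase" the zone before rewriting
        self.write_pointer = 0

def rewrite_block(zone, index, new_block):
    """Host-managed update: copy the zone out, patch it, rewrite sequentially."""
    snapshot = zone.data[:zone.write_pointer]     # read the whole written zone
    snapshot[index] = new_block                   # modify the one block
    zone.reset()
    for blk in snapshot:                          # rewrite everything in order
        zone.append(blk)

zone = SmrZone()
for i in range(100):
    zone.append(bytes([i]))
rewrite_block(zone, 42, b"\xff")                  # 1-block change, 100-block rewrite
print(zone.data[42], zone.write_pointer)          # b'\xff' 100
```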

    Like Western Digital, Toshiba uses energy-assisted recording to strengthen data writing in the very small bit areas of 18TB-plus drives. Its specific technology is called MAMR (microwave-assisted magnetic recording). Seagate has an alternate HAMR (heat-assisted magnetic recording) technology in development.

    Western Digital and Seagate are shipping 20TB conventionally recorded (CMR) nearline drives, with Toshiba at the 18TB level. Seagate is sample shipping a 20TB-plus drive, while Western Digital is sample shipping 22TB CMR drives and 26TB shingled magnetic recording (SMR) drives. SDK and Seagate are partnering to build HAMR disk drives.

    Toshiba has a 20TB CMR HDD on its roadmap and a 26TB SMR drive. The Showa Denko 26TB HDD media news indicates that this could be announced for sample shipping quite soon.

    We could see both Western Digital and Toshiba shipping 26TB SMR drives in the summer but not Seagate, unless its 20TB-plus drive has an SMR variant at the 26TB capacity level.

    SDK said it plans to produce nearline disk drive media with a greater than 30TB capacity by the end of 2023.

    Nutanix lowers outlook for fiscal ’22 amid server delays and sales rep departures

    Hyperconverged infrastructure software vendor Nutanix is warning that supply chain issues and sales reps leaving for pastures new will result in weaker-than-expected sales for the remainder of its financial year.

    Revenues in the company’s Q3 of fiscal 2022, ended April 30, were $404 million, up 17 percent year-on-year, beating the company’s own forecast of between $395 million and $400 million. It reported a net loss of $111.6 million, versus a net loss of $123.6 million in the year-ago period.

    President and CEO Rajiv Ramaswami said during an earnings call: “Our third quarter reflected continued solid execution, demonstrating strong year-over-year top and bottom line improvement.

    “Late in the third quarter, we saw an unexpected impact from challenges that limited our upside in the quarter and affected our outlook for the fourth quarter. Increased supply chain delays with our hardware partners account for the significant majority of the impact to our outlook, and higher-than-expected sales rep attrition in the third quarter was also a factor. We don’t believe these challenges reflect any change in demand for our hybrid multicloud platform, and we remain focused on mitigating the impact of these issues and continuing to execute on the opportunity in front of us.”

    Full FY22 revenues are now forecast to be $1.55 billion, an 11.2 percent year-on-year increase. Nutanix had predicted a revenue range of $1.625 billion to $1.63 billion three months ago.

    Nutanix financial chart
    Significant net loss improvement in the two most recent quarters

    Customers run Nutanix software on third-party server hardware, and shipping delays impacted revenues and orders. CFO Rukmini Sivaraman said: “We saw these supply chain challenges impact us late in Q3, which limited our upside in Q3, and we expect these trends to continue in Q4.”

    It won’t be just a short-lived issue, Ramaswami said: “We expect that these challenges in the supply chain are likely to persist for multiple quarters.” He added that Nutanix wasn’t seeing any change in underlying demand.

    Annual Recurring Revenue of $1.1 billion was up 46 percent on the year.

    Sivaraman said: “Q3 sales productivity was in line with our expectations… We saw rep attrition worse than expected in Q3, resulting in lower-than-expected rep headcount entering Q4. Under the leadership of our new Chief Revenue Officer, Dom Delfino, our sales leaders remain focused on getting rep headcount to our target level via both better retention and increased hiring efforts.”

    Nutanix’s prior CRO, Chris Kaddaras, left in October 2021 to join startup Transmit Security, just seven months after starting at Nutanix. Delfino was hired from Pure, commencing his role in December 2021. Five months in and he’s losing sales reps. They are going, Ramaswami said, to startups with “the promise of quick IPO riches.”

    Ramaswami suggested things would start to improve after the next quarter. Delfino is working on increasing sales rep productivity, “doubling down on training and enablement,” as well as “improved territory coverage and [a] higher level of quota attainment.” 

    Wells Fargo analyst Aaron Rakers told subscribers that sales reps had left for other start-up opportunities because they were “unable to make quotas,” and said: “We believe this could take a few quarters to rectify.”

    The customer count rose by 586 in the quarter to 21,980. This is down on the 660 added a year ago and the 700 in the prior quarter. Nutanix says it saw a year-over-year improvement in win rates against VMware and other competitors. 

    Free cash flow was minus $20.1 million. Ramaswami said Nutanix was working to change this: “We continue to prioritize working towards sustainable free cash flow generation in FY ’23.”

    The fourth-quarter outlook is for revenues of between $340 million and $360 million, weighed down by customer order delays due to server hardware availability and the lower-than-expected sales rep headcount. At the $350 million mid-point, this is a 10.4 percent decrease on the year-ago Q4. William Blair analyst Jason Ader told subscribers this is “an eye-popping $89 million below consensus.”

    Ramaswami said: “We don’t believe our reduced outlook is a reflection of any change in our market opportunity or demand for our solutions. That said, we are focused on mitigating the impact of these challenges and continuing to drive towards profitable growth.”