Cohesity is expanding its Data Security Alliance ecosystem with six Data Security Posture Management (DSPM) vendors: longstanding partner BigID, Cyera, Dig Security, Normalyze, Sentra, and Securiti.
The Data Security Alliance was set up in November 2022 with nine members. At the time CEO Sanjay Poonen said: “We are partnering with these industry heavyweights so they can leverage our platform, the Cohesity Data Cloud, to help customers easily integrate data security and resilience into their overall security strategy.”
The expansion is part of Cohesity’s DSPM approach, which focuses on the data itself: technologies and processes used to identify sensitive data, classify and monitor it, and reduce the risk of unauthorized access to critical data.
We understand that, with the DSPM market still in its early stages, Cohesity has looked at the plethora of players in the space and, rather than picking a single vendor, partnered with those holding the largest share of the market, letting its customers decide what is right for them to best protect against today’s cyber threats.
Cohesity says DSPM is a three-phase process, illustrated by the sketch after this list:
Discover and classify data automatically – it continuously finds and labels sensitive, proprietary, or regulated data across all environments, whether on-prem, hybrid, or multicloud.
Detect which data is at risk and prioritize fixing the problem area – by automatically and continuously monitoring for any violations of an organization’s security policies.
Fix data risks and prevent them from reoccurring – any DSPM-discovered data access problem is fixed and the organization’s security posture and policies adjusted to stop a repeat event.
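As a rough illustration of those three phases, here is a minimal, hypothetical Python sketch; the record set, detection patterns, and remediation step are invented for illustration and do not represent Cohesity’s or any partner’s implementation.

```python
import re

# Phase 1 helpers: hypothetical sensitive-data patterns used for classification.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

# Toy records spread across on-prem and cloud "environments".
records = [
    {"location": "on-prem/file01", "text": "Invoice 2023-09", "public": False},
    {"location": "aws/s3/bucket-a", "text": "Employee SSN 123-45-6789", "public": True},
]

def discover(recs):
    """Phase 1: find and label sensitive data wherever it lives."""
    for r in recs:
        r["labels"] = [name for name, pat in PATTERNS.items() if pat.search(r["text"])]
    return recs

def detect(recs):
    """Phase 2: flag policy violations - here, labeled data that is publicly exposed."""
    return [r for r in recs if r["labels"] and r["public"]]

def remediate(violations):
    """Phase 3: fix the exposure so the same violation cannot simply recur."""
    for r in violations:
        r["public"] = False
        print(f"Revoked public access on {r['location']} ({', '.join(r['labels'])})")

remediate(detect(discover(records)))
```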
Last month Tata Consultancy Services joined Cohesity’s Data Security Alliance, bringing its TCS Cyber Defense Suite portfolio into play. At that point the Data Security Alliance contained 15 members, including BigID, Cisco, CyberArk, Mandiant, Netskope, Okta, Palo Alto Networks, PwC UK, Qualys, Securonix, ServiceNow, Splunk, TCS, and Zscaler. BigID is not a new member this month, so there are five new members, taking the total to 20 (19 when Cisco completes its Splunk acquisition).
Cohesity claims that its new DSA group represents the majority of the DSPM market, which means it has 20 partners helping to push its security messaging and services. It is particularly concerned about data in the public cloud, saying 82 percent of breaches involve data stored there. Cloud adoption continues to increase, and copies of data are often shared between clouds without oversight by IT or security, resulting in the growth of shadow data, which is untracked.
Availability
The integration with Normalyze, Cohesity’s initial design partner, is expected to be available within 30 days. The company’s partnership with BigID on enterprise-grade, AI-powered data classification grows through this new integration with SmallID (BigID’s DSPM product) and is expected to be available in 60 days. Additional DSPM partner integrations will be available in the coming months.
CTERA cloud file system users now have access to both file and object data through a single Fusion interface.
The company provides a global file system which can be hosted on a public cloud or on-premises object store, and accessed from a datacenter or remote site using an edge caching device to speed data access. Files are synced and shared between users for collaboration. Now CTERA is providing simultaneous SMB, NFS, and S3 access to the same data set. Cloud applications can connect directly to enterprise file repositories, with no need for an intervening data copy to separate S3 buckets.
Oded Nagel
CTERA CEO Oded Nagel said in a statement: “Fusion extends our vision for the enterprise DataOps platform, enabling organizations not only to store their unstructured data efficiently and securely, but also to unlock its greater potential. By offering easy programmatic access to data, CTERA Fusion allows organizations to implement their own custom data pipelines that operate across multiple access protocols, cloud environments, and remote locations.”
The Fusion software includes:
Single Namespace Across File and Object – interact with data generated at the edge using standard object storage S3 protocols, or access cloud-generated data from the edge using NAS protocols.
Data transfer capabilities like multipart uploads and pre-signed URLs, making ingestion and sharing of large files more efficient (see the sketch after this list).
High availability and scalability.
All data is secured in transit via TLS and encrypted at rest, providing an added layer of protection.
Compatibility with all features of the CTERA Enterprise File Services Platform, including AI-based ransomware protection (CTERA Ransom Protect), WORM compliance (CTERA Vault), and global file collaboration.
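Because Fusion presents a standard S3 endpoint, ordinary S3 tooling should be able to work with file-generated data. Here is a minimal boto3 sketch; the endpoint URL, bucket, object key, and credentials are placeholders for illustration, not documented CTERA values.

```python
import boto3
from botocore.config import Config

# Placeholder Fusion S3 endpoint and credentials - substitute real values.
s3 = boto3.client(
    "s3",
    endpoint_url="https://fusion.example.com",
    aws_access_key_id="FUSION_ACCESS_KEY",
    aws_secret_access_key="FUSION_SECRET_KEY",
    config=Config(signature_version="s3v4"),
)

# Read a file that was written at the edge over SMB/NFS, now visible as an object.
obj = s3.get_object(Bucket="engineering-share", Key="designs/part-001.step")
data = obj["Body"].read()

# Generate a pre-signed URL so a collaborator can fetch the same file for one
# hour without needing credentials of their own.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "engineering-share", "Key": "designs/part-001.step"},
    ExpiresIn=3600,
)
print(url)
```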
Nasuni’s analytics connector creates a read-only temporary copy of file data in S3 so that cloud analytics services can be run against it, but it does not natively expose an S3 protocol. Based on the information available on Egnyte’s and Panzura’s websites, it appears they do not expose an S3 protocol endpoint either. Egnyte, for example, has a virtualized application that can be placed in AWS to synchronize content stored in Egnyte back to AWS, and it provides a way to archive completed projects into lower-cost AWS S3 storage. The file and object domains are kept at arm’s length.
A Panzura spokesperson told us: “Panzura provides SMB and NFS access to the same dataset today. The company refers to that as ‘mixed mode.’ It is in progress on S3 access, and it’s on their roadmap for 2024.”
Chris Evans is the consultant and analyst behind Architecting IT. He has a reputation for shrewd and insightful analysis of storage technologies and suppliers. We thought we’d take advantage of that and ask him for his views on some mainstream storage suppliers and new technologies such as ephemeral cloud and decentralized storage.
Blocks & Files: How can existing block storage array vendors who have ported their storage OS to the public cloud – for example, NetApp with its ONTAP-based AWS, Azure and Google offerings and Pure with Cloud Block Store – compete with the ephemeral cloud storage quartet: Dell (PowerFlex), Lightbits, Silk, and Volumez?
Chris Evans
Chris Evans: Vendors that have ported natively to the cloud are integrated into the ecosystem of the cloud itself. This means the cloud vendor manages scaling, resiliency, availability, and upgrades. It’s a true service. Those platforms are also integrated using knowledge on how the cloud platform itself operates behind the scenes. Currently this means only NetApp and Microsoft (Windows File Server). Every other vendor is at arm’s length from the internal operation of the platform, so must design and build accordingly. Their solutions are cloud-aware, so can scale, manage redundancy, etc. but crucially, those vendors don’t have visibility of the internal workings of the platform, so must make assumptions on how the cloud operates, based on observation. The key next step for vendors like Pure, Volumez and Lightbits is to build relationships with the platform vendors that result in native integration. This blog post covers the definition of cloud native in this context.
So, from a technology perspective, NetApp has an advantage. However, remember that NetApp is an OEM service, so the customer is owned by the cloud platform. That presents a problem for NetApp to upsell to those cloud customers as they don’t own the customer/vendor relationship.
Blocks & Files: How can the existing on-premises unstructured data storage suppliers compete with decentralized storage startups like Storj, Cubbit, and Impossible Cloud?
Chris Evans: The decentralized storage model is an interesting one that we couldn’t have developed 20 years ago.
Fast networking, cheap compute (for erasure code calculation) and the global distribution of datacenters provide the perfect storm for the technology. The greatest challenge for the decentralized model is regulation.
If data needs to be physically stored in-country, then the benefits of the decentralized model are lost. The other question to ask here is about scale. Decentralized storage is not about a few terabytes of filer data, but about long-term storage of petabytes to exabytes of capacity, where the usage profile changes over time.
On-premises vendors need to decide how much of the future storage pie they want. Small-scale (< 1PB) will continue to be practical for on-premises (especially for data under regulatory rules), but many enterprise businesses will consider pushing data to the decentralized storage cloud as the best financial option, especially where that data’s value is unclear (risk of loss vs cost of retaining). On-premises vendors need to team up with or acquire the likes of Backblaze or Wasabi and be able to offer both an on-prem and cloud (and integrated) solution.
Blocks & Files: How would you compare Storj, Cubbit, and Impossible Cloud with cryptocurrency, anti-fiat currency, web3-style decentralized storage suppliers like FileCoin in terms of likely business success?
Chris Evans: I expect the decentralized storage vendors are desperately trying to distance themselves from the crypto similarities, due to the recent issues in the cryptocurrency market. I would look at decentralization as the natural evolution of RAID and erasure coding. RAID protected data within a system; erasure coding extended this to multiple hardware platforms and geodispersal; now decentralization provides the capability to buy erasure-coded storage “as a service”. The underlying protection mechanisms (especially FileCoin) are based on web3 and blockchain technology concepts but that’s where the comparison should end.
There’s a two-tier model in play here. First, the storage layer – storage of data reliably, at scale and at low cost. Then there are the services that sit on top. CDN is a good example, perhaps AI-as-a-service where a vendor pulls a copy of the AI data from the decentralized cloud into a local cache. I see the real value being the diversity of services that can be offered. It was the same for object storage when that first hit the market. The concept is useful, but the higher-level application use cases add the value.
Blocks & Files: What do you think of the idea that Pure Storage and IBM have a sustainable and significant technology advantage over all-flash array suppliers that use COTS SSDs?
Chris Evans: We repeatedly see issues with storage media where scaling capacity or performance (or both) represents a technical challenge for the industry. As NAND flash has scaled up, the increased bit count per cell has introduced endurance issues and performance challenges (latency, not throughput). I believe current SSD capacity has been limited by 3 challenges:
The per-unit cost, which could be $10,000 for a 32TB drive. Populating enough drives into a system to provide resiliency is expensive. Additionally, a customer will track any failure much more closely, to ensure the vendor replaces any failed devices within warranty. No customer will simply discard a failed $10,000 drive. The unit cost also causes issues for the vendor and media supplier for the same reasons; the vendor will want the supplier to replace failed media under warranty. So, drives need to be more repairable, or at least capable of some degree of reuse.
DRAM overhead. Increasing the capacity of drives has been achieved through bit density and layer counts. DRAM is used to store metadata, keeping track of what data is stored where on the SSD. Typically, 1GB per 1TB of capacity is used for DRAM. With 64TB drives, this means each drive would have 64GB of DRAM. This is unsustainable in high-capacity systems from a cost and power/cooling perspective. The current vendor solution is to use larger indirection units (IU) or bigger blocks to write data. This means less metadata, but increases write amplification, making these drives only suitable for read-focused activities. The industry answer appears to be tiering as seen by Solidigm (see this post).
Failure domains. Here’s a post I wrote six years ago talking about the issue where we’re already discussing 32TB ruler drives. The industry has taken an extraordinary amount of time to reach the 64TB level. Part of the problem here is the impact of a device failure. In any system, there needs to be at least one unit of free capacity. If you build from 32TB drives, at least one 32TB drive (or the equivalent capacity) must be kept free. With COTS SSDs, all drives act independently, so failure can’t be predicted. Therefore, systems get designed with sufficient excess capacity to cater for MTTR (mean time to repair). With 64TB drives, that’s a lot of wasted capacity and cost. There’s also a rebuild factor; a failed drive will create significant amounts of additional backend I/O traffic to re-protect data.
Pure Storage and IBM have control over the entire FTL (flash translation layer), so can mitigate the cost and impact of DRAM scaling, the failure domain challenge, and the media cost (by mixing SLC, QLC, TLC on the same storage blade). Pure Storage is much more advanced than IBM in this area, whereas IBM currently just has “better” SSDs. That position may change in the near future, as indicated by IBM Fellow and Storage CTO Andy Walls in this podcast.
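Evans’ DRAM and failure-domain points can be made concrete with some back-of-the-envelope arithmetic. The indirection unit sizes and the 2GB/s rebuild rate below are illustrative assumptions, not any vendor’s published figures.

```python
def ftl_dram_gb(capacity_tb, iu_bytes, baseline_iu=4096, gb_per_tb=1.0):
    """Scale the ~1GB-DRAM-per-1TB-NAND rule of thumb by indirection unit size.
    Bigger IUs mean fewer map entries (less DRAM) but more write amplification."""
    return capacity_tb * gb_per_tb * (baseline_iu / iu_bytes)

def rebuild_hours(drive_tb, rebuild_gb_per_s):
    """Time to re-protect one failed drive's worth of data at a given backend rate."""
    return drive_tb * 1000 / rebuild_gb_per_s / 3600

for tb in (32, 64):
    print(f"{tb}TB drive:")
    for iu in (4096, 65536):
        print(f"  FTL DRAM with a {iu // 1024}KB IU: ~{ftl_dram_gb(tb, iu):.0f}GB")
    # At least one drive's worth of capacity has to be held free for rebuilds.
    print(f"  spare capacity reserved: {tb}TB; "
          f"rebuild at 2GB/s: ~{rebuild_hours(tb, 2.0):.1f} hours")
```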
Blocks & Files: Does VAST Data’s storage technology (DASE architecture + QLC flash + SCM for metadata and writes + NVMe internal fabric) amount to a disruptive innovation for, firstly, the existing scale-up/scale-out file storage vendors and, secondly, the parallel file system software vendors?
Chris Evans: VAST has built a very interesting architecture that is specifically designed to overcome the challenges of large-capacity media. Data is written in stripes that suit the sequential write nature of QLC SSDs. SCM manages the short-term I/O and active metadata. The VAST system is essentially a massive key-value store (even the metadata is KV data). So, the platform is applicable to any data storage solution. However, while the VAST Data Platform could be used for block-based storage, it is not ideal for it (due to the I/O profile of block data) as it would put greater load onto the SCM layer.
VAST is disruptive to the unstructured market, because it offers “unlimited” scale in an architecture that would continue to perform well with larger capacity media. The C-node & D-node design creates a “physical layer” onto which storage applications can run – specifically NFS and S3, but now also database and other structured architectures. The legacy vendors have no answer to the VAST architecture (hence the HPE OEM agreement). They will see their business slowly chipped away from the top down (highest capacity downwards). The only saving grace is that the VAST solution is (currently) designed for petabyte scale and up.
Blocks & Files: I think you believe that HCI vendor Nutanix may eventually be acquired. What’s your reasoning for this view?
Chris Evans: I believe we’re going to see a convergence of architectures in the future. The public cloud has given IT organizations the capability to build and deploy applications without having to be too concerned about the infrastructure. The cloud vendor provides the physical plumbing, the virtual instances, application instances (built on virtual instances) and so on. On-premises, internal customers of the IT department will increasingly demand a cloud-like experience. IT teams don’t want the effort of building out their own cloud ecosystem but want to take one from a vendor. This is what VMware, Nutanix, OpenShift, OpenStack and SoftIron all provide.
So, if you’re Dell, HPE, Cisco, maybe even Oracle, how will you differentiate your hardware solutions? I can envisage one of the major infrastructure vendors acquiring Nutanix, as it offers an “oven-ready” cloud solution to sell to customers, either as a service or as a platform. The Cisco tie-in was interesting, because Nutanix’ biggest overhead is sales and marketing. If the Cisco partnership can be shown to reduce expenses enough to hit profitability, then this validates that Nutanix has a viable solution, and someone will acquire them. The acquisition route is preferable because it stymies the competition between infrastructure vendors. Simply reselling Nutanix (or any of the private cloud solutions) just continues to make those infrastructure vendors appear as box shifters and there’s increasingly less margin in that area.
Blocks & Files: Pure Storage is pushing the idea that no new HDDs will be sold after 2028. Do you think this idea holds water and what’s the reasoning behind your view?
Chris Evans: As I discuss in this blog post, I think this message is a mix of semantics and marketing. It’s semantics from the position of the word “new”. We can take that to mean the HDD vendors will continue to sell existing HDD models, but won’t create new ones, because the cost of development will be outweighed by the revenue return. That’s the tipping point for the industry, when the R&D costs won’t be recouped through new product sales. At that point, the vendors simply push out the same products into a dwindling market that might last another 10-20 years. The $64,000 question is whether that tipping point is 2028. Then there’s the marketing angle – create a story today that few, if any, people will check out in five years’ time.
Blocks & Files: With the rise of edge as distinct from datacenter IT, and application repatriation, would you agree that the public cloud takeover of on-premises IT has come to a halt, and why or why not?
Chris Evans: I think the term “edge” is an interesting one. We’ve had edge computing forever, just in different forms. IBM used to sell System/38 and AS/400 to sit on the edge of the mainframe. The PC and local file servers provide edge capabilities to the core datacenter. CDNs and other delivery networks have had edge capability for at least 20 years. Over the last 70 years, IT has concertinaed from centralized to distributed and back again. So “edge” is part of the normal cycle.
The modern interpretation of edge computing started with the view that edge devices would collect, store, pre-process, and forward data into a core datacenter. The perception was that these devices would be in hostile locations where primary datacenters would be too expensive to deliver. Today we see edge as an extension of the datacenter, more in a distributed computing model than a store-and-forward one.
Examining AWS’s and Azure’s offerings in this space, both platforms were designed to extend the public cloud outwards and subsume on-premises systems. In that respect, the only problem they solved was the issue of latency. It’s the reason I described Outposts as a cuckoo in the nest.
The goals of edge computing need to be reviewed and applied to products and solutions from either on-premises or cloud vendors. Edge solutions need to manage being deployed in a physical environment that may be suboptimal. Edge systems need to operate autonomously if network connectivity fails. Edge solutions need improved resiliency capabilities, as maintenance could be intermittent. Most of these challenges have already been met. The next wave of edge designs needs to focus on security and operations, including ensuring data collected from the edge can be trusted, systems can be updated remotely, applications can be deployed remotely, and so on. The cloud providers aren’t offering solutions to meet this need, so their strategy has stalled.
Blocks & Files: Finally, would you argue that the generative AI hype is justified or unjustified?
Chris Evans: This is a difficult question to answer. There are clearly some significant benefits to be gained from generative AI. A human-like interface is one (with recent news that AI can hear, watch, and speak). The ability to pose questions in natural language is another. However, in the IT world we tend to get carried away with the hype of new technology. Vendors see an opportunity to sell new products, while end users like to play with new and shiny things (even if there isn’t an obvious business case).
With all new technology, I like to exhibit a degree of caution. In the first instance, I like to investigate and understand, but also identify shortcomings. To quote Ronald Reagan, I believe in “trust but verify.” I trust my GPS, for example, but I also verify the directions it gives me, because it is occasionally wrong. I think we’re in a hype cycle of inflated expectations, when AI is seen as the answer to everything and will be integrated into every platform. In reality, we’ll settle down into a middle ground, where generative AI speeds up certain tasks. I don’t think it’s about to become Skynet and take over the world!
Meta has announced its second generation smart glasses in conjunction with EssilorLuxottica, which owns Ray-Ban, two years after the first Stories generation was launched. Photos and videos taken by the glasses’ camera are stored in a tiny flash chip inside the temple side arms.
These Ray-Ban Meta Smart Glasses are wirelessly tethered to an iOS or Android smartphone and respond to “Hey Meta” voice commands, on-frame touches, or switch clicks to take photos, livestream video, or play audio. They follow on from the 2021 gen 1 Stories smart glasses, being lighter, with better camera and audio features, a longer activity period, and access to Meta AI through voice requests.
At the Connect conference, which also revealed the Quest 3 VR headset and Meta AI, founder and CEO Mark Zuckerberg said: “Smart glasses are the ideal form factor for you to let AI assistants see what you’re seeing and hear what you’re hearing.”
Ray-Ban Meta smart glasses
An in-frame battery powers the Qualcomm Snapdragon AR1 gen 1 system-on-chip for up to four hours, better than the gen 1 Stories’ three. The system draws less than 1 watt and supports Wi-Fi 6 and Bluetooth 5.2.
There is a charging case, connected by a USB-C cable, capable of powering the glasses for up to 36 hours. Camera photos and video clips are stored in an e.MMC (embedded MultiMediaCard) NAND+controller chip with 32GB capacity, enough for up to 100 x 30-sec videos and 500 three-frame burst photos. The gen 1 Stories NAND card had a mere 4GB capacity. Photos and videos can be transferred to a tethered phone to free up capacity in the glasses.
Unlike the Stories’ dual 5MP camera, this gen 2 design has a single 12MP ultra-wide camera with 3,024 x 4,032 pixel resolution for images and 1,440 x 1,920 pixels at 30 fps for video. An LED light on the frame front glows when a video is being recorded.
Audio is played through open speakers embedded in the sidearms with a directional capability to enhance hearing and reduce leakage. This audio is said to have improved bass and to be 50 percent louder overall than the Stories speakers. Voice commands are picked up by a five-microphone array in the front of the glasses.
The Meta AI assistant is based on Llama 2 tech and accesses real-time information through a Bing Search partnership. It includes photo-realistic image generation through Emu.
These water-resistant specs will be available from October 17 and can be pre-ordered on meta.com and ray-ban.com at $299 for the Wayfarer style and $329 for the Headliner. The glasses are compatible with prescription lenses. Meta AI features will be available in the US in beta only at launch.
A no-charge update is scheduled for 2024 to enable the smart glasses to recognize what the wearer is seeing via the onboard camera, such as a building, and provide information about it. We think augmented reality could be coming as well, with virtual objects or diagrams displayed in the glasses’ lenses.
Comment
Stratechery’s Ben Thompson writes: “I think that smart glasses are going to be an important platform for the future, not only because they’re the natural way to put holograms in the world, so we can put digital objects in our physical space, but also — if you think about it, smart glasses are the ideal form factor for you to let an AI assistant see what you’re seeing and hear what you’re hearing.” He reckons it is more natural and quicker to talk to a ChatGPT-like AI, and hear an answer, than to type text and read a response.
Backblaze, which uses SSDs as boot drives for its disk-based storage servers, has produced its latest SSD annualized failure rate (AFR) report. Restricting its stats to drives with more than 10,000 drive days of use and to statistics with a confidence interval of 1.0 percent or less gets it a table with three entries:
Its overall HDD AFR is 1.64 percent, measured across 226,309 drives and 20,201,091 drive days – considerably more reliable statistics. Its SSDs are more reliable with their overall 0.60 percent AFR. These relatively early SSD AFR numbers are tending to show a so-called bathtub curve, similar to disk drives, with new drives failing more often than young and middle-aged drives and the failure rate rising as drives become older:
Report author Andy Klein writes: “While the actual curve (blue line) produced by the SSD failures over each quarter is a bit “lumpy”, the trend line (second order polynomial) does have a definite bathtub curve look to it. The trend line is about a 70% match to the data, so we can’t be too confident of the curve at this point.”
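Reproducing that kind of trend line takes only a few lines of NumPy: fit a second-order polynomial to the quarterly AFR points and compute R² as the “match to the data.” The AFR values below are invented placeholders, not Backblaze’s actual dataset.

```python
import numpy as np

# Placeholder quarterly SSD AFR points (percent) by drive-age quarter - illustrative only.
age_quarters = np.arange(1, 13)
afr = np.array([1.1, 0.7, 0.5, 0.4, 0.35, 0.3, 0.3, 0.35, 0.4, 0.55, 0.7, 0.9])

# Second-order polynomial trend line, as described in the Backblaze post.
coeffs = np.polyfit(age_quarters, afr, deg=2)
trend = np.polyval(coeffs, age_quarters)

# R^2: a value around 0.7 would correspond to the quoted "70% match to the data".
ss_res = np.sum((afr - trend) ** 2)
ss_tot = np.sum((afr - afr.mean()) ** 2)
print("coefficients:", coeffs, "R^2:", 1 - ss_res / ss_tot)
```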
…
Connectivity cloud company Cloudflare, building on its collaboration with Databricks, is bringing MLflow capabilities to developers building on Cloudflare’s serverless developer platform. Cloudflare is joining the open source MLflow project as an active contributor to bridge the gap between training models and deploying them to Cloudflare’s global network, where AI models can run close to end users for a low-latency experience. MLflow is an open source platform for managing the machine learning (ML) lifecycle, created by Databricks. Cloudflare’s R2 is a zero-egress, distributed object storage offering, allowing data teams to share live data sets and AI models with Databricks. The MLflow deal means developers will be able to train models using Databricks’ AI platform, then deploy them to Cloudflare’s developer platform and global network, where hyper-local inference runs at the edge, completing the AI lifecycle.
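As a sketch of the training-side half of that lifecycle, this is roughly what logging a model with MLflow looks like before the artifact is packaged for serving elsewhere; the tracking URI is a placeholder and the Cloudflare-side deployment step is not shown.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Point MLflow at a tracking server (for example, a Databricks workspace) - placeholder URI.
mlflow.set_tracking_uri("http://localhost:5000")

X, y = load_iris(return_X_y=True)

with mlflow.start_run(run_name="iris-demo"):
    model = LogisticRegression(max_iter=200).fit(X, y)
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # The logged model artifact is what would later be pulled and deployed for
    # inference at the edge, e.g. on Cloudflare's developer platform.
    mlflow.sklearn.log_model(model, artifact_path="model")
```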
…
Data security and management supplier Cohesity says Tata Consultancy Services (TCS) has joined its Data Security Alliance ecosystem. Cohesity’s Modern Data Security and Management platform, bundled with TCS’ Cyber Defense Suite portfolio of security services and platforms, will bring customers a unified offering. The pair claim this is designed to improve visibility across the threat landscape, secure cloud activities, and enhance cyber resilience. They say that joint customers will benefit from TCS’s domain knowledge as well as security solutions contextualized for specific industries such as finance, manufacturing, HLS, retail, utilities, and more.
The Cohesity Data Security Alliance was founded in November 2022, and contains 15 members including BigID, Cisco, CyberArk, Mandiant, Netskope, Okta, Palo Alto Networks, PwC UK, Qualys, Securonix, ServiceNow, Splunk, TCS, and Zscaler. It will go down to 14 when Cisco completes buying Splunk.
…
Marco Fanizzi, Commvault’s SVP and GM International, and ex-VP EMEA, is leaving for a destination unknown after joining Commvault 4 years ago.
…
Micron’s Crucial consumer products unit has an X9 portable SSD in 1TB, 2TB and 4TB versions using Micron 176-layer QLC NAND, with a read speed of up to 1,050MBps across its USB-C interface. It has a three-year warranty and comes in a 65 x 50mm plastic case. It is shock and vibration resistant, and drop-proof from up to 7.5 feet onto a carpeted floor. The X9 works with Windows File History, Apple Time Machine and Acronis True Image.
…
Italy-based web3 decentralized storage startup Cubbit has won Leonardo, one of the world’s largest cybersecurity and defense companies with more than $14 billion in revenues, as a customer. For Leonardo, Cubbit storage means each file is encrypted, fragmented and replicated across multiple geographical locations and, in the event of an attack, it will always be fully reconstructable. The deal means Leonardo will reduce its data traffic, enabling a reduction in production and CO2 emissions. It says the distributed storage enables the construction of more efficient digital twins and it’s now ready for the storage of more archival data. That’s expected to grow threefold between now and 2026.
…
Gartner has a new Magic Quadrant: the Distributed Hybrid Infrastructure MQ. It deals with a set of suppliers who provide a standardized infrastructure stack that can run both in the public cloud and on-premises, either in datacenters or at the edge. The suppliers include public clouds with versions of their software environment running on-premises, meaning Alibaba, AWS, IBM, Microsoft (Azure), Oracle, and Tencent Cloud, but not Google. A second supplier grouping consists of on-premises suppliers who have migrated their software to run in the public cloud: Huawei, Nutanix and VMware. More info here for Gartner customers. Here’s the MQ diagram:
…
A “Beyond Big Data: Hyperscale Takes Flight” report from Ocient says that two-thirds of IT leaders (67 percent) plan to replace their data warehouse provider this year, up from 59 percent last year. 58 percent say that database and data warehouse modernization is the top data and analytics-related IT budget priority over the next 12 to 18 months, 67 percent are actively looking to switch their organizations’ data warehouse infrastructure, and 90 percent are currently planning, or will plan in the next six to 12 months, to remove or replace existing big data and analytics technologies. Ocient says that, for the public sector to adapt to a hyperscale data landscape, it must adopt a strategic and holistic approach with technological upgrades. Translation: convert to Ocient.
…
Event analytics supplier Mixpanel has released a new native connector for Google’s cloud data warehouse BigQuery, making it easier for users to explore and gain insights from data. Event analytics captures every action (or event) that each user performs within a digital product, like an e-Commerce site or a ride hailing app. This granular view helps companies understand how different groups of users behave at various points during their experience. Mixpanel says this approach is faster and easier than traditional Business Intelligence (BI) tools that require data to be prepared and tabulated, with BI queries coded in SQL.
…
European CSP OVH Cloud has announced the integration of Nvidia A100 and H100 Tensor Core GPUs into its AI offering. That will allow it to offer instances for complex GPU workloads, large language model machine learning, and HPC workloads.
…
According to Tom’s Hardware, reporting on a German Computerbase forum post, Samsung has a coming T9 portable SSD offering 2GBps read/write speed through its USB 3.2 gen 2×2 interface. Samsung’s existing T7 portable drive runs at roughly half that speed across a 10Gbps USB 3.2 gen 2 interface. The T9 will come in 1TB, 2TB and 4TB variants with a five-year warranty. Prices at French retailer PC21 are reportedly $133 for a 1TB T9 and $226 for the 2TB version – although we couldn’t find the drives on the PC21 website.
…
Swissbit has announced an N5200 Enterprise SSD in three Enterprise and Data Center Standard Form Factor (EDSFF) E1.S variants (5.9 mm, 9.5 mm, and 15 mm) in addition to U.2. Capacities range from 1.92 to 7.68 TB. It has a four-lane PCIe connection and an NVMe 1.4 interface, with sequential data rates of up to 7,000 MB/s read and 4,200 MB/s write. Random reads and writes reach up to 1.35 million IOPS and 450,000 IOPS, respectively. Its endurance is at least 1 drive write per day over 5 years. The drive features TCG OPAL 2.01 and AES-256 encryption, Secure Boot, and Crypto Erase plus error correction and data protection mechanisms. It complies with the OCP Cloud Specification 1.0. Swissbit doesn’t identify the NAND type and supplier. For project and sales inquiries, contact the Swissbit Datacenter team.
…
Cloud-based block storage startup Volumez has hired its first CRO, Jason McKinney, who spent the last 3.5 years as worldwide VP of public cloud sales at NetApp. He helped launch three first-party services during his tenure: Azure NetApp Files, AWS FSXN, and Google Cloud Volumes Service. He was at Salesforce and VMware before NetApp. Volumez recently completed a $20 million A-round and hired John Blumenthal as its Chief Product Officer; he was previously VP Data Services in HPE’s storage organization. It’s clear that Volumez has a sellable product, even though it is only at the A-round funding stage, and is setting up an exec team of a kind more usually seen with C-round stage startups.
Destini Nova
…
Destini Nova is joining WANdisco (the soon-to-be Cirata) as senior director of Alliances and Business Development, focused on building strategic partnerships and driving business growth that delivers value to customers. WANdisco says she is thrilled to be working for an organization under the dynamic leadership of Stephen Kelly again, grateful for the opportunity, and looking forward to this new journey. She was previously director of Global Alliances at Sage and is based in Seattle.
CEO Antonio Neri is making exec changes at HPE, establishing a Hybrid Cloud business unit containing all GreenLake activities.
GreenLake is HPE’s IT products subscription service providing a public cloud-like experience for customers but on premises. The company says it has 27,000 GreenLake customers with 3.4 million connected devices. It is part of HPE’s view that business IT is becoming edge-centric, cloud-enabled, and data-driven. A hybrid cloud is an IT environment spanning the edge, datacenters, and the public cloud.
Fidelma Russo
Neri says: “We are creating a new Hybrid Cloud business unit, to be led by EVP and Chief Technology Officer Fidelma Russo. This business will bring together the HPE GreenLake platform with the technologies and services of HPE Storage, HPE GreenLake Cloud Services Solutions and the current Office of the CTO organization.”
Russo joined HPE as CTO in September 2021 from a role as SVP and GM of VMware’s cloud business unit. Her job was to lead HPE’s technology roadmap and manage GreenLake’s design and development, which she did, leading the team that created the current GreenLake platform.
Tom Black
Russo has been so successful at this that she now runs the whole GreenLake shebang, with Storage and Cloud Services thrown in. Neri says the aim is to ”deliver one portfolio of storage, software, data, and cloud services on the HPE GreenLake platform.”
Tom Black, who has run HPE Storage since 2020, becomes the leader of a new Private Cloud team within Russo’s operation and reports to her. He oversaw the Alletra introduction unifying HPE’s Primera and Nimble storage products.
Vishal Lall, GM for Software and GreenLake Cloud Solutions since 2021, with an 11-year stint at HPE, is leaving the company. Russo is taking over his responsibilities. Neri said Lall has made numerous contributions and has had a significant impact in driving the momentum and market opportunity in front of the company.
Vishal Lall and Pradeep Kumar
Services SVP and GM head Pradeep Kumar, a 27-year HPE vet, is retiring, with Neri saying he “has been a remarkable leader.” HPE “will now put the key operational activities for all of our products and services under one leader.” That’s Mark Bakker, who will run a global operations organization composed of HPE Services, Supply Chain, and Quote-to-Cash.
Russo emerges from this reshuffle on top of GreenLake, with storage exec Tom Black taking the reins of Private Cloud within her organization and leaving a hole at the top of storage to be filled.
Micron revenues fell again – as expected – in its final quarter of fiscal 2023, ended August 31, with a crappy economy and Chinese restrictions on purchases of its products the contributing factors. Execs are putting a brave face on things, banking on a recovery over the next twelve months helped by surging demand for generative AI.
The DRAM, NAND and SSD maker’s Q4 revenues were down 40 percent year-on-year to $4.01 billion, and it recorded a net loss of $1.43 billion compared to the year-ago $1.49 billion net profit. This is its fifth successive quarter of declining revenues. Full fy2023 revenues were $15.54 billion, down 49 percent, and a net loss of $5.83 billion contrasted sharply with the year-ago’s $8.7 billion profit.
President and CEO Sanjay Mehrotra said in a statement: “During fiscal 2023, amid a challenging environment for the memory and storage industry, Micron sustained technology leadership, launched a significant number of leading-edge products, and took decisive actions on supply and cost.”
Micron’s results were affected by China and its Cyberspace Administration (CAC) declaration back in May that Micron’s products represent a security risk in the country and therefore should not be bought by “operators of critical information infrastructure.” That has affected some of Micron’s datacenter and networking sales in China, and is baked into its forward guidance. Micron is working with CAC to resolve the issue but if it is a tit-for-tat response to US technology export bans against China then any resolution will not come quickly.
Micron said it achieved record automotive revenue, record NAND QLC bit shipments for the full fiscal year, and reached record levels in calendar Q2 (fyQ3/Q4) for revenue share in data center and client SSDs.
Q4 fy2023 financial summary
Gross margin: -9 percent
Operating cash flow: $249 million vs $24 million in prior quarter and $3.8 billion a year ago
Free cash flow: -$758 million
Diluted EPS: -$1.31 vs year-ago $1.35
Cash and investments: $10.5 billion
Liquidity: $13 billion
Total debt: $13.3 billion
These revenues reflect sales into vertical and geographic end markets, and can be sub-divided into NAND and DRAM-based revenues and also into business unit earnings. We’ll take DRAM and NAND first.
DRAM revenues of $2.8 billion were down 42 percent year-on-year but up 3 percent quarter-on-quarter, while NAND revenues of $1.2 billion were up 19 percent Q/Q though still down 29 percent Y/Y.
The chart above shows that Micron’s NAND downcycle bottomed out two quarters ago but the memory downturn has only just started to climb out of a deeper trough, indicating lower server and PC memory demand may be responsible.
Compute and networking BU revenues are still declining and the embedded BU registered a Q/Q decline as well, but mobile revenues are trending up, as are storage BU revenues, though to a smaller degree.
Divisional revenues:
Compute and networking: $1.2 billion – down 59 percent Y/Y and down 14 percent Q/Q
Mobile: $1.21 billion – down 20 percent Y/Y and up 48 percent Q/Q
Storage: $739 million – down 17 percent Y/Y but up 18 percent Q/Q
Embedded: $860 million – down 34 percent Y/Y and down 6 percent Q/Q
The BUs sell into different end markets: the datacenter, PC, mobile, and embedded (automobile and industrial) sectors.
Customers in general are buying fewer products as they reduce their DRAM, NAND and SSD inventories due to a depressed economy, weighed down by factors including COVID-19, a slowdown in China, and the Ukraine war. Micron said most customer inventories for memory and storage in the PC and smartphone markets are now at normal levels, as they are across most customers in the automotive market as well. Not so in the datacenter space, although customer inventory there is also improving and will likely normalize in early calendar 2024.
In the datacenter area traditional server demand remains lackluster while demand for AI servers has been strong. AI training servers contain significantly higher DRAM and NAND content, with HBM3E memory a necessary GPU component. Total server unit shipments are expected to decline in calendar 2023, the first year-over-year decline since 2016, but market revenues are touted to expand given the richer configurations.
Mehrotra said: “We believe our data center revenue has bottomed, and we expect growth in fiscal Q1 and increasing momentum through fiscal years 2024 and 2025 in our data center business.”
Micron expects total server unit growth to resume in calendar 2024 (starting in Micron’s fy24 Q2) as workload demand, helped along by AI, starts rising, and it is projected to keep on rising.
Datacenter DRAM revenues should be helped by Micron beginning its HBM3E production ramp in early calendar 2024, with meaningful revenues expected in the corporation’s fiscal 2024. But HBM3 growth will reduce overall DRAM unit growth because HBM3 and HBM3E dies are roughly twice as large as a DDR5 DRAM die, meaning HBM3 and 3E demand will absorb an outsized portion of industry wafer supply.
The company said its HBM3E technology is currently in qualification for Nvidia GPU products. Mehrotra said in the earnings call: “We are very much still on track for meaningful revenue, several hundred million dollars in our fiscal year ’24. … Micron will be well-positioned to capture the generative AI opportunities that require the kind of attributes that our HBM3E memory brings to the market.”
The mobile market is gradually improving, and Mehrotra said: “We expect calendar 2023 smartphone unit volume to be down by a mid-single-digit percentage year over year and then grow by a mid-single-digit percentage in calendar 2024.”
The PC market is also tipped to recover from declining unit shipments in calendar 2023 to grow by a low to mid-single-digit percentage in calendar 2024. AI-enabled PCs may drive memory content growth and an improved refresh cycle over the next two years – at least, that is the outcome Intel and others are crossing their fingers for.
The automobile market held up in 2023 with Micron reporting record revenues and saying it has the leading market share; yet it provided no numbers for either claim. It expects that in the long term memory and storage content per vehicle will increase in both advanced driver-assistance systems (ADAS) and in-cabin applications. Electric vehicles typically have more memory and NAND content than traditional cars so as EV sales rise so too should Micron’s automotive sector revenues.
Sales into the industrial embedded market showed recovery signs in the quarter and Micron expects this to continue in fy2024.
Looking ahead, Mehrotra said: “Our 2023 performance positions us well as a market recovery takes shape in 2024, driven by increasing demand and disciplined supply. We look forward to record industry TAM revenue in 2025 as AI proliferates from the data center to the edge.”
Generative AI may well generate a revenue increase from DRAM needed for running models and NAND used to store their data.
The outlook for the next quarter (Q1 fy2024) is for revenues of $4.4 billion +/- $200 million. This is 7.7 percent higher than the year-ago Q1’s $4.1 billion and indicates Micron’s overall revenue downcycle has bottomed out.
Rock band U2 has a residency at the Sphere in Las Vegas and Weka is storing and moving video data for the gig.
The Sphere at the Venetian Resort is an expensive spherical entertainment centre near the Las Vegas Strip, 336 feet tall and 516 feet wide, with an outer surface incorporating a massive 580,000 square foot (53,800 sq m) video display and an 18,600-seat auditorium inside. The auditorium is enclosed by a 16K x 16K-pixel, 60 frames/sec wraparound LED screen and a 167,000-speaker array featuring spatial audio beamforming and wave field synthesis. The haptic seats can provide vibration and motion effects.
Weka’s Data Platform is a fast scale-out and parallel filesystem with integrated data services, and will feed data to the video screens.
A statement from Stefan “Smasher” Desmedt, technical director and video director for U2, enthused: “This is a whole new level of rock and roll. Once again, U2 is breaking new ground, pushing the limits of what’s possible in live entertainment and setting trends that people will emulate for years to come.”
The Irish rock veterans explained: “U2 hasn’t played live since December 2019 and we need to get back on stage and see the faces of our fans again. And what a unique stage they’re building for us out there in the desert … We’re the right band, Achtung Baby the right album, and Sphere the right venue to take the live experience of music to the next level […] Sphere is more than just a venue, it’s a gallery and U2’s music is going to be all over the walls.”
Desmedt continued: “The sheer scale of the Sphere is extraordinary – it’s like a spaceship in here, with 16K x 16K pixel screens that require that we move 200–300 gigabytes of video data every minute. In the past, we used 4K video – this is uncharted territory. We needed to find a storage server technology partner that could meet both the band’s vision and the production’s extreme scale and performance requirements and deliver flawlessly in real time. Weka quickly became the clear choice.”
The U2 tech team had to migrate more than 500TB of archival video footage rendered in the United Kingdom to the venue’s servers in Las Vegas. Weka, with its data platform software, provided a way for Desmedt’s team to render, load, and migrate large video files via the cloud to its local cluster at the Sphere.
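Taking the quoted figures at face value, the sustained bandwidth and the archive migration effort work out roughly as follows; this is a back-of-the-envelope calculation with an assumed 10Gbit/s transfer link, not production numbers from the Sphere team.

```python
# 200-300GB of video data per minute, plus a 500TB archive migration.
for gb_per_min in (200, 300):
    gb_per_s = gb_per_min / 60
    print(f"{gb_per_min}GB/min = {gb_per_s:.1f}GB/s, about {gb_per_s * 8:.0f}Gbit/s sustained")

archive_tb = 500
link_gbit_s = 10  # assumed UK-to-Las-Vegas transfer rate, for illustration
days = archive_tb * 1000 * 8 / link_gbit_s / 86400
print(f"Moving {archive_tb}TB over a {link_gbit_s}Gbit/s link takes ~{days:.1f} days of continuous transfer")
```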
Fancy doing file sharing and collaboration on a smartphone or tablet? Panzura has an Edge Gateway which might appeal.
The company has a CloudFS global cloud file services product based on AWS, providing file-based collaboration services. File data is stored in AWS in immutable object form. Analytics and governance data services are layered on this platform with access applications, AKA Smart Apps, for data access, collaboration, and monitoring. They provide isolated multi-company file sharing zones, content search, ransomware protection, and an edge application, which manages security and intellectual property on a mobile device such as an iPad or laptop. It has been extended to create private workspaces and securely share and update files stored within CloudFS via browser access.
Don Foster, Panzura’s Global Head of Sales Engineering, said: “Panzura Edge is a game-changer for organizations that need to share data with authorized partners without sacrificing data control or security. When users need to collaborate using their mobile devices or across multiple locations around the world, Panzura Edge gives them an efficient and secure gateway for sharing their CloudFS-held files.”
Panzura graphic
Panzura Edge can be hosted on-premises or in the cloud and is designed to meet the file access and collaboration needs for C-Suite executives, IT managers, and business users.
This new release of the Edge app adds smartphone and more tablet access. Employees and trusted partners will have file access anytime, anywhere, and on any device. They will be able to collaborate on files within the CloudFS environment without the need to export or make copies.
A Panzura spokesperson told us: “This release of Panzura Edge [is] designed for iOS and Android, and the company is focused on UX and ease of use. They have customers using it as part of their phased adoption process of Edge and Panzura is pleased and excited by their interest – it confirms the company’s vision for Panzura leading in unstructured data storage and protection.”
Users don’t need a VPN to access files. Data security comes from password-protected file sharing, two-factor authentication, and encryption. Management facilities include limiting downloads, automatic expiration of shares, and anonymous file uploads. Panzura says the system monitors all access attempts and takes multiple steps to prevent unauthorized access after login. Capabilities to monitor, prevent, and fix data leakage are also included.
An anonymous customer C-suite exec said: “Panzura Edge helps our team ensure productivity across all offices and remote locations. Our risk of data exfiltration has gone down dramatically.”
Panzura’s competitors have similar functionality:
CTERA Mobile enables business users to access their files securely and store them in the cloud where they can be shared with colleagues.
Egnyte Mobile supports the iOS and Android platforms for both phones and tablets.
Nasuni Mobile provides secure access to corporate documents, photos, video and other data.
Panzura Edge is now available for all customers. A Solution Brief is available here.
Qumulo scale-out filesystem software is available on new Primergy M7 hybrid servers from Fujitsu.
These M7 servers were launched in January and use Xeon gen 4 CPUs for mainstream enterprise server use and affordable AI operations. They come in RX rack format, TX tower format, CX small cloud node, and GX GPU-fitted versions. Fujitsu commenced reselling Qumulo’s parallel access filesystem software in 2020. That software is cloud-native and spans the on-premises and public cloud environments.
Christian Leutner, Head of European Platform Business, Fujitsu, said: “With the expansion of our relationship, our customers in a wide range of industries will benefit from low-threshold access to Qumulo data processing as well as the ability to rapidly sift through billions of files to find the nuggets of true value. The integration on the Primergy RX2540 M7 also provides our channel partners with the opportunity to benefit from our relationship with Qumulo and distribute the solution.”
The RX2540 M7 is a powerful compute and storage box. It’s a 2RU, dual-socket server with up to 60 Xeon cores per CPU, up to 8TB of DDR5 memory, PCIe gen 5 connectivity, up to 6x Nvidia GPUs, and up to 12x SAS/SATA 3.5-inch or 24x 2.5-inch SAS/SATA/NVMe storage drives, with optionally six more at the rear of the chassis.
Fabrice Gourlay, Qumulo’s EMEA VP of Sales, stated: “Primergy RX2540 M7 is the ideal server for business-critical workloads such as AI, machine learning, graphics rendering, and in-memory databases.”
With Qumulo software, the Primergy server’s NVMe-based hybrid nodes are claimed to deliver the performance of flash storage at the price of disk. Back in May 2020, Fujitsu said Qumulo’s software was fast, with 90 percent of read requests handled in under 1ms from the in-built cache.
Ryan Farris, Qumulo’s VP of Products, said in a May 2023 performance blog: “Qumulo has optimized performance and cost trade-offs in its core hybrid architecture, where cold data resides on low-cost disk storage but [is] served at the speed of SSD and memory. As a result, Qumulo’s AI-based prefetch algorithm (with 10+ years of training) monitors read requests, anticipates which data blocks are likely to be requested next, and prefetches them into system memory for even faster access than the SSD layer can deliver. The net effect for Qumulo’s customers (and their workloads) is an amazing and highly predictable experience, with the vast majority of read operations being served with <1ms latency.”
We were told: “This is true for all of our hybrid platforms, including the most recent hybrid storage SKUs from Fujitsu which run Qumulo software.”
Qumulo chart
Qumulo monitored read latency on more than 1,000 of its installed systems and found:
More than 90 percent of all reads are served at <=1ms latency. Within that 90 percent:
50 percent of all read operations are served from system memory – faster than SSDs
25 percent of all reads come in at <0.1ms
48 percent of reads are at 0.25ms, whether from system memory or SSD
20 percent are between 0.5 and 1ms
Only 10 percent of all reads were from disk, and more than half of those still come in at 2ms or less of latency.
This is not as good as Infinidat’s arrays with its patented Neural Cache technology, where 95 percent or more of reads come from system memory, but it is the next best caching algorithm we have seen.
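The prefetch behavior Farris describes, watching the read stream and pulling likely-next blocks into memory before they are requested, can be illustrated with a deliberately simple sequential read-ahead cache. Qumulo’s actual algorithm is ML-trained and far more sophisticated; this is only a toy sketch of the principle.

```python
from collections import OrderedDict

class ReadAheadCache:
    """Toy LRU cache with sequential read-ahead: a stand-in for prefetching
    likely-next blocks into memory before they are requested."""

    def __init__(self, capacity_blocks, readahead=4):
        self.cache = OrderedDict()
        self.capacity = capacity_blocks
        self.readahead = readahead
        self.hits = self.misses = 0

    def _insert(self, block):
        self.cache[block] = True
        self.cache.move_to_end(block)
        while len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used block

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)
        else:
            self.misses += 1
            self._insert(block)
        # Prefetch the next few blocks, assuming the stream is sequential.
        for nxt in range(block + 1, block + 1 + self.readahead):
            self._insert(nxt)

# A largely sequential workload: almost every read hits a prefetched block.
cache = ReadAheadCache(capacity_blocks=64)
for b in range(1000):
    cache.read(b)
print(f"hit rate: {cache.hits / (cache.hits + cache.misses):.0%}")
```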
Other Fujitsu deals
Fujitsu has had an OEM deal with NetApp ONTAP storage since 2020, with its Eternus AF all-flash and DX hybrid arrays getting NetApp-based entry and mid-range models. In June 2022, Fujitsu Asia Pte Ltd. (Singapore) was NetApp’s GSI Partner of the Year.
Fujitsu has a partnership with data intelligence supplier Alation, combining its own data analytics capabilities with Alation software to make it easy for customers to find, understand, and trust data.
Last week Fujitsu announced it will move out of its Tokyo headquarters and consolidate its other offices in the capital, concentrating its activity in Tokyo’s suburbs, partly due to the rise of remote working. It is trialing generative AI software with Japanese banks. That might be a Qumulo opportunity.
Get an RX2540 M7 datasheet here. Qumulo is a Gold Sponsor of Fujitsu’s TechCommunity Workshop Event being held in Dubrovnik over October 10-12. The Imelda Hospital in Belgium is a Fujitsu/Qumulo customer and a case study can be read and downloaded here.
CDN supplier Akamai has confirmed six new core compute regions across Europe, Asia, North America and Latin America. The new locations in Amsterdam, Los Angeles, Miami, Milan, Jakarta, Osaka, Japan, and São Paulo expand Akamai’s cloud computing network to data-intensive connection points that will ultimately allow enterprise customers to provide improved connectivity and experience to end users across the globe. This is the third installment of Akamai’s rollout of new cloud computing regions since the launch of Akamai Connected Cloud in February and its acquisition of Linode last year. Akamai is focusing on a future where scale becomes more about the size of the network than the size of its datacenters, more effectively powering modern applications.
…
Data intelligence supplier Alation has been named a Leader in the latest Forrester Wave: Data Governance Solutions report. It scored highly in a number of sections and was recognised for its intelligent federated governance approach. The report notes that Alation blends technical skills in the field of machine learning (ML) and intelligent asset classification capabilities with a focus on collaboration tools and data valuation models. Alation received the highest scores possible in 12 criteria, including data governance management, as well as the highest possible ranking among all vendors in the strategy category.
…
Backblaze has reported that the lifetime annualized failure rate for all its SSDs was 0.6 percent, based on 2,072 drives operating for 1,897,473 drive days with 31 drive failures. This is based on SSD products with more than 100 units in use, more than 10,000 drive days in operation, and a statistical confidence interval of 1.0 percent or less between the low and the high values. It previously found that the lifetime annualized failure rate for all its disk drives in Q2 2023 was 1.45 percent, compared to 1.40 percent in Q1. That’s based on a population of 241,297 disk drives. Taking the average of the two, the HDD AFR is 1.425 percent while the SSD AFR is 0.6 percent. There is vastly more data for the disk drives, which means Backblaze can have more confidence in the accuracy of its HDD AFR than its SSD failure rate data.
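The AFR arithmetic is easy to check from the figures given, using the standard drive-days calculation Backblaze describes in its reports.

```python
def annualized_failure_rate(failures, drive_days):
    """AFR (%) = failures per drive-day, annualized over 365 days."""
    return failures / drive_days * 365 * 100

# 31 SSD failures over 1,897,473 drive days works out to roughly 0.6 percent.
print(f"SSD AFR: {annualized_failure_rate(31, 1_897_473):.2f}%")
```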
…
Open-source columnar database developer ClickHouse announced GA of ClickPipes, which connects external data sources directly into ClickHouse Cloud. ClickPipes allows users to set up continuous data pipelines in just a few clicks and launches with integrations for Confluent Cloud, Amazon MSK and Apache Kafka, with plans to add more. ClickHouse Cloud, capable of processing billions of events per second, launched as a fully-managed cloud service in December 2022, building on open source real-time analytics technology used in production at companies such as eBay, Uber, and Disney. Now, by launching ClickPipes, it unveils a way for companies – like the 80 percent of the Fortune 100 that use Apache Kafka – to plug streaming data sources into its database for real-time apps and analytics.
…
Decodable, which has developed an enterprise-ready stream processing platform built on Apache Flink and Debezium, has announced the GA of Bring Your Own Cloud (BYOC), allowing users to run a private instance of the Decodable data plane – the part that handles connectivity and stream processing – in their own AWS account. In addition, Decodable has opened up a technical preview to support custom jobs written in Java using the standard Apache Flink DataStream and Table APIs.
…
CRN has reported that Dell has moved/promoted Jeff Boudreau, president and GM of its Infrastructure Solutions Group to Chief AI Officer, with Arthur Lewis taking on the ISG presidency. Both report to COO and vice-chairman Jeff Clarke. Clarke said Dell needs “dedicated leadership to drive our AI strategy across the company.” The AI Officer’s team will “partner across the company to understand domain-specific use cases, build, define and standardize architectures for the future, and integrate AI across our product portfolio. The team will also build relevant AI partnerships.”
…
Nearly 4 in 5 companies have developed plans for achieving carbon neutrality (78 percent), making sustainability a top priority, according to The State of Data Infrastructure Sustainability report. To help companies achieve this, Hitachi Vantara is launching its new Sustainability Services and Solutions to aid customers in achieving critical environmental and decarbonization goals. Hitachi Vantara’s new offerings enable organizations to adopt sustainable business practices that pave the way to a greener future, and focus on four decarbonisation areas: Green IT, Manufacturing, Facilities and Data and analytics.
…
Kioxia Europe is sampling new, higher performing JEDEC e-MMC Ver 5.1 compliant embedded flash memory products for consumer applications. They integrate the latest version of the company’s BiCS FLASH 3D flash memory (BiCS 8 at 218-layers) and a controller in a single package, reducing processor workload and improving ease of use. Both 64 and 128 gigabytes (GB) products will be available. As the market continues to shift to UFS, there are cases where e-MMC may still be used. This includes consumer products with mid-range storage requirements such as tablets, personal computers, point-of-sale devices and other portable handheld devices, as well as smart TVs and smart NICs.
…
A new open source project called LangStream is designed to help developers with getting data stored in multiple different sources – vectorized data in a database, non-vector data held in databases, and event stream data alike – into generative AI applications. It supports multiple databases including MySQL and PostgreSQL as well as Cassandra. Read a Datastax blog – https://www.datastax.com/blog/how-langstream-can-integrate-diverse-data-for-generative-ai – about LangStream here.
…
Storage supplier Nexsan is positioning its products as cloud storage repatriation targets. CEO Dan Shimmerman said; “Cloud repatriation is gaining massive traction from organizations that need the ability to minimize costs while maintaining greater control of their data and applications. One of the biggest challenges surrounding cloud repatriation is ensuring your organization has the right infrastructure to support moving your data from the cloud to on-premises solutions.”
Nexsan’s diverse product line includes Unity, a unified system with the capacity and performance needed for the most demanding mix of workloads, E-Series for high-density and high-capacity storage that can shrink the data storage footprint and massively reduce costs by saving on power, plus BEAST Elite is a cost-optimized storage solution, purpose-built to offer optimal reliability, availability and density – with 960TB in a standard 4U rack – to seamlessly manage high-volume applications like backups, archives, and video surveillance.
…
Analysts at a Nutanix investor day were told it has completed its subscription transformation from perpetual license sales and a scaling phase lies ahead. Its ARR has a >20 percent CAGR in fy23 to fy27, with $1.4-$1.5 billion by fy25 and $3.1-$3.3 billion by fy27. Nutanix is guiding revenue to grow at a ~15 percent fy23-fy27 CAGR; $2.4-$2.6 billion by fy25 and $3.2-$3.4 billion by fy27. The company will look at opportunistic tuck-in acquisitions looking ahead. GPT-in-a-box is a significant opportunity for Nutanix. Ditto Cisco adopting Nutanix HCI to replace its unpopular HyperFlex HCI offering. The Broadcom-VMware acquisition will encourage the largest customers to move to a dual vendor strategy, benefitting Nutanix.
…
The latest version of ObjectiveFS, v7.1, includes native support for ARM64, AVX2, AVX-512 and more hardware architectures This version uses the fastest vector instructions available on the platform to achieve the best performance. The release includes:
Linux x86-64 smart binary automatically selects the fastest binary for the platform
Linux x86-64 includes support for AVX-512, AVX2, AVX, SSE4_2, SSE3, etc
Linux ARM64 binary for ARMv8.0 and newer
Linux ARM64 supports NEON instruction set
MacOS universal binary for x86-64 and ARM64
Client-side encryption now uses the fastest vector instructions available on the platform
SHA-1 ARM64 and x86-64 native instructions support (signature v2 object stores)
SHA-256 ARM64 and x86-64 native instructions support (signature v4 object stores)
Vector instruction performance gain, e.g. 3.9X faster data integrity checks with AVX-512
For the full list of updates in the 7.1 release, see the release notes.
…
SmartX was named an Asia/Pacific Customer’s Choice in the recently released “Voice of the Customer for Full-Stack Hyperconverged Infrastructure Software” by Gartner for the third consecutive year. The report included 11 worldwide vendors. The recognition and analysis are based on eligible published views during the specified 18-month submission period. Overall, SmartX was rated 4.9 out of 5 stars, with 100 percent of reviewers saying they would recommend SmartX products. In the SmartX Vendor Summary section, SmartX received a 4.8/5 in product capabilities, sales experience, deployment experience, and a 5/5 in support experience.
…
Cohasset’s VAST DataStore Compliance Assessment validates that when configured properly, the VAST DataStore meets the requirements for immutability and audibility in SEC rules 17a-4(f), 18a-6(e), FINRA (Financial Industry Regulatory Authority) Rule 4511(c) and CFTC (Commodity Futures Trading Commission) rule 1.31(c)-(d). We understand that records management consultancy Cohasset Associates is the gold standard of third-party validation. VASt says that, with it storage, a customer’s compliance archive in its all-flash system with support for GPU direct storage and NVIDIA SuperPods, is available for analysis while being locked against deletion and modification as required by SEC Rule 17a-4 and similar regulations.
…
Software RAID developer Xinnor will present a talk titled “Optimizing Lustre Throughput in a Software RAID Environment: Configuration tips and Performance Insights” at the upcoming Lustre Administrators & Developers Workshop, scheduled to be held in Bordeaux on October 5-6, 2023. The Lustre Administrators & Developers Workshop serves as a premier platform for Lustre administrators and developers from across the globe to convene, share experiences, discuss developments, tools, best practices, and much more. It’s a must-attend event for those deeply involved in the Lustre ecosystem.
Dell has launched its APEX Cloud Platform for Azure, first announced at Dell World in May, helping to provide a consistent Azure cloud environment across edge, branch, and central datacenters, as well as the public cloud.
Azure Stack is a hyperconverged hardware and software stack needed to provide Azure cloud services on-premises with several OEMs, such as Dell and HPE, supporting it. The Azure Stack HCI includes the Hyper-V virtualization technology along with Storage Spaces Direct. With the APEX CLoud Platform Dell provides the compute, storage, and networking resources in a hyperconverged environment to run the AzureStack HCI v22H2 operating system. The APEX Cloud Platform will also support Red Hat OpenShift and VMware. This Azure version of Dell’s APEX Cloud Platform runs Storage Spaces Direct with PowerFlex software-defined storage as a future option. Dell, via APEX, is the first member of a new Microsoft Premier Solution category of Azure Stack HCI offerings.
Douglas Phillips, Corporate VP for Azure Edge + Platform at Microsoft, said: “More of our customers are asking us to bring the power of Azure wherever they need it out in the real world, whether that is in their factories, retail stores, quick service restaurants, or distributed locations. Through our partnership with Dell, we can project just enough Microsoft Azure to those locations at the edge.”
That’s “just enough” as in not replacing the Azure cloud, not over-burdening the edge sites, and in using familiar Azure cloud management services. For example, users can have consistent security and compliance policies across their Azure environment through integration with centralized Azure management and governance services.
This Dell service is based on hyperconverged MC-760 and MC-660 nodes in 2RU chassis. There can be a single node at edge sites and clusters of up to 16 in datacenters with automated cluster expansion. A table summarizes their features:
The MC-760 can have all-flash, hybrid NVMe SSDs and HDDs, and hybrid SAS SSD and HDD configurations, while the smaller MC-660 is only available in all-flash setups, 10 x SAS SSDs or 10 x NVMe SSDs, which are faster than SAS drives. Both node types support up to 2x Nvidia Ampere GPUs.
The node details get complicated and a Dell datasheet has lists the options. We’ve summarized a few salient node server and storage differences from it here:
Dell says it will add a third, more specialized node type, based on its PowerEdge XR4000 edge-optimized server. It will also add the ability to scale storage separately from compute.
It claims that compared to its existing Integrated System for Azure Stack HCI, customers will see an 88 percent reduction in deployment steps due to a new deployment wizard. Dell is now so integrated with Microsoft’s support services that updates, patches, and new releases will be available to customers within 4 hours of a Microsoft release.