The soon-to-be-spun-off SanDisk has a new minimalist logo, with a new, harder-to-read font.
Here’s the old logo for comparison:
It has changed to an all-caps serif font with substantial parts of the S, A, and D letterforms removed. The “S” looks a little like a left-pointing cone above a small square dot, suggesting a pixel. With black letters against a deep red background, the wordmark is slightly harder to recognize. Perhaps the idea is that making people work harder to read the wordmark will make it more memorable.
ELA Advertising helped develop the new visual identity for Western Digital.
Sandisk said it has “previewed its new corporate branding and creative direction, signaling a bold debut of the company’s comeback launch as a standalone Flash and memory technology innovator, planned for early 2025.” The company will be spun out from owner Western Digital then. You can watch a video about the new logo here.
We’re told: “The Sandisk wordmark represents a heritage of mobility and versatility that enables a seamless and simplified world of resilient data expression and storage. The company’s innovation keeps aspirations moving and pushes possibility forward, empowering people and businesses with data everywhere.”
Joel Davis
Joel Davis, VP of Creative at Sandisk said: “Our visual brand philosophy is inspired by the future and all the diverse ways our customers consume data. Starting with a single pixel, the new Sandisk mark uses bold visual language while being rooted in the idea that progress is not an end point but a way of being.”
The “iconic open D letterform” is a reference back to the old logo where the stemless D was a distinctive element. SanDisk will soon inherit the WD joint-venture with Kioxia to manufacture flash chips.
SK hynix has matched its subsidiary Solidigm with a 60TB-class PS1012 SSD and says it will develop 122TB and 244TB follow-on drives to meet AI storage demand.
Ahn Hyun, President and Chief Development Officer at SK hynix, stated: “SK hynix and Solidigm are strengthening our QLC-based high-capacity SSD lineup to solidify our technological leadership in NAND solutions for AI.”
SK hynix PS1012 SSD
Solidigm introduced the then-groundbreaking D5-P5336 61.44TB QLC (4bits/cell) SSD in July 2023. It proved to be an astute move and other vendors eventually followed suit with their own high-capacity QLC drives. Samsung was the first, a year later, with its BM1743. This outperformed the Solidigm drive as it used the PCIe gen 5 bus, twice as fast as the D5-P5336’s PCIe gen 4 bus.
November saw Micron bring out its own 61.44TB 6550 ION drive, also using the PCIe 5 bus. Solidigm stuck with the PCIe gen 4 bus and launched a doubled capacity D5-P5336 version at 122TB.
Phison matched that capacity with its 122.88TB Pascari D205V and hooked it up to a PCIe gen 5 bus to beat the Solidigm drive’s performance. Now SK hynix, whose NAND fabrication processes and plants are separate from Solidigm’s, has previewed its own 61.44TB PS1012 drive, using the PCIe 5 bus, saying it’s ready for sampling.
The PS1012 supports the OCP 2.0 standard and is made in the traditional 2.5-inch U.2 format. SK hynix “plans to supply the sample of the new product to global server manufacturers within this year for product evaluation.” The company says it delivers up to 13GBps sequential read bandwidth.
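To see why the PCIe gen 5 interface matters for that 13GBps figure, here is a rough back-of-envelope calculation, assuming a standard x4 drive link and the usual 128b/130b line encoding (our assumption; SK hynix has not published the link configuration):

```python
# Approximate usable per-direction bandwidth of a PCIe x4 link.
# PCIe 4.0 runs at 16 GT/s per lane, PCIe 5.0 at 32 GT/s per lane,
# both with 128b/130b encoding (128 payload bits per 130 line bits).
LANES = 4

def usable_gb_per_s(gt_per_s: float) -> float:
    # GT/s -> usable GB/s: apply encoding overhead, divide by 8 bits
    # per byte, multiply by lane count.
    return gt_per_s * (128 / 130) / 8 * LANES

gen4 = usable_gb_per_s(16.0)  # ~7.88 GB/s for a gen 4 x4 link
gen5 = usable_gb_per_s(32.0)  # ~15.75 GB/s for a gen 5 x4 link
print(f"PCIe 4.0 x4: {gen4:.2f} GB/s, PCIe 5.0 x4: {gen5:.2f} GB/s")
```

A 13GBps sequential read rate fits comfortably inside a gen 5 x4 link but would saturate a gen 4 x4 link, which is why the gen 4-only drives top out lower.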
A 122TB version is planned for the third calendar quarter of 2025. Both versions use, we understand, SK hynix’s 238-layer 3D NAND. SK hynix says it will develop a 244TB drive using its latest 321-layer NAND process.
Hyun said: “In the future, we will lay the foundation for growth to become a full stack AI memory provider by meeting the diverse needs of AI data center customers based on our high competitiveness in the eSSD field.”
This announcement by SK hynix leaves Kioxia, which IPO’d today, Western Digital, and China’s YMTC outside the high-capacity SSD group. We expect all three to bring out 60TB+ drives in the future and to enter the 100TB+ drive category as well.
Databricks has attracted a $10 billion investment in a tenth funding round to pay for early investor and staff stock sales and allied taxes, new acquisitions, international go-to-market expansion and new AI products.
This is expected to be non-dilutive funding, and Databricks has completed $8.6 billion of the raise to date, giving the business an estimated valuation of $62 billion. The amount raised is even higher than had been expected.
When the round completes, the AI analysis-focused data lakehouse supplier will have raised a total of $14 billion since it was founded in 2013. This J-round is led by Thrive Capital and, we’re told, “co-led by Andreessen Horowitz, DST Global, GIC, Insight Partners and WCM Investment Management.” Other significant participants include existing investor Ontario Teachers’ Pension Plan and new investors ICONIQ Growth, MGX, Sands Capital and Wellington Management.
Ali Ghodsi.
Ali Ghodsi, Co-Founder and CEO of Databricks, stated: “We were substantially oversubscribed with this round and are super excited to bring on some of the world’s most well-known investors who have a deep conviction in our vision. These are still the early days of AI. We are positioning the Databricks Data Intelligence Platform to deliver long-term value for our customers and our team is committed to helping companies across every industry build data intelligence.”
We’re told the annual run rate for Databricks’ SQL data warehousing product is $600 million, more than 150 percent higher than a year ago. It has in excess of 10,000 customers, including Block, Comcast, Condé Nast, Rivian, Shell and more than 60 percent of the Fortune 500.
The company says it has more than 500 customers consuming at an annual revenue run rate (ARR) above $1 million.
Databricks says it hopes to achieve positive free cash flow in its fourth quarter, ending 31 January, 2025, and expects to pass the $3 billion revenue run rate mark. Its growth rate accelerated to more than 60 percent year-on-year in the third quarter, “largely due to the unprecedented interest in artificial intelligence.”
Databricks intends to pursue acquisitions, bolster its international go-to-market and develop additional AI products. Chances are that the organization might float in 2026/27, something that has been discussed for some years.
Rubrik, the cyber-protector and resilience supplier, will use third-party technology to make a customer’s protected and secured data available to Gen AI agents with its Annapurna project.
The agents will, we’re told, be able to use Rubrik-generated copies of a customer’s proprietary data to build retrieval-augmented generation (RAG) responses that are intended to be more accurate and relevant to user requests made within the customer’s IT environment.
Rubrik’s Annapurna will provide security-protected data from its Rubrik Security Cloud to large language model (LLM) AI Agents from Amazon’s Bedrock model store. These models need to access source data that has been transformed into vector embeddings stored in a vector database. More information has emerged about how this process will take place.
Anneka Gupta.
Chief Product Officer Anneka Gupta told us: “The vectorization will be initialized through Rubrik Security Cloud (RSC), which does not require customers to bring their own embedding model or vector database to facilitate. Through RSC, customers can leverage Amazon Bedrock and Azure OpenAI LLM and embedding models to create and deliver secure data embeddings for Gen AI apps.”
That sorts out how data will be transformed into vectors. These then have to be made available to LLMs responding to user requests.
Gupta added: “Embeddings will be stored in a vector database managed by Rubrik Security Cloud. The database itself may be through a third party provider such as Pinecone or Azure AI search, but the data is held as vectors and not chunk content.”
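The pipeline Gupta describes, chunking source data, embedding it, and storing only the vectors plus a pointer back to the chunk, can be sketched in miniature. This is a toy illustration, not Rubrik's implementation: the character-trigram `embed` function stands in for a real embedding model (such as one accessed via Amazon Bedrock or Azure OpenAI), and the in-memory list stands in for a managed vector database like Pinecone or Azure AI Search.

```python
import math

# Toy stand-in for an embedding model: character-trigram counts as a
# sparse vector. A real pipeline would call a hosted embedding model
# and store dense vectors instead.
def embed(text: str) -> dict[str, int]:
    counts: dict[str, int] = {}
    for i in range(len(text) - 2):
        tri = text[i:i + 3].lower()
        counts[tri] = counts.get(tri, 0) + 1
    return counts

def cosine(a: dict[str, int], b: dict[str, int]) -> float:
    dot = sum(v * b.get(k, 0) for k, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# The "vector database" holds embeddings plus an id pointing back at
# the source chunk; the chunk text itself is not stored, matching the
# point that the data is held as vectors, not chunk content.
chunks = [
    "Quarterly sales report for EMEA",
    "Employee onboarding checklist",
    "Datacenter cooling maintenance log",
]
index = [(i, embed(c)) for i, c in enumerate(chunks)]

# RAG retrieval: embed the query, find the nearest stored vector, then
# fetch the matching chunk from the access-controlled source.
query = embed("EMEA sales figures")
best_id = max(index, key=lambda item: cosine(query, item[1]))[0]
print(best_id)  # id of the most relevant chunk
```

The retrieved id is then used to pull the actual chunk content, with permissions enforced, before it is handed to the LLM as context.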
Azure AI Search, previously called Azure Cognitive Search, is an information retrieval system for mixed type content that has been ingested into a search index. It “is the recommended retrieval system for building RAG-based applications on Azure, with native LLM integrations between Azure OpenAI Service and Azure Machine Learning, and multiple strategies for relevance tuning.”
The indexing process includes vectorization: “Rich indexing … This includes integrated data chunking and vectorization for RAG, lexical analysis for text, and optional applied AI for content extraction and enrichment.”
“Indexing is an intake process that loads content into your search service and makes it searchable. Internally, inbound text is processed into tokens and stored in inverted indexes, and inbound vectors are stored in vector indexes. The document format that Azure AI Search can index is JSON. You can upload JSON documents that you’ve assembled, or use an indexer to retrieve and serialize your data into JSON.”
We noted that Annapurna provides data embeddings generated from data stored in the Rubrik Security Cloud, meaning backed-up data. Could Annapurna also provide access to real-time data, meaning data created and not yet captured by the Rubrik Security Cloud?
Gupta said: “Today, Annapurna will leverage data, metadata and permissions stored in Rubrik Security Cloud, which refresh dynamically. We are not focused on real-time data sources outside of Rubrik Security Cloud at this time, however we are always looking at ways to enhance our innovations.”
COMMISSIONED: AI and advanced storage systems ensure a smooth and profitable winter shopping season, so retailers don’t miss a beat during the holiday rush.
As the clock strikes 12:01 a.m. on Black Friday, retailers everywhere unleash a frenzy of highly anticipated doorbuster deals. You jump online to snag the must-have item of the season – only to discover it’s already sold out. Meanwhile, across town, another shopper triumphantly walks out of the store with the last big-screen TV, unaware it was marked “out of stock” online. Behind these “naughty or nice” outcomes, artificial intelligence (AI) and advanced storage systems quietly orchestrate the chaos, ensuring the holiday shopping experience is as smooth – and profitable – as possible.
The stretch between Black Friday and Christmas is a high-stakes marathon for retailers, and AI is now at the forefront. From managing inventory and optimizing pricing to personalizing customer experiences, and preventing fraud, AI is transforming the holiday shopping experience into something truly magical. But AI doesn’t work in isolation. To deliver on the promise of efficiency and precision, it needs a robust foundation capable of processing immense volumes of data with lightning speed and unwavering reliability. The secret to creating a seamless operation for retailers and consumers alike lies in the dynamic partnership between AI and storage.
Dynamic pricing: Santa’s Little Helper in disguise
Picture this: It’s Cyber Monday, and a shopper is browsing for noise-canceling headphones. After comparing a few models, they linger on one product page for over 30 seconds. Behind the scenes, an AI algorithm kicks in, analyzing their browsing behavior, market trends, and competitor prices. Within moments, a 5 percent discount is applied—just enough to nudge the shopper to hit “Add to Cart.”
Dynamic pricing isn’t just effective; it’s essential during the holiday season. With over 184 million U.S. shoppers expected to kick off their holiday shopping spree over Thanksgiving weekend, retailers must make split-second decisions to stay competitive. That’s where AI algorithms powered by high-performance storage systems come in, processing terabytes of real-time data with ultra-low latency to enable these crucial decisions.
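The decision logic in the headphones scenario can be reduced to a toy rule, sketched below. This is purely illustrative and not any retailer's actual system: real dynamic-pricing engines weigh many more signals (demand forecasts, stock levels, margins) in ML models rather than a single threshold, and every name here is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ShopperSignal:
    dwell_seconds: float   # time spent lingering on the product page
    our_price: float
    competitor_price: float

def quote_price(sig: ShopperSignal, max_discount: float = 0.05) -> float:
    # Nudge a lingering shopper with a small discount when a
    # competitor undercuts our listed price.
    if sig.dwell_seconds > 30 and sig.our_price > sig.competitor_price:
        return round(sig.our_price * (1 - max_discount), 2)
    return sig.our_price

price = quote_price(ShopperSignal(dwell_seconds=34, our_price=299.0,
                                  competitor_price=289.0))
print(price)  # 284.05 -- the 5 percent nudge
```

The storage angle is that the inputs to a decision like this (browsing events, market data, competitor feeds) must be ingested and served at very low latency for the discount to land while the shopper is still on the page.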
AI is revolutionizing retail operations, but its true potential is only unlocked when paired with a robust storage infrastructure. Here’s how they come together to transform the retail experience:
1. Data Ingestion and Preprocessing AI is only as good as the data it receives. Social media trends, real-time sales metrics, and customer preferences are just a few of the many variables retailers must analyze during the holiday rush. Storage systems capable of rapid data ingestion across multiple protocols ensure AI algorithms get timely and accurate data.
2. High-Speed Data Access Training AI models for tasks such as fraud detection or inventory prediction demands immense computational power and near-instantaneous access to massive datasets. Storage solutions optimized for AI workloads, such as those supporting GPUDirect technology, enable GPUs to directly access data without bottlenecks, significantly speeding up the process.
3. Scalability on Demand Retailers can’t afford downtime to scale up IT infrastructure during the holiday season. Storage systems that expand seamlessly allow merchants to manage the surge in data volume without downtime or sacrificing performance.
4. Flexibility Across Environments AI workloads – whether in stores, warehouses, or the cloud – need to flow smoothly across various environments. Multicloud storage deployments ensure retailers can operate wherever their data and customers are.
Beyond pricing: The holiday magic of AI and storage
Dynamic pricing is just one way AI and storage transform the holiday shopping experience. Here are some other ways they are rewriting the holiday shopping playbook:
– Inventory Management AI-powered forecasting helps predict demand, ensuring stores are stocked with the right products in the right quantities. This requires storage systems that can handle enormous datasets and deliver insights quickly.
– Personalization Retailers use AI to analyze individual shopping behaviors and deliver tailored recommendations, optimize website layouts, and refine marketing campaigns. High-performance storage enables rapid data retrieval and processing, ensuring these personalized experiences feel seamless and intuitive.
– Fraud Detection The holiday season is prime time for cybercriminals, and sellers must be vigilant. AI systems analyze transaction patterns to flag unusual activity, a process requiring real-time access to large volumes of historical and current data. Storage optimized for low latency is critical for this rapid decision-making.
While AI often takes the spotlight, storage is like Santa’s elves behind the scenes making it all happen. It’s not just a repository for data; it’s the foundation that enables real-time AI applications to function at scale, handling the massive amounts of data that come with the holiday rush. Without high-speed, scalable, and flexible storage, even the most advanced AI systems would struggle to keep up with demand.
Dell Technologies PowerScale exemplifies storage designed to tackle the unique challenges of AI-driven retail. Its scalability lets retailers start small and grow exponentially during peak seasons. Universal data access ensures compatibility with multiple protocols, and GPUDirect technology accelerates AI workloads like fraud detection and inventory prediction.
Additionally, PowerScale’s high-performance Ethernet and NFS over RDMA provide the rapid data collection and preprocessing capabilities retailers need for real-time decisions. Plus, its multi-cloud capabilities enable AI workloads to run seamlessly across on-premises, edge, and cloud environments, offering unmatched flexibility during the holiday crunch.
Wrapping it up with a festive bow
Whether it’s the noise-canceling headphones, the last big-screen TV, or another must-have holiday item, AI and storage are the dynamic duo ensuring your purchase is a success. From dynamic pricing to inventory management and fraud prevention, retailers are leveraging AI to deliver a seamless shopping experience. Behind the scenes, high-performance storage systems like Dell PowerScale, which process petabytes of data at breakneck speed, quietly power the entire operation.
As the holiday season approaches, the stakes for retailers couldn’t be higher. The ability to effectively harness AI and storage can mean the difference between record-breaking sales and frustrated customers staring at empty virtual shelves. For retailers and businesses in any industry, it’s not just about surviving the holiday surge–it’s about thriving in it.
Learn how Dell storage can support your AI journey with Dell PowerScale.
Databricks is reportedly close to finalizing a $9.5 billion – or greater – funding round.
The 11-year-old biz, which supplies a platform that combines data lakes and data warehouses, has grown revenue at a furious clip as it vies with publicly traded Snowflake to become the dominant player. It has secured nine funding rounds to date, raising more than $4 billion, and according to Reuters could this week ingest the biggest investment in its history, potentially topping $9.5 billion.
Databricks funding events to 2023
Databricks – valued at $38 billion in 2021 – claimed it achieved a $350 million annual recurring revenue (ARR) in 2020 and a $1 billion ARR in late 2022. Its valuation ballooned to $43 billion when it raised $500 million in September last year. Funding was used, in part, to pay for something of an acquisition spree:
October 2021 – 8080 Labs – no-code data analysis tool,
April 2022 – Cortex Labs – open-source platform for deploying/managing ML models,
Oct 2022 – DataJoy – ML-based revenue intelligence software,
May 2023 – Okera – AI-centric data governance platform,
June 2023 – Rubicon – storage infrastructure for AI,
June 2024 – Tabular – Apache Iceberg open table management technology
Reuters reported in November that Databricks was seeking to raise up to $8 billion at a valuation of $55 billion, but the target raise has since risen to $9.5 billion and the estimated valuation has swelled to $60 billion. The format is thought to be a secondary share sale so that employees could sell stock options, and early investors could cash out as well. It is believed that some of the incoming funds would be used to cover the tax costs on these stock sales.
A $9.5 billion round would dwarf Databricks’ previous rounds
There is said to be strong demand from potential investors with the round looking to be twice over-subscribed ahead of an IPO in early 2025 or early 2026.
At the time of Databricks $500 million raise last year, Alan Tu, Lead Private Equity Analyst for investor T. Rowe Price Associates, said: “Data and AI have rapidly become the centerpiece of many business strategies. Databricks has not only pioneered the Lakehouse category with a world-class team and product, but it is now also at the forefront of Generative AI for the enterprise.”
Since then, the business prospects for providing GenAI large language model (LLM) services, big data analytics, and storage on one platform have come to be seen as immense.
Databricks was rated as the sixth most valuable privately held unicorn in 2024 by BestBrokers.
A $60 billion Databricks valuation is in line with its main competitor, Snowflake, which currently has a $56.93 billion market capitalization.
Bloomberg is reporting that Databricks could be trying to arrange a $2.5 billion term loan alongside the funding round.
Western Digital EVP and GM Robert Soderbery, head of the NAND and SSD business unit, is leaving on January 2.
The storage device builder currently has two divisions: hard disk drives (HDD) and flash. The latter covers its NAND joint venture with Kioxia, and its SSD production and sales. The HDD business was run by EVP and GM Ashley Gorakhpurwalla and the flash division by Soderbery, with both reporting to CEO David Goeckeler.
With WD splitting into two separate businesses – one for HDDs and the other for flash – CEO Goeckeler announced he’d run the flash biz and Irving Tan, EVP global operations, will be the HDD business’s CEO. This clearly meant a downgrade for both Soderbery and Gorakhpurwalla’s roles. Gorakhpurwalla quit Western Digital and became head of Lenovo’s Infrastructure Solutions Group (ISG) in October.
Clockwise from top left: David Goeckeler, Irving Tan, Robert Soderbery, Ashley Gorakhpurwalla
Soderbery has decided to exit the drive maker as well, but with no destination yet revealed. A Western Digital SEC 8K filing discloses the bare bones of this. It reads: “Soderbery is entitled to receive Tier I severance benefits under the Severance Plan.” That means, we understand, a not inconsiderable amount:
A lump sum severance payment equivalent to 24 months of base salary.
A pro-rata bonus for the fiscal year in which termination occurs, based on target performance metrics.
Outstanding time-based stock options and restricted stock units (RSUs) are treated as though the executive remained employed for an additional six months, allowing partial accelerated vesting.
Performance-based equity awards are governed by his specific plan terms.
Up to 18 months of COBRA premium reimbursement for medical, dental, and vision coverage, provided the executive does not obtain equivalent coverage elsewhere.
Coverage of professional outplacement services for up to 12 months.
Wedbush analyst Matt Bryson tells subscribers: “We see this news as a disappointing update for Western Digital/SanDisk. Rob, in our view, was one of the company’s key executives, who at one point looked to likely head SanDisk following the upcoming split … Even though we would have preferred a SanDisk with Mr Soderbery, this event doesn’t shift our view that the forthcoming spin-off of SanDisk is likely to create value for WDC shareholders and also that even with NAND trending more poorly than might have been expected this quarter, that Q2’25 and beyond look more promising for WDC and Kioxia given the expected rightsizing of customer client SSD and handset module inventories as well as the ramp of BiCS 8 (which we believe will significantly elevate WDC and Kioxia’s ability to compete).”
It would be fun if Soderbery joined Kioxia. Then he could have face-to-face meetings with Goeckeler.
Keepit has secured $50 million in an equity investment round to expand its vendor-independent, dedicated infrastructure for the SaaS data protection business.
The Danish company stores customers’ backed-up data in its own datacenters in Europe, North America, and Australia, forming the Keepit Cloud, and has some five million users. It has raised a total of $90 million across three equity rounds in the past four years: a $30 million A-round in September 2020, a $10 million B-round, which Keepit tells us was a small internal round in December 2021, and today’s $50 million C-round.
Keepit was founded in 2007 and had raised $225 million in debt financing from Silicon Valley Bank by March 2023. At that time, Keepit co-founder and then co-CEO Frederik Schouboe said: “Silicon Valley Bank was prepared to take risks that other banks wouldn’t, and it is not possible to bank with mainstream banks if you are making a deficit in the subscription business – the regulatory environment is too strict for them to take part.”
A joint statement from the two co-founders, CEO Morten Felsvang and Schouboe, who is now chief vision officer, said: “This new funding will allow us to expand our reach and continue innovating the most advanced SaaS data protection solutions on the market. We’re thrilled to see such strong support from our investors, who understand our mission and share our vision for the future.”
Felsvang added: “Investing in product development right now makes sense: organisations worldwide are facing increasing demands for data sovereignty, security, and compliance. With this funding, we’re not just enhancing our solutions, we’re ensuring businesses can confidently protect their SaaS data in a fast-changing global landscape.”
This C-round was led by existing investor One Peak and EIFO, the Export and Investment Fund of Denmark. Jacob Bratting Pedersen, Managing Director, Partner & Head of Tech & Industry at EIFO, commented: “Keepit’s focus on cloud-native, vendor-independent data protection is what sets them apart. This investment is not just a financial decision for us – it’s about supporting a company that is revolutionizing the way organizations think about data security. We believe in their long-term vision.”
Keepit intends to use the cash “to accelerate its global expansion strategy, prioritizing key markets like the US, Europe, and other high-growth regions.” It hired Fahad Qureshi, VP Sales America and ANZ, in November, and strengthened its partner network. It will also further develop its cloud-native SaaS data protection software with “broader workload coverage and additional data management and intelligence capabilities for the enterprise.”
We think the “intelligence capabilities” point refers to GenAI, either to help in administering backups or to use the data in them for analysis, or both. The SaaS application backup business has seen a lot of development and activity this year, with Salesforce acquiring the Own Company, Veeam buying Alcion, Commvault buying Clumio, and heavy service developments by Druva, HYCU, and others. We also think Keepit could be viewed as a highly attractive acquisition target.
Seattle-based Amperity has launched a Customer Data Cloud – an AI-powered setup designed to help users transform raw customer data in a data warehouse into actionable business assets through its Lakehouse architecture. It enables them to standardize data, resolve identities, build profiles, and provide access to save time and build more accurate data models. It features:
AI-Powered Identity Resolution – Easy-to-configure, ML-powered identity resolution that quickly finds hidden connections in online and offline customer data;
Industry-Specific Data Modeling – Turnkey data models and lifecycle management predictions that accelerate the creation of a 360-degree customer view;
Self-Service Data Access – Business-friendly reverse ETL tools and GenAI capabilities that enable non-technical users to explore and segment data independently, reducing ongoing data requests;
Intelligent Change Management – End-to-end workflow testing in a full production sandbox that has GenAI built-in to help users resolve errors using natural language.
There are real-time tables, a bridge for Snowflake in real time through Snowflake Secure Data Sharing, eliminating the need for ETL maintenance and complex integrations. A zero-copy data access capability allows AWS users to leverage their existing storage infrastructure to house their customer profiles and data assets.
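The identity-resolution idea above, linking online and offline records that share an identifier into a single customer profile, can be sketched with a simple union-find. This is a rule-based toy, not Amperity's ML-powered matching, and the record fields are hypothetical:

```python
# Minimal rule-based identity resolution with union-find: records that
# share any identifier (email, phone) collapse into one profile.
# Illustrative only; production systems use probabilistic/ML matching.
parent: dict[str, str] = {}

def find(x: str) -> str:
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def union(a: str, b: str) -> None:
    ra, rb = find(a), find(b)
    if ra != rb:
        parent[ra] = rb

records = [
    {"id": "r1", "email": "pat@example.com", "phone": None},
    {"id": "r2", "email": "pat@example.com", "phone": "555-0100"},
    {"id": "r3", "email": None,              "phone": "555-0100"},
    {"id": "r4", "email": "sam@example.com", "phone": "555-0199"},
]

# Link each record to its identifier keys; records sharing a key merge.
for rec in records:
    for key in ("email", "phone"):
        if rec[key]:
            union(rec["id"], f"{key}:{rec[key]}")

profiles = len({find(r["id"]) for r in records})
print(profiles)  # r1, r2, r3 merge via email+phone; r4 stands alone
```

Note the transitive merge: r1 and r3 share no identifier directly, but both connect through r2, which is exactly the kind of "hidden connection" identity resolution is after.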
…
AWS re:Invent news roundup part 2
Amazon Nova – a new generation of foundation models (FMs) that have state-of-the-art intelligence across a wide range of tasks, and industry-leading price performance. Amazon Nova will be available on Amazon Bedrock.
Trainium 2 instances – AWS announced GA of Trainium2-powered EC2 instances, introduced new Trn2 UltraServers, enabling customers to train and deploy today’s latest AI models as well as future large language models (LLM) and foundation models (FM) with exceptional levels of performance and cost efficiency, and unveiled next-generation Trainium3 chips.
S3 managed Apache Iceberg tables – See story here.
DynamoDB and Aurora DSQL – AWS announced new capabilities for Aurora and DynamoDB to support customers’ most demanding workloads that need to operate across multiple Regions with strong consistency, low latency, and the highest availability whether they want SQL or NoSQL.
SageMaker – AWS announced the next generation of SageMaker, unifying the capabilities customers need for fast SQL analytics, petabyte-scale big data processing, data exploration and integration, model development and training, and GenAI into one integrated platform.
Bedrock – AI safeguards, agent orchestration, and customization options. Bedrock is a fully managed service for building and scaling generative artificial intelligence applications with high-performing foundation models.
Q Developer – AWS announced enhancements to Amazon Q Developer, including agents that automate unit testing, documentation, and code reviews to help developers build faster across the entire software development process, and a capability to help users address operational issues in a fraction of the time. There are enhancements to Q Business as well.
GuardDuty – AWS introduced advanced AI/ML threat detection capabilities in Amazon GuardDuty. This new feature uses the extensive cloud visibility and scale of AWS to provide improved threat detection for your applications, workloads, and data. GuardDuty Extended Threat Detection employs sophisticated AI/ML to identify both known and previously unknown attack sequences, offering a more comprehensive and proactive approach to cloud security.
OpenSearch Service zero-ETL integration with Amazon Security Lake – This integration enables organizations to efficiently search, analyze, and gain actionable insights from their security data, streamlining complex data engineering requirements and unlocking the full potential of security data.
…
AWS is accelerating file reads with a storage caching server. A blog shows how users can create an Amazon EC2-based cache that provides 25GB/sec of read throughput for under $4/hour, and can be scaled to almost unlimited throughput. The workload “should have a working data set small enough to fit into the RAM of a single EC2 instance. It’s primarily read-only (read/write caching might be possible, but we’re not covering it here), and it needs to be a filesystem with Linux support, in this example we use NFS.” The caching “relies on the Linux OS file cache, which uses spare system RAM to cache file access. Mounting a filesystem via an Amazon EC2 instance will cache any file accessed. No additional software is needed.” Read the blog to find out more.
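The mechanism here is just the standard Linux page cache: once a file on a mounted filesystem (NFS in the blog's example) has been read, repeat reads are served from RAM. A small local sketch of the effect, using a temp file since no NFS mount is assumed here:

```python
import os
import tempfile
import time

# Demonstrate the OS file cache. On the EC2 caching server the file
# would live on an NFS mount; a local temp file shows the same effect.
path = os.path.join(tempfile.mkdtemp(), "blob.bin")
with open(path, "wb") as f:
    f.write(os.urandom(16 * 1024 * 1024))  # 16 MiB test file

def timed_read(p: str) -> tuple[bytes, float]:
    t0 = time.perf_counter()
    with open(p, "rb") as f:
        data = f.read()
    return data, time.perf_counter() - t0

cold_data, cold_t = timed_read(path)  # may touch the backing store
warm_data, warm_t = timed_read(path)  # typically served from page cache
print(f"first read: {cold_t:.4f}s, repeat read: {warm_t:.4f}s")
# Caveat: because the file was just written, even the first read may
# already be cached; a truly cold read requires dropping the page
# cache, which needs root privileges.
```

No extra software is involved, which is the blog's point: mounting the filesystem and letting the kernel cache reads is the whole design.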
…
Cloud backup and storage provider Backblaze has a “game-changing media production acceleration case study” with the Philadelphia Eagles. With the team and system integrator CHESA, Backblaze:
Game-planned a fast, cloud-based media workflow, including Backblaze B2 Cloud Storage, Mimir asset management software, and a Quantum shared file system to support accelerated content production for increased fan engagement.
Traded slow access, failure-prone LTO tape for immediate access to decades of rich content stored in the cloud – no more waiting days or weeks, or missing out completely, when historical footage of a player or play is needed.
Scored early team wins, including enabling its media professionals to access and make use of more footage in the critical 48 hours after each game before shifting attention to the next game, and saving time and hassle by making content sharing with outside vendors and other organizations far more efficient.
…
DNA storage startup Biomemory has raised $18 million in Series A funding, led by Crédit Mutuel Innovation, with participation from various French investment organizations. Biomemory has demonstrated the viability and potential of its molecular storage technology. The funding will enable the biz to complete development of the first generation of its data storage appliance, accelerate partnerships with industry players and cloud providers, recruit top talent in molecular biology and engineering to speed the product’s development and commercialization, and advance research into broader molecular-based solutions.
…
Live data replicator Cirata has launched Data Migrator 3.0, with production-ready support for Apache Iceberg, expanded capabilities for Databricks Delta Lake, significant enterprise security enhancements, and comprehensive extensibility. CTO Paul Scott-Murphy claimed: “Our production-ready, direct support for open table formats like Apache Iceberg and Delta Lake eliminates the constraints of closed data architectures, even if you have petabytes of data held in formats or locations that previously required lengthy, complex and risky efforts to modernize. Data Migrator 3.0 is a significant advancement for organizations that want to future-proof their data management, analytics and AI strategies.”
…
Cloudera has announced its “Interoperability Ecosystem” with founding members AWS and Snowflake. The ecosystem makes it easier for customers to connect their workloads with Snowflake, Cloudera and now unique AWS services – such as S3, EKS, RDS, EC2 and Athena. Some of the features the collaboration enables include:
Seamless Data Sharing and Interoperability – AWS customers can leverage Cloudera’s data lakehouse alongside Snowflake’s AI Data Cloud, facilitating unified data access and sharing across platforms while ensuring compliance and scalability for customers handling sensitive data.
Enhanced AI/ML Performance – The partnership optimizes data workflows for AI/ML applications by enabling Cloudera’s on-premises or hybrid datasets running on AWS to integrate with Snowflake’s analytics capabilities, reducing latency and improving insights.
Maximized Cloud Investments and Support for Multi-Cloud Strategies – Customers can leverage Snowflake for targeted analytics while managing broader data operations with Cloudera, maximizing AWS investments by combining their strengths. This collaboration simplifies connecting AWS data with Snowflake, supports multi-cloud strategies, and enhances data mobility without vendor lock-in.
Michelle Scardino (@m_scardino on X/Twitter) joined DDN as VP of Demand Generation in September, reporting to CMO Jyothi Swaroop.
…
Elastic announced general availability of Elasticsearch logsdb index mode, which reduces the storage footprint of log data by up to 65 percent compared to recent versions of Elasticsearch. Logsdb index mode optimizes data ordering, eliminates duplication by reconstructing non-stored field values with synthetic _source, and improves compression with advanced algorithms and codecs. Benefits include reduced costs, preservation of valuable data, expanded visibility and streamlined access to data. Logsdb index mode is generally available for Cloud Hosted and Self-Managed customers starting in version 8.17 and is enabled by default for logs in Elastic Cloud Serverless.
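Per Elastic’s 8.17 documentation, logsdb mode is switched on through a single index setting; a minimal sketch (the index name here is hypothetical):

```console
PUT my-logs-index
{
  "settings": {
    "index.mode": "logsdb"
  }
}
```

In Elastic Cloud Serverless no setting is needed, since logsdb is the default for logs.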
…
Data mover Fivetran announced 40 percent year-on-year growth in the AWS Marketplace, attributed to rising enterprise demand for AI and analytics. It says its Managed Data Lake Service automatically converts structured, semi-structured, and unstructured data into Delta Lake or Apache Iceberg formats as it moves into Amazon S3 data lakes. By automating table maintenance, natively integrating with catalogs including AWS Glue, and covering compute costs associated with ingestion, Fivetran reduces the resources needed to keep data continuously query-ready, enabling teams to prioritize AI and ML projects over manual data preparation.
…
Intel announced that its Storage Performance Development Kit (SPDK) project will be transitioning to the Linux Foundation in 2025. It declared “this move aims to ensure the long-term sustainability and growth of SPDK, fostering a vibrant community and driving innovation in storage performance. While Intel will be divesting from the project, we remain committed to supporting the community during this transition.”
…
Cross-sectional TEM image for the InGaZnO vertical transistor (Photo: Business Wire)
NAND fabber Kioxia and DRAM producer Nanya announced the development of OCTRAM (Oxide-Semiconductor Channel Transistor DRAM) – a new type of 4F2 DRAM built around an oxide-semiconductor transistor that simultaneously has a high ON current and an ultra-low OFF current. This technology is expected to deliver a low-power DRAM by exploiting the ultra-low leakage of the InGaZnO transistor. The OCTRAM uses a cylinder-shaped InGaZnO vertical transistor as a cell transistor. The work was first announced at the IEEE International Electron Devices Meeting (IEDM) held in San Francisco, CA, on December 9, 2024. InGaZnO is a compound of In (indium), Ga (gallium), Zn (zinc), and O (oxygen).
Each memory cell occupies an area of 4F2, where F is the minimum feature size achievable with the fabrication process. The design enables the adoption of a 4F2 DRAM cell, which offers significant memory density advantages over the conventional silicon-based 6F2 DRAM.
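To make the density claim concrete, a quick sketch of the cell-area arithmetic (the feature size below is illustrative, not from the announcement):

```python
def cell_area(layout_factor: int, f_nm: float) -> float:
    """Area of one DRAM cell in nm^2 for an nF^2 layout at feature size F."""
    return layout_factor * f_nm ** 2

# Illustrative feature size F = 15 nm
area_6f2 = cell_area(6, 15)  # conventional silicon-based cell
area_4f2 = cell_area(4, 15)  # OCTRAM-style vertical-transistor cell

# The same die area holds 6/4 = 1.5x as many 4F^2 cells as 6F^2 cells
density_gain = area_6f2 / area_4f2
print(f"6F2: {area_6f2:.0f} nm^2, 4F2: {area_4f2:.0f} nm^2, gain: {density_gain:.1f}x")
```

The 1.5x density gain holds at any feature size, since F² cancels in the ratio.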
…
Kioxia announced that the cryptographic module used in its CM7 Series PCIe 5.0 NVMe Enterprise SSDs has been validated to meet Federal Information Processing Standard (FIPS) 140-3, Level 2 for cryptographic modules.
…
Kioxia Europe announced its Universal Flash Storage (UFS) Ver. 4.0 embedded flash memory devices designed for automotive applications have received Automotive SPICE (ASPICE) Capability Level 2 (CL2) certification. Kioxia is the first manufacturer to be awarded this distinction for automotive grade UFS 4.0.
…
According to the Wall Street Journal, Kioxia was valued at $18 billion when the Bain-led consortium bought it in 2018. Its IPO on the Tokyo Stock Exchange is expected to reflect a $5.1 billion valuation and raise $800 million. When Kioxia previously planned an IPO in 2020 – which did not come to pass – its valuation was $16 billion. The valuation fall is ascribed to its net debt following the Bain buyout: $4.9 billion, “equal to around 1.2 times its shareholders’ equity. Western Digital’s ratio is less than 0.5 times and SK hynix’s is under 0.3 times.” The WSJ observes: “The company has been losing market share in particular to SK hynix. Kioxia’s market share in the NAND flash memory market dropped from 17.8 percent in the first quarter in 2023 to 15.1 percent last quarter, according to industry tracker TrendForce.” Kioxia could become an acquisition target, with the NAND industry potentially consolidating.
…
Lightbits Labs, a cloud data storage supplier that developed NVMe over TCP, has joined the Mirantis partner program. Mirantis is a provider of open source-based infrastructure services and systems. The two collaborators aim to move customers from VMware to a Lightbits cloud infrastructure, using OpenStack for Kubernetes, with “dramatically improved performance and cost-efficiency at scale.” Find out more here.
…
DRAM, NAND and SSD producer Micron has been awarded $6.1 billion in a federal funding agreement through the CHIPS and Science Act – $4.6 billion for New York and $1.5 billion for Idaho. Reuters reports the funding will support Micron’s long-term plan to invest around $100 billion in a 1,400-acre DRAM campus in New York and $25 billion in Idaho, and is one of the largest government awards to chip companies under the $52.7 billion 2022 CHIPS and Science Act. According to Micron the $100 billion New York plant will create 9,000 jobs over 20 years and 36,000 support positions at related suppliers and service companies. Separately, the US Commerce Department has reached a preliminary agreement to award Micron up to $275 million to expand and modernize its facility in Manassas, Virginia to help update its wafer production.
Micron New York fab rendering.
…
NetApp and Futurum have produced a report titled Cloud, Complexity, AI: The Triple Threat Demanding New Cyber Resilience Strategies. The four main findings are:
Cloud Security Risks – Misconfigurations and vulnerabilities in hybrid multi-cloud environments are now among the top threats, outpacing traditional attacks like ransomware.
Tool Sprawl Challenges – Seventy percent of respondents use more than 40 cyber security tools, with 84 percent citing operational complexity as a major inhibitor to cyber resiliency, underscoring the need for tool consolidation and integrated solutions to streamline operations.
AI in Cybersecurity – Forty percent of organizations are leveraging AI for threat detection, with plans to expand its use for automating response and recovery.
Increased Investment – Over 90 percent of respondents plan to increase cyber security budgets in the next 12 to 18 months, focusing on integrated and proactive solutions.
HPC storage systems engineer Jake Wynne at Oak Ridge National Laboratory (ORNL) has developed an hsi_xfer transfer tool to simplify the process of transferring vast amounts of data at speed, and ORNL is making it available for public use. It was created to help transfer large quantities of data from the facility’s tape library-based High Performance Storage System (HPSS) to the newly deployed nearline storage system, Kronos. Kronos is a 134 PB, multi-programmatic nearline storage system that also provides back-end tape-based backups for all data as a disaster-recovery measure. The hsi_xfer tool is named after the hsi command-line interface for HPSS.
The goal was to prevent overloading the tape library’s robotic tape-retrieval mechanisms and provide users with a simpler, more efficient way to access their data. “By batching all requested files from a single tape and streaming them together, the script minimizes robotic movement, reduces tape loading and seek times, and helps extend the lifespan of both the tapes and the hardware,” Wynne explained.
The script has demonstrated superior transfer performance compared to other tools, while also offering data-integrity features typically found in tools such as Globus – features that are not available in the standard hsi. Like Globus, the script includes a checkpointing feature that allows users to recover quickly from interrupted transfers, resuming exactly where they left off with minimal overhead.
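The ORNL script itself is not yet public, but the tape-batching and checkpointing ideas it describes can be sketched generically (the file-to-tape mapping, function names, and checkpoint format below are hypothetical, not from hsi_xfer):

```python
import json
from collections import defaultdict
from pathlib import Path

def batch_by_tape(files: dict[str, str]) -> dict[str, list[str]]:
    """Group requested files by the tape they live on, so each tape is
    mounted once and its files are streamed together, minimizing robot moves."""
    batches: defaultdict[str, list[str]] = defaultdict(list)
    for path, tape_id in files.items():
        batches[tape_id].append(path)
    return dict(batches)

def transfer(files: dict[str, str], checkpoint: Path) -> list[str]:
    """Transfer files tape by tape, skipping anything already checkpointed
    so an interrupted run resumes exactly where it left off."""
    done = set(json.loads(checkpoint.read_text())) if checkpoint.exists() else set()
    transferred = []
    for tape_id, paths in sorted(batch_by_tape(files).items()):
        for path in paths:
            if path in done:
                continue  # copied in a previous run; no tape access needed
            # A real tool would invoke hsi here; we just record the copy
            transferred.append(path)
            done.add(path)
            checkpoint.write_text(json.dumps(sorted(done)))
    return transferred
```

Running `transfer` a second time with the same checkpoint file returns an empty list, since every file is already recorded as done.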
Wynne is currently stewarding the script through ORNL’s open-sourcing process, and he hopes it will soon appear in the OLCF’s public GitHub repository.
…
Wedbush analyst Matt Bryson tells subscribers that, per the Korea Daily, Samsung won’t break into Nvidia’s supply chain this year and is hoping to penetrate the largest consumer of HBM next year. And DigiTimes today suggests that Samsung has reassigned 2,000 engineers to work on this initiative. Assuming DigiTimes is correct, we view this shift of more resources to HBM as confirming that Samsung still has work to do to fix its HBM – a result that should benefit Hynix (and Micron) on the HBM front, but could hurt general DRAM dynamics if Samsung is forced to reallocate HBM capacity to standard DRAM production.
…
Silk, which provides superfast software-defined cloud storage, has appointed Ronen Schwartz to its board of directors. “Schwartz, who currently serves as CEO of K2View, brings decades of experience in cloud storage, data management, and enterprise solutions to Silk’s board. Previously, he served as SVP and GM of Cloud Storage at NetApp, where he played a pivotal role in driving cloud adoption and innovation.”
…
Sofia, Bulgaria-based primary data storage supplier StorPool Storage has received all 4- and 5-Star reviews as part of the Gartner Peer Insights Customers’ Choice 2024 program, based on criteria including product capabilities, evaluation and contracting, integration and deployment, and service and support. StorPool’s 85 percent share of 5-Star ratings over the past 12 months is among the highest achieved by vendors in the Primary Storage Platforms category. One hundred percent of peers evaluating the StorPool Storage Platform on Gartner Peer Insights recommend the product. The supplier’s overall rating was 4.8 out of 5 stars.
…
Mårten Strömberg (left) and Stefan Röse (right).
Stravito, the Swedish SaaS startup (2017) that provides a central place to store and analyze market research data, has appointed Mårten Strömberg VP of Strategy and Analytics to enhance Stravito’s strategic direction and data-driven growth. He previously worked as director of analytics at fintech Zettle by Paypal, and as a strategy consultant at The Boston Consulting Group. Stravito has also appointed Stefan Röse as head of client development.
…
Object cloud storage supplier Wasabi has added IBM Cloud‘s London datacenter to Wasabi’s storage regions. It says leveraging IBM’s Multizone Region (MZR) helps joint customers address their evolving regulatory requirements and adopt AI and other emerging technologies on a secured, enterprise cloud platform. IBM Cloud MZRs are composed of three or more datacenter zones, each an Availability Zone. Wasabi “aims to help new and existing UK customers utilizing Wasabi AiR, an intelligent media storage solution for the sports, media, and entertainment segment, address their data residency requirements. Wasabi AiR customers such as Liverpool Football Club (LFC) in the Premier League, will be able to access and leverage key sports data securely across a hybrid cloud infrastructure.”
Veeam Software has announced the availability of Veeam Data Platform v12.3 with a host of new features.
Boosting identity protection and access management includes support for backing up Microsoft Entra ID. “The ability to protect both Active Directory and Entra ID is critical, because identity-based attacks are massively on the rise. Hackers are choosing to log in as opposed to hacking in whenever possible,” explained Krista Case, research director at The Futurum Group.
Other features in the new release include Recon Scanner and Veeam Threat Hunter to improve proactive threat analysis. Utilizing generative AI delivers more intelligent protection of enterprise data, with advanced reporting powered by Veeam Intelligence.
In addition, Veeam Data Platform v12.3 expands data portability by offering complete Nutanix AHV protection, with application-aware processing, in-depth alerting, and analytics for Nutanix AHV workloads.
Fully integrated with Veeam Data Cloud Vault v2, the latest update also provides instant access to secure, air-gapped, encrypted and immutable cloud storage that is “predictably priced,” according to Veeam.
Anand Eswaran.
“Security starts with identity and authentication, which is why providing backup for Microsoft Entra ID is an important addition to Veeam Data Platform v12.3. We can now protect the most used identity and access management system, and combine it with new proactive threat analysis tools that better prepare enterprises for cyber threats,” said Anand Eswaran, CEO at Veeam.
Recon Scanner provides proactive threat assessment technology – identifying adversary tactics, techniques, and procedures (TTPs) before a cyber attack. It is built from the patent-pending Coveware technology used to counter thousands of ransomware incidents. Coveware was acquired by Veeam earlier this year.
Veeam Threat Hunter offers accelerated signature-based malware scanning, allowing organizations to cast a wider net and detect dormant threats in their backups to ensure business continuity. Threat Hunter employs machine learning and heuristic analysis to identify advanced threats such as polymorphic malware, with threat signatures and ML models updated multiple times per day to detect newly developing threats.
Another new feature is the IoC Tools Scanner, which notifies organizations of the appearance of indicators-of-compromise (IoC) tools commonly used by cyber criminals on protected machines – covering techniques such as lateral movement, exfiltration, command and control, and credential access. The tool promises to significantly reduce the Mean Time to Detect (MTTD) threats.
While Veeam claims to be worth $15 billion after recent investment, making it the most valuable organization in its field, this week’s completion of the Cohesity-Veritas merger may have knocked Veeam off its number one market share perch.
N2WS has updated its cloud-native backup and disaster recovery (BDR) software to address the growing threats of ransomware and other attacks.
There are new features in the latest version of its Backup and Recovery software for AWS and Azure users. Its enterprise and MSP customers can use them to cut operational costs and streamline cross-cloud and multi-cloud data management.
Ohad Kritz
Ohad Kritz, CEO and co-founder of N2WS, stated: “We excel in protecting data—that’s our specialty and our core strength. While others may branch into endpoint security or threat intelligence, losing focus, we remain dedicated to ensuring our customers are shielded from the evolving IT threat landscape. These latest multi-cloud enhancements are a testament to our unwavering commitment to data protection and delivering vendor neutrality to our customers.”
This hints that Google Cloud Platform support may be on N2WS’s roadmap. The point about losing focus is perhaps a criticism of data protection suppliers such as Cohesity and Rubrik, who are going deeper into cyber-resilience than N2WS.
The new software brings Data Lifecycle Management (DLM) to Azure virtual machine (VM) backups. By charging per VM rather than based on VM size, N2WS says it enables up to 500 percent in cost savings on both licensing and storage. It also supports a broader range of storage options (Azure Blob, AWS S3, Wasabi S3), offering enhanced flexibility for users. By adding support for Wasabi’s third-party S3-compatible storage repositories, N2WS now offers customers highly cost-effective storage options at competitive prices.
The N2WS platform now leverages cloud-native, platform-independent block-level snapshot technology to deliver maximum-speed read and write access across Azure, AWS, and third-party repositories.
Enhanced recovery scenarios functionality removes the uncertainty of identifying resources during a planned failover, ensuring a quick and efficient failback once operations are restored. The new functionality introduces Custom Tags for Recovery Scenarios, enabling the retention of backup tags and the addition of fixed tags, such as marking Disaster Recovery targets. This improvement also enhances the differentiation between original and DR targets during failover. New recovery options for AWS FSx ONTAP storage have been added, further strengthening recovery capabilities.
A Partial Retry for Policy Execution Failures feature enhances backup efficiency and reliability by retrying only the failed resources, without reprocessing the successful ones. This not only saves time and costs but also boosts reliability by targeting only the failed backups, minimizing unnecessary policy-wide failures.
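A partial-retry policy of this kind can be sketched generically (the backup callback and resource names below are hypothetical, not N2WS’s API):

```python
from typing import Callable

def run_policy(resources: list[str],
               backup: Callable[[str], bool],
               max_retries: int = 2) -> tuple[set[str], set[str]]:
    """Run a backup policy, then retry only the resources that failed,
    leaving successful backups untouched. Returns (succeeded, failed)."""
    succeeded: set[str] = set()
    pending = list(resources)
    for _ in range(1 + max_retries):
        failed = [r for r in pending if not backup(r)]
        succeeded.update(set(pending) - set(failed))
        if not failed:
            break
        pending = failed  # retry only the failures, not the whole policy
    return succeeded, set(pending) - succeeded
```

Resources that succeed on the first pass are never re-run, which is where the time and cost savings come from; anything still failing after the retry budget is reported back rather than failing the whole policy.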
S3 Compliance Locking has a smarter algorithm which reduces API request volumes, “significantly lowering operational costs” while maintaining compliance and functionality.
Kritz said: “Efficiency and affordability are at the core of what sets us apart and resonates with our customers. Our mission is to provide exceptional protection while helping them cut costs, improve backup efficiency, and simplify restore and disaster recovery testing. This commitment to making things ridiculously easy and fast has been the cornerstone of our success, and we’re excited to continue supporting our customers in the years to come.”
N2WS is changing its name to N2W Software. We expect the company to start paying more attention to Gen AI in the future, rather than just saying its SW can work with AI tools. For example, an N2WS blog says: “Once AI-based features in disaster recovery tools have fully matured, they’ll remain just one of many types of capabilities necessary to enable successful disaster recovery.
“Businesses will also require features like automated disaster recovery testing, as well as the ability to immediately recover data across cloud regions, across cloud accounts and across entire cloud platforms – all of which you can already do with N2WS. This means that even as they experiment with AI-enabled disaster recovery, organizations will also need tried-and-true solutions like N2WS at their disposal to deliver core disaster recovery capabilities.”
Interview. We came across UK-based Predatar when its Cyber Recovery Orchestration achieved Veeam Ready Security status in November. This was followed by an email interview with marketing head Ben Hodge, which revealed some surprising points about the company’s relationship with Index Engines and Rubrik.
Blocks & Files: How does Predatar’s technology and use relate to that of Index Engines?
Ben Hodge
Ben Hodge: Index Engines is the solution that Predatar’s capabilities get confused with the most, but they are actually tackling the problem of malware in backups differently. In my opinion, Predatar and Index Engines actually complement one another beautifully (not that anyone is using both as far as I am aware).
Index Engines excels at scanning data on ingest, detecting anomalies and identifying encrypted data that might slip past other tools. This gives customers peace of mind that only unencrypted data is vaulted, which is crucial for securing backups.
Predatar is all about validating recoverability. It focuses on continuous recovery testing and deep malware scanning of the data that resides within backups (and snapshots). Predatar continually mounts and powers-up workloads to validate recoverability and then runs a malware scan using built-in XDR tools in a Predatar CleanRoom environment.
The Predatar team claims to have found malware hidden in backups at 79 percent of the customers that use it, which goes to show that Predatar is solving a very real problem.
In addition to validating the cleanliness and recoverability of data, Predatar also records recovery times, allowing users to validate their SLAs/RTOs.
I think a lot of the confusion comes from the fact that Predatar also includes backup anomaly detection – but that is really secondary to our core proposition of recovery validation. Predatar uses the anomaly detection as just one mechanism to help prioritize which backups/snapshots to test next.
So, to recap… Index Engines ensures clean, encryption-free vaulting, while we guarantee reliable and proven malware-free recovery. [See Index Engines’ viewpoint on this below.]
Blocks & Files: How does Predatar’s technology compare to a data protection supplier’s in-house malware detection and reaction facilities – Rubrik’s, for example?
Ben Hodge: Rubrik, IBM, Veeam, and HPE have all acknowledged (not all of them publicly) that Predatar does something different to their own offerings, and for now at least they consider us to be partners rather than competitors. Rubrik, IBM, and HPE all participated in our recent Control24 user summit. As per the question above, I’ll focus on Rubrik specifically here.
Rubrik has some fantastic tools to spot and track malware and signs of encryption in data. However, once Rubrik finds an issue, that’s where it stops. Rubrik doesn’t provide any automation to push suspect data to a cleanroom for testing, and Rubrik doesn’t have a way of doing recovery testing at scale. In the words of Rubrik, “Predatar picks up where Rubrik stops.”
Blocks & Files: Does HPE’s Zerto offer something similar?
Ben Hodge: Not really. HPE has something called the Cyber Resilience Vault (Zerto is a component of it), but it’s missing a “Predatar-type” component that validates the recoverability and cleanliness of the data that is stored in it.
HPE is offering a version of its Cyber Resilience Vault with Predatar incorporated into it. You can see Shariq Aqil, Global Field CTO at HPE, explaining the solution here (skip to 12:03).
Blocks & Files: I see Predatar has evolved. Its Crunchbase profile says “the Predatar platform enables the traditional VAR to evolve into an MSP, through the delivery of remote management, on-premise BaaS.” How did this happen?
Ben Hodge: Step 1. The beginning: Predatar was originally designed as a tool to help the Silverstring MSP scale up by automating repetitive maintenance tasks and reporting – meaning Silverstring could manage more customers’ backups without employing more people.
Step 2. What the Crunchbase blurb is about: We realized that other MSPs (or resellers with an ambition to become MSPs) could benefit from Predatar in the same way Silverstring did. We went to market with Predatar as a tool for MSPs.
Step 3. The cyber recovery piece: We continued to develop the platform with more features that MSPs would benefit from. These included automated recovery testing and, later, malware detection tools that would allow MSPs to guarantee the backups they were managing were recoverable and infection-free.
Alistair Mackenzie
The reaction to these new tools was huge. Interest in the new tools eclipsed the original reporting and management tools.
Step 4. Marketing focus: Earlier this year we took the decision to focus solely on the cyber recovery proposition, which is applicable to MSPs and end users alike.
Blocks & Files: Crunchbase lists no funding details. I guess Predatar got its original funding from Silverstring? And I think Predatar was founded from within Silverstring by CEO Alistair Mackenzie. Is that right?
Ben Hodge: Correct, Predatar and Silverstring are owned by Alistair. Predatar had no external funding.
Bootnote
Index Engines’ Rob Mossi, Senior Director of Product Marketing, tells us: “CyberSense integrates with recovery products and finds hidden ransomware corruption deep into the content of files and databases with 99.99 percent accuracy, preventing reinfections by finding both active and dormant ransomware executables in the backups, and providing forensic details of the attack. This ensures trusted data can be recovered by the backup software to minimize downtime and data loss.”