Arctera is collaborating with Red Hat, and its InfoScale cyber-resilience product is certified on Red Hat OpenShift Virtualization.
…
Hitachi Vantara recently suffered a malware attack, and we asked: “In the light of suffering its own malware attack, what would Hitachi Vantara say to customers about detecting, repelling, and recovering from such attacks?”
The company replied: “Hitachi Vantara’s recent experience underscores the reality that no organization is immune from today’s sophisticated cyber threats, but it reinforces that how you detect, contain, and respond to such events is what matters most. At Hitachi Vantara, our focus has been on acting with integrity and urgency.
“We would emphasize three key lessons:
1. Containment Measures Must Be Quick and Decisive. The moment we detected suspicious activity on April 26, 2025, we immediately activated our incident response protocols and engaged leading third-party cybersecurity experts. We proactively took servers offline and restricted traffic to our data centers as a containment strategy.
2. Recovery Depends on Resilient Infrastructure. Our own technology played a key role in accelerating recovery. For example, we used immutable snapshot backups stored in an air-gapped data center to help restore core systems securely and efficiently. This approach helped reduce downtime and complexity during recovery.
3. Transparency and Continuous Communication Matter. Throughout the incident, we’ve prioritized open communication with customers, employees, and partners, while relying on the forensic analysis and our third-party cybersecurity experts to ensure decisions are based on verified data. As of April 27, we have no evidence of lateral movement beyond our environment.
“Ultimately, our experience reinforces the need for layered security, rigorous backup strategies, and well-practiced incident response plans. We continue to invest in and evolve our security posture, and we’re committed to sharing those insights to help other organizations strengthen theirs.”
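Hitachi Vantara does not describe the tooling behind the immutable, air-gapped snapshots mentioned in point 2 above. As a generic, minimal sketch of the immutability idea only – not Hitachi’s actual setup – the following uses S3 Object Lock to write a backup object that cannot be altered or deleted before its retention date; the bucket name, key, and retention date are placeholder assumptions.

```python
import datetime

import boto3  # assumes an S3 or S3-compatible endpoint with Object Lock enabled on the bucket

s3 = boto3.client("s3")

# Upload a backup artifact under a COMPLIANCE-mode lock: neither the object nor its
# retention setting can be shortened or removed until the retain-until date passes.
with open("db-snapshot-2025-04-26.tar.zst", "rb") as snapshot:
    s3.put_object(
        Bucket="backup-vault",                          # placeholder bucket name
        Key="snapshots/db-snapshot-2025-04-26.tar.zst",  # placeholder key
        Body=snapshot,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=datetime.datetime(
            2025, 7, 26, tzinfo=datetime.timezone.utc
        ),
    )
```

Air-gapping is an operational property layered on top of this: the locked copies live in an environment with no standing network path from production systems.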
…
HYCU has extended its support for Dell’s PowerProtect virtual backup target appliance to protect SaaS and cloud workloads with backup, disaster recovery, data retention, and offline recovery.
Support for Dell PowerProtect Data Domain Virtual Edition in HYCU R-Cloud SaaS complements existing support for Dell PowerProtect Data Domain in the R-Cloud Hybrid Cloud edition. HYCU says it is the first company to offer protection of data across on-premises, cloud, and SaaS environments to what it calls the most efficient and secure storage on the market: PowerProtect. HYCU protects more than 90 SaaS apps, and says what’s new here is that, among those 90-plus offerings, only a handful of backup suppliers offer customer-owned storage as the backup target.
…
OWC announced the launch and pre-order availability of the Thunderbolt 5 Dock with up to 80Gb/s of bi-directional data speed and 120Gb/s for higher display bandwidth needs. You can connect up to three 8K displays or dual 6K displays on Macs. It works with Thunderbolt 5, 4, 3, USB4, and USB-C devices, and delivers up to 140W of power to charge notebooks. The dock has 11 versatile ports, including three Thunderbolt 5 (USB-C), two USB-A 10Gb/s, one USB-A 5Gb/s, 2.5GbE Ethernet (MDM ready), microSD and SD UHS-II slots, and a 3.5mm audio combo jack. The price is $329.99 with a Thunderbolt 5 cable and external power supply.
…
At Taiwan’s Computex, Phison announced the Pascari X200Z enterprise SSD, a PCIe Gen 5 drive with near-SCM latency and – get this – up to 60 DWPD endurance, designed for the high write-endurance demands of generative AI and real-time analytics. It also announced aiDAPTIVGPT, supporting generative tasks such as conversational AI, speech services, code generation, web search, and data analytics. Phison also launched aiDAPTIVCache AI150EJ, a GPU memory extension for AI edge and robotics systems that improves edge inference performance by optimizing time to first token (TTFT) and increasing the number of tokens processed.
The E28 PCIe 5.0 SSD controller, built on TSMC’s 6nm process, is the first in the world to feature integrated AI processing, and achieves up to 2,600K/3,000K IOPS (random read/write) – over 10 percent higher than comparable products – with up to 15 percent lower power consumption than competing 6nm-based controllers.
The E31T DRAM-less PCIe 5.0 SSD controller is designed for ultra-thin laptops and handheld gaming devices with M.2 2230 and 2242 form factors. It delivers high performance, low power consumption, and space efficiency.
Phison also announced PCIe signal IC products:
- The world’s first PCIe 5.0 Retimer certified for CXL 2.0
- PCIe 5.0 Redriver with over 50% global market share
- The industry’s first PCIe 6.0 Redriver
- Upcoming PCIe 6.0 Retimer, Redriver, SerDes PHY, and PCIe-over-Optical platforms co-developed with customers
…
IBM-owned Red Hat has set up an open source llm-d project and community, with llm-d standing, we understand, for Large Language Model – Development. It is focused on AI inference at scale and aims to make production generative AI as omnipresent as Linux. It features:
- vLLM, the open source de facto standard inference server, providing support for emerging frontier models and a broad list of accelerators, including Google Cloud Tensor Processing Units (TPUs); a minimal single-node usage sketch follows this list.
- Prefill and Decode Disaggregation to separate the input context and token generation phases of AI into discrete operations, which can then be distributed across multiple servers.
- KV (key-value) Cache Offloading, based on LMCache, shifts the memory burden of the KV cache from GPU memory to more cost-efficient and abundant standard storage, like CPU memory or network storage.
- Kubernetes-powered clusters and controllers for more efficient scheduling of compute and storage resources as workload demands fluctuate, while maintaining performance and lowering latency.
- AI-Aware Network Routing for scheduling incoming requests to the servers and accelerators that are most likely to have hot caches of past inference calculations.
- High-performance communication APIs for faster and more efficient data transfer between servers, with support for NVIDIA Inference Xfer Library (NIXL).
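For readers who have not used vLLM, the inference server the features above build on, here is a minimal single-node usage sketch. It does not exercise llm-d’s distributed features (disaggregated prefill/decode, KV cache offload, AI-aware routing), and the model name is a placeholder assumption.

```python
from vllm import LLM, SamplingParams

# Single-node vLLM: load a model and run a batch of prompts through the engine.
# llm-d layers distributed scheduling, disaggregation, and KV cache offload on top of this.
llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct")  # placeholder model name
params = SamplingParams(temperature=0.7, max_tokens=64)

outputs = llm.generate(["Explain KV caching in one sentence."], params)
for output in outputs:
    print(output.outputs[0].text)
```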
CoreWeave, Google Cloud, IBM Research and NVIDIA are founding contributors, with AMD, Cisco, Hugging Face, Intel, Lambda and Mistral AI as partners. The llm-d community includes founding supporters at the Sky Computing Lab at the University of California, originators of vLLM, and the LMCache Lab at the University of Chicago, originators of LMCache. Red Hat intends to make vLLM the definitive open standard for inference across the new hybrid cloud.
…
Red Hat has published a tech paper entitled “Accelerate model training on OpenShift AI with NVIDIA GPUDirect RDMA.” It says: “Starting with Red Hat OpenShift AI 2.19, you can leverage networking platforms such as Nvidia Spectrum-X with high-speed GPU interconnects to accelerate model training using GPUDirect RDMA over Ethernet or InfiniBand physical link. … this article demonstrates how to adapt the example from Fine-tune LLMs with Kubeflow Trainer on OpenShift AI so it runs on Red Hat OpenShift Container Platform with accelerated NVIDIA networking and gives you a sense of how it can improve performance dramatically.”
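The article’s worked example is Kubernetes-native (Kubeflow Trainer jobs on OpenShift), but at the training-code level GPUDirect RDMA mostly shows up as NCCL configuration. The sketch below is a rough illustration, not the article’s example: the interface and HCA names are placeholder assumptions that would normally be injected into the training pods by the platform rather than hard-coded.

```python
# Rough sketch: point NCCL at an RDMA-capable fabric before initializing torch.distributed.
# Values are placeholders; on OpenShift AI they would typically come from the pod spec/env.
import os

import torch
import torch.distributed as dist

os.environ.setdefault("NCCL_SOCKET_IFNAME", "eth0")  # placeholder bootstrap/control NIC
os.environ.setdefault("NCCL_IB_HCA", "mlx5_0")       # placeholder RDMA-capable adapter
os.environ.setdefault("NCCL_NET_GDR_LEVEL", "SYS")   # permit GPUDirect RDMA system-wide

dist.init_process_group(backend="nccl")  # rank/world size supplied by the job launcher
rank = dist.get_rank()
device = torch.device(f"cuda:{rank % torch.cuda.device_count()}")
tensor = torch.ones(1, device=device)
dist.all_reduce(tensor)  # traverses the IB/RoCE transport when NCCL selects it
print(f"rank {rank}: all-reduce result {tensor.item()}")
dist.destroy_process_group()
```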

…
SK hynix says it has developed a UFS 4.1 product adopting the world’s highest 321-layer 1Tb triple-level cell (TLC) 3D NAND for mobile applications. It has a 7 percent improvement in power efficiency compared with the previous generation based on 238-layer NAND, and a slimmer 0.85mm thickness, down from 1mm before, to fit into ultra-slim smartphones. It supports a data transfer speed of 4,300 MBps, the fastest sequential read for a fourth-generation UFS, while improving random read and write speeds by 15 percent and 40 percent, respectively. SK hynix plans to win customer qualification within the year and ship in volume from Q1 2026 in 512GB and 1TB capacities.
…
Snowflake’s latest quarterly revenues (Q1 FY2026) showed 26 percent Y/Y growth to $1 billion. It’s still growing fast, and its customer count is 11,578, up 19 percent. There was a loss of $429,952,000, compared to the previous quarter’s $325,724,000 loss – 32 percent worse. It’s expecting around 25 percent Y/Y revenue growth next quarter.

…
Data lakehouser Starburst announced a strategic investment from Citi without revealing the amount.
…
Starburst announced a new Starburst AI Agent and new AI Workflows across its flagship offerings: Starburst Enterprise Platform and Starburst Galaxy. AI Agent is an out-of-the-box natural language interface for Starburst’s data platform that can be built and deployed by data analysts and application-layer AI agents. AI Workflows connect vector-native search, metadata-driven context, and robust governance, all on an open data lakehouse architecture. With AI Workflows and the Starburst AI Agent, Starburst says, enterprises can build and scale AI applications faster, with reliable performance, lower cost, and greater confidence in security, compliance, and control. AI Agent and AI Workflows are available in private preview.
…
Veeam’s Kasten for Kubernetes v8 release has new File Level Recovery (FLR) for KubeVirt VMs, allowing granular restores so organizations can recover individual files from backups without restoring entire VM clones. A new virtual machine dashboard offers a workload-centric view across cluster namespaces, simplifying the process of identifying each VM’s Kubernetes-dependent resources and making backup consistency easier to configure. Kasten v8 supports x86, Arm, and IBM Power CPUs and is integrated with Veeam Vault. It broadens support for the NetApp Trident storage provisioner with backup capabilities for ONTAP NAS “Economy” volumes. A refreshed user interface simplifies onboarding, policy creation, and ongoing operations.
Veeam has released Kasten for Modern Virtualization – a tailored pricing option designed to align with Red Hat OpenShift Virtualization Engine. Veeam Kasten for Kubernetes v8 and Kasten for Modern Virtualization are now available.
…
Wasabi is using Kioxia CM7 Series and CD8 Series PCIe Gen 5 NVMe SSDs for its S3-compatible Hot Cloud Storage service.
…
Zettlab launched its flagship product, the Zettlab AI NAS, a high-performance personal cloud system that combines advanced offline AI, enterprise-grade hardware, and a sleek, modern design. Now live on Kickstarter, it gives users a smarter, more secure way to store, search, and manage digital files, with complete privacy, powerful performance, and an intuitive experience. It transforms a traditional NAS into a fully AI-powered data management platform with local computing, privacy-first AI tools, and a clean, user-friendly operating system, and is available to early backers at a special launch price.

It’s a full AI platform running locally on powerful hardware:
- Semantic file search, voice-to-text, media categorization, and OCR – all offline
- Built-in Creator Studio: plan shoots, auto-subtitle videos, organize files without lifting a finger
- ZettOS: an intuitive OS designed for everyday users with pro-level power
- Specs: Intel Core Ultra 5, up to 200TB, 10GbE, 96GB RAM expandable
…