UK-based Bridgeworks has a deal with Spectra Logic under which the tape and disk systems vendor will resell Bridgeworks’s PORTrockIT product, which uses artificial intelligence, machine learning and data parallelization to mitigate the effects of latency and packet loss over wide area networks (WAN acceleration). It will be used in Spectra Logic’s BlackPearl object filestore product. The partners have been working together since 2005, starting with an iSCSI-to-SCSI bridge for one of Spectra Logic’s compact library products. Spectra uses RESTful interfaces (S3 and S3-like) to replicate data between physical sites, and Bridgeworks offers products that accelerate that data movement.
…
Software supply chain protection supplier Codenotary announced Trustcenter 4.0, with capabilities to manage data in the VEX (Vulnerability Exploitability eXchange) format via a newly designed, machine learning-guided search engine. Vulnerability information contained in VEX can be analyzed more effectively, enabling organizations to prioritize and address security issues in their software. The new version of Trustcenter is being deployed at a top-tier Swiss bank. Codenotary Trustcenter starts at $5,000 per year for a team of five developers as a cloud offering; the on-premises version starts at $38,000 per year, which includes 24×7 support. For more information and a free trial, go here.
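For context, a VEX document’s core payload is a set of statements asserting whether a product is actually exploitable via a given vulnerability. Here is a minimal sketch of one such statement, modeled loosely on the OpenVEX style – the field values are illustrative and this is not Trustcenter’s API:

```python
# Hand-rolled illustration of the kind of statement VEX carries; field
# names follow the OpenVEX style and are not Codenotary Trustcenter's API.
import json

statement = {
    "vulnerability": "CVE-2021-44228",                # e.g. Log4Shell
    "products": ["pkg:maven/org.example/app@1.2.3"],  # package URL (purl)
    "status": "not_affected",  # or: affected, fixed, under_investigation
    "justification": "vulnerable_code_not_in_execute_path",
}
print(json.dumps(statement, indent=2))
```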
…
DDN is working with GPU-as-a-Service farm Lambda Labs to supply its storage to Lambda’s customers, as indicated in a tweet featuring a video that explores the DDN-Lambda relationship. VAST Data has previously said it’s working with Lambda Labs to supply storage for GPU server customers – so Lambda, it appears, has a dual supply policy.
…
Dremio CEO Sendur Sellakumar predicts the generative AI hype train will keep gathering steam: “I think we’re still in a GenAI hype cycle … Things around GenAI have been very compelling. We hardly talked about GenAI a year ago; now we do, which is excellent. Generative AI will be the future of user interfaces. All applications will embed generative AI to drive user interaction, which guides user productivity. Companies are embedding GenAI to do semantic searching to solve some of those old data problems – discovery becomes easier, creating pipelines becomes more accessible.”
…
The Futurum Group, which acquired The Evaluator Group, has announced the retirement of well-known analyst Randy Kerns. Camberly Bates wrote: “As an executive, his leadership developed new storage systems at StorageTek, Tandem, Fujitsu, Sun Microsystems, and ProStor. He joined the Evaluator Group in 1999, and then again in 2010. … While Randy is ‘retiring,’ we all know that is probably impossible for him to do. Thus, we are pleased he will work with the Evaluator Group, now a part of The Futurum Group team, as an emeritus to provide counsel and strategy for our IT End User clients.”
…
General Micro Systems (GMS) has announced new removable mass storage for US military applications: the X9 Spider Storage Module. The 128TB of portable solid state storage fits in the palm of the hand – perfect, it says, for battlefield AI datasets or mobile sensor data recorders. X9 Storage includes a 4- or 8-drive removable cartridge supporting 2.5-inch or M.2 solid state storage. The small form factor (SFF) system offers up to 128TB of removable storage capacity, CSfC or FIPS 140-2 secure encrypted SSDs, and up to 80Gbit/sec of data I/O streaming. Details and a downloadable datasheet here.
…
Hammerspace global marketing head Molly Presley argues: “One of the biggest challenges facing organizations is putting distributed unstructured data sets to work in their AI strategies while simultaneously delivering the performance and scale not found in traditional enterprise solutions. It is critical that a data pipeline is designed to use all available compute power and can make data available to the cloud models such as those found in Databricks and Snowflake. In 2024, high-performance local read/write access to data that is orchestrated globally in real time, in a global data environment, will become indispensable and ubiquitous.”
…
HPE is building two new supercomputers at the High-Performance Computing Center of the University of Stuttgart (HLRS). In the first stage, a transitional supercomputer, called Hunter, will begin operation in 2025. This will be followed in 2027 with the installation of Herder, an exascale system that will provide an expansion of Germany’s high-performance computing (HPC) capabilities. Hunter and Herder will offer researchers world-class infrastructure for simulation, artificial intelligence (AI), and high-performance data analytics (HPDA). The total combined cost for Hunter and Herder is €115 million ($126.6 million).
Hunter will be based on the HPE Cray EX4000 supercomputer, which is designed to deliver exascale performance to support large-scale workloads across modeling, simulation, AI, and HPDA. Each of the 136 HPE Cray EX4000 nodes will be equipped with four HPE Slingshot high-performance interconnects. Hunter will also leverage the next generation of HPE’s Cray ClusterStor storage system, and the HPE Cray Programming Environment, which offers programmers a set of tools for developing, porting, debugging, and tuning applications.
…
Kioxia Europe GmbH has started mass production of an Exceria Plus G2 2TB UHS-I microSDXC memory card for smartphone owners, content creators and gamers. The card stacks sixteen 1 terabit 3D flash memory dies while staying within the SDXC specification’s maximum thickness of 0.8mm. It has a read speed of up to 100MB/sec and a write speed of up to 90MB/sec, and meets UHS Speed Class 3 (U3), Application Class 1 (A1) and Video Speed Class 30 (V30) standards. The 2TB capacity allows users to record over 41 hours of video at 100Mbit/sec. Market availability is expected in the first quarter of 2024.
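As a rough sanity check on that figure: 100Mbit/sec is 12.5MB/sec, or 45GB per hour, so a nominal 2TB works out to roughly 44 hours of recording; the quoted 41-plus hours presumably reflects formatted (usable) capacity after filesystem overhead.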
…
Jim Handy of Objective Analysis has written a blog discussing Memory-Semantic SSDs (MS-SSDs) and NVMe-over-CXL (NVMe-oC). An MS-SSD is an SSD with DRAM in its controller. The DRAM is loaded with the estimated or known next item the host will need from the SSD, making the controller’s DRAM a cache and speeding data access – provided the cache has the correct data item loaded. NVMe-oC uses CXL requests to tell an MS-SSD to put data into its controller’s DRAM, which forms part of a CXL memory pool. This lets a host read data from the SSD’s DRAM without waiting for it to be fetched from the SSD’s slower NAND.
Handy writes: “The basic idea is that many SSD reads are for smaller chunks of data than the standard 4KB delivered by an SSD access. Why move all that data over CXL-io or NVMe over PCIe if the processor only needs one 64-byte cache line of it? NVMe-oC is expected to both reduce the I/O traffic and the effort spent by the host to move the data itself.”
This enables the SSD to pre-fetch data for the host and place it in the controller’s DRAM, ready to be read. NVMe-oC uses CXL.io to access the SSD and CXL.mem to access the memory. Handy writes: “Special commands tell the SSD to write data into that memory or to write from the memory into the SSD, without any interaction from the host, to reduce host-device data movement.” The host needs an NVMe-oC driver.
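There is no public host API for NVMe-oC yet, so the following is purely an illustrative sketch – class and method names are hypothetical – modeling the flow Handy describes: a CXL.io command stages a 4KB block from NAND into the controller’s DRAM, after which the host reads just the 64-byte cache line it needs over CXL.mem:

```python
# Illustrative model only: NVMe-oC has no public host API yet, so every
# name here is hypothetical. It mimics the access pattern described above.

BLOCK, LINE = 4096, 64  # NVMe read granularity vs one CPU cache line


class MemorySemanticSSD:
    """Mock MS-SSD: NAND backing store plus controller DRAM that CXL.mem
    would expose to the host as part of a memory pool."""

    def __init__(self, nand: bytes):
        self.nand = nand               # mock NAND flash contents
        self.dram = bytearray(BLOCK)   # controller DRAM (CXL.mem-mapped)

    def stage(self, lba: int) -> None:
        """Hypothetical CXL.io 'special command': the device copies one
        block from NAND into its own DRAM with no host data movement."""
        off = lba * BLOCK
        self.dram[:] = self.nand[off:off + BLOCK]

    def read_line(self, n: int) -> bytes:
        """Host-side load over CXL.mem: only 64 bytes cross the link,
        not the whole 4KB block."""
        return bytes(self.dram[n * LINE:(n + 1) * LINE])


ssd = MemorySemanticSSD(nand=bytes(1 << 20))  # 1MB of zeroed mock NAND
ssd.stage(0)              # CXL.io: pre-fetch block 0 into device DRAM
line = ssd.read_line(0)   # CXL.mem: fetch just one cache line of it
assert len(line) == LINE
```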
…
Distributed SQL database supplier PingCAP has launched TiDB 7.5, an open source distributed SQL database featuring shardless horizontal scaling, MySQL wire compatibility, support for ACID transactions, and hybrid transactional/analytical processing (HTAP) for real-time operational insights. TiDB 7.5 is the second long-term support (LTS) release of 2023. These new capabilities, PingCAP claims, make TiDB a far better performing alternative to MySQL for large-scale applications. See the TiDB 7.5 Release Notes here.
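MySQL wire compatibility means existing MySQL clients and drivers connect to TiDB unchanged. A minimal sketch (host and credentials are placeholders; TiDB listens on port 4000 by default):

```python
# Any MySQL driver works against TiDB because it speaks the MySQL wire
# protocol; here PyMySQL connects to a local TiDB instance.
import pymysql

conn = pymysql.connect(host="127.0.0.1", port=4000, user="root", database="test")
with conn.cursor() as cur:
    cur.execute("SELECT VERSION()")  # returns a MySQL-compatible version string
    print(cur.fetchone())
conn.close()
```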
…
Cloud data warehouser Snowflake is acquiring Samooha, a data clean room provider. Christian Kleinerman, SVP of Product at Snowflake, blogs: “When businesses share sensitive first-party data with outside partners or customers, they must do so in a way that meets strict governance requirements around security and privacy. Data clean rooms have emerged as the technology to meet this need, enabling interoperability where multiple parties can collaborate on and analyze sensitive data in a governed way without exposing direct access to the underlying data and business logic.”
“Samooha is built for developers and business users and delivers industry specific analysis templates. Samooha’s intuitive user interface makes it easy to build data clean rooms that run as Snowflake Native Apps in the Data Cloud.”
…
StorPool has had Marc Staimer, president of Dragon Slayer Consulting, write a 2024 Data Storage Buyers Guide – a practical paper on evaluating, selecting, and deploying block storage systems. Download a copy here. The StorPool platform is a Storage-as-a-Service (STaaS) offering with a bring-your-own-servers model, combining software with a fully managed data storage service that transforms standard hardware into fast, highly available and scalable block-first storage systems.
…
Toshiba has been taken private by a private equity consortium led by Japan Industrial Partners. The deal is worth 2 trillion yen (approximately $14 billion) and follows years of turmoil, with the debt-ridden company being fought over by Japanese interests and external activist investors. Follow the backlinks here to find out more about the gory details.
…
Japan’s Toshiba Group has reduced costs by 30 percent after adopting Wasabi as its cloud storage supplier. Toshiba Group now stores 85 percent of its data on Wasabi, with unused data on its file servers automatically transferring to Wasabi cloud storage after 30 days, reducing costs and allowing the manufacturing giant to manage rapidly growing volumes of data more effectively. In addition, the elimination of file servers in each department has reduced the operational load.
Toshiba Group operates a 1PB file server, with the volume of data growing ten percent year-over-year as data accumulates through daily operations. The conglomerate previously used on-premises NetApp systems as file servers. However, scaling on-premises storage in anticipation of future data growth proved to be a significant investment. Rather than purchasing additional hardware, Toshiba Group adopted NTT Communications’ Wasabi Tiering for NetApp service, which uses NetApp’s tiering feature to automatically transfer and store infrequently accessed data from NetApp to Wasabi’s cloud storage.
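NTT’s service rides on NetApp’s built-in tiering, but the underlying idea is simple enough to sketch. Below is a minimal, hypothetical illustration – not NTT’s or NetApp’s actual implementation – of the policy described above: files untouched for 30 days are copied to an S3-compatible Wasabi bucket and removed locally. Bucket name, paths, and credentials are placeholders.

```python
# Hypothetical sketch of a 30-day cold-data tiering policy; not NTT's or
# NetApp's actual implementation. Wasabi exposes an S3-compatible API, so
# the standard AWS SDK (boto3) works against its endpoint.
import os
import time
import boto3

COLD_AFTER = 30 * 24 * 3600  # 30 days in seconds, per the policy above

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.wasabisys.com",  # Wasabi's S3-compatible endpoint
    aws_access_key_id="YOUR_KEY",             # placeholder credentials
    aws_secret_access_key="YOUR_SECRET",
)

def tier_cold_files(root: str, bucket: str) -> None:
    """Upload files not accessed for 30+ days to Wasabi, then delete
    the local copy to free file server capacity."""
    now = time.time()
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if now - os.path.getatime(path) > COLD_AFTER:
                s3.upload_file(path, bucket, os.path.relpath(path, root))
                os.remove(path)

tier_cold_files("/mnt/fileserver", "archive-bucket")  # hypothetical names
```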
…