
Samsung targets AI with first mass production of quad-level cell (QLC) V-NAND

Samsung Electronics has begun mass production of the industry’s first 1 TB quad-level cell (QLC) 9th-generation V-NAND. The memory technology promises high-performance, high-capacity NAND solutions across various AI applications.

Samsung Gen 9 QLC V-NAND

The firm previously claimed the industry’s first triple-level cell (TLC) 9th-generation V-NAND production in April this year.

The QLC 9th-generation V-NAND will be targeted at branded consumer products, mobile universal flash storage, PCs, and server SSDs for enterprises and cloud service providers.

Samsung’s Channel Hole Etching technology has been used to achieve the industry’s highest layer count (286 layers) with a double-stack structure. The new QLC V-NAND boasts 86 percent higher density than the Korean giant’s previous generation, we are told.

In addition, the firm says data retention performance is improved by “roughly 20 percent” over previous versions.

Predictive program technology also anticipates and controls cell state changes to minimize unnecessary actions. As a result, Samsung claims it has doubled write performance, and improved data input/output speed by 60 percent.

On top of that, data read and write power consumption is said to have decreased by “about 30 percent and 50 percent,” respectively, using low-power design technology. Samsung claims it has reduced the voltage that drives NAND cells, and has minimized power consumption by sensing only the necessary bit lines (BLs).

3D NAND layer count generations.

“Kicking off the successful mass production of QLC 9th-generation V-NAND just four months after the TLC version allows us to offer a full line-up of advanced SSD solutions that address the needs for the AI era,” declared SungHoi Hur, executive vice president and head of flash product and technology at Samsung Electronics.

“As the enterprise SSD market shows rapid growth with stronger demand for AI applications, we will continue to solidify our leadership in the segment through our QLC and TLC products.”

VDURA appoints former Cray CEO to the board and brings back founder to drive AI effort

VDURA, which rebranded from Panasas earlier this year, has boosted its board and formed a technical advisory board to drive its AI and HPC data infrastructure software efforts.

Peter Ungaro

It has named former Cray president and CEO Peter Ungaro as a board director, and has added Panasas founder Garth Gibson and Datrium co-founder Hugo Patterson to the technical advisory board.

“Peter’s deep expertise in AI and HPC applications, ecosystems, and business models will be invaluable as VDURA builds out its SaaS business model,” said VDURA. It added that Hugo Patterson was known for his “groundbreaking” work in data deduplication and virtualization at Data Domain, which was acquired by EMC in 2009. He co-founded Datrium, where he was chief scientist, which was bought by VMware in 2020.

At Cray, Ungaro collaborated with the likes of AMD, Intel, Microsoft, and Nvidia. He spearheaded the development of the Cray XC series supercomputers, the Cray Urika-GX analytics platform, and YarcData. After HPE acquired Cray for $1.3 billion in 2019, Ungaro became senior vice president and general manager at HPE, leaving the company in 2021.

Garth Gibson

“I have always recognized the impressive capabilities of Panasas’ PanFS technology, and I join the board at a time when the company is transitioning to a SaaS business model,” said Ungaro. “I see great potential for VDURA to innovate and lead in the era of AI data infrastructure.”

Gibson’s return blends the company’s past with its “exciting future”, said VDURA, which noted that he co-authored the pNFS IETF rationale covering parallel file systems. Most recently, Gibson served as the president and CEO of the Vector Institute, an AI institute that supports researchers, businesses, and governments in responsibly developing and adopting AI.

“When I founded Panasas, I set out to build a software platform that would serve as a parallel file system with enterprise attributes, and I look forward to helping VDURA evolve the platform to meet the needs of tomorrow’s AI and HPC markets,” said Gibson.

Hugo Patterson

“Pete is one of the most respected leaders in HPC and AI, and I’m pleased to have his insights shape our strategy and direction. And having Garth and Hugo work with our architecture team is critical, as their expertise will ensure we adopt the right architectural approach for long-term success,” said VDURA CEO Ken Claffey.

He added: “We have big plans in the coming months, as VDURA is positioning itself to become the leading data infrastructure software platform for AI and HPC applications.”

To support growth, VDURA recently launched its Velocity Partner Program.

Dell and Red Hat upshift OpenShift for APEX

Dell and Red Hat have enhanced the APEX Cloud Platform for Red Hat OpenShift with upgraded software, virtualization, and more storage options.

Dell launched its APEX Red Hat OpenShift service almost a year ago, supporting Red Hat’s OpenShift container orchestration service on Dell PowerEdge servers and PowerFlex block storage with SSDs. APEX is a set of services through which Dell supplies compute, storage, and networking gear via a public cloud-like subscription model.

Alyson Langon, Dell

Alyson Langon, Dell’s director of Product Marketing for Multicloud and as-a-Service, blogged: “By managing everything on a single platform, operations are simplified.” To help with this, the APEX Red Hat OpenShift offering “now includes Red Hat OpenShift Virtualization by default … for organizations to run, deploy, and manage new and existing virtual machine workloads, alongside containers and AI/ML workloads.”

Dell and Red Hat have “expanded the block storage options from PowerFlex to include PowerStore and Red Hat OpenShift Data Foundation,” giving customers who want a smaller footprint additional options.

OpenShift Data Foundation (ODF) includes Ceph, NooBaa multicloud object storage and data management, persistent container storage, a multi-cloud object gateway, and snapshot, clone, encryption, replication, and DR data services. PowerStore is Dell’s unified block and file, dual-controller storage array.
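
As a hedged illustration of what “persistent container storage” means in practice, here is how an application on OpenShift might claim a Ceph-backed volume from ODF using the Kubernetes Python client. The storage class name is a common ODF default but is an assumption – it varies per cluster:

```python
# Sketch: claim Ceph-backed persistent storage from ODF via the official
# Kubernetes Python client. "ocs-storagecluster-ceph-rbd" is an assumed
# ODF storage class name; check your cluster's actual classes.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="ocs-storagecluster-ceph-rbd",  # assumption
        resources=client.V1ResourceRequirements(requests={"storage": "50Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="demo", body=pvc
)
```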

Existing PowerStore and PowerFlex appliances can also be used by the new APEX OpenShift platform.

On top of that, customers can hook up Dell’s enterprise block, file, and object storage offerings, such as PowerScale and ObjectScale, which Langon says is “important for the growing number of AI workloads.” With AI workloads in mind, we note that PowerEdge servers are x86-based, supporting 5th generation Xeons, and can run Nvidia GPUs as well. PowerScale is also Nvidia SuperPOD-certified.

Red Hat OpenShift v4.14 and v4.16 are supported by this new APEX OpenShift offering, which adds support for CPU hot-plug and picking a specific node for live migration.

Dell APEX Cloud Platform for Red Hat OpenShift diagram

Langon says the new APEX OpenShift cloud platform eases management of intricate systems and diverse workloads, streamlining IT operations for improved efficiency, and reducing the complexities associated with managing multiple versions of software.

Samsung and SK hynix gain NAND market share as Kioxia, Micron and Western Digital lose it

TrendForce NAND market tracking reveals that Samsung and SK hynix-Solidigm have grown their market share at the expense of the Kioxia-Western Digital joint venture and Micron.

Its latest quarterly report positions Samsung as the industry leader with 36.9 percent of NAND flash revenue in 2024’s second quarter, and again places SK Group (SK hynix and Solidigm) as the second-largest supplier, with 22.1 percent.

The next three suppliers were Kioxia (13.8 percent), Micron (11.8 percent) and Western Digital (WDC – 10.5 percent). The total revenue of $16.8 billion was 14.2 percent higher than Q1’s $14.7 billion.

We looked at TrendForce’s previous reports and charted its view of the suppliers’ market share history over the past two years:

We can see that SK Group has grown its share consistently since the start of 2023, with Samsung also increasing its share from the second 2023 quarter. The share losers have been Kioxia, Micron, and WDC – particularly Kioxia. As Kioxia and WDC are in a NAND manufacturing joint venture, we have added their shares together (dotted line on chart), and we can see that SK Group’s rise has coincided with a quite dramatic fall in their joint share of the market.

The TrendForce report shows that Kioxia grew its share from 2024’s Q1 to Q2, rising from 12.4 percent to 13.8 percent, while WDC lost share, dropping from 11.6 percent to 10.5 percent. SK Group also lost a little share, slipping from 22.2 percent to 22.1 percent, while Micron gained a tenth of a point – 11.7 percent rising to 11.8 percent.

Kioxia’s fiscal 2024 Q1 revenues of ¥428.5 billion ($3.1 billion) were a record, up 33 percent on the year-ago ¥322.1 billion ($2.3 billion), with net income rising from ¥10.3 billion ($73 million) to ¥69.8 billion ($500 million). It said: “Demand for datacenter and enterprise SSDs is growing due to normalization of customer inventories and AI demand.”

We’re told by TrendForce that “Micron attributes its Q2 revenue growth to the strong uptake of high-capacity enterprise SSDs and plans to shift its product focus, expecting continued growth in enterprise SSD shipments in Q3.” Also: “WDC plans to launch two new products to capture opportunities in the AI market.”

Both Samsung and SK Group are focusing more on enterprise SSDs.

TrendForce forecasts that NAND industry revenue in the third 2024 quarter “is expected to remain largely flat compared to the previous quarter.”

Datashelter has global ambitions for its automated backup

French backup startup Datashelter is fleshing out its product line as it seeks to add customers outside its home country. The company was founded in Toulouse in 2023 and has been selling products since March 2024.

Datashelter told the recent IT Press Tour in Istanbul, Turkey, that it currently has around 50 customers for its automated data backup system, mainly generated through CEO Malo Paletou’s connections from his previous job as a consultant for SMEs. Its customers’ index-based backups are end-to-end encrypted using AES-256 and TLS protocols, and stored in S3-compatible datacenters in France.
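
Datashelter hasn’t published its client internals, but the general pattern it describes – client-side AES-256 encryption, TLS in transit, objects landing on an S3-compatible endpoint – can be sketched in a few lines. Everything here (endpoint, bucket, filenames, key handling) is illustrative:

```python
# Minimal sketch of client-side AES-256 encryption before upload to an
# S3-compatible store - the general pattern, not Datashelter's actual code.
import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # in practice, a managed key
nonce = os.urandom(12)                     # must be unique per object

with open("customers.sql", "rb") as f:
    ciphertext = AESGCM(key).encrypt(nonce, f.read(), None)

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.fr-par.example.com",  # hypothetical French DC
)
# TLS protects the transfer; AES-GCM protects the object at rest.
s3.put_object(Bucket="backups", Key="customers.sql.enc", Body=nonce + ciphertext)
```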

Paletou outlined the typical data management problems for companies employing fewer than 30 staff. How do you make sure your backups are performed, and how do you ensure your data is restorable? Also, where and how will you store your backups?

Malo Paletou, Datashelter

“As a DevOps consultant I have seen many different types of infrastructure, and deployed many backup solutions,” said Paletou. “Most SMEs without backups are facing two pain points: unpredictable pricing, and a bewilderingly wide backup solutions market. We provide an easy response to these complexities.”

Backup automation covers both files and databases, and currently three main types of databases are supported: PostgreSQL, MongoDB, and MySQL. It’s a fully integrated solution, from the web interface to the client servers, and alerting is included. The service is currently hosted on OVHcloud infrastructure, and further cloud support is being planned elsewhere, Paletou told us.

“It’s a super easy-to-use backup solution that opens the world of backups to a non-tech person. It relies on the S3 standard protocol for its communications,” he added.

The price of the SaaS service starts at €7 a month for 1 TB of backed-up data, with incremental backup, compression and decompression, and AES-256 encryption, for one backed-up server. You can find a backup cost calculator here.
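
Using only that published starting price, a back-of-envelope estimate is easy to sketch; the linear per-TB scaling below is our assumption for illustration, not Datashelter’s published rate card:

```python
# Back-of-envelope cost estimate from the published starting price
# (EUR 7/month covering 1 TB on one server). Linear scaling beyond 1 TB
# is an assumption; use Datashelter's own calculator for real quotes.
EUR_PER_TB_MONTH = 7.0

def monthly_cost(terabytes: float) -> float:
    return terabytes * EUR_PER_TB_MONTH

print(f"2 TB backed up: EUR {monthly_cost(2):.2f}/month")  # EUR 14.00
print(f"Annual, 2 TB:   EUR {monthly_cost(2) * 12:.2f}")   # EUR 168.00
```

At that assumed rate, 2 TB works out to €168 a year, consistent with the “few hundred euros per year” the Klapp Agency cites below.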

Customers include the Klapp Agency, a web agency that rents VPS and dedicated servers to host its clients’ websites. It chose Datashelter to secure those websites for “just a few hundred euros per year.” Ten websites are backed up daily, consuming 2 TB of storage.

Imajing, a mobile mapping company, serves millions of images and videos to its clients, relying on a large PostgreSQL database that Datashelter backs up. 500 TB of data is backed up across six servers.

The company says it is developing a reseller network mainly made up of small IT consulting agencies that deploy the product as part of wider customer projects. But it is also talking to “large” channel reseller partners as part of its evolving go-to-market.

Later this quarter, Datashelter will also offer Kubernetes backup support, and in the fourth quarter the firm will add serverless integration and Windows compatibility, said Paletou. In 2025, he said, there will be a “bring your own device” option, whereby users pay only a license fee of about €8 a month and use their own S3 buckets.

A Paletou blog provides more information about Datashelter’s index-based backup approach.

Bootnote

There is a separate and unconnected Data Shelter colocation and disaster recovery company based in Florida.

Spectra Logic streamlines content management in Avid Media Composer

Spectra Logic has introduced its RioPanel content management and archiving application for Avid Media Composer environments.

RioPanel “seamlessly integrates” into the Media Composer user interface, says Spectra. The application promises to enhance media production workflows by allowing users to manage, archive, and access Avid and non-Avid content directly within the Media Composer user interface.

Developed using the Media Composer Panel SDK, RioPanel software offers “unparalleled flexibility and efficiency,” the vendor claims, enabling editors and producers to move content easily and quickly to and from Avid Media Composer without leaving the platform.

RioPanel screen grab from demo video

“Available for standalone and shared Media Composer environments, RioPanel streamlines the media production process, reducing time and complexity,” said Spectra.

With the ability to archive Avid assets, including master clips, sub-clips, sequences, and bins, the application ensures that all media is preserved in its native format, with full project and bin metadata captured and indexed for easy search and retrieval.

Additionally, the application is storage agnostic, supporting a heterogeneous mix of flash, disk, tape, and cloud storage, allowing administrators to avoid vendor lock-in and choose the best storage options for their needs.

Hossein ZiaShakeri, Spectra Logic

“As media production demands continue to grow, organizations need tools that not only enhance their workflows and facilitate collaboration, but also provide greater flexibility and control over their content,” said Hossein ZiaShakeri, SVP of business development and strategic alliances at Spectra Logic, and a 40-year Spectra veteran.

Raymond Thompson, senior director of worldwide partners and alliances at Avid Technology, added: “By offering a seamless integration within Media Composer, RioPanel significantly enhances our users’ ability to manage both Avid and non-Avid content, ensuring a more efficient and streamlined editing process.”

Other useful features of the app include comprehensive search to filter and discover content based on multiple types of metadata and content, and true scalability, with the ability to scale performance and availability from several users to hundreds. Watch a RioPanel demo video here.

Druva AI assistant adds cyber incident support copilot

SaaS data protector Druva has launched Dru Investigate, a GenAI-powered copilot that promises faster and more accurate cyber incident detection and response.

The company’s Copilot-like assistant, Dru, was launched in October last year, letting admins and users request custom reports, ask follow-up questions, and act on AI-powered suggestions to fix backup failures within reports. The Dru Assist customer support variant was announced in July, providing natural language answers, suggestions, and instant responses for real-time guidance on Druva setup, configurations, and problems. Dru Investigate focuses on user/admin SecOps-type activities.

Stephen Manley, Druva

Druva CTO Stephen Manley stated: “During cyber investigations, security teams know what data they need, but often don’t know where to find it – while IT teams know their data but not what the security team needs. Druva connects these teams with the insight and centralized access to the right data at the right time.”

It “helps guide teams through investigating and analyzing protected data.”

Dru Investigate users can detect if attackers are misusing admin credentials by spotting unusual behaviors, like creating shadow accounts or destroying backup data, and take action to address potential breaches. It can search across all protected data to find indicators of compromise and artifacts for quicker remediation and recovery.

Dru copilots are built on Amazon Bedrock’s customizable set of large language models (LLMs) combined with private retrieval-augmented generation (RAG). Druva emphasizes that its Dru AI products do not access or learn from customer data, which is encrypted and not shared with any third parties. Dru Investigate “ensures secure analysis and works exclusively with an organization’s metadata to safeguard its sensitive information.” Cohesity is also working with Bedrock.
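
Druva hasn’t published Dru Investigate’s internals, but the metadata-only RAG pattern it describes can be sketched generically on Bedrock. The toy retrieval step, the audit record shapes, and the model ID below are all illustrative assumptions, not Druva’s implementation:

```python
# Sketch of the private-RAG pattern described above: retrieve only backup
# *metadata* (never file contents), then ask a Bedrock-hosted LLM to
# reason over it. Record shapes and model ID are examples.
import boto3

metadata_records = [
    "2024-08-12 03:14 admin2 created account 'svc-backup-tmp'",
    "2024-08-12 03:20 svc-backup-tmp deleted 412 snapshots",
]

def retrieve(query: str, records: list[str]) -> list[str]:
    # Toy keyword retrieval standing in for a real vector search.
    return [r for r in records if any(w in r for w in query.split())]

context = "\n".join(retrieve("deleted snapshots", metadata_records))
prompt = (f"Given these audit events:\n{context}\n"
          "Is this consistent with ransomware staging?")

bedrock = boto3.client("bedrock-runtime")
resp = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model
    messages=[{"role": "user", "content": [{"text": prompt}]}],
)
print(resp["output"]["message"]["content"][0]["text"])
```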

Baskar Sridharan, AWS VP of AI and Infrastructure, said: “Dru Investigate is an ideal, real-world example of how AI can tangibly and positively impact organizations and people today, across a wide range of functions. Advanced cyber threats necessitate AI-driven response, and we’re proud to partner with Druva to bring intelligence to data security.”

Overall, Druva says “Dru Investigate simplifies data investigations, dramatically accelerating decision-making and facilitating collaboration in high-pressure situations.”

NeuroPace IT director Bill Teeple said: “Dru Investigate exemplifies the power of AI. It streamlines the investigation process to find, analyze, and mitigate data risk. Rather than building complex search queries, simply being able to just ask questions and instantly access insights will save a lot of time and speed up decision-making. Data lives everywhere and is constantly being generated, and I see Dru Investigate speeding up my ability to analyze and act on critical data.”

We believe that every data protection supplier will be urged by customers to set up similar support assistance copilots, and so will the major storage system suppliers as well. Such assistants will surely become table stakes. Dru Investigate is now available to all Druva customers at no extra cost. 

Where retail meets the intelligent edge, great things are happening

PARTNER CONTENT: Most retailers are saddled with aging, heterogeneous IT environments spread across wide geographical areas, making it difficult to adopt the latest and greatest advancements of the data-driven, artificial intelligence (AI) age. With the ability to consolidate resources and run new and legacy applications in a cloudlike manner, retailers can embrace AI while getting the most out of legacy investments.

When I think of the modern retailer, I think of “A Tale of Two Realities.” On the one hand, we live in this digital age where analytics, AI and generative AI (GenAI) promise to change the world as we know it. On the other hand, we’re well into the 21st century, and retailers are still dealing with the same old challenges they’ve faced for decades — if not centuries. Some reports suggest labor costs are growing, margins are tightening, and shrink and theft losses are at an all-time high. And while AI and GenAI introduce promising capabilities to address these challenges, most retailers are faced with distributed, heterogeneous IT footprints. These two realities leave the modern retail organization like a kid in a candy shop with a pocket full of savings bonds: The value to get what you want is there, but it takes an extra step to unlock it.

For example, think of a typical retailer with hundreds or thousands of brick-and-mortar locations. Then imagine the proverbial IT closet at the back of every store. For most retailers, this closet is a bit of a mess — lots of hardware of varying ages and capabilities from multiple vendors, all individually siloed for a particular application and only partially utilized. Each was duly selected and deployed to solve a specific business need, whether that’s running the point-of-sale (POS) machine or the video surveillance system or something else. But a jumble of aging equipment, complete with security vulnerabilities, network connectivity issues and the ensuing suck on IT resources, is not the right foundation for the retailers of tomorrow.

You need to be able to deploy new applications at the edge in a cloudlike manner — quickly and cost-effectively. But purchasing and rolling out all new infrastructure to every store is a non-starter; you need to continue leveraging some of these legacy IT investments, both hardware and software.

Addressing the edge attack surface

Consolidating workloads with an edge operations software platform built on cloud-based technologies can solve these challenges by disaggregating the hardware and software lifecycle so you can deploy applications as virtual machines (VMs) or in containers. This means you can continue to use software applications as long as you need to without having to worry about the underlying hardware. Plus — and this is a very powerful plus for retailers and other industries with extremely distributed IT estates — you can also now deploy and manage it all from a distance.

So, when it comes time to deploy a new application you can simply select or create an application blueprint (which is basically a configurable script that may even be site-specific) and deploy it across all locations with just a couple of clicks. Many of these blueprints already exist in public catalogs, or they can be created on private catalogs for home-grown or lesser-known applications.
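
The blueprint mechanics aren’t shown in this piece, so here is a purely hypothetical sketch – invented names, invented API – of what rolling one configurable, site-aware blueprint across a fleet of stores could look like:

```python
# Purely hypothetical sketch of fleet-wide blueprint rollout. The names
# and "API" here are invented for illustration; they are not Dell
# NativeEdge's actual interface.
BLUEPRINT = {
    "name": "pos-v2",
    "image": "registry.example.com/retail/pos:2.4",
    "kind": "container",  # could equally reference a VM image
    "site_overrides": {"store-0042": {"replicas": 2}},  # site-specific config
}

def deploy(blueprint: dict, sites: list[str]) -> None:
    for site in sites:
        overrides = blueprint.get("site_overrides", {}).get(site, {})
        print(f"pushing {blueprint['name']} to {site} "
              f"with {overrides or 'defaults'}")

deploy(BLUEPRINT, ["store-0001", "store-0042", "store-1187"])
```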

There’s no doubt that expanding your IT footprint expands your attack surface. That’s why deploying new technology with Zero Trust principles is a great start toward mitigating security risks. Zero Trust means that the applications, the data, the system, and the infrastructure itself are all cryptographically signed in the factory, and the orchestrator then uses a public key to make sure that it’s the exact device that left the factory. Nobody’s tampered with it, nobody’s touched it and it’s going to be very secure.
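
As a minimal sketch of that sign-in-the-factory, verify-at-the-orchestrator flow – assuming Ed25519 keys and an invented manifest format, since neither is specified here – the logic looks like this with Python’s cryptography package:

```python
# Sketch of factory signing and orchestrator verification. The manifest
# fields are invented; the key scheme (Ed25519) is an assumption.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# -- factory side: sign the device manifest, keep the private key offline
factory_key = Ed25519PrivateKey.generate()
manifest = b"device=NE-0042;firmware=2.1.0;apps=sha256:ab12..."
signature = factory_key.sign(manifest)

# -- orchestrator side: verify with the published public key
public_key = factory_key.public_key()
try:
    public_key.verify(signature, manifest)
    print("device attested: exactly what left the factory")
except InvalidSignature:
    print("manifest tampered with - quarantine the device")
```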

And as we all know, there’s no such thing as perfect security. Hackers are always looking for ways to infiltrate and compromise systems; it’s just a question of when and how you’re going to respond when it happens. The best way to address security breaches is to act quickly. Server telemetry can be used in conjunction with an operations software platform to help security teams rapidly detect and respond to breaches. The Dell edge operations software further gives you the ability to restore back to a known-good state or quickly tear down a VM and deploy another one.

Taken all together, an edge operations software platform like Dell NativeEdge, combined with advanced security and automation features, can help you leverage legacy IT investments, address your biggest business challenges, deploy AI and GenAI at the edge faster and improve cybersecurity — all while reducing truck rolls and increasing the speed of deployment.

To learn more, you can watch the webinar where Samir Sandesara and I discuss how we helped a large retailer overcome common IT challenges during a recent customer engagement.

Contributed by Dell Technologies.

Cohesity gets closer to threat hunter CrowdStrike

Cohesity is deepening its strategic partnership with CrowdStrike so that customers threat hunting on backup copies can investigate incidents while preventing adversaries from enacting countermeasures.

CrowdStrike’s real-time threat-hunting software has a sensor function that is regularly updated with new threat details covering emerging malware and other invasive system activity. Even though a faulty Falcon sensor update in July, attributed to human error, sent 8.5 million Windows systems into meltdown, grounding thousands of flights worldwide, delaying medical services, and downing some US states’ 911 emergency services, the supplier’s services are still highly valued.

Cohesity CTO Craig Martell, who was the first Chief Digital and Artificial Intelligence Officer (CDAO) for the US Department of Defense, stated: “Elevating your organization’s threat detection and response is crucial in today’s threat environment, especially with AI at the disposal of cyber adversaries. Secondary data estates offer a perfect opportunity for minimizing attackers’ advantages and, together with CrowdStrike, our customers can enhance their threat hunting and response while also automating defenses across their security stack.”

Craig Martell, Cohesity

Cohesity has its own threat-hunting capability with DataHawk. This combines threat protection, via scans for attack indicators, and ML-based data classification to identify sensitive or critical data. It set up a DataHawk integration with CrowdStrike and its Falcon LogScale dashboard in November last year for faster correlation, investigation, and response to incidents in one location. This provided closed-loop detection and response for attacks directly within the CrowdStrike Falcon platform.

A Cohesity spokesperson told us that in 2023, 75 percent of attacks were malware-free, making detection and containment difficult. Adversaries have moved to using more effective means, such as credential harvesting and exploiting vulnerabilities to break through legacy defenses while using AI and other advanced technologies to evolve their techniques rapidly.

We’re told that by implementing Cohesity’s clean-room design and integrated tooling, customers gain specialized forensic capabilities to analyze malware, investigate breaches, and understand attack vectors without risking contamination in the broader IT environment.

Daniel Bernard, CrowdStrike

CrowdStrike chief business officer Daniel Bernard said: “Our continued partnership with Cohesity and latest joint efforts reflect our shared commitment to cyber resilience. To stay ahead, enterprises benefit from streamlining threat intelligence and response efforts while also harnessing their vast secondary data to gain security insights.”

Cohesity competitor Rubrik linked up with CrowdStrike in March, feeding data to CrowdStrike’s Falcon XDR (Extended Detection and Response) product. Commvault has a Metallic-CrowdStrike integration and Veeam also has a CrowdStrike integration. Druva has developed its own threat-hunting service as well and a CrowdStrike integration plays a part in this.

Cohesity says it’s the only backup vendor that provides multiple modes of threat scanning: its own in-built threat feed, custom YARA rules, and now through CrowdStrike. It reckons it has the largest security ecosystem in the industry as it works with 23 security partners. Cohesity presents itself as a data protection time machine, telling us: “CrowdStrike is the leader in detecting adversaries, but the picture they operate on is always the present state. With Cohesity, we now bring the past into visibility, allowing analysis on secondary storage against newly characterized threats. This integration brings the same high standards that SecOps teams place on their primary data, to the backups.”
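
Cohesity doesn’t document its scanning interface here, but as a rough illustration of the custom-YARA-rules mode, this is what a rule hunting for backup-destroying commands might look like when run over restored files with the yara-python package. The rule, string, and path are invented examples, not Cohesity’s tooling:

```python
# Illustrative custom YARA scan over restored backup data, using the
# yara-python package. The rule and path are examples only.
import yara

rules = yara.compile(source=r"""
rule BackupDestroyer {
    strings:
        $cmd = "vssadmin delete shadows" nocase
    condition:
        $cmd
}
""")

matches = rules.match("/mnt/cleanroom/restore-2024-08-12/updater.exe")
for m in matches:
    print(f"hit: {m.rule}")  # e.g. BackupDestroyer
```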

Get more info on the Cohesity-CrowdStrike partnership here when the blog goes live later today.

WEKA report finds storage a challenge for AI projects

An S&P Global Market Intelligence report commissioned by WEKA found that storage and data management issues are a problem area for enterprises implementing AI projects.

The second annual Global Trends in AI report surveyed more than 1,500 practitioners and decision-makers to identify underlying trends influencing AI adoption and implementation.

WEKA supplies parallel file system software that can be used to feed data to training and inferencing activities. Generative AI activity is spreading like wildfire and organizations are having to deal with implementation problems such as storage and data management architectures and GPU availability. There are regional disparities in availability, suggesting global AI demand is outpacing access to AI accelerators and GPUs needed to power AI projects.

John Abbott, 451 Research

John Abbott, principal research analyst at 451 Research, part of S&P Global Market Intelligence, said: “One of the most striking takeaways from our 2024 Trends In AI study is the astonishing rate of change that’s taken place since the onset of ChatGPT 3 and the first wave of generative AI models reached the market in early 2023. In less than two years, generative AI adoption has eclipsed all other AI applications in the enterprise, defining a new cohort of AI leaders and shaping an emergent market of specialty AI and GPU cloud providers.”

Findings include:

  • 88 percent of organizations are actively investigating GenAI, far outstripping other applications such as prediction models (61 percent), classification (51 percent), expert systems (39 percent), and robotics (30 percent).
  • 24 percent of organizations say they already see GenAI as an integrated capability deployed across their organization. 37 percent have generative AI in production but not yet scaled. Just 11 percent are not investing in generative AI at all.
  • 33 percent of survey respondents have reached enterprise scale, with AI projects being widely implemented and driving significant business value, up from 28 percent last year.
  • North America leads in enterprise AI adoption, with 48 percent of North American respondents indicating that AI is widely implemented, compared to APAC (26 percent) and EMEA (25 percent).
  • Product improvement and operational effectiveness are key investment drivers, with organizations leveraging AI to improve product or service quality (42 percent), target increased revenue growth (39 percent), improve workforce productivity (40 percent) and IT efficiencies (41 percent), and accelerate their overall pace of innovation (39 percent).

The report found that the most frequently cited technological inhibitors to AI/ML deployments are storage and data management (35 percent) – significantly greater than computing (26 percent), security (23 percent), and networking (15 percent). 

Liran Zvibel, WEKA

There is a sustainability angle, with nearly two-thirds (64 percent) of organizations saying they are concerned about the impact of AI/ML projects on their energy use and carbon footprint; 25 percent indicate they are very concerned. Some 42 percent of organizations indicated that they have invested in energy-efficient IT hardware/systems to address the potential environmental impacts of their AI initiatives over the past 12 months. Of those, 56 percent believe this has had a “high or very high” impact.

Liran Zvibel, cofounder and CEO at WEKA, said: “Like the internet, the smartphone, and cloud computing before it, AI represents a paradigm shift that will leave an indelible mark on business and society and is already defining a new generation of industry leaders and disruptors. Unlike past technology transitions, AI’s adoption and maturation are growing with unprecedented velocity.”

Read the full 2024 Global Trends in AI study from S&P Global Market Intelligence here.

Seagate livens up Lyve Cloud

Seagate has added cost-saving and data access features, plus new geographic regions, to its Lyve Cloud object storage service.

Lyve Cloud is an S3-compatible public cloud data storage service based on Seagate software running in Equinix and other colocation centers. The software is integrated with ISV Hammerspace’s Global Data Environment management and orchestration software, and Zadara’s compute service. Other partners include Commvault, Milestone, and Veeam, with Lyve Cloud compatible with most backup software providers. Lyve Cloud has no API or egress fees, so customers control expenses and can take advantage of effectively unlimited capacity to scale up or down as needed.

Melyssa Banda, Seagate

Melyssa Banda, Seagate’s VP of enterprise systems and solutions, stated: “With the new features and expanded regional coverage of Lyve Cloud Object Storage, we help our global customers looking to build their regional presence and scale cloud-native workloads such as AI and machine learning, camera-to-cloud workflows, surveillance and edge data applications.”

Currently supported regions include US-East (N. Virginia), US-Central (Texas), US-West (N. California), EU-West-1 (London), and AP-Southeast-1 (Singapore). A Tokyo, Japan, region has been added, along with one in Frankfurt, Germany.

Existing Lyve Cloud features include WORM, data at rest encryption, single sign-on, and Cross-Origin Resource Sharing (CORS), which enables web applications to request resources from different origins.
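
As a quick illustration of CORS on an S3-compatible bucket, the standard S3 API call via boto3 looks like this; the endpoint URL and origin are placeholders, not real Lyve Cloud values:

```python
# Sketch: enable CORS on an S3-compatible bucket so a web app on another
# origin can read and write objects. Endpoint and origin are placeholders.
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.us-east-1.example.com")
s3.put_bucket_cors(
    Bucket="web-assets",
    CORSConfiguration={
        "CORSRules": [{
            "AllowedOrigins": ["https://app.example.com"],  # the web app
            "AllowedMethods": ["GET", "PUT"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,  # how long browsers cache the preflight
        }]
    },
)
```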

The new features are:

  • Settable object data lifecycle rules to save cost by moving data from a hot tier to a cooler tier, and by deleting unwanted objects (see the sketch after this list).
  • No minimum object retention, enabling users to manage and delete data as needed. This reduces storage costs and improves data management efficiency.
  • Near-instant geo replication for fast distributed workforce data access and protection against disasters.
  • White Label offering so Seagate channel partners can expand their revenue with their own branded S3-compatible storage services.
  • IP source control, which allows customers to have greater control over access by specifying a range of IP source addresses.
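
Because Lyve Cloud speaks the S3 API, a lifecycle rule like the first item above would presumably be set with the standard call; the endpoint and the cooler-tier storage class name below are assumptions for illustration:

```python
# Sketch of a lifecycle rule via the standard S3 API. "GLACIER" is AWS's
# cool-tier name, used here as a stand-in - Lyve Cloud's own tier
# identifiers may differ.
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.us-east-1.example.com")
s3.put_bucket_lifecycle_configuration(
    Bucket="surveillance-footage",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "cool-then-expire",
            "Filter": {"Prefix": "cam-feeds/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 365},  # delete unwanted objects
        }]
    },
)
```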

Lyve Cloud is competing with regional CSPs offering S3 storage services, like OVH, and also multinational CSPs such as Backblaze and Wasabi, which both offer S3 cloud storage. All of these suppliers exist underneath the AWS S3 price umbrella.

Arcitecta expands Mediaflux to tackle edge, multi-site, burst workloads

Australian data manager Arcitecta has extended its Mediaflux product to cover edge, multi-site, and burst workloads.

Mediaflux is billed as a Universal Data System: file and object storage and data management software with a single namespace and a tiering capability covering on-premises SSDs, disk, and tape, plus the public cloud, with a data mover and metadata database.

The sales pitch is that the new Mediaflux Multi-Site, Mediaflux Edge, and Mediaflux Burst products enable users within geographically dispersed workforces to collaborate better, spend far less time waiting for needed data, and avoid unnecessary investments in compute resources in burst periods when usage times peak.

Jason Lohrey, Arcitecta

Jason Lohrey, Arcitecta CEO and founder, stated: “We are expanding our ecosystem of data management solutions that fit together like a puzzle, ensuring data is moved to the right place at the right time to optimize value for users.”

Arcitecta says transmitting data over distance has become commonplace for organizations with increasingly distributed workforces, leading to latency and application performance issues. Another problem is peak compute resource needs with purchased compute left under-used between the peak or burst periods.

Mediaflux Multi-Site provides “follow the sun” pipelines in which data can move across networks to destinations where people and computer resources are located. It offers either a global file system (GFS) using a single global namespace for subcontinental distances, or a federated file system (FFS) that uses multiple independent synchronized namespaces for data sets separated by intercontinental distances – across oceans, for example. FFS is, Arcitecta says, especially beneficial for content and creative organizations such as visual effects production houses, where teams may collaborate between New York and Tokyo, or Los Angeles and the United Kingdom.

Mediaflux Edge has a hub-and-spoke design with frequently accessed data cached at edge sites in a bid to reduce redundant data transfers, ease network traffic, and meet applications’ low latency requirements. A copy of the data is kept within a central repository to provide a full recovery capability should a threat or unplanned event occur at the edge. This is somewhat similar to how Nasuni, Panzura, and CTERA cloud file services operate.

Arcitecta says it allows customers to realize the cost efficiencies of centralized hubs, while enabling edge users to run applications at full speed without over-provisioning multiple datacenters.

Mediaflux Burst, we’re told, enables customers to expand access to compute processing resources in the public cloud, or at any other site with spare capacity, when they exceed their current on-premises computing resources. Arcitecta says this decouples compute from the data’s location, allowing data to be stored for optimal cost or security while computing tasks are optimized for performance or scalability.

Arcitecta will showcase its new products, bundled with Dell PowerScale and ECS/ObjectScale, in the Dell Technologies booth #7.A45 at IBC2024, September 13-16, in Amsterdam.

The Mediaflux Multi-Site, Edge, and Burst solutions are available immediately and can be purchased from Arcitecta, Dell Technologies, or any Dell reseller. Find out more at the Mediaflux product website.