Unstructured data management specialist Komprise has been named an “innovator” in its field by analyst house IDC.
In the analyst’s report, IDC Innovators: Knowledge Management Technologies, Komprise was recognized for its Intelligent Data Management offering, a single platform to analyze, move and manage unstructured data.
Komprise customers are enterprises with petabyte-scale environments, including Pfizer, Marriott, Kroger, NYU and Fossil.
The IDC report says “over 90 percent” of data is unstructured, and that it is a “key asset” of enterprise intelligence, as well as a “big part” of storage costs.
IDC says Komprise reduces complexity in managing unstructured data growth with location-agnostic file analysis and indexing. “That analysis is purpose-built to not ‘get in the way’, i.e., it will not disrupt data movement and operations,” IDC said.
The platform enables migration, data tiering, replication, workflows, and AI data preparedness.
Intelligent Data Management, adds IDC, helps enterprises unlock the value hidden in unstructured data and reduces storage costs. “Proper metadata tagging and access ensures AI solutions can extract and present the right data, at the right time, and to the right person,” it says.
Earlier this month, Komprise announced Smart Data Workflow Manager, a no-code AI data workflow builder that addresses use cases such as sensitive data identification, chatbot augmentation, and image recognition.
Krishna Subramanian
“Being named an IDC Innovator is a great honor and we believe our inclusion indicates how organizations are starting to treat data independently of storage, to ascertain and nurture its true value across hybrid cloud infrastructures,” said Krishna Subramanian, chief operating officer and co-founder of Komprise.
IDC Innovators are vendors with current annual revenues of under $100 million, chosen by an IDC analyst within a specific market, that offer a new technology, a “ground-breaking solution” to an existing issue, and/or an innovative business model. It is not an exhaustive evaluation or a comparative ranking of all companies, says IDC.
In the past, like many early-stage technology companies, VC-backed Komprise has been shy about revealing its sales figures. At least we now know it hasn’t broken the $100 million barrier yet.
Earlier this year, Komprise introduced elastic replication, which it says provides more affordable disaster recovery for non-mission-critical file data at the sub-volume level.
NetApp’s latest quarterly revenues swelled, led by all-flash array demand.
Revenues rose for the third successive quarter to $1.67 billion in the fourth quarter of NetApp’s fiscal 2024, ended April 26, 2024, up 6 percent and exceeding its guidance mid-point. There was a $291 million profit, up 18.8 percent annually, making this NetApp’s 25th successive profitable quarter. This Kurian-led cash machine is so consistent. The all-flash array annual run rate (ARR) beat last quarter’s record to reach $3.6 billion, with $850 million of AFA revenues in the quarter.
Full fy2024 revenues of $6.27 billion were down 1 percent Y/Y, with profits of $986 million down 2.2 percent Y/Y.
Hybrid cloud revenues of $1.52 billion in the quarter rose 6.3 percent, basically driving the business. Cloud revenues, meaning public cloud, remain depressingly low at $152 million, a minuscule 0.7 percent higher Y/Y, as the Anthony Lye-led buying spree still fails to deliver meaningful revenue increases.
George Kurian
CEO George Kurian’s results quote said: “We concluded fiscal year 2024 on a high note, delivering company records for annual gross margin, operating margin, EPS, operating cash flow, and free cash flow and building positive momentum. … In fiscal year 2025, we will remain laser focused on our top priorities of driving growth in all-flash and cloud storage services while maintaining our operational discipline.”
NetApp probably has the best public cloud storage service offerings of all the storage suppliers, with its data fabric spanning the on-prem ONTAP world, software instantiations on AWS, Azure, and GCP, and BlueXP hybrid cloud storage monitoring and management services. Yet significant demand has proved almost illusory, and its competitors, notably Dell, HPE, and Pure, are slowly but steadily catching up. Indeed, Pure has a big announcement coming in this area next month.
Kurian pointed out NetApp’s public cloud progress in the earnings call, saying: “In Q4, we had a good number of takeouts of competitors’ on-premises infrastructure with cloud storage services based on NetApp ONTAP technology, which helped drive our best quarter for cloud storage services with each of our hyperscaler partners. We are well ahead of the competition in cloud storage services and we are innovating to further extend our leadership position.”
He referred to expected public cloud revenue growth, saying: “We expect that cloud first-party and marketplace cloud storage should continue to ramp strongly, which will deliver overall growth in cloud, consistent revenue growth in cloud in fiscal year ’25, stronger in the second half than in the first half.”
Quarterly financial summary
Consolidated gross margin: 71.5 percent vs 68 percent a year ago
Operating cash flow: $613 million vs $235 million a year ago
Free cash flow: $567 million vs $196 million last year
Cash, cash equivalents, and investments: $3.25 billion
EPS: $1.37 vs $1.13 a year ago
Share repurchases & dividends: $204 million in stock repurchases
NetApp widened the gap between its own all-flash array revenues and Pure Storage’s:
Kurian singled out the AFA revenues, saying: “Strong customer demand for our broad portfolio of modern all-flash arrays, particularly the C-series capacity flash and ASA block optimized flash, was again ahead of our expectations.”
A look at NetApp’s quarterly revenue history shows that it reversed the lower Q1 and Q2 revenues this fiscal year with higher ones in Q3 and Q4 to end the year pretty much level pegging with the previous one:
Just like its competitors NetApp is seizing the AI opportunity in front of it, positioning itself as a provider of the data infrastructure foundation for enterprise AI.
Kurian said: “Customers choose NetApp to support them at every phase of the AI lifecycle due to our high performance all-flash storage complemented by comprehensive data management capabilities that support requirements from data preparation, model training and tuning, retrieval-augmented generation or RAG, and inferencing, as well as requirements for responsible AI including model and data versioning, data governance and privacy. We continue to strengthen our position in enterprise AI, focusing on making it easier for customers to derive value from their AI investments.”
He then said: “We had about more than 50 AI wins in Q4 across all elements of the AI landscape I talked about, both in data foundations like data lakes as well as model training and inferencing across all of the geographies. I would tell you that in the AI market, the ramp on AI servers will be much ahead of storage because what clients are doing is they’re building new computing stacks but using their existing data. And so we expect that over time there will be a lot more data created and unified to continue to feed the model. But at this stage, we are in proof of concept. We think that there’s a strong opportunity over time for us and all of the AI growth is factored into our guidance for next year.”
He reckons that: “AI … is the opportunity that will become much more meaningful over time. We are well positioned with the huge installed base of unstructured data, which is the fuel for GenAI, and we are focused on helping customers do in-place RAG and inferencing of that data.”
Next quarter’s revenue outlook is $1.455 to $1.605 billion, a $1.53 billion mid-point which would be a 7.0 percent annual rise. The full fiscal 2025 revenue outlook is $6.45 to $6.65 billion, with the $6.55 billion mid-point representing a 4.5 percent Y/Y increase and NetApp’s highest ever annual revenue.
Fiscal year 2025 projected revenue is the red bar
This would end NetApp’s 11-year failure to beat its fy2013 $6.332 billion high point. George Kurian became CEO in 2015, and reaching a record revenue high point in fy2025, after 10 years in office, would be quite a satisfying feat.
Wedbush analyst Matt Bryson told subscribers: “Q1 and FY’25 guidance both presented upside vs. prior Street expectations, while management’s tendency towards conservatism likely means that NTAP will eventually deliver results closer to the high end of the new forecast.”
Object storage software specialist Scality has made a clarion call for new partners on the back of company growth and profitability.
Jerome Lecat
Scality CEO Jérôme Lecat confirmed the biz is profitable, on the back of 20 percent annual growth, at this week’s Technology Live! Paris – a vendor showcase event for press and analysts.
He said regional and global growth meant the firm was in need of new partners to meet demand for its software, which is targeted at two different segments using two different products.
The RING product is aimed at large organizations building their own substantial cloud infrastructure, and its ARTESCA platform – which has just been updated to version 3.0 – is used by smaller organizations or edge/branch sites that have less dense object storage needs.
Lecat said of the channel: “We have tripled investment in our VAR network and ARTESCA is already handled by the big three distributors – Ingram Micro, Arrow and TD Synnex – but we still need many more partners globally.”
He added: “Even in territories like France, where we are not spread thinly, we still need more partners, and that goes for the likes of Germany and the UK too. If anyone has any potential partners send them to me.”
Lecat said the RING product, which is primarily sold through HPE, only had a four percent churn rate. That might not be that surprising though, as it is used by customers with high numbers of petabytes to deal with – so they’re not exactly going to migrate at the drop of a hat.
As for ARTESCA, over half of that business comes through the Veeam partner channel, with Scality’s technology tightly integrated with the cloud data management/backup king.
At the back end of last year, Scality launched an ARTESCA hardware appliance specifically for Veeam users, which competes with an Object First product. Object First is another object storage partner of Veeam.
SaaS data protection vendor Keepit has set out its stall to win big in a still-fledgling market.
The firm is exploiting increasing demand for SaaS protection – particularly as companies begin to realize they are responsible for backing up their own SaaS data, not the SaaS providers.
At this week’s Technology Live! Paris – a regular data management vendor showcase for press and analysts – Copenhagen, Denmark-headquartered Keepit said it plans to increase the number of SaaS workloads it protects for end customers. The company reaches those customers through channel partners.
Currently, it protects SaaS data generated through eight key business applications/software suites – including Microsoft 365, Google Workspace, Dynamics 365, Salesforce, Azure DevOps, Zendesk, Power Platform, and Entra ID (formerly Azure AD).
Keepit will be increasing this number over the short to medium term, it declared. To do this, it will have to develop the connectors customers will use to back up data from the newly covered applications.
Overall, it’s a market that is also targeted by the likes of cloud data management vendor Veeam, which offers protection for Salesforce and Microsoft 365 data, for instance. And cloud data backup player HYCU has been making big noise in the market too, after commercially launching a specific SaaS backup service about 18 months ago.
Keepit has been building up its own datacenter capacity to support customers’ SaaS data, which can be accessed by them on demand. Six of the seven datacenters the company runs involve capacity it rents in Equinix facilities across the US, Canada, Australia, the UK, Germany, Denmark, and Switzerland. To address data sovereignty requirements, customers can stipulate their data stays in their particular region.
The service is aimed at companies of all sizes and charged per seat/user, with no extra charges for putting in or taking out data from the Keepit backup infrastructure. The cost is around €3 a month for each user, with the average cost coming down for enterprises with large numbers of users.
Sylvain Siou.
Sylvain Siou, VP of go-to-market technology at Keepit, said: “All data storage is included in the cost, so customers have simplicity and total cost control.
“We are much cheaper than storing the data in the public cloud – over time, that cost tends to steadily increase. Our charges are flat.”
Users have to buy seats for each separate workload that is backed up. Asked how Keepit’s charges match up against the likes of HYCU and the many other data backup companies, Keepit maintains it is rarely beaten on price once it has sat down with customers.
One reason Keepit reckons it can win on price is that it runs its own infrastructure, unlike some rivals that pay to store customers’ data in the public cloud.
All Keepit data is stored on high density disks and is deemed as “hot” – so customers can get immediate access to it when it is needed. More expensive flash systems are not seen as “viable” for the service that Keepit provides. There is no room for tape either, as none of the data is regarded as “cold.” The company mainly uses Dell Technologies hardware in its datacenters.
Pure substantially beat its own first fiscal 2025 quarter revenue estimates and is getting more confident about winning a future hyperscaler customer.
The $693.5 million revenue in the quarter ended May 5 was up by nearly 20 percent year-on-year and way past the $680 million outlook predicted in the prior quarter’s earnings report. There was a loss of $35 million, better than the year-ago $67.4 million loss.
Charlie Giancarlo, Pure’s CEO, said: “We are pleased with our Q1 performance, returning to double-digit revenue growth for the quarter.”
CFO Kevan Krysler said: “revenue growth of 18 percent and profitability both outperformed.” Why the surprise revenue jump this quarter? Krysler said: “Two key drivers of our revenue growth this quarter were: one, sales to new and existing enterprise customers across our entire data storage platform; and two, strong customer demand for our FlashBlade solutions, including FlashBlade//E. … we are aggressively competing and winning customers’ secondary and lower storage tiers with our //E family solutions and FlashArray//C.”
There were 262 new customers in the quarter with the total customer count being more than 12,500. The subscription annual recurring revenue (ARR) of $1.45 billion was up 25 percent year-on-year. Product revenues were $347.4 million, an increase of 12.5 percent Y/Y. There were record first quarter sales of the FlashBlade product.
A gratifying return to double digit revenue growth in fy2025’s first quarter.
Quarterly financial summary
Gross margin: 73.9 percent, up from year-ago 72.2 percent
Free cash flow: $173 million vs year-ago $122 million
Operating cash flow: $221.5 million vs year-ago $173 million
Total cash, cash equivalents and marketable securities: $1.72 billion
Remaining Performance Obligations: $2.29 billion vs year-ago $1.8 billion.
Pure won a Meta (Facebook) deal a couple of years ago but little has been heard about that recently. However, Giancarlo was bullish about future hyperscaler business in his prepared remarks: “Our //E family continues its strong growth and was also a key enabler in our discussions with hyperscalers. … The quantity and quality of our discussions with hyperscalers have advanced considerably this past quarter.”
He explained: “Hyperscalers have a broad range of storage environments. These include high-performance storage based on SSDs, multiple levels of lower-cost HDD-based nearline storage, and tape-based offline storage. We are in a unique position to provide our Purity and DirectFlash technology for both their high performance and their nearline environments, which make up the majority of their storage purchases. Our most advanced engagements now include both testing and commercial discussions. As such, we continue to believe we will see a design win this year.”
Krysler talked about Pure’s “pursuit of replacing the vast majority of data storage with Pure Flash, for all customer workloads, including hyperscalers’ bulk [disk] storage.” He means the public cloud and Meta-like hyperscalers, the FAANG-type companies: Meta (Facebook), Amazon, Apple, Netflix, and Alphabet (Google’s parent). Pure is looking at their general purpose bulk storage – the 80 to 90 percent of their total storage buildouts sitting on disk. CTO Rob Lee said in the earnings call “that’s really the opportunity that we see as we refer to the hyperscalers.”
A win there would be a landmark deal.
Giancarlo sees three other opportunities ahead: “The recent advances in AI have opened up multiple opportunities for Pure in several market segments. Of greatest interest to the media and financial analysts has been the high-performance data storage market for large public or private GPU farms. A second opportunity is providing specialized storage for enterprise inference engine or RAG environments. The third opportunity, which we believe to be the largest in the long term, is upgrading all enterprise storage to perform as a storage cloud, simplifying data access and management, and eliminating data silos, enabling easier data access for AI.”
He asserted: “We also believe that long-term secular trends for data storage are no longer based on the expectation of commoditized storage, but rather on high-technology data storage systems, and run very much in our favor.”
Next quarter’s revenues are expected to be $755 million, 9.6 percent up on the year-ago quarter. Pure is looking for fiscal 2025 revenues of around $3.1 billion, meaning a 10.5 percent rise on fiscal 2024.
Giancarlo teased future announcements, saying: “At our June Accelerate conference, global customers will see how our latest innovations enable enterprises to adapt to rapid technological change with a platform that fuses data centers and cloud environments.” It sounds like Cloud Block Store is getting a big boost. Maybe we’ll see the Purity OS ported to the public cloud and a cloud version of FlashBlade.
ARTESCA object storage supplier Scality says its latest v3.0 release has five levels of ransomware protection, and claims API-level immutability is not good enough.
Update. Time shift attacks and Network Time Protocol attack points amended. 3 June 2024.
The ARTESCA product is a cloud-native version of Scality’s RING object storage, co-designed with HPE, and S3-compatible. It can be a backup target for Veeam and there is a hardware appliance version specifically for Veeam customers.
Paul Speciale
Scality CMO Paul Speciale said in a canned quote: “Every vendor selling immutable storage claims its solution will make your data ransomware-proof, but it’s clear — immutability is not enough to keep data 100 percent protected.
“94 percent of IT leaders rely on immutable storage as a foundational aspect of their cybersecurity strategy. If immutable backups were the answer, then why did ransom payments double in 2023 to more than $1 billion? It’s time that the storage industry goes beyond immutability to deliver end-to-end cyber resilience.”
A Scality blog states: “Today, a staggering 91 percent of ransomware attacks involve data exfiltration. This meteoric rise can be seen as a direct attempt by threat actors to sidestep the protections afforded by immutability.”
Speciale’s suggestion that immutability failings have helped cause a doubling of ransomware payments seems unlikely when a blog by his own company attributes 91 percent of ransomware attacks to exfiltration attacks. Immutability will not stop an exfiltration attack.
Be that as it may, ARTESCA v3.0’s five levels of defense are:
API-level: Immutability implemented via S3 Object Lock ensures backups are immutable the instant they are created, while multi-factor authentication (MFA) and access control help administrators prevent breaches of employee accounts (see the sketch after this list).
Data-level: Multiple layers of data-level security measures are employed to prevent attackers from accessing and exfiltrating stored data
Storage-level: Encoding techniques prevent destruction or exfiltration of backups by rendering stored data indecipherable to attackers, even when using stolen access privileges to bypass higher-level protections.
Geographic-level: Multi-site data copies prevent data loss even if an entire data centre is targeted in an attack.
Architecture-level: An intrinsically immutable core architecture ensures data is always preserved in its original form once stored, even if the attacker attains the necessary access privileges to bypass API-level immutability.
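For readers who want to see what API-level immutability amounts to in practice, here is a minimal sketch using boto3 against a generic S3-compatible endpoint. The endpoint, bucket name, credentials, and object key are illustrative placeholders, not ARTESCA-specific settings.

```python
# Minimal sketch of API-level immutability via S3 Object Lock.
# Endpoint, bucket, credentials, and object names are placeholders.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://object-store.example.com",  # any S3-compatible target
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Object Lock has to be enabled at bucket-creation time.
s3.create_bucket(Bucket="veeam-repo", ObjectLockEnabledForBucket=True)

# Write a backup object that cannot be deleted or overwritten for 30 days.
s3.put_object(
    Bucket="veeam-repo",
    Key="backups/job-2024-06-01.vbk",
    Body=b"...backup data...",
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
)
```

As Scality’s argument goes, a lock applied this way is only as trustworthy as the credentials and system clock behind it, which is the rationale for the four lower levels.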
Speciale makes the point in a Solved magazine article that file-based storage systems basically allow file data to be re-written while object storage always creates a new object when data in an existing object changes. It is architecturally immutable whereas file storage is not.
He writes: “This means data remains intrinsically immutable, even to an attacker with superadmin privileges, due to the way the system handles data writes to the drive. The effect is simple — no deletes or overwrites, ever. Additionally, all Scality products disallow root access by default, reducing exposure to common vulnerabilities and exposures (CVEs) and a wide range of threats.”
Scality claims that the following offerings are considered insufficient when it comes to immutability:
NAS/file system snapshots
Dedupe appliances
Linux-hardened repositories
Tape
S3 proxies (S3 API implemented on mutable architectures)
Only solutions based on a native object storage design are truly immutable, Scality argues, because they preserve data in its original form the very moment it is written, and never overwrite existing data.
Scality immutability checklist graphic.
File storage is much weaker on this front in Speciale’s view: “because the underlying file system is still inherently mutable, data remains vulnerable to attacks below the API layer. This creates multiple viable avenues for a skilled attacker to bypass the system’s defenses using common tactics like privilege escalation and time-shift attacks.”
Privilege escalation means getting higher-level access, such as root level. A time-shift attack focuses on Network Time Protocol servers, resetting time to hours before the present and, maybe, enabling access to data before it was made immutable. Scality tells us: “Immutability is driven by specific time periods – let’s say a bad actor does something as simple as going in and changing the system clock. If a server believes it’s day 31, then a 30-day object lock is invalid and the data is no longer immutable. Similarly, it’s also possible to compromise credentials and change the policy on immutability.
“These vulnerabilities are very much legitimate, so these soft spots absolutely must be addressed if storage vendors want to truly earn their ‘ransomware-proof’ claims.”
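To illustrate the soft spot Scality is describing, here is a toy, purely conceptual sketch of a retention check that trusts the system clock; it is not any vendor’s implementation.

```python
# Toy illustration of a clock-dependent retention check (not any vendor's code).
from datetime import datetime, timedelta, timezone

# A 30-day object lock set when the backup was written.
retain_until = datetime.now(timezone.utc) + timedelta(days=30)

def delete_allowed(now: datetime) -> bool:
    # The check trusts whatever the platform believes the current time is.
    return now >= retain_until

print(delete_allowed(datetime.now(timezone.utc)))                       # False: lock still active
print(delete_allowed(datetime.now(timezone.utc) + timedelta(days=31)))  # True: a "day 31" clock defeats the lock
```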
We could find no search results that explicitly mention the use of NTP time shifts as a tactic employed by ransomware attackers. This is not to say it could not happen but, overall, there seems to be no empirical evidence that a ransomware attack has circumvented a file’s immutability by using an attack below the API layer.
Also, object storage has always had the create-a-new-object-when-an-old-one-is-changed attribute, and there is nothing new in ARTESCA v3 here. S3 object locking was introduced in ARTESCA v2.0, so that is not new to v3 either.
ARTESCA v3.0 features:
Design in accordance with US Executive Order 14028 Improving the Nation’s Cybersecurity and zero-trust architecture principles, including enforced authentication and end-to-end encryption of data.
Multi-factor authentication (MFA) for admin users that can now be globally enforced to provide additional login protection for admins and data managers
Integration with Microsoft Active Directory (AD), configurable directly through the secure ARTESCA administrative UI.
Center for Internet Security (CIS) compliance testing through OpenSCAP project tools for continual conformance with CIS cybersecurity recommendations, including password strength compliance based on length, complexity and history.
Extended security-hardening of the integrated OS that disallows root access including remote shell or su as root; admin access is only granted through a system-defined Artesca-OS user identity adhering to the principle of least privileges.
A software bill of materials (SBOM) of components and suppliers, scanned and continuously patched for CVEs, to provide customers with visibility into their software supply chain, and automated OS updates to patch vulnerabilities.
Increased scale to 8.5PB of usable capacity, with support for high-density servers and a wide choice of storage hardware, including multiple types of flash drives.
Enhanced dual-level erasure-coding.
Speciale proclaims: “ARTESCA’s CORE5 capabilities set the bar for a new standard of truly cyber-resilient storage in modern data centres. Windows of exposure are effectively eliminated by providing not only the strongest form of data immutability, but also cyber resilience at all levels of the system. Together with Veeam, our customers achieve unbreakable data protection.”
Scality says ARTESCA 3.0 for Veeam is:
Veeam Ready validated for Veeam high-performance tier deployments on hybrid and all-flash storage servers at an affordable cost
VMware Instant Recovery Ready with ultra-high performance on all-flash servers
Simple-to-use compatibility with Veeam Backup & Replication, Veeam Backup for Microsoft 365 and Veeam Kasten in a single system
Quickly and effortlessly configured as a ransomware-hardened Veeam repository, thanks to its unique built-in Veeam Assistant tool
Offered as a turnkey hardware appliance for Veeam with a Quickstart Wizard to simplify integration into network environments
ARTESCA 3.0 can be deployed as a turnkey hardware appliance for Veeam built on Supermicro servers, software on industry-standard servers, or a virtual appliance for VMware-powered data centres.
It will be available in Q3 2024 through global resellers, supported by distribution partners Ingram Micro, Carahsoft, TD Synnex, Arrow, and other regional distributors.
Accounting investigations at Quantum, assailed by financial reporting problems and Nasdaq delisting woes, discovered more revenues and extra profits for the affected 2022 and 2023 fiscal years, it was revealed yesterday when the company issued an update on accounting matters.
Under the financial management of a new CFO, Ken Gianella, Quantum discovered problems in reporting Q2 of its fiscal 2024’s financial results concerning the reconciliation of standalone selling prices for components it sold in product bundles. These affected the reporting of its Q3 and Q4 results as well. It started an accounting investigation and had to ask for a reporting extension from the Nasdaq stock exchange, where its shares are listed.
Separately, Quantum’s stock price fell below the average $1.00 value required by Nasdaq and it faces a delisting threat because of that.
Now the accounting review has been completed, and the company states it will “release financial results for its full year fiscal 2024 ended March 31, 2024 on Monday, June 17, 2024 after markets close.”
The review, supported by outside experts, has found that reported revenue and profit numbers for the first three quarters of fiscal years 2022 and 2023 will be increased, as will those for the first fiscal 2024 quarter.
Quantum states that the adjustment “does not impact the Company’s invoicing, cash, or contractual obligations to its customers.” Also, during the review “the Company found no evidence of fraud or intentional misconduct associated with its revenue recognition process.”
There is more good news, for investors at least. During the investigation Quantum identified a series of outstanding warrant agreements issued to its prior and current lenders in 2018, 2020, and 2023. It needs to assess the impact of revised liabilities for these warrants in fy2022, fy2023, and Q1 fy2024. Quantum said: “The impact from the revised liability accounting treatment for outstanding warrants is estimated to increase net income in all restated periods.” More profits, in other words.
Quantum also announced a revised agreement with its lenders to amend the company’s existing term loan and revolving credit agreements.
All in all this is a good result. The stock price rose 5.1 percent to $0.45 at the end of trading on May 29 as a result. We think chairman and CEO Jamie Lerner and CFO Gianella will address the Nasdaq delisting issue on June 17.
Hitachi Vantara has launched a VSP One Block scale-out storage appliance for mid-sized biz.
Update. VSP One Block data multi-access protocol details, spec sheet, data sheet, images and installation video references added, 1 June 2024.
Hitachi V rebranded its storage portfolio to Virtual Storage Platform One in October last year, taking an HPE Alletra-like tack to overall brand conformity. Its portfolio included the VSP (Virtual Storage Platform) high-end (VSP 5000) and mid-range (E Series) block arrays, HNAS file storage, VSS (Virtual Software-defined Storage), and the HCP (Hitachi Content Platform) for object data. Specific VSP One products were launched earlier this year, comprising three offerings: SDS (Software-Defined Storage) Block, SDS Cloud, and File (the old HNAS). The all-NVMe flash SDS Block supports up to 1.6PBe (effective capacity) in its 2RU chassis. SDS Cloud comes as an appliance, a VM, or a public cloud offering, and will be available in the AWS marketplace. Ops Center Clear Sight provides cloud-based monitoring and management for these products.
VSP One Block is different from SDS Block in that, starting as a single all-flash appliance, it scales out to a 65-node cluster. The nodes operate Hitachi Storage Virtualization Operating System (SVOS) software, which can manage virtualized 3rd party arrays. SVOS supports block, file (NFS, CIFS/SMB) and S3 access protocols. A spokesperson told us: “You can run Block natively, then add File and HCP for Object. … All Block models are running SVOS, you layer Block with File together in a 5U solution. File is doing pass through to block so we now offer 4:1 data reduction no questions asked (no T&Cs nothing to sign up for, no exception! You don’t get 4:1 we make you whole on block and file) and 100 percent data availability. We also have one user GUI for consumption of block and file and cloud observability with analytics and sustainability on Clear Sight.”
Octavian Tanase
Hitachi V’s Chief Product Officer, Octavian Tanase, newly recruited from NetApp, said in a prepared remark: “Our Virtual Storage Platform One Block appliance is powerful and dense, delivering the data processing and reliability that mid-sized businesses need while minimizing rack space and reducing power and cooling costs for a more sustainable datacenter environment.”
It “sets a new standard for storage performance.”
The VSP One Block is likely positioned to replace the existing VSP E Series. Hitachi V claims it has “breakthroughs in simplicity, security, and sustainability.” There are “three dedicated models providing businesses with a common data plane across structured and unstructured data in block storage, specifically designed to remove complexity, enhance data protection, and reduce carbon emissions.” The three variants are the VSP One Block 24, 26 and 28 – see the table below. These are claimed to optimize rack space while reducing power consumption and cooling costs, but without saying what these are being compared to.
Hitachi V positions VSP One Block as being suitable for AI needs, saying the rise of AI and connected technologies has led to an exponential surge in data volumes, as businesses expect the amount of data they use to double between 2023 and 2025. As a result, businesses, especially mid-sized organizations, are being forced to rethink how to build and scale their data architectures. Enter VSP One Block.
The products are designed to be self-installable and feature:
Per-appliance effective capacity from 32TB to a maximum of 1.8PB, using the internal 24 drive slots in a 2RU chassis.
Hitachi Thin Image Advanced (TIA) snapshot software creates copies for decision support and software development, and defends structured data against ransomware. Every volume can have up to 1,024 Safe Snaps taken, and the array supports up to 1 million Safe Snaps.
Always-available production data copies for data protection, with TIA saving up to 90 percent of disk space by only storing changed data blocks (see the toy sketch after this list).
Pre-configured, including the creation of Dynamic Drive Protection (DDP) groups which replace traditional RAID groups, providing the resilience of RAID6 with distributed spare space and support for an arbitrary number of drives (from 9-32 per group). It supports adding drives one (or more) at a time. DDP dramatically lowers rebuild times.
Dynamic Carbon Reduction technology reduces energy consumption by switching CPUs into eco-mode during periods of low activity.
“Always on compression” allows the system to switch from inline data reduction to post-processing which reduces energy consumption and contributes to a lower CO2 footprint by as much as 30-40 percent.
New, patented compression accelerator modules (CAM) with a new compression algorithm.
4:1 No Questions Asked data reduction guarantee.
Management tools, including an embedded graphical user interface (GUI) and the intuitive SaaS-based Ops Center Clear Sight portal make it easy to manage and consume the storage.
100 percent data availability guarantee.
FIPS 140-3 Level 1 data-at-rest encryption automatically enabled from the distribution center for most customers.
Supports running cloud-native apps alongside traditional block workloads.
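As a toy illustration of why changed-block snapshots are space-efficient in general (and emphatically not a description of Hitachi’s TIA internals), the sketch below shows that only blocks written after the snapshot consume extra capacity.

```python
# Toy copy-on-write snapshot: only blocks changed after the snapshot use extra space.
# Purely illustrative; not Hitachi's TIA implementation.
volume = {i: f"data-{i}" for i in range(10)}  # ten "blocks" of production data
snapshot = {}                                  # holds original copies of changed blocks only

def write_block(block_id: int, new_data: str) -> None:
    if block_id not in snapshot:               # preserve the pre-change block once
        snapshot[block_id] = volume[block_id]
    volume[block_id] = new_data

write_block(3, "new-data-3")
write_block(3, "newer-data-3")

print(f"snapshot consumes {len(snapshot)} of {len(volume)} blocks")  # 1 of 10
```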
Hitachi V has released datasheets and spec sheets for the VSP One Block products, and there is an installation video here. Hint: it takes around 30 minutes.
Hitachi, Hitachi Vantara and Google
Hitachi Vantara’s parent company, Hitachi, has just announced a multi-year partnership with Google focused on generative AI. It will form the Hitachi Google Cloud business unit focused on offering business customers Gemini models, Vertex AI, and other cloud technologies, and it will also adopt Google Cloud’s AI to enhance its own products and services.
Hitachi’s new products will support customers running both on-premises and in the cloud, enabling them to modernize operations while retaining existing IT environments. They will also be compatible with VSP One, so that users can build GenAI applications using data stored on Hitachi Vantara’s hybrid cloud storage products.
Analysis: Customers are sometimes faced with costly and sometimes unexpected permanent withdrawal fees when ending digital tape cartridge vaulting contracts. What lessons should be learned from this and how should organizations deal with multi-decade information retention?
For companies wanting to store physical media for long periods, Iron Mountain, for example, has long provided physical storage of boxed paper records in its vaults, initially inside the deep excavated caverns of an old iron ore mine, and subsequently in storage vault buildings around the globe.
Organizations with overflowing filed paper archives can send them offsite to Iron Mountain facilities for long-term storage, paying a yearly fee for the privilege. Extending this to storage of digital tape cartridges was a natural step for Iron Mountain.
But what happens if you want to end such a contract and remove your tapes? Well, there is a withdrawal fee. One customer claimed: “They are the Hotel California of offsite storage. You can check out your tapes any time you want – but you can never leave.”
“If you permanently take a tape out of Iron Mountain, they charge you a fee that is roughly five years of whatever revenue that tape would have made … So people just leave tapes there that they know have no purpose, because they can’t afford to take them out,” the source alleged.
It happens that Iron Mountain charges a similar fee for permanent withdrawal if customers want to remove boxed paper files and store them in a cheaper facility or simply destroy them. This is stated in Iron Mountain’s customer contract’s Ts&Cs but can still be a surprise to some customers as it’s charged on top of a simple retrieval fee.
Permanent withdrawal section of Iron Mountain boxed paper vaulting contract with withdrawal fee surcharge paragraph highlighted
A court document states: “On August 12, 2004, Berens and Tate notified Iron Mountain that it wanted to remove all of its records from Iron Mountain and transfer those records to another storage company. Iron Mountain informed Berens and Tate that pursuant to the operative ‘Schedule A,’ Berens and Tate would be obligated to pay the ‘Permanent Withdrawal’ fee of $3.70 per cubic foot and a retrieval fee of $2.10 per cubic foot. According to Berens and Tate, the total charge to permanently remove its records would have been approximately $10,000.”
Iron Mountain told the court that the permanent withdrawal fee was necessary in order to compensate for the additional labor and services that are provided when large amounts of records are permanently removed. Because Berens and Tate had freely entered into the storage contract with Iron Mountain, the court decided the permanent withdrawal fee “was the parties’ agreed-upon compensation for services to be performed – specifically, the permanent removal of records.” Berens and Tate had to pay the fee.
Based on what we have been told, the vendor may be attaching a digital tape cartridge permanent removal fee along the same lines as its boxed paper record permanent removal fee. We sent an email to Iron Mountain on May 17 to check this but have heard nothing back despite sending follow-up requests for comment. If we hear back from Iron Mountain, we’ll add its reply to this story.
Our customer contact’s final thought was: “I’m sure it’s legal because it’s in the contract. But it’s wrong.”
It seems clear that prospective tape cartridge vaulting customers need to pay careful attention to all T&Cs and, if a permanent withdrawal fee is charged on top of a retrieval fee when ending the contract, be aware of this and be willing to pay it. The devil is often in the detail.
There are two questions that occur to us about this situation. One is how to deal with such contracts and the second, much bigger one, is how to obtain cost-effective and multi-decade information storage. Should that be on-site or off-site, and in what format?
Neuralytix
Ben Woo
Neuralytix founding analyst Ben Woo said: “Of course any legal contract should be reviewed by a qualified attorney-at-law. But, to abandon long term tape storage providers on the basis of high retrieval costs would be a poor decision. Long term archive providers store backup tapes. Backup tapes by its very nature contain data that a customer hopes [it] will never see again, but is stored there for a small number of reasons – incremental forever backups, regulatory reasons, or (at least for the more recent tapes) disaster recovery.
“If a tape is required to be retrieved, the need must be extraordinary. VTL and technologies like these allow the customer to hopefully never rely on the tapes stored in storage.
“Whether the tapes need to be retrieved or whether a customer desires to combine older formats into newer ones require a lot of human effort, both of which carry costs.
“The only real alternative to tape is long term storage in the cloud – the problem with this is that customers will be paying for the storage of data irrespective of whether they use it or not for 10, 20, etc. years which quickly becomes cost-prohibitive and fiscally/economically irresponsible.”
Architecting IT
Chris Evans.
Architecting IT principal analyst Chris Evans said: “To summarise the problem, businesses use companies like Iron Mountain to create data landfill, rather than sending their content to a recycling centre. The Iron Mountain contract is constructed in such a way to make it more expensive to resolve the problem of the data landfill, than to simply add to it.”
“This is a massive topic. In my experience, tape has been used as a dumping ground for data that is typically backup media, where the assumption is the business ‘might’ need the data one day. In reality, 99 percent of the information held is probably inaccessible without specialist help.
“Setting aside the obvious concept of fully reading and understanding a contract before signing it, the long-term retention of data really comes down to one thing – future value. Most companies assume one or more of the following:
We may need the data again in the future. E.g. legal claims, historic data restores.
We might be able to derive value from the data.
We have no idea what’s on our old media, so we better keep it “just in case”. (Most likely and the root cause of most problems)
“Storage costs decline year on year, for disk and tape. Generally, on a pure cost basis, it’s cheaper just to buy more storage and keep the data than to process it and determine whether it has value. So, businesses, which are an extension of humans and human behaviour, generally push the management process aside for ‘another day’, kicking the can down the road for someone else to solve in the future.
“The obvious technical answer to the problem is consolidation. … LTO-8 and LTO-9 are only 1 generation backward compatible. So, to recycle old media you will need old tape drives of no worse than 2 generations behind.
“[A] second problem is data formats. If you used a traditional data protection solution, then the data is in that format. Netbackup, for example, used the tar file format, storing data in fragments (blocks) of typically 2GB. Theoretically, you can read a tape and understand the format, but adding in encryption and compression can make this process impossible.
Specialists that can help make their money by having a suite of old technology (LTO drives, IBM 3490/3480, Jaguar etc) and tools that can unpick the data from the tape. However, none of these solutions restack. The restacking process would require updating the original metadata database that stores information on all backups, which is probably long gone. Even if it still exists, the software and O/S to run it will be hard to maintain and only adds to further costs. So stacking of old legacy data is pretty much impossible.
Data Retention Policy
Evans added: “The best way to look at the data landfill issue is with a business perspective. This means creating a data retention policy and model that helps understand ongoing costs. Part of the process is also to create a long-term database that keeps track of data formats, backup formats etc.
“Imagine, for example, you created a data application in 2010, which stored customer data. In 2015 you migrated the live records to a new system and archived the old one. It’s 2024. Who within your organization can remember the name of the old application? Who can remember the server names (which could be tricky if they were virtual machines)? Who can remember the data formats, the database structures? Was any of that data retained? If not, the archive/backup images are effectively useless as no one can interpret them.
“So, businesses need to have data librarians whose job is to log, track, audit and index data over the long term.
“My strategy would be as follows:
Fix Forward – get your house in order, create data librarians and data retention policies if they’re not already in place. Work with the technical folks to actively expire data when it reaches end of life. Work with the technical folks to build a pro-active refresh policy that retires old media and moves data to newer media. If you change data protection platform, create a plan to sunset legacy systems, running them down over time.
For example, if your oldest data is 10 years, then you’ll be keeping some type of system to access that data for at least 10 years. Look at storing data in more flexible platforms. For example, using cloud (S3) as the long-term archive for backup/archive data makes that content easier to manage than tape. It also makes costs more transparent – you pay for what you keep until you don’t keep it any longer.
Remediate Backward – Create a process to remediate legacy data/media. Agree a budget and timescale (for example, to eliminate all legacy content in 10 years). Triage the media. Identify data that can be discarded immediately, the data that must be retained and the data that can’t be easily identified. Create a plan to wind each of these down.
“…The exact strategy is determined by the details of the contract. For example, say the current contract is slot/item based and is $1/item per year. If you have 10,000 tapes, then reducing that by 1,000 items per year should eliminate the archive in 10 years. If the penalty is determined by the last year’s bill, then (theoretically) the final bill might be 5x times $1,000 = $5,000 rather than $1,000, but significantly better than the $10,000/year being paid in year 1. This is speculation of course, because it depends on the specifics of the contract. I’d have a specialist read it and provide some remediation strategy, based on minimizing ongoing costs.
“Most businesses don’t want to solve the legacy data sprawl problem because it represents spending money for no perceived value. This is why Fix Forward is so important, as it establishes data management as part of normal IT business costs.
“The opposite side of the coin to costs is risk. Are your tapes encrypted and if not, what would happen if they were stolen/lost? What happens if you’re the subject of a legal discovery process? You may be forced to prove that no data exists on old media. That could be very costly. Or, if you can’t produce data that was known to be on media, then the regulatory fine could be significant.
“So, the justification for solving the data landfill can be mitigated by looking at the risk profile,” Evans told B&F.
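To make Evans’ wind-down arithmetic concrete, here is a rough cost sketch using the figures quoted above as assumptions – $1 per tape per year in vaulting fees, 10,000 tapes, 1,000 tapes withdrawn per year, and a one-off permanent withdrawal penalty of roughly five years’ fees per withdrawn tape, per the customer claim earlier in this piece. Real contracts will differ, so treat it as a model, not advice.

```python
# Rough wind-down cost sketch under assumed figures from the article:
# $1/tape/year vaulting fee, 10,000 tapes, 1,000 withdrawn per year,
# and a one-off withdrawal penalty of ~5 years' fees per tape.
STORAGE_FEE = 1.0           # $ per tape per year
WITHDRAWAL_PENALTY = 5.0    # $ per tape, one-off (≈ five years of fees)
TAPES = 10_000
REMOVED_PER_YEAR = 1_000
YEARS = 10

wind_down_total = 0.0
remaining = TAPES
for year in range(YEARS):
    wind_down_total += remaining * STORAGE_FEE                 # fees on tapes still vaulted
    wind_down_total += REMOVED_PER_YEAR * WITHDRAWAL_PENALTY   # penalties on this year's withdrawals
    remaining -= REMOVED_PER_YEAR

status_quo_total = TAPES * STORAGE_FEE * YEARS                 # keep paying for everything

print(f"10-year wind-down cost:  ${wind_down_total:,.0f}")     # $105,000
print(f"10-year status-quo cost: ${status_quo_total:,.0f}")    # $100,000
```

Under these assumptions the wind-down looks slightly more expensive over the ten-year horizon, but from year eleven onward the status quo keeps costing $10,000 a year while the cleared archive costs nothing, which is the liability angle Evans is getting at.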
Lenovo’s latest quarterly revenues rose as AI opportunities drove growth and profitability.
Revenue of $13.83 billion was generated in the fourth fiscal 2024 quarter ending March 31, up 9.5 percent year-on-year, with profits up 117.5 percent to $248 million. The Hong Kong-based firm said full fy2024 revenues were $59.6 billion, down 8.1 percent and net profit came in at $1 billion, 37.8 percent less than the prior financial year.
Chairman and CEO Yuanqing Yang’s prepared quote said: “Lenovo’s fourth quarter results clearly demonstrate that we have not only resumed growth across all our businesses but that our business momentum is accelerating, driven by the unprecedented opportunities brought by Hybrid AI. … Supported by our strong execution, persistent innovation, operational excellence, and ecosystem partnerships, we are confident we can deliver sustainable growth and profitability improvement in the coming year.”
Lenovo said that, from the second half of the fiscal year, it achieved year-on-year revenue growth of 6 percent and net margin recovered from a first half year-on-year decline to be flat in the second half.
The company has three business units, Solutions & Services Group (SSG), Infrastructure Solutions Group (ISG – servers and storage) and Intelligent Devices Group (IDG – PCs and smartphones), with all three showing year-on-year revenue growth.
The IDG business unit has a long way to climb to regain its fy2022 Q3 glory.
IDG, the largest business unit (BU), reported $10.5 billion in revenues, up 7.2 percent on the year. ISG reported $2.5 billion in revenues, 13.6 percent higher, while SSG brought in $1.8 billion, an increase of 9.1 percent. IDG is so large that the other BUs can’t do much to swing Lenovo’s profitability dial one way or the other.
Still, Lenovo was complimentary about all its BUs. It said the results “strengthened SSG’s position as a growth engine and profit contributor” by delivering year-on-year revenue growth “and high profitability with an operating margin exceeding 21 percent.”
ISG “resumed growth” with a record fourth quarter but it made an operating loss in all four quarters of the year. Lenovo’s “storage, software and services businesses all achieved hypergrowth, with the combined revenue increasing more than 50 percent year-on-year. High Performance Computing revenue hit a record high.” The growth percentage was actually 52 percent. Lenovo resells a lot of NetApp storage so that Sunnyvale business will be pleased.
Lenovo’s ISG financial summary slide
IDG had a solid quarter, Lenovo said: “strengthening its global market leadership for PCs with a market share of 22.9 percent.” Lenovo’s PCs, tablets, and smartphones all resumed growth in the second half of its fy2024.
Yuanqing said: “We’ve built a full stack of AI capabilities and are at the forefront of pioneering the revolutionary AI PC market.” Lenovo expects the AI PC to grow from its current premium position to mainstream over the next three years, driving a new refresh cycle for the industry, and bolstering its revenues substantially.
Lenovo hopes to accelerate growth and have sustainable profitability increases across its entire business in fy2025. The driver will be hybrid AI with every product sector getting an infusion of AI.
Commissioned: Generative AI adoption within organizations is probably much higher than many realize when you account for the tools employees are using in secret to boost productivity. Such shadow AI is a growing burden IT departments must shoulder, as employees embrace these digital content creators.
Seventy-eight percent of employees are “bringing their own AI technologies” (BYOAI) to work, according to a joint Microsoft and LinkedIn survey. While the study acknowledges that such BYOAI puts corporate data at risk it downplays the sweeping perils organizations face to their data security.
Whether you call it BYOAI or shadow AI, this phenomenon is potentially far worse than the unsanctioned use of cloud and mobile applications that pre-dated it.
As an IT leader, you’ll recall the bring-your-own-device (BYOD) trend that marked the early days of the consumer smartphone 15 years ago.
You may have even watched in horror as employees ditched their beloved corporate Blackberries for iPhones and Android smartphones. The proliferation of unsanctioned applications downloaded from application stores exacerbated the risks.
The reality is that consumers often move faster than organizations. But consumers who insist on using their preferred devices and software ignore integrating with enterprise services and don’t concern themselves with risk or compliance needs.
As risky as shadow IT was, shadow AI has the potential to be far worse – a decentralized Wild West or free-for-all of tool consumption. And while you can hope that employees have the common sense not to drop strategy documents into public GPTs such as OpenAI’s ChatGPT, even something innocuous like meeting transcriptions can have serious consequences for the business.
Of course, as an IT leader you know you can’t sit on the sidelines while employees prompt any GenAI service they prefer. If ignored, shadow AI courts potentially catastrophic consequences for organizations, from IP leakage to tipping off competitors to critical strategy.
Despite the risks, most organizations aren’t moving fast enough to put guardrails in place that ensure safe use: 69 percent of companies surveyed by KPMG were in the initial stages of evaluating GenAI risks and risk mitigation strategies, or had not begun to do so.
Deploy AI safely and at scale
Fortunately, organizations have at their disposal a playbook to implement AI at scale in a way that helps bolster employees’ skills while respecting the necessary governance and guardrails to protect corporate IP. Here’s what IT leaders should do:
Institute governance policies: Establish guidelines addressing AI usage within the organization. Define what constitutes approved AI systems, vet those applications and clearly communicate the potential consequences of using unapproved AI in a questionable way.
Educate and train: Giving employees approved AI applications that can help them perform their jobs reduces the incentive for employees to use unauthorized tools. You must also educate them on the risks associated with inputting sensitive content, as well as what falls in that category. If you do decide to allow employees to try unauthorized tools, or BYOAI, provide the right guardrails to ensure safe use.
Provide use cases and personas: Education includes offering employees use cases that could help their roles, supported by user “personas” or role-based adoption paths to foster fair use.
Audit and monitor use: Regular audits and compliance monitoring mechanisms, including software that sniffs out anomalous network activity, can help you detect unauthorized AI systems or applications (see the illustrative sketch after this list).
Encourage transparency and reporting: Create a culture where employees feel comfortable reporting the use of unauthorized AI tools or systems. This will help facilitate rapid response and remediation to minimize the fallout of use or escalation of incidents.
Communicate constantly: GenAI tools are evolving rapidly so you’ll need to regularly refresh your AI policies and guidelines and communicate changes to employees. The good news? Most employees are receptive to guidance and are eager to do the right thing.
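As a hypothetical illustration of the audit-and-monitor point above, the sketch below scans egress or proxy log lines for traffic to well-known public GenAI endpoints. The log format and domain list are assumptions made for the example, not any real product’s configuration.

```python
# Hypothetical sketch: flag proxy/egress log entries that hit known GenAI services.
# Log format and domain list are illustrative assumptions.
KNOWN_GENAI_DOMAINS = {"api.openai.com", "gemini.google.com", "claude.ai"}

def flag_shadow_ai(log_lines):
    """Yield (user, domain) pairs for traffic to unsanctioned GenAI services."""
    for line in log_lines:
        # assumed format: "<timestamp> <user> <destination-domain> <bytes>"
        parts = line.split()
        if len(parts) >= 3 and parts[2] in KNOWN_GENAI_DOMAINS:
            yield parts[1], parts[2]

sample = [
    "2024-05-30T10:01:02 alice api.openai.com 5321",
    "2024-05-30T10:01:05 bob intranet.example.com 1200",
]
for user, domain in flag_shadow_ai(sample):
    print(f"review: {user} -> {domain}")
```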
Solutions to help steer you
GenAI models and services are evolving daily, but there are some constants that remain as true as ever.
To deploy AI at scale, you must account for everything from choosing the right infrastructure to picking the right GenAI models for your business to security and governance risks.
Your AI strategy will be pivotal to your business transformation so you should weigh whether to assume control of GenAI deployments or let employees choose their own adventures, knowing the consequences of the latter path.
And if you do allow for latitude with BYOAI, shadow AI or whatever you choose to call it, do you have the safeguards in place to protect the business?
Trusted partners can help steer you through the learning curves. Dell Technologies offers a portfolio of AI-ready solutions and professional services to guide you along every step of your GenAI journey.
Data intelligence supplier Alation has achieved Federal Risk and Authorization Management Program (FedRAMP) “In Process” status and gained a listing on FedRAMP Marketplace at a Moderate impact level. Alation’s partnership with Constellation GovCloud and Merlin Cyber allows government agencies to search for and discover secure, FedRAMP-compliant data. Constellation GovCloud de-risks the FedRAMP authorization process for Alation by handling most compliance tasks and reducing costs.
…
Cloud backup and general S3 storage vault provider Backblaze has won SaaS-based legal biz Centerbase as a customer. Back in December 2023, Veeam user Centerbase chose Object First’s Ootbi appliance as the on-prem backup store for its Veeam backups. It has now also decided to use Backblaze’s B2 service as a cloud-based offsite disaster recovery facility. The scheme has Veeam Backup and Replication tiering backups to Backblaze B2 as well as sending them to the Ootbi appliance. If Centerbase’s primary site is affected by ransomware or natural disaster, it can turn to its B2 backups to restore data and meet recovery time objective (RTO) requirements.
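Backblaze B2 exposes an S3-compatible API, which is what lets tools such as Veeam treat it as an object storage repository. The short sketch below shows the idea with standard S3 tooling; the endpoint region, bucket, credentials, and object key are illustrative placeholders.

```python
# Illustrative: accessing a Backblaze B2 bucket through its S3-compatible API.
# Endpoint region, bucket, credentials, and object key are placeholders.
import boto3

b2 = boto3.client(
    "s3",
    endpoint_url="https://s3.us-west-004.backblazeb2.com",  # example region-specific B2 endpoint
    aws_access_key_id="B2_KEY_ID",
    aws_secret_access_key="B2_APPLICATION_KEY",
)

# Copy a local backup file offsite...
b2.upload_file("weekly-full.vbk", "dr-backups", "veeam/weekly-full.vbk")

# ...and pull it back during a disaster recovery exercise.
b2.download_file("dr-backups", "veeam/weekly-full.vbk", "restored-weekly-full.vbk")
```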
…
Supercomputer, HPC and enterprise/AI storage supplier DDN has won Qatar’s telecommunications operator and information and communications technology (ICT) provider Ooredoo as a customer. It will adopt DDN’s AI infrastructure across its networks.
…
HPE Storage bloke Dimitris Krekoukias, blogging as Recovery Monkey, has posted a blog about resiliency aspects of HPE’s Alletra MP scale-out block storage system’s DASE (disaggregated, shared everything) architecture – like VAST Data. Alletra MP has no concept of dual controllers. He writes: “all the write cache resiliency has been moved out of the controllers. It goes hand-in-hand with not having a concept of HA pairs of controllers. Ergo, losing a controller doesn’t reduce write cache integrity. Which is unlike most other storage systems.” Also: “all controllers see all disks and shelves, all the time.” Users can add new controller nodes (compute) without having to add capacity (storage). The system rebalances itself.
If a controller goes down, the system rebalances itself. Extra capacity can be added at will. If a component disk shelf is lost, the system recovers. Controller resilience is high – (N/2)-1 nodes can be lost at the same time for Alletra MP Block. So in a six-node cluster, any two nodes can be lost simultaneously. “Conceptually, the architecture allows arbitrary node counts – so if in the future we do, say, eight-node clusters, (8/2)-1=3 so any three nodes could be truly simultaneously lost without issues, and so on and so forth, as cluster size increases.”
He notes: “We now allow N/2 rolling failures in R4, and later may potentially allow more.”
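A trivial sketch of that arithmetic, purely to illustrate the quoted (N/2)-1 rule:

```python
# Illustrating the quoted fault-tolerance rule for Alletra MP Block clusters:
# up to (N/2) - 1 controller nodes can be lost simultaneously in an N-node cluster.
def max_simultaneous_node_losses(nodes: int) -> int:
    return nodes // 2 - 1

for n in (4, 6, 8, 10):
    print(f"{n}-node cluster tolerates {max_simultaneous_node_losses(n)} simultaneous node failures")
# 6 -> 2 and 8 -> 3, matching the examples in the blog post.
```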
…
Seagate’s HAMR drive qualification at customers has been hindered, Wedbush analyst Matt Bryson suggests, by mechanical components being stored in a non-climate-controlled environment. This resulted in a material reaction to the environment and, when the components were used in the HAMR process, media contamination followed. Bryson argues this is basically a logistics problem and relatively easily fixed – much more easily than fixing a physics or chemistry problem. The delayed HAMR take-up will benefit WD, with its 24/28 TB PMR/SMR HDDs. He thinks that, overall, the HAMR situation will improve for Seagate from now on, with the next HAMR iteration due in H2 2025 with a 40TB-class product launch providing a Seagate revenue boost.
…
Storage Newsletter reports Trendfocus SSD shipment stats for calendar Q1 2024, which showed capacity shipped rose 6.5 percent quarter-on-quarter to 90.87EB while units dropped 5.1 percent to 83.79 million. But these modest headline changes hid dramatic market sector differences.
Client units decreased 9.5 percent to 65.65 million and EB shipped went down 11.5 percent to 43.67EB;
SAS SSD units dropped 19.9 percent to 751,000 and EB shipped went down 5.6 percent to 3.03EB;
Enterprise SATA units rose 2.1 percent to 3.64 million, with 5.12EB shipped, up 3.7 percent;
Enterprise PCIe units shipped soared 50 percent to 8.08 million, and EB shipped went up 45.5 percent to 33.71EB.
Supplier capacity market shares:
Samsung – 36.8 percent
WD – 15.1 percent
Kioxia – 7.8 percent
Combined WDC + Kioxia – 22.9 percent
Solidigm – 14.8 percent
SK hynix – 6.9 percent
Combined SK hynix & Solidigm – 21.7 percent
Micron – 8.2 percent
Kingston – 4.5 percent
SSSTC – 0.6 percent
Others – 5.3 percent
Units shipped supplier market shares:
Samsung – 31.3 percent
WD – 18.5 percent
Kioxia – 9.3 percent
Combined WDC + Kioxia – 27.8 percent
Solidigm – 5.4 percent
SK hynix – 10.3 percent
Combined SK hynix & Solidigm – 25.7 percent
Micron – 9.8 percent
Kingston – 6.7 percent
SSSTC – 1.6 percent
Others – 7.2 percent
SK hynix is benefitting greatly from Solidigm’s high-capacity SSDs.
…
Vector database supplier Qdrant claims its dedicated vector database is faster than databases with vector extensions added in, and also faster than other dedicated vector databases such as Pinecone. It claims Qdrant enhances both speed and accuracy by ingesting additional context, enabling LLMs to deliver quicker and more precise results, and that it delivers higher retrieval throughput in requests per second (RPS). Qdrant is designed for high-capacity workloads, making it suited to large-scale deployments such as the global medical industry, where patient data is continuously updated. For smaller applications, such as a startup’s chatbot, the performance and accuracy differences between Qdrant and other products may be less noticeable; for extensive datasets, Qdrant’s optimization and scalability offer significant advantages.
Qdrant benchmark chart. RPS = requests per second.
Qdrant benchmarked its performance against named vector databases – Elasticsearch, Milvus, Redis, and Weaviate, but not Pinecone – using various configurations on different datasets. You can reference the parameters and results, summarized in tables and charts, here.
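For context on what working with a dedicated vector database involves, here is a minimal sketch using the open source qdrant-client Python package; the collection name, vector size, and data are illustrative placeholders.

```python
# Minimal sketch: create a collection, insert vectors, and run a similarity search
# with the qdrant-client Python package. Names, sizes, and data are placeholders.
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams, PointStruct

client = QdrantClient(url="http://localhost:6333")  # local Qdrant instance

client.recreate_collection(
    collection_name="patient_notes",
    vectors_config=VectorParams(size=4, distance=Distance.COSINE),
)

client.upsert(
    collection_name="patient_notes",
    points=[
        PointStruct(id=1, vector=[0.1, 0.2, 0.3, 0.4], payload={"doc": "note-1"}),
        PointStruct(id=2, vector=[0.9, 0.1, 0.1, 0.2], payload={"doc": "note-2"}),
    ],
)

hits = client.search(
    collection_name="patient_notes",
    query_vector=[0.1, 0.2, 0.3, 0.35],
    limit=1,
)
print(hits[0].payload)  # nearest neighbour's payload
```

How quickly calls like these come back under load is what the RPS benchmarks above are measuring.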
…
ReRAM developer Weebit Nano has partnered with Efabless to allow quick and easy access to prototyping of new designs using Weebit ReRAM. Key points:
This will enable a broad range of designers (startups, government agencies, product companies, research centers and academia) to prototype their unique designs with limited quantities, before they decide to proceed to full production.
The manufacturing will be done at SkyWater, where Weebit is already proven and qualified up to 125 degrees C.
Efabless has thousands of users and customers and it has a unique value proposition: allowing anyone to design their own SoCs for a fraction of the price of a full-mask production, for limited quantities only.
…
Veeam announced new technical training and certification programs through Veeam University, which delivers Veeam technical training to IT professionals on-demand anytime, anywhere. The online offering is the result of a global partnership with Tsunati, a Veeam Accredited Service Partner, revolutionizing on-demand technical certification training for partners and customers worldwide. Veeam University offers maximum flexibility and an immersive, engaging learning experience in a self-paced format. This innovative approach includes clickable labs that can be accessed 24x7x365, video-based demos, and technical deep dives allowing students to effectively absorb concepts and prepare for real-world cyber security and disaster recovery scenarios. Completion of on-demand courses offered through Veeam University qualifies learners for Veeam certification exams – including Veeam Certified Engineer (VMCE).