Pavilion Data has added three availability and resilience features to its all-flash array products to guard against controller and SSD failures.
The company makes parallel, multi-controller NVMe-oF-accessed all-flash arrays offering high and scalable performance. The three data assurance features are designed to strengthen the arrays' availability and data resiliency.
V.R. Satish.
V.R. Satish, Pavilion co-founder and CTO, issued a statement saying: “As big and fast data applications become business critical, Pavilion allows customers to manage their modern workloads with the same SLAs as traditional workloads with its new features.”
The first new feature is controller redundancy. With up to 20 active:active controllers, Pavilion has always offered high availability but, should a controller fail, the overall available controller resource shrinks.
Pavilion hardware unit.
Now there is N+1 controller redundancy, which adds controllers operating in standby mode: an active:passive scheme. Up to 16 controllers per 4RU array can be performing I/O at any given time, with up to four more on standby. Should a controller fail, a standby takes over and there is no loss of performance.
Second is swarm rebuild. When a component SSD fails its contents are rebuilt in Pavilion’s RAID 6 (dual parity) scheme. That work is now being spread across multiple controllers, a swarm, to speed rebuilds up to a 4TB/hour rate. Pavilion looked at erasure coding as an alternative drive-level protection scheme but says the amount of extra storage capacity needed is unacceptable.
For comparison, an 8TB SATA SSD rebuild time of 4.4 hours was quoted by Anandtech in March last year. That’s 1.8TB/hour.
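The two rates quoted can be compared directly:

```python
# Back-of-envelope comparison of the two rebuild rates quoted above.
def rebuild_hours(capacity_tb, rate_tb_per_hour):
    """Hours to rebuild a drive of the given capacity at a given rate."""
    return capacity_tb / rate_tb_per_hour

swarm = rebuild_hours(8, 4.0)      # Pavilion swarm rebuild: 4TB/hour
sata = rebuild_hours(8, 8 / 4.4)   # Anandtech figure: 8TB in 4.4 hours

print(f"Swarm rebuild of 8TB: {swarm:.1f} h")    # 2.0 h
print(f"SATA SSD rebuild of 8TB: {sata:.1f} h")  # 4.4 h
```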
A third new feature deals with internal data write failures in SSDs. These can happen when the drive reports that a write has completed but the data block is never actually written. Pavilion says this is a rare but statistically significant error. Its software now adds a version checksum number to every 4K block, alongside the standard cyclic redundancy check (CRC) performed as part of a T10 Data Integrity Field (DIF) operation.
When data is written with standard T10 DIF, a CRC is computed and compared on read to confirm that the data is valid. But if a write to that block was supposed to have happened and did not, the CRC will not catch it. Pavilion's additional checksum confirms that the data is both valid and current.
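Pavilion has not published its on-disk format, but the lost-write detection idea can be sketched like this, with a per-block CRC plus a version counter (all names and structures here are illustrative, not Pavilion's implementation):

```python
import zlib

class Block:
    """A 4K data block protected by a CRC and a write-version number.

    Sketch only: a CRC alone validates content, while the version number
    detects a write that was acknowledged but never landed (a lost write),
    since the stale data still carries a valid CRC.
    """
    def __init__(self):
        self.data = bytes(4096)
        self.crc = zlib.crc32(self.data)
        self.version = 0          # incremented on every successful write

    def write(self, data: bytes):
        self.data = data
        self.crc = zlib.crc32(data)
        self.version += 1

def read_check(block: Block, expected_version: int) -> bytes:
    # The CRC catches corruption; the version catches a stale (lost) write.
    if zlib.crc32(block.data) != block.crc:
        raise IOError("corruption: CRC mismatch")
    if block.version != expected_version:
        raise IOError("lost write: block is valid but stale")
    return block.data
```

In such a scheme the system's metadata records the expected version for each block; a drive that silently dropped a write still passes the CRC check but fails the version comparison.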
Satish said: “Customers can now treat their modern workloads as Tier 1 applications when it comes to data resiliency and availability.” Or, customers can view Pavilion arrays as equivalent in data resiliency and availability terms to the legacy high-end arrays they have used for mission-critical applications and data.
A new SEC filing has revealed that Datto aims to sell its shares for between $24 and $27 in an October 21 IPO, valuing the firm at up to $4.25bn if the shares are sold at the top price.
Datto filed for an IPO with the SEC in July. The firm provides cloud backup services to over one million small and medium sized businesses (SMBs) through more than 17,000 managed service provider (MSP) partners. Of that 17,000, over 1,000 accounted for annual run-rate revenue (ARR) of $100,000 or more.
With 22 million common stock shares on offer at the NYSE, Datto could pull in $521.7m net. It will use the proceeds to pay off debt. Funds controlled by its private equity sponsor, Vista Equity Partners, will own about 72.2 per cent of the stock if the offering proceeds as planned. Vista acquired Datto in 2017.
Datto graphic.
ARR at the end of 2019 was $474.4m and revenues were $506.7m with a net loss of $31.2m. That compares to revenues of $387.4m at the end of 2018 when there was a loss of $37.2m. As of June 30 this year, ARR was $506.7m, with six months of revenue totalling $249.1m.
Datto provides a suite of cloud-managed and turnkey products and security offerings, including business continuity, disaster recovery and SaaS protection, to its MSPs. It has said MSPs represent the future of IT management for SMBs and claimed they are increasingly adopting digital and cloud-based technologies.
It also said the SMB IT market represents a massive opportunity, with Gartner estimating global SMB IT spend in 2019 reaching $1.2 trillion and growing at a 4.6 per cent compound annual growth rate through 2023. Datto itself will grow by adding more MSP partners, encouraging them to get more SMB customers, and both cross-sell and upsell its product offerings.
To date, the company claims the COVID-19 pandemic has caused only minor disruption to its operations: lower sales bookings and moderately slower ARR and subscription revenue growth.
A side note: Datto’s CTO is Bob Petrocelli. He was a co-founder of GreenBytes, which Oracle bought for its desktop virtualization software and deduplication smarts in May 2014.
Cohesity has hooked up with AWS, which is already a backer of the very well-funded data protection business, to sell its Data Platform facilities as a service, with backup being the first product.
Update: 8pm BST October 14. Commvault-Azure relationship added.
Data Management as a Service (DMaaS) is the new thing, building on Cohesity’s Helios cloud-delivered management service for its 1,500 customers, with Cohesity DataProtect now offered as a service. The news comes days after Cohesity announced its SiteContinuity automated disaster recovery product.
Amazon made an investment in Cohesity in its April $250m E-round. AWS’s Doug Yeum, its head of Worldwide Channels and Alliances, said of the deal: “Working with Cohesity, we are charting a new course in how data is managed in an as a service model, leveraging disruptive, modern data management capabilities from Cohesity, and industry-leading cloud services from AWS.”
Backup as a service (BaaS) will provide backup of on-premises workloads, via a Cohesity SaaS Connector agent, and also of in-cloud applications. Backup data can be restored from AWS to AWS or to an on-premises site. The service will be managed by Cohesity and have consumption-based pricing, with a pre-paid alternative. Customers will get a global and searchable view of all their data backed up with Cohesity, both on-premises and in the cloud, through the Helios management facility.
The umbrella brand for Cohesity’s DMaaS will be Helios, and DataProtect will be the first of a series of data management offerings for mid to large enterprises. The general idea is that Cohesity services will ingest data into AWS and then upstream Amazon services will process it. For example, Amazon Macie (security), Glue (extract, transform and load), Redshift (data warehousing) and SageMaker (machine learning training).
Other potential Cohesity-AWS products include DRaaS, Files-aaS and Test Data Management-aaS. The latter would deliver zero-cost clones for developers in the AWS cloud. We understand DRaaS will follow quite soon.
Mohit Aron.
Mohit Aron, Cohesity CEO and founder, offered his thoughts on this: “Cohesity and AWS are also focused on helping customers derive value from data. Through AWS, customers can access a wealth of AWS services, including Amazon Macie, to help customers meet compliance needs, and Amazon Redshift for analytics.”
Cohesity’s Data Platform software converges a customer’s secondary data into a single storage vault covering on-premises and public cloud environments. Data is backed up and protected with immutable snapshots, and the public cloud can be used as a file tier and to store backup data.
The Backup as a Service (BaaS) offering is currently in early access preview with general availability planned by the end of the year. Cohesity will continue to sell its on-premises product.
Comment
Cohesity and AWS call this deal a strategic collaboration and its impact could be widespread. The largest public cloud has taken a massively-funded data protection and management supplier on board and invested in it. Cohesity customers could look to AWS for in-cloud storage and upstream data management services and AWS customers could now move to Cohesity to look after their on-premises data.
Although data protection and management suppliers like Actifio, Clumio, Dell, Delphix, Druva, HYCU, Rubrik, Veeam, Veritas and others have varying degrees of public cloud involvement, none has as close a relationship as this with a top-three public cloud supplier; except one.
Commvault is the exception. In June it announced that it and Microsoft were integrating Commvault’s Metallic SaaS data protection with Azure Blob Storage and will develop other product integrations with native Azure services.
The two have a multi-year agreement and Metallic will be a featured app for SaaS data protection in the Azure Marketplace. Metallic Backup & Recovery for Office 365 is already available on the site.
The Google Cloud Platform does not have any equivalent relationship with any supplier like Cohesity.
B&F can envisage exploratory talks between the public cloud companies and the data management companies as suppliers try to become GCP’s preferred partners, or enlist with AWS alongside Cohesity or Azure alongside Commvault.
A second follow-on from the Cohesity-AWS and Commvault-Azure deals could be that mid-to-large enterprise organisations look to shrink their lists of data protection and management suppliers, and their data silos, by converging onto fewer suppliers. This would simplify their management burden and make accountability more manageable as well.
RING object and file storage supplier Scality has said it is working with Reforest’Action to plant 10,000 trees in California by spring 2021.
CEO Jerome Lecat lives in the forested Marin peninsula north of San Francisco and said in a statement: “The recent fires surrounding the San Francisco Bay Area have touched us personally. There is nothing like the smell of smoke in your own backyard and realising that this represents the destruction of some of California’s most vibrant communities and beautiful 2,000-year-old old-growth redwood forests.”
California Redwood trees.
Scality had previously been working with Reforest’Action to offset its travel-based carbon emissions. The Covid-19 epidemic and travel restrictions rendered that less urgent. Now the forest fire tree replacement program seems to be an appropriate continuation.
As well as the 10,000 trees in California, Scality and Reforest’Action will plant another 4,000 trees near Paris.
Reforest’Action and Scality employees will work with forestry experts and agricultural technicians to plant the trees and will connect with local communities to ensure the trees grow in the best possible conditions.
Orange, smoke-polluted skies above San Francisco
Roxanne Crossley, Scality chief of staff, said: “Fourteen thousand trees may seem like a small drop in the bucket but we believe every drop counts. We remain committed to our communities and our future generations to make even a small difference in global climate change.”
Since Reforest’Action was founded in 2010 it has helped get 10 million trees planted across the globe. Stéphanie Bonet, its partnership manager, welcomed Scality’s action: “They now join a diverse group of businesses including Enterprise Rent-a-Car, Pampers, and the Arbor Day Foundation, to improve the environment through local action.”
Lecat blogged: “This is not a marketing gimmick. It’s simply the right thing to do.”
Can we expect a Scality tree RING product? [Ed: just stop the jokes already.]
Quobyte has dropped the veil on v3.0 of its Data Centre File System (DCFS) software, saying customers need scale-out, self-service, automated, secure, run anywhere, multi-tenant file services, and NFS is not up to the job.
It’s hoping to step into that breach with its own DCFS software, which provides block, file and object access for high-performance computing (HPC) and machine learning apps. It runs on-premises, on bare metal, in VMs and containers, and in the Google cloud. V3.0 adds a whole lot of policy-driven, multi-tenant, security and self-service goodness.
Bjorn Kolbeck.
CEO and co-founder Bjorn Kolbeck told B&F it’s time “to ditch the NFS protocol.”
NFS has no load-balancing, no failover, and no access to multiple metadata servers. All-in-all: “You can’t do true scale-out with a 40-year-old protocol.”
V3.0 adds a Policy Engine that:
Controls all aspects of a Quobyte cluster through flexible policies, from data redundancy and automatic recoding to immutability
Can be reconfigured at runtime without interruption of service
Automatically selects optimal redundancy and placement, including a new automatic policy that switches between replication and erasure coding, and between flash and HDD, within a file
Provides policy-based data tiering between clusters, and recoding
It supports multi-tenancy with self-service features:
Storage-as-a-Service / Cloud-like experience
Self-service for users from the web console
Automatic resource provisioning from Kubernetes
DCFS v3.0 has a raft of security improvements:
End-to-end AES encryption (in transit, at rest, and on untrusted storage systems)
Selective TLS support, e.g. between data centres
Access Keys for the file system
X.509 certificates
Event stream (metadata, file access)
It also adds a multi-cluster data mover with bi-directional sync, and more native drivers, including HDFS and MPI-IO. The latter delivers lower latency and consumes less memory bandwidth by bypassing the kernel.
Randy Kerns, senior strategist & analyst at the Evaluator Group, supplied supportive words: “The industry continues to move towards infrastructures that require high-performance storage to supplement compute-intensive applications. Quobyte’s powerful policy engine, holistic approach to data protection and storage-as-a-service usability make it a viable candidate for deployment in organisations that are increasingly moving towards transactional processes and AI/ML workloads.”
Quobyte 3.0 is available now through the company’s VAR channel. Pricing is based on volume with unlimited capacity and unlimited clusters available. Discounts are available for academic institutions.
You can obtain Quobyte software via a freemium deal – there is a free edition with 150TB of capacity. Otherwise customers buy Cluster or Infrastructure editions.
According to reports from the Korean supply chain, Micron Technology has told DRAM module manufacturers that DRAM chips being developed by China-based ChangXin Memory Technologies (CXMT) might infringe its patents.
CXMT is building 8Gbit LPDDR4 chips. Founded in 2016 and based in Hefei city, in China’s Anhui province, it was previously known as Innotron. CXMT has a capacity of 40,000 wafers per month using a 19nm process, and plans to start manufacturing with a more advanced 17nm process by the end of the year.
CXMT plant.
CXMT signed a licence agreement with Rambus for DRAM patent access in April. This followed a December 2019 agreement with Wi-LAN for access to Qimonda-developed DRAM IP.
Back in 2017, Micron sued China’s Fujian Jinhua (JHICC) for using Micron IP obtained illegally via Taiwan-based UMC.
There has been speculation that the US could target CXMT with sanctions concerning exports of US-originated semiconductor technology. Any Micron legal action would then add to CXMT’s problems, and hinder China’s road to achieving DRAM manufacturing self-sufficiency.
New York storage firm StorONE has emitted a new release of its S1 array software that it claims rebuilds failed SSDs in three minutes and can sort out a 16TB disk drive in less than five hours.
It also said it would be selling a fully HA, 1PB all-flash array for under $500 per TB, with hybrid arrays costing under $250/TB.
The disk rebuild is ostensibly almost five times faster than an ordinary RAID rebuild of a 16TB drive, which, with 100 per cent drive availability, could take 24 hours or more.
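A quick check of the numbers behind the claim:

```python
# StorONE's 'almost five times faster' 16TB rebuild claim, checked.
storone_hours = 5    # StorONE: 16TB drive in under five hours
typical_hours = 24   # conventional RAID rebuild estimate from the article

print(f"{typical_hours / storone_hours:.1f}x faster")  # 4.8x -- almost five times
print(f"{16 / storone_hours:.1f} TB/hour")             # ~3.2TB/hour rebuild rate
```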
Gal Naor, StorONE co-founder and CEO, issued a canned quote: “This release extends our lead in data protection and affordability, delivering enterprises unmatched resiliency and breaking the $500 per TB AFA price point.”
StorONE AFA.next all-flash array.
StorONE maintains a quarterly array software release schedule and the latest release includes:
Parallelism in the vRAID erasure code-based drive protection scheme has been improved to speed rebuilds,
New vRACK feature builds erasure code-based redundancy across drive shelves and racks, without needing doubled capacity or storage controllers,
New vReplicate facility extends existing on-site sync replication to provide cascading async replication to up to sixteen target sites, with replication being from any StorONE source to any target,
Support for NVMe-oF (RoCE) connectivity,
15 per cent write performance improvement for existing arrays,
Improved NAS capabilities
The vRAID software enables admins to set redundancies on a per-volume level, mix drive sizes within the RAID group, and experience fast rebuilds on any media type.
George Crump, StorONE CMO, said of the software: “The Q3-2020 release enables customers to not only consolidate storage, but also consolidate backup and DR into a single process that lowers costs and simplifies operations.”
The company said it will add 100 per cent seamless connectivity to the public cloud by year-end, plus the ability to drag and drop files between on-premises and cloud environments. It will also make it easy to use its technology for cloud-based disaster recovery and data migration.
StorONE positions its S1 product as a high performance clusterable array that supports any storage server hardware, any storage media, and any storage protocol. The S1 arrays can, it claims, consolidate many existing storage arrays, filers and data protection devices into a single array.
WekaIO’s filesystem has transferred a terabyte of data in less than nine seconds, using a Nvidia DGX-2 server in tandem with Nvidia’s GPUDirect. That works out at a blindingly fast 113.1GB/sec.
Update: additional data added to the 113.1GB/sec benchmark details section.
GPUDirect Storage (GDS) enables a storage system to send data directly to its GPUs and bypass the host server’s operating system IO stack and infrastructure. GDS enables direct memory access (DMA) between GPU memory and NVMe storage drives. WekaIO had previously achieved 82GB/sec of throughput to a DGX-2.
GPUDirect storage-to-GPU scheme (green arrows) and normal storage-to-GPU scheme (red arrows).
In this new test Microsoft Research benchmarked updated WekaIO v3.8 software, hooked across InfiniBand to a DGX-2 with GPUDirect. The testers initially achieved 97.9GB/sec of throughput, with a single mount point to the WekaFS system. This is the highest throughput of any GPUDirect system tested to date. The result was verified by running an Nvidia GDSIO utility for more than 10 minutes and getting sustained performance over that time.
VAST Data has achieved 92.6GB/sec pumping data to a DGX-2. WekaIO says its software was not running flat out. The test configuration used 10 single-port network interface card (NIC) ports, and these were doubled to 20 by replacing them with dual-port NICs.
The testers added more GDSIO processes to use these ports, use more of the available PCIe bandwidth, and put more load on the GPUs. This second test configuration achieved 113.13GB/sec throughput, 38 per cent faster than WekaIO’s earlier 82GB/sec result.
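The headline figures check out:

```python
# Sanity-check the throughput numbers quoted above.
tb = 1e12  # 1TB in bytes (decimal)
gb = 1e9

seconds = tb / (113.1 * gb)          # time to move 1TB at 113.1GB/sec
print(f"{seconds:.2f} s")            # ~8.84s, i.e. under nine seconds

uplift = (113.13 - 82) / 82 * 100    # vs. WekaIO's earlier 82GB/sec result
print(f"{uplift:.0f} per cent")      # ~38 per cent faster
```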
The testers reckoned the system ran at 5 million IOPS with the 113.1GB/sec throughput. A source in Microsoft said the 113.1GB/sec figure reflected bandwidth to the host CPUs as well as the GPUs. Any comparison of this WekaIO result to other suppliers’ standard DGX-2 results is invalid as it used a non-standard DGX-2.
A source knowledgeable about benchmark matters said: “Once you swap out NICs on a DGX-2 it’s not really a DGX-2 but a DGX-2 with an aftermarket fuel injection chip, turbocharger, and muffler, not the same.”
The WekaIO performance is great but… how much does it cost? Without that input it is impossible to assess price performance. We wait patiently for this, but in the meantime DDN and Excelero also support GPUDirect and are yet to release performance numbers. They now have a target to aim for.
Actifio, the venture-backed data management startup, has used a reverse stock split to torch the shares of up to 1,000 staff and former employees.
The company last week implemented a reverse stock split ratio of 100,000-to-one, which “decreases the aggregate number of shares of Common Stock issuable thereunder to 9,728,360 shares of Common Stock,” priced at 24c “fair value” each.
That $0.24 “fair value” means Actifio’s 9,728,360 shares of Common Stock are valued at $2.335m. Yet the company was classed as a unicorn, valued at more than $1bn by VCs, in August 2018, and has taken in $311.5m of VC investment since it was founded in 2009. The reverse stock split ratio indicates that an incredible 972.8 billion shares were previously issued.
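The share arithmetic works out as follows:

```python
# Reverse-split arithmetic from the filing.
shares_after = 9_728_360   # shares of Common Stock after the split
ratio = 100_000            # 100,000-to-one reverse split
fair_value = 0.24          # board-determined "fair value" per share

shares_before = shares_after * ratio
print(f"{shares_before:,} shares previously issued")  # 972,836,000,000
print(f"${shares_after * fair_value:,.0f} total")     # ~$2.335m
```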
What’s going on here? Peter Godman, the co-founder and former CEO of startup Qumulo, has a plausible explanation. “If I were guessing I’d say they took new investment at near-zero valuation,” he writes on Twitter. “The level of outstanding stock imputed a very low per-share valuation. So, they printed a trillion shares, awarded it to new investors and loyal employees, and then reverse split to fix it all. … Whatever the reason, a sad day for early employees particularly.”
Let’s see how this affects Actifio’s small stockholders.
Something to nothing
Jeff Greene
Jeff Greene was a Professional Services Engineer at Actifio from July 2012 to December 2014. “I joined in July 2012 and was employee number 90,” he told us. “I was offered 5,000 shares and when I left in 2014 I purchased 2,333 vested shares for approximately $5k. I viewed it as Vegas money and rolled the dice.” He is now receiving a payout of $0.005599, about half a cent. Yes, you read that correctly.
Greene received the news via a letter from Actifio’s CFO Ed Durkin, dated October 5, 2020.
Two manoeuvres by Actifio turned Greene’s shares, which cost him $5,202.59, to dust. First was the recapitalisation and 100,000:1 reverse stock split. That converted his 2,333 shares of old Actifio Common Stock into a fractional share, comprising 0.02333 of the new Voting Common Stock.
The second event is detailed in the letter, which states: “In lieu of maintaining as outstanding the fractional shares of Voting Common Stock that resulted from the Reverse Stock Split, the company has opted to pay in cash the fair value of such fractional shares, which is based on the fair market value of a single share of Voting Common Stock of $0.24, as determined by the Company’s Board of Directors.”
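Applied to Greene’s holding, the two steps work out like this:

```python
# Greene's payout: reverse split, then cash-out of the fractional share.
shares_held = 2_333   # old Common Stock shares purchased on leaving
ratio = 100_000       # 100,000:1 reverse split
fair_value = 0.24     # per new Voting Common Stock share

fractional = shares_held / ratio        # 0.02333 of one new share
payout = fractional * fair_value
print(f"{fractional} share -> ${payout:.6f}")  # $0.005599 -- about half a cent
```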
Jason Axne was a Professional Services Engineer and then Principal Systems Architect at Actifio from August 2012 to March 2018. He said on LinkedIn: “If you are a minority stockholder, a reverse split could extinguish your position and force you out. Unfortunately, there is not much you can do as long as the reverse split follows legal procedures and you receive the correct number of new shares.
“Your chance of prevailing in a lawsuit brought against the board of directors is slim. The courts have held that, absent fraud, misrepresentation or misconduct, a corporation has the right to eliminate minority stockholders through a reverse split.
“Incredible. So many of us poured our heart and souls into that company only for them to squash us like cockroaches… so disappointing.”
This week has been busy with storage news, like last week and the one before that. Ransomware protection has become a top-of-mind CIO concern and – among others – Backblaze and Veeam have teamed up to provide it.
Fibre Channel speeds are a concern to IT Brand Pulse, which says the indicated speeds, like car speedometers, overstate the real speed. And media workflow supplier OpenDrives has bulked up its NAS product by adding clustering, while startup Observe introduces log file diagnosis and analytics capabilities.
Backblaze and ransomware
Backblaze B2 Cloud Storage now supports data immutability for Veeam backups, using virtual air-gapping, and is available now.
A Backblaze blog by Natasha Rabinov says: “With object lock functionality, there is no longer a need for tapes or a Veeam virtual tape library. You can now create virtual air-gapped backups directly in the capacity tier of a Scale-out Backup Repository (SOBR). In doing so, data is Write Once, Read Many (WORM) protected [and] data can be restored on demand.”
Data immutability begins by creating a bucket that has object lock enabled. Then within your SOBR, you can simply check a box to make recent backups immutable and specify a period of time.
Backblaze object immutability screen.
Selecting object lock will ensure that no one can:
Manually remove backups from the Capacity Tier.
Remove data using an alternate retention policy.
Remove data using lifecycle rules.
Remove data via tech support.
Remove data via the “Remove deleted items data after” option in Veeam.
Once the lock period expires, data can be modified or deleted as needed.
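At the API level, B2’s S3-compatible interface expresses this through the standard S3 object lock parameters. A sketch of what such an upload request might carry (the bucket and key names are hypothetical, and in practice Veeam drives this from the SOBR settings rather than hand-written calls):

```python
import datetime

# Parameters for an S3-compatible put_object call (e.g. via boto3) against
# a bucket created with object lock enabled. Names here are illustrative.
put_object_params = {
    "Bucket": "veeam-backups",
    "Key": "backup-2020-10-14.vbk",
    "Body": b"...backup data...",
    # During the retention window the object cannot be deleted or
    # overwritten -- the 'virtual air gap'.
    "ObjectLockMode": "COMPLIANCE",
    "ObjectLockRetainUntilDate": datetime.datetime(2020, 11, 14),
}
```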
B2 Cloud Storage is the only public cloud storage alternative to Amazon S3 to earn Veeam’s qualifications for compatibility and immutability.
Fibre Channel 32 and 64Gbit/s actually run slower
IT Brand Pulse’s Frank Berry has said that FCIA Fibre Channel generations don’t match up to their nominal speed billing. He tells us: “For the last few years I have been lobbying the FCIA to fix their naming. With each generation it gets more out of whack.” Meaning it’s slower than the name suggests.
He wrote this in a 2017 LinkedIn article: “The network technology named ’10GbE’ means throughput is 10Gb/s, and if you’re like most people, you would naturally believe the network called ’32GFC’, means throughput is 32Gb/s. The reality is the name matches the speed for Ethernet, but surprisingly not for Fibre Channel. The speed of the network is significantly less than the names for each generation of Fibre Channel.”
Berry said the FCIA uses baud rate as its reference point for naming: “What should be used to identify each new generation of Fibre Channel, is a well established convention: half-duplex throughput in Gbit/s. You can arrive at that number by multiplying the baud rate by a factor for encoding overhead.”
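Berry’s convention can be applied to recent Fibre Channel generations. Using publicly quoted baud rates and encoding schemes (these figures are illustrative, not from the article):

```python
# Throughput = baud rate x encoding efficiency.
gens = {
    # name: (baud rate in GBd, encoding efficiency)
    "8GFC":  (8.5,    8 / 10),   # 8b/10b encoding
    "16GFC": (14.025, 64 / 66),  # 64b/66b encoding
    "32GFC": (28.05,  64 / 66),
}

for name, (baud, eff) in gens.items():
    # e.g. '32GFC' actually carries ~27.2 Gbit/s half-duplex, not 32
    print(f"{name}: ~{baud * eff:.1f} Gbit/s half-duplex")
```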
He’s produced a FC speed translation table:
Observe starts up
Ex-Dell Technologies CMO Jeremy Burton has co-founded Observe with VC Mike Speiser. Burton is the CEO and Speiser on the board as an investor.
Burton told B&F: “The idea with Observe was to take all of what folks like Splunk do (logs), all of what DataDog does (metrics) and all of what New Relic does (traces) and do it in a single product…. and store all the data in a single – Snowflake – database. The thought is that bringing data together gives users more visibility into what’s going on and helps them find problems faster. And if we can do that – not unlike Pure attacking Storage or Snowflake attacking databases – the market is massive.”
He said that, for Kubernetes and AWS environments, Observe transforms raw log events and metrics into things SREs (site reliability engineers) recognise, such as containers, pods, EC2 instances and S3 buckets, and maps all the relationships between them.
“In distributed systems like Kubernetes, logs are often riddled with GUIDs that refer to other objects. Under the hood Observe is powered by Snowflake. Because it’s a relational database, we can easily figure out relationships in your K8s infrastructure and tell you what they mean. Our Teleport feature navigates all of the relationships across the data and returns us logs for the subset of jobs we want to look at.”
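The GUID-to-object resolution Burton describes is, at bottom, a relational join. A toy illustration with sqlite3 standing in for Snowflake (the schema and data are invented):

```python
import sqlite3

# Once K8s objects and their logs sit in a relational store, an opaque
# GUID in the log stream resolves to a named object with a plain join.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE pods (uid TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE logs (pod_uid TEXT, message TEXT);
    INSERT INTO pods VALUES ('pod-uid-001', 'checkout-7d4f');
    INSERT INTO logs VALUES ('pod-uid-001', 'OOMKilled: memory limit hit');
""")

rows = db.execute("""
    SELECT pods.name, logs.message
    FROM logs JOIN pods ON logs.pod_uid = pods.uid
""").fetchall()
print(rows)  # [('checkout-7d4f', 'OOMKilled: memory limit hit')]
```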
An Observe blog has a video about using its facilities to diagnose K8s problems.
Video post production NAS system supplier OpenDrives has announced v2.1 of its Atlas NAS software. It includes:
Storage Clustering allows individual scale-up devices, or nodes, to be aggregated into a cluster with workloads balanced across the cluster nodes.
Containerisation, with OpenDrives claiming it’s the first to approach containerisation from the storage side and achieve huge performance gains by intelligently and efficiently delivering data to the container.
Conditional Automation is a complementary feature to containerisation, enabling time-based or file-based triggers to create automated tasks (snapshots, bandwidth throttling, containerised apps) that fire up and operate independently of other tasks.
Centralized Management and Visibility through a single pane of glass with easy-to-consume APIs enabling more extensive telemetry information for analytics, and better configuration settings to tune nodes and storage clusters.
Cloud Storage Support for AWS S3. Atlas 2.1 users can send and receive both on-premises and cloud data via the S3 protocol, and share S3 remote targets through Server Message Block (SMB) locally.
High Availability – users can choose from a dual C-Module or fully clustered and distributed deployments, all supported and managed by Atlas for storage service continuity.
Shorts
NVMe-oF-class storage array startup Apeiron appears to have morphed into Paladian Data. Despite hiring Chuck Smith, an ex-HPE exec, as its CEO, Apeiron did not grow enough to survive. Smith has gone. Lee Harrison, Apeiron’s CTO and co-founder, has set up Paladian with the same original focus as Apeiron: big data analytics via Splunk.
Paladian Data web page.
Olga Tokarczuk, Nobel Prize Literature laureate from Poland, has stored her life’s work of 14 books on piqlFilm in the safety and security of the Arctic World Archive (AWA), for perpetuity. The AWA in Longyearbyen, Svalbard, is a joint venture between the Norwegian state coal mining company Store Norske Spitsbergen Kulkompani (SNSK) and Piql, a Norwegian archive technology business. SNSK has the hole in the ground and Piql makes the film reel cartridges that go into it.
Data protector Asigra has signed up Nimbus Clouds Services to deploy cloud backup services with ransomware protection throughout Malaysia, Sri Lanka and Indonesia.
CodeLathe, which supplies the FileCloud enterprise file service platform, has achieved Amazon Web Services (AWS) Digital Workplace Competency status.
Dalet, a media workflow system supplier, has joined Spectra Logic‘s certification program to deliver an integrated system that combines Dalet Galaxy 5 with Spectra’s BlackPearl Object Storage platform.
DataStax unveiled a DataStax Fast 100 program to match Apache Cassandra experts to enterprise projects. Partners on board, with more to come, include Deloitte, Expero Inc, and Anant Corporation. The program provides enablement for partners, with consultants certified and ready to deliver on Cassandra engagements within 30 days.
DDN announced its A3I all-flash and hybrid storage system alongside Nvidia’s DGX POD are being used by CMKL University in Thailand, the premier AI university in Southeast Asia, to support research projects spanning machine learning, connected and automated transportation, food and agricultural analytics, precision health and more.
Cloud-native Kubernetes storage system supplier Diamanti says its D20 RH release supports Red Hat OpenShift 4.5.
Deduping backup target array supplier ExaGrid, competing against Dell EMC PowerProtect and others, hit its plan for the quarter ending September 30, 2020, and added over 100 new customers in the quarter. They included a record 27 new customers with initial six-figure purchases and one existing customer with a seven-figure purchase.
CEO Bill Andrews said: “We are replacing low-cost primary storage disk from Dell, HPE, and NTAP behind Commvault and Veeam, as ExaGrid is far less expensive for longer-term retention. We are also consistently replacing Dell EMC Data Domain and HPE StoreOnce inline scale-up deduplication appliances.”
HPCnow!, a provider of supercomputing and high performance computing (HPC) services for scientific and research organisations, is using Excelero’s NVMesh software. It has performed a pilot at ALBA, a synchrotron light facility near Barcelona, using elastic NVMe-based data storage to support the acquisition and processing of the massive volumes of scientific data generated by its high performance beamlines, which use intense soft and hard X-ray beams to help characterise materials, their properties and behaviour.
Chinese server systems supplier Inspur has built a Cloud SmartNIC using Nvidia’s BlueField-2 DPU. With a PCIe 4.0 switch the system can deliver up to 200Gbit/s bandwidth. It offloads functions like software-defined networking (SDN), software-defined storage (SDS), and encryption and security processing from the host CPU.
Inspur says that means the Cloud SmartNIC offloads host server CPU functions like traffic management, storage virtualisation and security isolation, significantly freeing CPU computing resources. It envisages servers with the Cloud SmartNIC being used for AI, big data analytics, cloud, virtualisation, micro-segmentation and next-generation firewalls.
Inspur has announced an SPC-1 benchmark result for its AS5300 G5 array, which places it at number ten in the SPC-1 IOPS rankings. Its AS5600 G2 array is at number three with a 7,520,358 IOPS score, and its AS5500 G5 is ninth with 3,300,292 IOPS.
Inspur SPC-1 benchmark result.
JetStream Software has a business continuity and disaster recovery offering for Azure. By replicating data continuously and directly into Azure Blob Storage, JetStream DR enables continuous data protection (for a near-zero RPO) at a lower cost than other solutions. Recovery resources can be provisioned in AVS (Azure VMware Solution) as needed, with the protected VMs and their data rehydrated from Azure Blob Storage.
Model9 told us about a September Gartner report, “Cloud Storage Management Is Transforming Mainframe Data”. Its abstract says: “Advances in cloud storage and data protection management are transforming mainframe tape environments, enabling organizations to reduce storage costs and unlock new business value. I&O leaders must modernise tape systems by migrating mainframe backup data to subscription-based cloud solutions.” CTO Gil Peleg says Model9 is gaining momentum in the market and recruiting top talent (including C-level from Kaminario and Infinidat).
Netlist announced production shipments of its high performance N1952 and low power N1552 product lines of NVMe SSDs. Both use 96-layer 3D NAND and firmware that provides superior performance, data integrity and security using standards-based T10 Protection Information (PI) and encryption key management features. They have transfer speeds up to 6GB/sec and up to 1 million random read IOPS (N1952).
Nutanix has extended its multi-level Nutanix Certification Program with certifications across four skill levels and six new technology tracks. The tracks include Digital HCI Services (Multicloud Infrastructure), Data Centre Services (Data Storage Services, Security & Governance, Business Continuity), DevOps Services (Multicloud Automation, Database Automation) and Desktop Services (End User Computing).
NVMe flash array supplier Pavilion Data announced a federal agency has expanded its deployment of the Pavilion Hyperparallel Flash Array (HFA) to accelerate workflows in its IBM Spectrum Scale environment. The agency has spent an eight-figure sum on Pavilion Data kit.
Researchers at MINES ParisTech, a French engineering school, have implemented Panasas ActiveStor high-performance computing (HPC) storage to support research efforts at the university’s Materials Research Centre.
Pure Storage’s FlashRecover, the data protection and management system comprising a FlashBlade array with a white box server that runs Cohesity’s Data Platform software, is now generally available.
Kubernetes management platform supplier Rancher Labs announced Rancher 2.5 software with a new installation experience, GitOps at scale for edge clusters, full lifecycle management of EKS clusters and a new security-hardened, certified Kubernetes distribution for government customers.
Sven Oehme.
Scality may have a branding problem with its SOFS (Scale Out File System). We’re told that, since 2008, SOFS has stood for Sven Oehme File System. Who is Sven Oehme? He is DDN’s Chief Research Officer.
Startup Stellus may have closed down. It was backed by Samsung and was developing a key:value store-based flash array. Chief Revenue Officer Ken Grohe left in May this year, just three months after the first Stellus product was launched. CEO Jeff Treuhaft now appears to have quit, according to his LinkedIn entry. We have been unable to obtain any comment from Samsung.
Research outfit Trendfocus has suggested hard disk drive manufacturers may have to move from 9-platter drives to 10-platter ones to achieve 24TB and higher-capacity drives.
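The platter-count argument is simple capacity arithmetic. As a back-of-envelope sketch (the figures here are illustrative, not from Trendfocus), this shows how much each platter must hold at nine versus ten platters:

```python
# How much capacity each platter must contribute to reach a target
# drive capacity. Figures are illustrative assumptions for this sketch.
def capacity_per_platter(total_tb: float, platters: int) -> float:
    """Return the capacity in TB each platter must hold."""
    return total_tb / platters

nine = capacity_per_platter(24, 9)     # 2.67TB per platter
ten = capacity_per_platter(24, 10)     # 2.40TB per platter
print(f"9 platters: {nine:.2f}TB each; 10 platters: {ten:.2f}TB each")
```

In other words, adding a tenth platter lowers the areal density each platter must achieve, which is why it is the easier route to 24TB if per-platter capacity gains stall.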
Hosting and cloud services provider Leaseweb Global announced the integration of Veeam software with its Leaseweb Cloud Services.
Replicator WANdisco announced Public Preview availability for its LiveData Platform for Microsoft Azure. It says open Public Preview is a critical late-stage phase of the roll-out across Microsoft Azure ahead of a combined Microsoft and WANdisco go-to-market launch of the capability.
Cloud storage supplier Wasabi signed up reseller Micropac Technologies to deliver Wasabi’s cloud storage service to customers in the US health care and IoT markets as well as federal, state, local and education (SLED) government agencies.
Wasabi and Flexential are partnering to launch Flexential Archive Storage which is designed to enable businesses to store long-term unstructured data from any source in an affordable, fast and straightforward way.
Western Digital’s Black brand gaming storage products have been given a go-faster flash upgrade, driving performance to the million IOPS level.
It has aimed the three new pieces of kit at gamers. WD has worked PCIe gen 4 magic to build a furiously fast drive moving data at up to 7GB/sec. For PCIe gen 3 users it has strapped two SSDs together to get to around 80 per cent of PCIe 4 speed, and it’s also built a Thunderbolt 3-connected gaming dock with an SSD inside too.
Jim Welsh, Consumer Solutions SVP at Western Digital, said in a canned quote: “Our latest WD Black products have been purpose-built to allow gamers to meet the increasingly high standards of future games and gaming platforms. We’ve optimised these products to not only provide more storage for gamers but to elevate the gaming experience as a whole.”
The PCIe gen 4 SSD gives a taste of the performance to be expected from coming PCIe gen 4 enterprise drives. Now for details:
SN850
The M.2 gumstick format SN850 SSD succeeds the SN750, with roughly similar capacity levels of 500GB, 1TB, and 2TB. But it has a PCIe gen 4 x4 lane interface instead of the SN750’s gen 3 PCIe link.
SN850 M.2 2280 format drive.
The new drive has WD/Kioxia BiCS4 96-layer TLC 3D NAND inside it and pumps out up to 1 million random read IOPS with sequential read/write bandwidth of up to 7GB/sec and 5.3GB/sec respectively. The older SN750 did up to 515,000 random read IOPS, 3.37GB/sec sequential reading and 3GB/sec sequential writing. There’s a near doubling of read IO performance and much faster write performance as well.
SN850 M.2 2280 format drive with heatsink and lighting effects.
There is an optional heatsink to help reduce thermal throttling, which also gets you customisable red-green-blue (RGB) rim lighting effects controlled through a Dashboard utility running on Windows. The drive has a five-year warranty, with endurance rated per capacity point.
AN1500
For those of us, the majority, stuck with PCIe gen 3 but still wanting more speed, WD has combined two Black M.2 SN730 SSDs in a PCIe half-height, add-in-card format for its Black AN1500 drive. It has 1, 2 and 4TB capacity levels and uses an 8-lane PCIe gen 3 interface.
The SN730 is an OEM version of the SN750 and has 256GB, 512GB and 1TB capacities. It delivers up to 550,000 random read/write IOPS, 3.4GB/sec sequential reads and 3.1GB/sec sequential writes. An SLC cache helps produce such high speeds from the same NAND as used in the SN850.
The AN1500 SSD has more extensive lighting effects than SN850.
Two of these SN730 drives used as one, via RAID 0 (striping), in the AN1500 deliver up to 780,000/710,000 random read/write IOPS and up to 6.5 and 4.1 GB/sec sequential read and write bandwidth. That’s near-PCIe gen 4 speed and produced using a Marvell 88NR2241 controller, which has 3 Arm CPUs and 8Gb of DRAM cache per SSD.
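The near-doubling comes from how RAID 0 lays data out. A minimal sketch of the striping address mapping, with an assumed stripe size and two drives (illustrative parameters, not WD's actual ones):

```python
# Minimal sketch of RAID 0 (striping) address mapping, as used to treat
# two SSDs as one. Stripe size is an illustrative assumption.
STRIPE_BLOCKS = 128   # blocks per stripe chunk (assumed)
DRIVES = 2

def raid0_map(logical_block: int) -> tuple[int, int]:
    """Map a logical block number to (drive index, block on that drive)."""
    chunk, offset = divmod(logical_block, STRIPE_BLOCKS)
    drive = chunk % DRIVES                               # chunks alternate across drives
    drive_block = (chunk // DRIVES) * STRIPE_BLOCKS + offset
    return drive, drive_block

# Consecutive chunks land on alternating drives, so a large sequential
# read keeps both SSDs busy at once -- hence near-double bandwidth.
print(raid0_map(0))     # (0, 0)
print(raid0_map(128))   # (1, 0)
print(raid0_map(256))   # (0, 128)
```

The trade-off is that striping offers no redundancy: lose either SSD and the whole volume is gone, which is an acceptable bargain for a gaming drive.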
The AN1500 has a heat sink and customisable RGB lighting, again through the Dashboard utility, with 16 different effects. The prices are: 1TB – $299.99, 2TB – $549.99, 4TB – $999.99 – and there is a downloadable datasheet.
D50 Dock
The third Black product is the D50 gaming dock with a Thunderbolt 3 connection to a host laptop, a 1TB or 2TB Black SSD inside, multiple accessory ports for a display, a mouse, a keyboard and other things, and a customisable RGB lighting bar. That needs the same Windows-only Dashboard utility as the other Black products above.
WD D50 gaming dock and ports.
Essentially it’s an external drive and Thunderbolt 3 dock combined. Thunderbolt 3 runs at 40Gbit/s and supports USB 3.1, DisplayPort 1.2 (driving two 4K displays with video and audio), HDMI 2.0, and 10GbitE networking.
It’s made to stand vertically with the RGB lighting effects bar near its base.
The dock is available with or without a drive inside, and the MSRPs are $319.99 for the drive-less dock, $499.99 for the 1TB version and $679.99 for the 2TB model.
Availability
The Black SN850 non-heatsink model should be ready to buy by the end of October, with the heatsink version following in the first quarter of 2021.
The Black AN1500 is available now at certain WD retailers, etailers, resellers, system integrators and the Western Digital Store.
The D50 game dock, with or without an internal SSD, is available for pre-order at certain WD retailers, etailers, resellers, system integrators and the Western Digital Store.
Actifio, the venture-backed data management vendor, has instituted a reverse stock split, according to sources. This indicates an IPO could be imminent, and will ensure that the stock trades at double-digit prices.
So what does it mean for Actifio’s existing stockholders? A reverse stock split means they hold the same value, just in fewer shares. Simple, right? But according to our sources, Actifio’s manoeuvre has prompted some employee disquiet.
According to a document we have seen, the company is implementing a reverse stock split ratio of 100,000 to 1. At first sight this looks like Actifio has been handing out options as if they were confetti. In another section of the document, Actifio says the change “decreases the aggregate number of shares of Common Stock issuable thereunder to 9,728,360 shares of Common Stock.”
If we multiply that figure by 100,000 we arrive at 972.8 billion… shurely shome mishtake? Typo somewhere? If correct, this would make Actifio the Venezuela of reverse stock splits.
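The arithmetic behind that eyebrow-raising figure, plus why a holder's value is unchanged by the split (the holding sizes below are illustrative, not from the document):

```python
# Reverse stock split arithmetic for the figures in the document:
# a 100,000:1 ratio and 9,728,360 post-split shares.
RATIO = 100_000
post_split_shares = 9_728_360
pre_split_shares = post_split_shares * RATIO
print(f"{pre_split_shares:,}")   # 972,836,000,000 -- about 972.8 billion

# A holder keeps the same value: fewer shares at a proportionally higher
# price. All-integer cents avoid float rounding; the holding is illustrative.
shares_before, price_cents_before = 200_000, 1          # 200,000 shares at $0.01
shares_after = shares_before // RATIO                   # 2 shares
price_cents_after = price_cents_before * RATIO          # $1,000 per share
assert shares_before * price_cents_before == shares_after * price_cents_after
```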
Pre-IPO reverse stock splits are an increasing trend, according to a recent Wall Street Journal article: “Investment bankers say institutional investors generally want to see an IPO price in the $10-to-$20-per-share range, which can be hard to achieve if too many shares dilute the price.”
We’ve asked Actifio for a comment and a company spokesperson said: “As a privately held company, we do not comment on this confidential matter.”
Actifio was established in 2009 and is funded to the tune of $311.5m. The company makes golden copies of primary data and feeds them to downstream users such as Test & Dev teams, compliance officers, data analysts and others.
N.B. We have corrected the shares calculation – an earlier version of the article stated the figure was 9.72 trillion.