Pure Storage made a net loss of $66m in its fiscal Q2, on revenues up 28 per cent year-on-year to $396.3m. The results were better than analyst forecasts, but the shares still fell overnight in response to the company's reduced forward outlook.
Pure quarterly revenue and net income history, showing a seasonal sawtooth pattern
Pure reported:
Non-GAAP gross profit of $275m, up from $210m a year ago,
Gross margin of 69.4 per cent, compared with 68 per cent a year ago,
Free cash flow of $19.9m, compared with -$11.9m a year ago,
Cash and investments of $1.19bn,
Product revenue up 24 per cent year-on-year to $300.1m,
Support and subscription revenue up 42 per cent year-on-year to $96.2m.
The USA accounted for 74 per cent of sales, according to Pure, which gained 450 new customers in the quarter. It said the customer gain was the highest in any Q2 and takes the total past 6,600.
Earnings call
In the earnings call, Chairman and CEO Charlie Giancarlo said: “Looking at the market as a whole, Pure is clearly out-executing our traditional competitors, some of whom have expressed concerns around the macro economy. We do not believe the macro environment has affected us this past quarter.”
President David Hatfield added: “Pure is growing approximately 10x faster than any competitor and their rate of spending on innovation is on average 2x less than Pure’s R&D investment… Our win rates continued to hold nicely.”
He said: “Our gross margins continue to be industry-leading, with product margins well above our competitors.”
Pure said it has a lot of new products coming in the next three quarters, and these will help it take more market share.
Outlook, analysts and enterprise buying slowdown
The outlook for the next quarter is $440m at the midpoint, 18 per cent higher than the year-ago quarter but a lower growth rate than the current quarter’s 28 per cent. The full-year outlook is pared back from $1.76bn to $1.68bn.
Wells Fargo senior analyst Aaron Rakers told subscribers: “We think a reduced forward outlook from Pure has been expected following NetApp’s weak July quarter results and reduced F2020 guide… along with weak enterprise results from Cisco and Intel.”
William Blair analyst Jason Ader sent this message to his readers: “The guide-down was attributed primarily to a precipitous decline in NAND component pricing in the second quarter (which is affecting end-user pricing and deal sizes) and secondarily to increased caution on the macro environment (although Pure has yet to see an impact on its business.)”
Pure CFO Tim Riitters is leaving the company for personal reasons, and there were warm tributes on the call to his performance in the role.
Datrium today introduced a disaster recovery-as-a-service (DRaaS) offering that uses VMware Cloud on AWS as the remote DR data centre.
The service is imaginatively called “Datrium DRaaS with VMware Cloud on AWS” and the hybrid converged system supplier said it is much simpler for customers than running their own remote data centre. There is a consistent management environment and Datrium handles all ordering, billing and support. It claims its DRaaS is up to 10 times less expensive than a traditional DR operation, but has not yet provided an example.
Datrium organised a fun quote from Steve Duplessie, senior analyst at ESG: “Let’s face it – DR has been more disaster than recovery. It has been virtually impossible to have legitimate DR for decades, and the problem just keeps getting worse with exponential data growth and overall complexity.”
And here’s another from Bryan Betts, Principal Analyst, Freeform Dynamics. “Cloud-based disaster recovery has been something of a Holy Grail—great in theory, but a lot harder to achieve in practice, thanks to the complexity of marrying up disparate environments,” he said. “Datrium DRaaS has the potential to completely flip that around – given testing and careful planning, of course! – by guaranteeing a matching DR setup in the cloud.”
Datrium DRaaS: the features
Datrium copies on-premises virtual machine (VM) data to the AWS cloud. Should disaster strike it can then start up a replacement cloud version of the on-premises DVX systems.
Datrium DRaaS diagram
Deduplicated and compressed VM snapshots are uploaded to AWS S3, where they are stored in native vSphere format and provide the basis for incremental backup and DR.
Datrium’s DRaaS offers a 30-minute recovery compliance objective (RCO) with autonomous compliance checks and just-in-time creation of VMware SDDCs from the S3-stored snapshots.
The service also supports recovery point objectives (RPOs) ranging from five minutes to multiple years, covering primary and backup data simultaneously.
Single pane of glass
Datrium’s DRaaS runs on the company’s Automatrix data platform, which converges primary storage, backup, disaster recovery, encryption and data mobility capabilities into a single offering with a consistent data plane.
DR operation can be tested in an isolated and non-disruptive way. A consistent management environment covers the on-premises and cloud-resident Datrium systems, and customers can operate both their primary and cloud DR sites with vSphere.
Failback is automated and can recover VMs once the primary site is operational again, or restore them to recover from a ransomware attack. To minimise egress charges, only changed data is sent back from the cloud.
Datrium DRaaS currently supports recovery to AWS regions including Asia Pacific (Tokyo), Canada (Central), Europe (London) and the US East and West regions. Contact Datrium for pricing.
Competition
Druva also offers DRaaS based on AWS. On-premises VMs or ones in the VMware Cloud on AWS are backed up to Druva’s cloud in AWS, converted to EBS snapshots, and can be spun up as EC2 instances if disaster strikes the source site.
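Druva does not publish the exact calls behind this, but the generic AWS flow it describes – registering an image from a backed-up EBS snapshot and launching an instance from it – looks roughly like the boto3 sketch below. The region, snapshot ID, AMI name and instance type are all illustrative.

```python
import boto3

# Illustrative only: a generic snapshot-to-instance recovery flow on AWS.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Register an AMI whose root volume is built from a backed-up EBS snapshot
# (the snapshot ID and names are placeholders).
image = ec2.register_image(
    Name="dr-recovered-vm",
    Architecture="x86_64",
    RootDeviceName="/dev/xvda",
    VirtualizationType="hvm",
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvda",
        "Ebs": {"SnapshotId": "snap-0123456789abcdef0", "VolumeType": "gp2"},
    }],
)

# Launch an EC2 instance from that AMI to stand in for the failed source VM.
ec2.run_instances(
    ImageId=image["ImageId"],
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
)
```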
Zerto sells combined backup and DR facilities covering any-to-any mobility between vSphere, Hyper-V, AWS, IBM Cloud, Microsoft Azure and any of several hundred Zerto-powered Cloud Service Providers (CSPs).
Western Digital today announced five external drives for gamers. The new WD_Black brand comprises four disk drives, including two Xbox-specific offerings, and one SSD.
They are designed to be fat and fast. In its release blurb WD said the products were “dedicated to gamers who face the dreaded challenge of choosing which of their favourite games to sacrifice when they reach the storage capacity limit of their gaming station.”
First, the hard disk drives:
P10 – 2TB to 5TB, with a USB 3.2 Gen 1 port and 3-year warranty. Costs $90 for 2TB up to $149.99 for 5TB.
P10 for Xbox One – 3TB to 5TB, with two months of Xbox Game Pass Ultimate membership. Costs $110 for 3TB or $150 for 5TB.
D10 – 8TB, 7,200rpm, up to 250MB/sec, with 3-year warranty and active cooling (fan). Costs $200.
D10 for Xbox One – 12TB, 7,200rpm, with three months of Xbox Game Pass Ultimate membership. Costs $300.
The P50 solid state drive has 2TB capacity, uses a USB 3.2 Gen 2x2 port with speeds up to 2,000MB/sec, and comes with a 5-year warranty.
The P10 Game Drive for Xbox One, D10 Game Drive and D10 for Xbox One are available this quarter. The P50 Game Drive is expected to be available in calendar Q4.
Internal disk drives
We’re puzzled about what actual disk drives are inside the P10 and D10 portable products.
Examining the dimensions of the P10 and D10 products shows that the D10 drives are physically larger and that there are two P10 sizes.
Going by their physical size and larger capacities, the D10 drives are 3.5-inch format products, as 2.5-inch format drives top out well below 8TB.
It looks like the P10 range is based on 2.5-inch format drives, with the higher-capacity models packing in extra platters. That would explain why they are 8mm taller and heavier than the 2TB product.
The P10’s spin speed isn’t revealed. We’ve asked WD what specific drives are inside the P10 products and how fast they rotate.
WD told us by mail: “Our WD_Black P10 2.5-inch gaming drives are 5400RPM class. The 2TB is 2-platter design (7mm) and 3, 4 and 5TB are a 5-platter design (15 mm).”
DDN is giving Tintri file array users access to file services through a Nexenta virtual storage appliance.
The company bought Tintri in 2018 and Nexenta this year, and this is the first time it has brought together their technologies in one product.
Tom Ellery, general manager for Tintri, said more is to come: “As part of the DDN family of brands, we will continue to extend our technology to deliver automation and analytics up the stack, and add complementary solutions.”
Tintri by DDN, as it is formally known, ships VMstore file access arrays optimised for use by VMware and other virtualized servers. VMstore systems expose NFS for vSphere and SMB elsewhere, such as Hyper-V.
There is now a NexentaStor VSA for Tintri. This gives users Nexenta’s version of filer functionality, meaning NFS and SMB/CIFS file services for Windows and Mac clients.
We asked Tintri why it needs two versions of NFS and SMB in its portfolio. Kurt Kuckein, DDN’s senior director of marketing, said Tintri’s NFS and SMB interfaces “do not include a fully featured set of file services for desktop users, and historically Tintri customers would follow VMware best practices recommendations by adding a Windows server as a VM for desktop file services.”
NexentaStor VSA for Tintri “offers a hardened and mature SMB server to consolidate server and desktop storage on a Tintri system, including user profiles for virtual desktops and other non-virtualized use cases.”
NexentaStor VSA storage pools are built on VMDK files from an ESXi datastore and rely on the durability provided by the Tintri array.
Files stored on the Tintri array are deduplicated. The VSA is managed via the Flex or HTML5 VMware vCenter plugins. A NexentaStor High Availability plugin provides automated failover of file services in the event of virtual machine or node failure.
File storage is almost always needed for applications and document storage, according to DDN. The NexentaStor VSA extends Tintri array use to home directories, user profiles, consolidated file sharing and branch office file data.
The NexentaStor VSA has been certified as VMware Ready for VSAN.
Druva has devised a way of automatically slotting backup data into the appropriate AWS storage tier, a service that, it claims, reduces total cost of ownership by up to 50 per cent.
Warm data can be kept in AWS S3, with less frequently referenced or cool data sent from there to Glacier, and rarely accessed or cold data pushed on to Glacier Deep Archive.
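Druva has not said how its tiering is implemented under the covers, and its own engine layers machine learning and customer-set policies on top. For readers unfamiliar with the underlying AWS mechanism, here is a minimal, hypothetical sketch of native S3 lifecycle rules that express the same warm/cool/cold split; the bucket name, prefix and day thresholds are made up for illustration.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical lifecycle rule expressing the warm/cool/cold split natively in S3:
# objects stay in S3 while warm, move to Glacier after 30 days and on to
# Glacier Deep Archive after 180 days. Names and thresholds are illustrative.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-cool-and-cold-backups",
            "Filter": {"Prefix": "backups/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "GLACIER"},
                {"Days": 180, "StorageClass": "DEEP_ARCHIVE"},
            ],
        }]
    },
)
```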
Phil Goodwin, director of research, IDC, gave a warm quote for the Druva launch release that sums up the Druva pitch nicely: “IDC estimates approximately 60% of corporate data is ‘cold,’ about 30% ‘warm’ and 10% ‘hot’. Organisations have typically faced a tradeoff between the cost of storing ever increasing amounts of data and the speed at which they can access the data.
“Druva’s collaboration with AWS will allow organisations to tier data in order to optimise both cost and speed of access. Customers can now choose higher speed for the portion of data that needs it and opt for lower costs for the rest of the data that does not.”
Single pane of glass
The tiering can be performed automatically by Druva’s software, using machine learning and policies, or managed explicitly by customers via a management dashboard. Policies are also set through this dashboard.
Another quote, this time from Mike Palmer, Druva’s chief product officer: “The ability to see multiple tiers of data in a single pane of glass increases control for governance and compliance and eventually analytics.”
Druva claims it is the only SaaS data protection offering built completely on AWS, and as such is cloud-native.
Blocks & Files expects Druva to make moves towards Microsoft’s Azure and in due course, Google Cloud Platform.
No word yet on availability and pricing for the AWS tiering service. However, Druva today confirmed general availability of its Disaster Recovery-as-a-Service (DRaaS), announced in March 2019.
Acronis has a fresh release of its True Image PC-tablet disk image backup and cloning software, with faster performance and better anti-malware defences.
It can stop ransomware and crypto-jacking attacks in real time, automatically restoring any affected files, Acronis claimed.
True Image supports Windows, macOS, iOS and Android operating systems. The latest release, True Image 2020, enables users to automatically replicate local backups in the cloud, so they have a local copy of the disk image as well as a further copy in the cloud for added protection.
There are more than 100 point enhancements in the update, which is presented under a cyber protection banner and combines data protection and security attributes.
True Image 2020 has a new backup format that delivers faster backup and recovery speeds. It also enables users to browse files in their cloud backups more quickly.
True Image 2020 desktop screen.
Other enhancements include Wi-Fi network selection and the prevention of backups when a device is operating on battery power. There are also improved machine learning models for detecting malware attacks.
There are three versions: Standard, Advanced and Premium, with pricing from $49.99 for the Standard and Advanced editions and $99.99 for the Premium version.
The potential of persistent memory is to return computing to the pre-DRAM era but with vastly more powerful NVRAM servers, and external storage relegated to being a reference data store.
Or so says Steven Sicola, the storage sage whose 40-year career includes senior technical roles at Compaq, Seagate, X-IO, SanDisk, Western Digital and Formulus Black.
Server starvation NVRAM fix
In his presentation at the 2019 Flash Memory Summit this month, he argued that modern IT suffers from looming server starvation because data growth is accelerating and server processors and networks cannot keep up. By 2020 a tipping point will be reached, and server scaling will be inadequate in the face of data in the tens of zettabytes.
The memory-storage IO bottleneck is the fundamental flaw afflicting today’s servers. How to overcome this? Sicola’s answer is to re-engineer servers with non-volatile RAM (NVRAM) instead of DRAM.
Sicola presentation deck slide.
Such an NVRAM server would hold all its primary data in memory and share it with other NVRAM servers, speeding up distributed computing by orders of magnitude with no external storage getting in the way. External storage would be relegated to sequentially written and read secondary or reference/archive data.
Tipping point
Sicola reckons: “NVRAM based servers, accompanied by an NVRAM-optimised Operating System for transparent paradigm shift are just about here…this is a big tipping point.”
He thinks the flawed Von Neumann era could be about to end, with NVRAM servers: “We now see the re-birth of NVRAM with Intel’s Optane, WD’s MRAM, etc.”
NVRAM servers will be able to handle many more virtual machines and much larger working data sets, especially for Big Data, AI, and all the other applications begging for more server power, from databases, mail, VDI, financial software, to, as Sicola says, you name it.
Memory-storage bottleneck
For Sicola, the big bottleneck now is between memory and storage, where there is an IO chokepoint. That’s because storage access is so much slower than DRAM access. Despite the use of SSDs, CPU cycles are still wasted waiting for storage IO to complete.
A slide in his FMS19 presentation declared: “It has been the ‘Holy Grail’ for all software engineers and those making servers to have NVRAM instead of cache (DRAM), but have been living with a flawed Von Neumann architecture for almost 40 years!!”
This bottleneck has come about because the Von Neumann computing architecture, seen in the first mainframes, has been modified and become flawed.
The original mainframes did not have dynamic random access memory (DRAM), the volatile stuff. Instead they used NVRAM – remember ferrite cores? These machines were examples of Von Neumann architecture, which was first described in 1945. Minicomputers also used Von Neumann architecture.
Von Neumann architecture
In these systems a central processing unit accessed instructions and data from a memory unit. This used non-volatile memory as dynamic or volatile random-access memory (DRAM) wasn’t commercially available until the 1970s. Instructions and data were loaded from an input peripheral unit and the system output data to an output peripheral unit.
As systems developed, a bottleneck was identified between the CPU and the memory unit, because instructions and data were read into the CPU over a single data path. The later Harvard architecture has separate instruction and data paths, enabling the system to operate faster.
A modified Harvard architecture machine combines the data and instruction memory units with a single address space, but has separate instruction and data pathways between it and discrete instruction and data caches. The caches are accessed by the CPU.
This architecture is used by x86, ARM and Power ISA processors. The memory units use DRAM technology, dating from the 70s, with x86 processors appearing from 1978 onwards. DRAM usage meant persistent storage was separate from memory, with memory functioning as a cache.
Since then CPUs have gained faster clock rates and more cores and developed a hierarchy of ever-faster caches, while internal data paths have sped up, with today’s PCIe 3.0 making way for its successor, PCIe 4.0.
But not today, not yet
However, NVRAM servers are not yet ready to go mainstream, according to Sicola. Today’s hybrid RRAM/flash NVDIMMs are premature: they are too expensive, do not have enough capacity and are over-complex, with firmware in memory.
He thinks network bandwidth could rise markedly, with today’s switch-based networking changing over to memory-like hops with torus networks, such as those from Rockport Networks. Memory fabrics like Gen Z could also speed data transfer between NVRAM servers, with PCIe 4.0 accelerating internal traffic.
It’s a glorious future but will it come to pass? Will 40 years of x86 servers using DRAM give way to servers based around persistent memory? Some might think that’s a big ask.
Innodisk, the Taiwanese storage vendor, has built an industrial SSD with Microsoft’s Azure Sphere providing software updates, device-level analytics, data security, and remote monitoring and control through the Azure cloud.
The InnoAGE SSD will be embedded on an edge device. Its firmware receives commands from Azure Sphere which in turn connects to customers’ Microsoft Azure Cloud deployments. The SSD can collect data and issue requests for management through the cloud.
Example application areas are manufacturing, surveillance, unmanned devices, vending machines and digital signage.
A Microsoft FAQ says: “You can connect to other clouds for app data while running Azure Sphere – or optimise efficiencies by using Azure Sphere alongside Visual Studio and Azure IoT.”
This reminds Blocks & Files of HPE’s cloud-based InfoSight device monitoring facility, which is applied to storage arrays, servers and network switches. It seems entirely logical to deploy the same facility at storage device level, particularly in non-data centre, industrial site locations.
Azure Sphere
The InnoAGE SSD comes in 2.5-inch or M.2 format with a SATA interface and an Azure Sphere system chip affixed to it. This constitutes the Azure Sphere microcontroller unit (MCU), operating system and security service.
InnoAGE SSD with highlighted on-board Azure Sphere MCU chip.
MCU chips are built by certified Microsoft partners. Customers buy devices using Azure Sphere and that price entitles them to device lifetime access to the Azure Sphere Linux-based OS, updates to it and security service.
The 2.5-inch version has a 1TB maximum capacity while the M.2 variant holds up to 512GB. Both use Toshiba 64-layer 3D NAND with up to 3,000 write/erase cycles. Availability is slated for the end of September, and a development road map is planned by the end of the year.
There is as yet no information available from Innodisk about the device’s performance or pricing. Check out a brief InnoAGE datasheet to find out a little more.
Here are some storage news stories to round off the week.
Dell EMC storage comes closer to Cloudera
Dell EMC and Cloudera have strengthened their partnership whereby Dell EMC storage is used with Cloudera Big Data and analytics software.
Cloudera has a Quality Assurance Test Suite (QATS) process for certifying both the Hortonworks Data Platform (HDP) and Cloudera’s Distribution including Apache Hadoop (CDH) with hardware vendors.
To date, Dell EMC Isilon has been validated with CDH v5.14 and HDP v3.1. Over the next few months, Dell EMC will work with Cloudera to get Isilon certified through QATS as the primary HDFS (Hadoop File System) store for both CDH v6.3.1 and HDP v3.1. It also plans to get Dell ECS certified as the S3 object store for CDH and HDP.
Dell EMC also plans to launch new joint Hadoop tiered storage solutions that enable customers to use direct-attached storage (DAS) for hot data and shared HDFS storage for warm/cold data within the same logical Hadoop cluster. This delivers better performance and more economical scaling.
It is working with Cloudera to align the Isilon and ECS roadmaps with Cloudera’s strategy for the Cloudera Data Platform (CDP), the new Hadoop distribution that combines best-of-breed components from CDH and HDP.
Dell EMC will also offer phased migration services from CDH or HDP to CDP. These migration services will be launched as CDP becomes available for on-premises deployment.
Background details can be read in a Direct2DellEMC blog by John Shirley, VP of product management.
InfiniteIO chums up with Cloudian
InfiniteIO file access acceleration can now work its metadata magic to speed access to files stored on a Cloudian object storage system.
The jointly validated system combines Cloudian HyperStore object storage with InfiniteIO Hybrid Cloud Tiering, with no changes to users, applications or systems. HyperStore can include objects stored in the public cloud.
A combined InfiniteIO-Cloudian system helps ensure data is properly placed across primary, secondary storage and public cloud storage, potentially saving millions of dollars in primary and backup storage costs.
The companies said customers can migrate hundreds of petabytes of inactive data from on-premises NAS systems to the exabyte-scalable Cloudian object storage system, with no downtime or disruption to existing IT environments.
Customers can also attain highly available enterprise-class storage with the performance of all-flash NAS across all storage tiers.
Shorts
ATTO Technology announced that its Xstream CORE ET8200 network interface has been adopted by Spectra Logic as a component of its Spectra Swarm system, which adds Ethernet connectivity to Spectra LTO tape libraries.
Backblaze’s B2 Copy File APIs, previously in beta, are now public and ready to use.
The UK’s University of Leicester has deployed Cloudian’s HyperStore object storage system as the foundation of a revamped backup platform.
Everspin Technologies, the developer and manufacturer of Magnetoresistive RAM (MRAM) persistent memory products, has announced an IP cross-licensing agreement with Seagate Technology. Everspin gets access to Seagate’s MRAM patents, while Seagate gets access to Tunneling Magnetoresistance (TMR) patents owned by Everspin.
The US DOD has bought a $12m, 6 petaflop IBM supercomputer housed in a mobile shipping container and using the Spectrum Scale parallel access file system. It’s designed to be deployed to the tactical edge.
Intel told an analyst briefing group about Optane DC Persistent Memory progress: “To date we are seeing good traction within the market. We currently have over 200 POCs in the works with customers.” It commented: “As with any new disruptive technology, broad customer adoption takes time. The value is there.” That remains to be seen.
Intel Optane DC Persistent Memory status.
Micron is the first memory company to begin mass production of 16Gbit DDR4 products using 1z nm process technology. This delivers substantially higher bit density, as well as significant performance enhancements and lower cost compared with the previous-generation 1y nm node.
Micron unveiled the industry’s highest-capacity monolithic 16Gbit low-power double data rate 4X (LPDDR4X) DRAM. It’s capable of delivering up to 16GB of low-power DRAM (LPDRAM) in a single smartphone.
Object Matrix announced that MatrixStore, its object storage product, is now integrated with VSN’s media management and workflow automation platform, VSNExplorer.
Data protection vendor Overland Tandberg, freshly spun out of Sphere 3D, has reported $67m in revenues, according to Black Enterprise.
Storage Architects, a Dutch consultancy specialising in digital data storage, has chosen Qumulo’s distributed file system to serve the needs of its enterprise clients.
Storage Made Easy has released a new version of its Enterprise File Fabric product. It adds a Content Intelligence feature hooked into the product’s search, providing on-demand checksums, media information and AI/machine learning – initially with Google Vision, to be followed by Amazon Rekognition and IBM Watson integrations. The release also includes in-browser media playback with video scrubbing from any storage, plus thumbnails and previews for an extended range of image formats, such as RAW images.
SwiftStack has announced its new Technology Partner Program and the “Works with SwiftStack” designation, to provide customers with the confidence that validated integrations have passed testing to ensure compatibility with the company’s object-based cloud storage offering.
Processor developer Tachyum has joined the Peripheral Component Interconnect Special Interest Group (PCI-SIG), a 700+ member association committed to advancing non-proprietary PCI technology to yield a reliable, scalable solution for high-speed I/O in numerous market applications. It is working with PCIe 4.0 and 5.0.
People
Nancy Hurley has been appointed Consultant – Team Lead, VxRail Product Marketing at Dell EMC. She had been acting CMO at Bridgeplex.
Several execs have resigned from NetApp. Joel Reich, EVP Products and Operations, has retired from running NetApp’s Storage, Systems and Software business unit.
Brad Anderson has been promoted from running NetApp’s Cloud business to EVP and GM for NetApp’s Cloud Infrastructure and Storage, Systems, and Software business units.
Object storage software house MinIO has demonstrated its storage can run up to 93 per cent faster than a Hadoop system.
In its latest benchmarks, published this week, MinIO was faster than a Hadoop file system (HDFS) configuration. In the test set-up both systems were run in the Amazon public cloud. There was an initial data generation procedure and then three Hadoop process execution times were examined – Sort, Terasort and Wordcount – first using HDFS and then MinIO software.
MinIO was slower than Hadoop running the data generation process but faster with the Sort, Terasort and Wordcount tests. It was also faster overall, based on summing the time taken for the data generation and test runs.
In a blog post announcing the results, MinIO CMO Jonathan Symonds exclaimed: “Basically, modern, high performance object storage has the flexibility, scalability, price, APIs, and performance. With the exception of a few corner cases, object storage will win all of the on-prem workloads.”
In other words, “Goodbye, NAS and SAN.”
Here’s the table of test results:
The benchmarks are charted below:
MinIO’s best result was 93 per cent faster at the Sort run.
Adding the data generation times to the test run times, MinIO (3,700 secs) was faster overall than HDFS (4,337 secs). You can check out the MinIO HDFS benchmark details.
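As a quick sanity check on the overall comparison, here is the arithmetic on those two published totals. The per-test figures, including the 93 per cent Sort result, come from MinIO’s table and are not reproduced here.

```python
# Published overall elapsed times in seconds (data generation plus the
# Sort, Terasort and Wordcount runs).
HDFS_TOTAL = 4337
MINIO_TOTAL = 3700

# Reading "faster" as the reduction in elapsed time relative to HDFS.
time_saved_pct = (HDFS_TOTAL - MINIO_TOTAL) / HDFS_TOTAL * 100  # ~14.7%
speedup = HDFS_TOTAL / MINIO_TOTAL                              # ~1.17x

print(f"MinIO took {time_saved_pct:.1f}% less time overall ({speedup:.2f}x speedup)")
```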
Hadoop systems rely on multiple combined compute-and-storage nodes, each handling a subset of the overall data set. HDFS keeps three copies of the raw data by default, for reliability, and requires ever larger numbers of nodes as data sets grow – potentially hundreds of servers.
An object storage system is inherently more reliable at holding data than a Hadoop system and does not need to make three copies. The amount of compute resource to run an analysis can be tailored to the workload instead of being drawn from the HDFS nodes.
MinIO said analytics using its object storage software can typically run on fewer servers than an HDFS system, and needs less disk or SSD capacity to hold the data. This saves time and money.
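MinIO’s benchmark used standard Hadoop MapReduce jobs, but the general pattern – pointing a Hadoop-compatible analytics engine at object storage through the S3A connector instead of at HDFS – can be sketched in PySpark. This is purely illustrative: the endpoint, credentials and bucket names are placeholders, and it assumes the hadoop-aws (S3A) libraries are on the classpath.

```python
from pyspark.sql import SparkSession

# Point Spark's S3A connector at a MinIO endpoint instead of HDFS.
# Endpoint, credentials and bucket names are placeholders.
spark = (
    SparkSession.builder.appName("minio-wordcount")
    .config("spark.hadoop.fs.s3a.endpoint", "http://minio.example.local:9000")
    .config("spark.hadoop.fs.s3a.access.key", "EXAMPLE_ACCESS_KEY")
    .config("spark.hadoop.fs.s3a.secret.key", "EXAMPLE_SECRET_KEY")
    .config("spark.hadoop.fs.s3a.path.style.access", "true")
    .getOrCreate()
)

# A simple Wordcount-style job reading from and writing to object storage.
lines = spark.read.text("s3a://benchmark/corpus/*.txt")
counts = (
    lines.rdd.flatMap(lambda row: row.value.split())
    .map(lambda word: (word, 1))
    .reduceByKey(lambda a, b: a + b)
)
counts.toDF(["word", "count"]).write.mode("overwrite").parquet(
    "s3a://benchmark/wordcount-output"
)
```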
MinIO’s software can run on-premises or in the public cloud.
Oracle is shuttering its flash storage division and laying off at least 300 employees, according to various sources.
Employees were told of the layoffs on Thursday, August 15 by Mike Workman, SVP of flash storage systems at Oracle, via a conference call.
An outgoing Oracle staffer who attended the conference call told Blocks & Files: “Today Larry’s Band of Storage Misfits aka Pillar Data was quietly let go and the product discontinued. A small number of people were kept [to] take the product to the grave via sustaining mode.”
He said: “The layoff estimate is approximately 300 people.”
The job cuts were also reported in two anonymous posts on Thelayoff.com yesterday. One poster wrote: “All but 12 people in Mike Workman’s org were laid off … essentially the entire Flash Storage division. 12 were retained. Over 400 jobs cut…”
Oracle is focusing heavily on its public cloud business, with less emphasis on on-premises deployments of its software, and hence its hardware. It now looks as if Oracle is stopping building its own flash storage hardware and software.
We asked Oracle for comment and a spokesperson’s response was: “As our cloud business grows, we will continually balance our resources and restructure our development group to help ensure we have the right people delivering the best cloud products to our customers around the world.”
Background
Mike Workman founded Pillar Data in 2001 with backing from Larry Ellison, then Oracle’s CEO. The company was acquired by Oracle in 2011 and subsequently became part of the Oracle Flash Storage division, based at Broomfield in Colorado. Oracle employed 2,000 people at Broomfield last year.
Oracle flash storage products included the FS1, a redesigned Pillar Axiom array.
They were integrated into the Oracle engineered systems, the components of which are architected, co-engineered with Oracle software, integrated, tested, and optimized to work together for better Oracle Database performance.
Engineered systems include the Exadata Database Machine, Big Data Appliance, Zero Data Loss Recovery Appliance, Private Cloud Appliance and Database Appliance.
The Exadata Database Machine X8 was announced in June 2019. Gurmeet Goindi, master product manager for Exadata at Oracle, blogged about the X8: “The X8 hardware release updates 2- and 8-socket database (compute) servers and intelligent storage servers with the latest Intel chips and significant improvements in storage. The X8 hardware release also adds a new lower-cost storage server that extends Exadata benefits to low-use data.”
NetApp is working on running ONTAP, its storage array operating system, on IoT edge devices and other small things.
The game was given away by a tweet today from John Martin, NetApp’s APAC director of strategy and technology, and in a subsequent exchange with Simon Sharwood, the Oz journalist (and former Register APAC editor).
Here’s the tweet stream:
Our take? We think NetApp engineers have installed ONTAP on an Arm-class or RISC-V class CPU system. That means it could run on devices with embedded ARM systems, such as smart NICs and assorted IoT edge devices. They could integrate with larger NetApp systems, on-premises or in the cloud or both.
Octavian Tanase, NetApp’s SVP for ONTAP, commented on Twitter:
We are seeing computational SSDs – from NGD, for example – and we think these could also run ONTAP on their embedded processors. Whether it would be useful to have a single drive ONTAP system is… another question.
We have asked NetApp for comment and a spokesperson said: “We don’t have anything we’re announcing publicly at this time regarding this. It is not a solution that is officially in development, but should that change we will definitely have something for you.”