
Ceph backers apply Foundation in storage makeover

Ceph, the open source storage software platform, has gotten its very own foundation. Just like Linux.

The Ceph Foundation is organised as a directed fund under the Linux Foundation. It “will organise and distribute financial contributions in a coordinated, vendor-neutral fashion for immediate community benefit. This will help galvanise rapid adoption, training and in-person collaboration across the Ceph ecosystem.”

The foundation is the successor to the Ceph Advisory Board, which was formed in 2015 by Canonical, CERN, Cisco, Fujitsu, Intel, Red Hat, SanDisk, and SUSE. 

Thirty vendors and academic institutions have joined the Ceph Foundation. They include Canonical, China Mobile, DigitalOcean, Intel, OVH, Red Hat, SoftIron, SUSE, Ubuntu and Western Digital.

We trust the new organisation will help to speed up Ceph software development, which has moved at a stately pace so far.

Jim Zemlin, executive director of the Linux Foundation, said: “Under the Linux Foundation, the Ceph Foundation will be able to harness investments from a much broader group to help support the infrastructure needed to continue the success and stability of the Ceph ecosystem.”

Tim Massey, CEO of HyperDrive developer SoftIron, told us the launch of the Ceph Foundation “demonstrates the strong support from the open source community for Ceph. Like other open source projects, once they achieve critical mass, they become the de facto standard for the category. There is a massive amount of Ceph in production around the globe, and I believe that Ceph will inevitably dominate the storage industry.”

Ceph: Inevitable or…evitable?

In a press statement, SoftIron argues that “it’s safe to say [Linux] won because it is open, flexible, and widely embraced by the global developer community. And herein lies the argument for why Ceph will fulfil a Linux-like destiny in open-source software-defined storage, because it delivers a similarly feature rich and extremely flexible platform that is already widely embraced.”

Blocks and Files notes that the bulk of software-defined storage innovation currently takes place in venture-funded startup land, outside the world of Ceph.

Let’s set out some examples:

  • WekaIO and its claimed world’s fastest file system
  • Elastifile and its hybrid high-performance file system
  • Excelero and its NVMesh software
  • OpenIO and object storage using serverless computing
  • Portworx and container storage
  • Qumulo Core scale-out file system, available on Amazon
  • StorONE and its re-written storage stack
  • Multiple storage data management startups: Actifio, Cohesity, Delphix, Hammerspace, Rubrik and others

Some suggest Linux took over the Unix world because there was little differentiation between Unix distributions, and because writing an enterprise-class operating system for commodity x86 server hardware is a long, hard effort. In comparison, rewriting storage stack software is less onerous and the differentiation between products is large.

Cephalopods will disagree with the above observation. But it does not seem to us that Ceph is inevitably inevitable. Also, by focusing on medium-speed storage needs, the community could even confine itself to a not-quite-state-of-the-art storage backwater.

The Ceph Foundation hosts the second Cephalocon conference in Barcelona, Spain from May 19 – 20, 2019, co-located with KubeCon + CloudNativeCon 2019, May 20-23.

Infinidat: We own the high-end storage array market

Infinidat claims it has a 60 per cent plus share of multi-petabyte storage array shipments. But we have only its word to go on.

Brian Carmody, Infinidat CTO, argues that his company operates in a separate multi-PB market sector, while all-flash array shippers such as HPE, IBM, NetApp and Pure ship multi-TB arrays.

“We see a smart division of the storage market emerging, where great companies like Pure and NetApp are kicking ass in the TB-scale market, and Infinidat is dominating the Petabyte plus market,” he said.

It would be helpful to see independent statistics from IDC or Gartner to validate the idea that there are separate TB-level and PB-level array market sectors, and also to show how the other big array iron vendors are doing.

But we infer that Infinidat is comparing itself with Dell EMC (PowerMax), Hitachi Vantara (VSP), HPE, and IBM (DS8000), whose arrays are available in hybrid flash+disk and all-flash configurations. It also sees itself competing with high-end arrays from NetApp.

You got to pick a petabyte or two

Infinidat says some 74 per cent of its shipped systems are a petabyte or more in size, and we have seen a chart of Infinidat shipped systems showing their capacities.

In an email interview Carmody said that “74 per cent of InfiniBoxes are >1PB, our average customer has 7.3PB of InfiniBox, and our largest have over 100 PB.”

“Right now,” he claimed, “Infinidat is focused on consolidating our lead in the Multi-PB enterprise market space. Our footprint is growing exponentially: We’ve shipped 1.7 Exabytes of capacity in the past year; that is [an] over 60 per cent share of global multi-PB market capacity.”

If Infinidat has shipped 1.7EB of capacity in the last 12 months and the average customer footprint is 7.3PB, that equates to roughly 233 average-sized customer footprints’ worth of capacity shipped in the period.
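As a back-of-the-envelope check on that inference, using only the figures Carmody supplied:

```python
# Back-of-the-envelope check: how many average customer footprints does
# 1.7EB of shipped capacity represent, at 7.3PB per customer?
shipped_capacity_pb = 1.7 * 1000   # 1.7EB expressed in PB
avg_customer_pb = 7.3              # average InfiniBox capacity per customer

print(round(shipped_capacity_pb / avg_customer_pb))  # ~233
```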

Array vendors do not typically release statistics about the capacities of their shipped arrays – unless it suits them. It suits Infinidat. Its marketing message is that it ships high-end, disk-based, monolithic arrays with bulk storage capacities that match the performance of all-flash arrays.


Infinidat says the percentage of its systems greater than a PB in size rose in the second quarter of 2018 to 84 per cent. The median customer had a near-4PB array, the average one 7.3PB of capacity, and its largest customer has a 100PB system.

A 2018 Gartner Critical Capabilities report for general purpose drive arrays points out: “Customers often purchase fewer large multi-petabyte InfiniBox arrays rather than many smaller InfiniBox arrays, and this is a good indicator of Infinidat concentrating its sales in the high-end market segment.”

Give that petabyte arrays

Infinidat has given us a partial and self-serving insight into shipped big-array data. But we note the company boasts that its customers are more likely to recommend it than Dell EMC or NetApp customers, citing Gartner Peer Insights customer satisfaction ratings.

Our thinking is that the other legacy big-array suppliers are hurting and will have to refresh their products and cut prices.

Infinidat founder

Infinidat is Moshe Yanai’s latest attempt to disrupt the storage industry, after his previous efforts with Symmetrix for EMC, XIV and Diligent. He is a gifted storage array architect and engineer who designed EMC’s original Symmetrix drive array after joining EMC in 1987. Symmetrix took on IBM’s then-dominant drive arrays and beat them, with Yanai driving its development to extend its momentum.

Moshe Yanai

By 1995 it represented 41 per cent of disk terabytes for mainframes with IBM’s own arrays accounting for 35 per cent.

Eventually he clashed with senior EMC business management and resigned in 2001. He started the XIV array business in 2002, which prospered and was bought by IBM in 2008. He was also involved with Diligent and its deduplication technology, which IBM also bought.

Yanai became an IBM Fellow but left Big Blue in 2010 and founded Infinidat in 2011, recruiting many ex-XIV staff. Product was first shipped in 2015.  Yanai says Infinidat broke storage array footprint growth records with more than 1.4 EB deployed in three years.

He has been given many honours, is a billionaire and is probably the most inventive and disruptive visionary in the entire storage industry.

Western Digital aims to boost NVMe SSD share with this one cunning virtualization trick

Western Digital today introduced its ME200 memory extension NVMe SSD. This uses a hypervisor to implement a software memory management unit which adds SSD capacity into the host server’s memory address space.

The outcome is server DRAM bulked up with virtualized NAND – cheaper than Intel Optane, WD claims.

The Register explores the technology in more detail here. In this companion piece we outline the market realities that WD faces.

If WD is right and there is a large memory extension market then it is the sole supplier and should reap the benefit. But it has a mountain to climb.

For starters the company lags far behind NVMe SSD market leaders Samsung and Intel. TrendFocus, a data storage analyst firm, estimates WD had a 5.5 per cent capacity ship share in enterprise PCIe/NVMe SSDs in the first half of 2018, versus 48 per cent for Samsung and 30 per cent for Intel.

This is problematic at a time when NVMe SSD shipped capacity as a proportion of all enterprise SSD shipped capacity is shooting up.

According to IDC, PCIe SSD unit shipments will grow at a 54 per cent compound annual growth rate through 2022. Much of this is at the expense of SAS and SATA interface SSDs, which will see unit shipments decline over that period – spectacularly in the case of SATA.
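To put that growth rate in context, here is a quick compound-growth sketch; treating 2018 as the baseline year, so four years of growth to 2022, is our assumption rather than IDC’s:

```python
# What a 54 per cent CAGR implies for cumulative growth over n years.
cagr = 0.54

for years in range(1, 5):  # assumed 2018 baseline, so 2019 through 2022
    multiple = (1 + cagr) ** years
    print(f"after {years} year(s): {multiple:.1f}x baseline unit shipments")

# Over four years that compounds to roughly a 5.6x increase.
```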

NVMe (PCIe) SSD capacity shipments jumped more than 220 per cent year-on-year in the first half of 2018, according to TrendFocus. NVMe shipments in Q2 2018 accounted for 43 per cent of total enterprise SSD capacity, versus 20 per cent a year ago.

TrendFocus estimates that NVMe SSDs accounted for 25 per cent of WD’s total enterprise SSD capacity shipped in that half compared to 50 per cent for Samsung and 60 per cent for Intel.

WD loves ScaleMP

WD has not named its software partner for the software MMU functionality, but Storage Newsletter has figured out that the provider is ScaleMP and its MemoryONE product.

Here is ScaleMP’s description of a MemoryONE demo at SuperComputing 18:

“MemoryONE demonstrations will show how clients can leverage SDM (Software-Defined Memory) to use high-end NVMe SSDs as system memory for transparently replacing or expanding DRAM for memory-intensive applications. This innovative memory solution targets cloud, HPC, and enterprise clients in need of larger system memory, allowing more data to be analyzed in real time, leading to greater insights and faster decision making.”

A perfect match for WD’s ME200 drive.

Faster, cheaper: Pavilion packs massive monolithic features into mini-NVMe array

Pavilion Data Systems, an NVMe array startup, has added encryption at rest to its RF100 series appliance.

Pavilion’s array is a radical architectural departure from other all-flash arrays, most of which are basically dual-controller systems. The company says its system delivers the best price-performance in the all-flash array industry. So let’s explore the design in a little more detail.

The RF100 comes in a 4U enclosure which holds up to 10 line cards. Each card contains two Broadwell Xeon CPUs in an active-active controller configuration and 4 x 40Gbit/s or 100Gbit/s Ethernet ports. TCP and RoCE (RDMA) access are supported.

Pavilion Data Systems’ RF100 series appliance

That tots up to 20 controller CPUs and 40 Ethernet ports – a big step up from a traditional dual-controller array such as Dell EMC’s Unity or NetApp’s FAS.

The multitude of controllers makes the architecture more akin to a monolithic, high-end array such as Dell EMC’s PowerMax or IBM’s DS8000. In effect there are multiple engines inside the array.

Monolithic array-style design attributes of RF100-series array

These controllers, each with their own memory and OS copy, attach across a PCIe fabric to up to 72 NVMe SSDs, arranged in four banks of 18, for 14TB to 1PB of capacity. Two redundant supervisor modules handle the control plane and management functions for the entire system.

The Pavilion software provides dual-parity RAID, with a 12 per cent overhead, zero-space instant snapshots, clones and encryption.
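Here is a quick sketch of how those headline figures hang together. The assumption that dual parity is computed per 18-drive bank is ours, not Pavilion’s; it simply shows that the quoted 12 per cent overhead is plausible.

```python
# How the RF100's headline numbers add up (our reading of Pavilion's figures).
line_cards = 10
cpus_per_card = 2
ethernet_ports_per_card = 4

print(line_cards * cpus_per_card)            # 20 controller CPUs
print(line_cards * ethernet_ports_per_card)  # 40 Ethernet ports

# Dual-parity overhead if parity were computed across each 18-drive bank
# (our assumption for illustration):
drives_per_bank = 18
parity_drives = 2
print(f"{parity_drives / drives_per_bank:.0%}")  # ~11%, close to the quoted 12 per cent
```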

You can scale up SSDs and controllers separately to match a workload profile.

Everything is based on commodity hardware: there is no need for any agent or other Pavilion software on the accessing host servers, and the RF100 has no ASICs or FPGAs.

Customers can use their own SSDs, with Micron’s 9200, Western Digital’s SN200, Samsung’s PM1725b and Intel’s P4800X, P4510 and P4610 all supported.

Fully loaded

The array provides end-to-end NVMe data access to host servers, with average read latency of 117μs. 

In a fully loaded box the performance is up to 120GB/sec read bandwidth, 60GB/sec write bandwidth and 20 million 4K Random Read IOPS.

This is remarkable, coming as it does from a 4U box. Pavilion says its $/IOPS rating is up to 25 times lower than that of competing all-flash arrays. It has not yet published numbers to support the claim, but the IBM DS8888 gives a rough idea of how much monolithic arrays cost: a 34.3TB configuration had a list price of $1.97m in November 2016, according to its SPC-1 benchmark report.

In a video, Pavilion Data’s Head of Products Jeff Sosa says the appliance can replace the locally-attached NVMe SSDs of 20 rack servers. This means you can scale compute and storage separately, and so avoid unused and stranded SSD capacity in each server.

Jeff Sosa introducing the RF100 series product

Application use cases cited by Pavilion include MongoDB, MySQL, Splunk Enterprise, Kubernetes on-premises, Spark and Apache Cassandra.

Supercomputing ’18

Pavilion is to exhibit the RF100 series product next week at SC18, November 12-15 in Dallas, Texas.

It will show the results of two demo workloads:

  • Genomics multi-variant testing analysis to demonstrate rack-scale shared NVMe storage to accelerate human genome analysis,
  • The SPEC-FS benchmark in a clustered file system environment using GPFS (IBM Spectrum Scale) with shared, rack-scale flash storage.

Note that Pavilion has not submitted an official SPEC-FS benchmark result (we understand this is SPEC SFS 2014).

Sosa told us: “We haven’t yet submitted an official result, which is also why we didn’t specify a specific result in the press announcement. We will be discussing testing we have done so far as part of the booth exhibit, but not releasing official results.”

He said: “We have been focused more on the tests outside of software build up until now, since we believe that the VDA and Database tests are more relevant to our customer base, but we plan to run all of them.”

That will be a good thing as it will provide an objective comparison with systems from rival suppliers.

Lenovo sets SPC-1 price performance record

Lenovo has set a price performance record of $91.76 per 1,000 SPC-1 IOPS (KIOPS), breaking its previous record of $93.29 by 1.6 per cent.

The SPC-1 benchmark is a synthetic workload that measures storage subsystem performance. It supports deduplication and compression, and the set-up simulates a business-critical environment characterised by predominantly random IO operations.

For its latest run Lenovo used a ThinkSystem DE6000H, a twin-controller, flash or hybrid flash+disk storage array supporting raw data-read-throughput rates of up to 21GB/sec.

ThinkSystem DE6000H

This was configured as a 2U x 24-slot enclosure with 24 x 800GB SAS SSDs. The system scored 460,011 SPC-1 IOPS – a benchmark-specific measure rather than the actual number of IO operations a production system would carry out.

This is not a high score – the 17th fastest, and a long way behind the four Huawei systems with 4 million-plus SPC-1 IOPS that top the SPC-1 performance table.

The record is 7,000,565 SPC-1 IOPS from a Huawei OceanStor Dorado 18000 V3 array, priced at $2,638,917.96. This has a price/performance of $376.96/KIOPS, four times higher than the Lenovo system. For the avoidance of doubt, in this case ‘higher’ means ‘worse’.

The DE6000H used in the benchmark run had a total capacity of 9,448GB. In contrast the top IOPS-scoring Huawei system is, at 211,316GB, more than 22 times larger.

A tale of two charts

A chart of overall SPC-1 performance scores shows Lenovo in 17th place.

But Lenovo occupies pole position for price performance.

Real world computing

The beauty of the SPC-1 benchmark is that it enables you to group similarly sized systems. This makes it easy to see how they rank in IOPS performance and how much that performance costs.

So the Lenovo DE6000H is the most cost-efficient on that measure. In comparison, a 9,006GB NetApp EF570 system scored 500,022 SPC-1 IOPS at a cost of $128.42/KIOPS – broadly similar performance, but at a higher cost per IOPS.

A 9,073GB Fujitsu ETERNUS AF650 S2 scored 620,153 SPC-1 IOPS at a cost of $269.79/KIOPS. That’s 35 per cent more performance than the Lenovo array, but at roughly three times the price per KIOPS.
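A short sketch shows how those comparisons fall out of the published SPC-1 figures; note the Lenovo system price is implied from its $/KIOPS rating rather than quoted directly.

```python
# SPC-1 comparison arithmetic from the published figures.
systems = {
    # name: (SPC-1 IOPS, $ per KIOPS)
    "Lenovo DE6000H":         (460_011,    91.76),
    "NetApp EF570":           (500_022,   128.42),
    "Fujitsu AF650 S2":       (620_153,   269.79),
    "Huawei Dorado 18000 V3": (7_000_565, 376.96),
}

lenovo_iops, lenovo_rate = systems["Lenovo DE6000H"]
print(f"Implied Lenovo price: ${lenovo_iops / 1000 * lenovo_rate:,.0f}")  # ~$42,211

for name, (iops, rate) in systems.items():
    print(f"{name}: {iops / lenovo_iops:.2f}x Lenovo IOPS at "
          f"{rate / lenovo_rate:.1f}x Lenovo $/KIOPS")

# Fujitsu: ~1.35x the IOPS at ~2.9x the cost per KIOPS.
# Huawei:  ~15x the IOPS at ~4.1x the cost per KIOPS.
```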

Qumulo co-founder Peter Godman quits the company

Peter Godman, a co-founder and CTO of scale-out filer startup Qumulo, has quit.

The news was confirmed to industry commentator and event organiser Philippe Nicolas by Qumulo CEO Bill Richter.

Richter told us via mail: “I can confirm that after nearly seven years with Qumulo, Pete Godman is moving on. Pete’s a visionary, an inspiration and a good friend. On behalf of the whole team here at Qumulo, I want to thank him and wish him the best on a well deserved break, time with his family and his next adventure.”

Peter Godman

Nicolas points out that Ken Cheney, Qumulo’s VP of business development, resigned in August, and wonders if the two resignations are connected.

Qumulo was started in 2012 by then-CEO Godman, Chief Scientist Neal Fachan, and then-CTO Aaron Passey, all ex-Isilon executives. Passey left in 2016, so now only one co-founder remains.

The company has taken in $233m in funding as it has developed its scale-out, distributed file system product, Qumulo Core, which has notched up sales at marquee accounts such as DreamWorks, is available with all-flash nodes and supports the Amazon cloud.

To INFINIBOX and beyond! Infinidat preps NVMe-oF upgrade for high-end array

Infinidat looks set to add ultra-fast NVMe-over-Fabrics access to its INFINIBOX arrays.

The company has not gone public yet but a source tells me that it demonstrated NVMe-over-Fabrics access to an INFINIBOX array at a recent customer event in Boston, Mass.

I hear from a customer attendee that a sample workload completed with latency under 50μs, as measured from the host. NVMe-oF accelerated access to the INFINIBOX memory caching technology and delivered faster data access than all-flash arrays.

We asked Infinidat CTO Brian Carmody if Infinidat plans to provide NVMe-oF functionality to its INFINIBOX arrays. He responded with a crisp: “No comment.”

Infinidat CTO Brian Carmody.

INFINIBOX is already fast

These high-end, high-capacity monolithic arrays use disk to store data and memory caching to deliver high cache-hit rates on reads. Performance is comparable to, if not faster than, all-flash arrays.

Test benchmarks have been publicised showing an INFINIBOX outpacing IBM and Pure Storage all-flash arrays (Infinidat explains its methodology here).

Typically, Infinidat arrays connect to accessing host servers across Fibre Channel or Ethernet, and that introduces network and storage stack delays to data access latency.

If NVMe-over-Fabrics (NVMe-oF) were used instead, data access latency would fall: NVMe-oF effectively extends a server’s local, low-latency NVMe/PCIe access out to the shared, external array. That is a much faster path than, say, 16 or 32Gbit/s Fibre Channel, with storage array LUN access requests passing across it.

A LUN is a Logical Unit Number, which references a logical volume of block storage accessed by applications running in a server. The storage array controller receives a LUN read or write request and maps it onto the actual disk or flash drives behind that volume.

You are not alone. Cloud repatriation is a thing, a good thing, says Cloudian

Michael Tso, co-founder and CEO of object storage supplier Cloudian, thinks the rush to the public cloud is slowing.

The on-premises world has caught up with the public cloud as it now features consumption-based business models, elasticity from software-defined storage, and agile development (containers).

That means that the gulf between the two has narrowed which in turn reduces the cost benefits of the public cloud.

Or so says Tso, who told Blocks and Files in an interview that cloud data repatriation and edge data growth are great drivers for the company.  Repatriation enables workloads to return “home”. And cloud is too slow and costly for bulk transmission of edge data.

Who says so? I say, Tso

According to Tso, the idea that workloads and data, once migrated to the public cloud, will stay there forever is wrong. As evidence, he cites an IDC report, Cloud and AI Adoption Survey, which was published in the summer of 2018.

Cloudian co-founder and CEO Michael Tso

The report authors found 80 per cent of customers repatriating workloads from public cloud environments. Some 85 per cent are expected to repatriate data next year.

On average, survey respondents expect to move 50 per cent of their public cloud applications to hosted private or on-premises locations over the next two years. 

Workload repatriation rates are not consistent across different types of business and business functions. The IDC researchers say younger organisations are significantly more likely to repatriate public cloud workloads than those in business for more than 25 years.

Companies that are self-described market disrupters, market makers, under reinvention or transitioning have the highest rates of repatriation.

Two-way street

Interestingly, companies deploying “true hybrid cloud capabilities” have some of the lowest rates of repatriation. These businesses have the ability to run a single application across multiple cloud environments and hence do not need to move workloads at the same rate as organisations without such automation.

On the other hand, companies reporting high levels of application interdependencies are the most likely to repatriate workloads, probably due to a need to connect to on-premises data or applications.

Kinda obvious, but repatriation rates rise when public cloud costs are perceived to be higher than other computing costs. Those costs are most visible to the IT organisation, which is the group most likely to repatriate applications and data from public cloud environments.

Correspondingly, line of business decision makers are the least likely to repatriate workloads.  

Hey, you, get my data off of your cloud

Repatriation accounts for some on-premises data growth but machine-sourced unstructured data accounts for the lion’s share, according to Tso. This is growing exponentially, and on-premises data storage requirements will grow in lock-step.

On-premises data growth is helped along by “semiconductor source systems getting higher resolution every year,” such as surveillance cameras taking higher resolution videos. Cloudian is also finding that customers with machine-generated data – DNA genomics, CAT scans, MRI, surveillance and IoT sensors/systems – are receptive to these messages.

Data gravity – the cost and difficulty of moving data – is another factor favouring on-premises storage, and it affects repatriated data as well. (Find out more about data gravity in this Register article.)

Storage drive density is growing exponentially but moving data to the cloud is getting harder because “cable capacity does not grow exponentially,” Tso argues.

The upshot is that  data will continue to grow at the edge of the network “because it can’t move. … data gravity is huge. It takes years to lay cables.”

If sending 2PB across network links to the public cloud is unaffordable today, then sending 5PB in 12 months’ time will be even more so. And 10-15PB a year after that? Impossible.
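Some rough transfer-time arithmetic illustrates the point; the dedicated 10Gbit/s link is our assumption for the sake of the example:

```python
# Rough transfer-time arithmetic behind the "cables don't scale" argument.
# Assumes a dedicated 10Gbit/s link running flat out - an illustrative figure.
LINK_GBPS = 10

def days_to_transfer(petabytes, gbps=LINK_GBPS):
    bits = petabytes * 1e15 * 8          # PB -> bits (decimal units)
    seconds = bits / (gbps * 1e9)
    return seconds / 86_400

for pb in (2, 5, 15):
    print(f"{pb}PB over {LINK_GBPS}Gbit/s: ~{days_to_transfer(pb):.0f} days")

# 2PB ~ 19 days, 5PB ~ 46 days, 15PB ~ 139 days - before any protocol overhead.
```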

Back for good?

Once the data is back on-premises (or private cloud) will it return to the public cloud?

Tso thinks it unlikely, because the data keeps growing and the cost of network transmission to the cloud will be prohibitive.

He said: “We’re seeing the pendulum swing back.”  In an aside he mentioned a US customer using the internet for group interactions, in which data was stored in the Amazon cloud. Citing GDPR concerns and data sovereignty issues, the company repatriated to store the data on a dozen Cloudian systems for different countries.

Tso said existing customers account for about 40 per cent of Cloudian sales and their expanding requirements are driving revenues. This of course fits in nicely with his data repatriation and edge data growth thesis. These trends, if confirmed, will greatly benefit Cloudian and other cloud-like unstructured data storage vendors.

Overland owner posts deeper loss on lower Q3 revenues

Sphere 3D, the owner of Overland Storage and Tandberg, and a virtualization vendor in its own right, saw revenues decline and losses deepen in its third 2018 quarter.

The company blamed the revenue decline on an inadequate supply of product to meet customer demand. That led to an increase of approximately $5.7m in backlog orders.

This may also explain why operating expenses for the quarter fell to $7.7m from $11.2m a year ago.

Revenues of $15.9m were 26.7 per cent down on the year-ago $21.7m, while the loss of $4.9m was 40 per cent worse than the $3.5m loss reported a year ago.

As the chart above shows there have been losses every quarter since the start of 2015 and the company gives no indication of when and how it might turn around its fortunes.

Product revenues were $13.9m, compared to $19.6m for the third quarter of 2017, while service revenues were $2.0m compared to $2.1m a year ago.

Within the products category:

  • Disk systems revenue was $10.1m, compared to $14.1m a year ago.
  • Tape archive product revenue was $3.8m, compared to $5.5m last year.

Disk systems is defined as RDX, SnapServer family, virtual desktop infrastructure, and Glassware-derived products.

We await the impending spin-off of the Overland Storage-Tandberg business to Silicon Valley Technology Partners (read our analysis of this complex exercise in financial engineering).

This will make Sphere 3D a sister company of Overland-Tandberg and not its owner. The reorg leaves it with Glassware and allied products – and debt-free.

Perhaps, as Overland, its prolonged wallow in red ink will come to an end … perhaps.

Making light of Quantum storage in the ultra-cold atom cloud

Quantum computing inched a little closer with a demonstration of quantum memory using light pulses stored in a cloud of ultra-cold rubidium atoms.

The light pulses, which can scale down to a single photon, are shone into the cloud to write and store data. A second reference, or control, pulse of light of a different nature is shone at the cloud to retrieve the original pulse and the same information.

Lindsay LeBlanc, assistant professor of physics and Canada Research Chair in Ultracold Gases for Quantum Simulation, conducted the research with post-doctoral fellow Erhan Saglamyurek. They say the technique could be used to build quantum memory for quantum computers.

Saglamyurek adds: “The amount of power needed is significantly lower than current options, and these reduced requirements make it easier to implement in other labs.”

Lindsay LeBlanc

Baby, it’s cold outside

Quantum computers need to operate at temperatures near absolute zero, and the super-cooled rubidium atom cloud is at that level.

In their research, LeBlanc and Saglamyurek used a scheme that “relies on dynamically controlled absorption of light via the ‘Autler–Townes effect’, which mediates reversible transfer between photonic coherence and the collective ground-state coherence of the storage medium.”

The researchers demonstrated proof-of-concept storage and signal processing capabilities in a laser-cooled gas of rubidium atoms, including storage of nanoseconds-long single-photon-level laser pulses for up to a microsecond.

The microsecond storage time is a good start and the discovery could pave the way to scaling up quantum computing research.

LeBlanc and Saglamyurek’s paper, “Coherent Storage and Manipulation of Broadband Photons Via Dynamically Controlled Autler-Townes Splitting,” (small paywall) is published in Nature Photonics.

OpenIO goes serverless to keep the weight down

OpenIO has devised its own serverless computing framework to add functionality to its SDS object storage software.

The fruits of its labour can be seen in an extensive upgrade of SDS, v18.10, in which OpenIO used the new framework to build metadata indexing and search.

Keeping the weight down

SDS is open source software that runs on clusters of commodity servers. It is a relatively small piece of code, requiring a minimum of 400MB of DRAM and a single ARM core. But adding new features and improving existing ones would bulk up the code too much.

And so OpenIO made its own serverless computing framework, dubbed ‘Grid for Apps’.

Using the framework, SDS can trigger functions based on events and schedule batch jobs to run when wanted – for instance when the infrastructure is less busy – as Enrico Signoretti, OpenIO’s head of product strategy, explains in a company blog.

“Functions are relatively small pieces of code,” he writes, “that are abstracted from the underlaying infrastructure. They are easy to develop and maintain, giving us a huge advantage, because they do not directly interact with the core.”

Enrico Signoretti

Chargeback could be added to OpenIO SDS as a function instead of being added to the core code.

According to Signoretti, the core code already “catches all events that occur in the system, and … can pass them on to Grid for Apps. A function computes all the necessary information and gives you all the metrics you need. It’s asynchronous, scales with the rest of the cluster, and resources to run it are allocated dynamically.”
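For illustration only, here is a minimal sketch of what an event-triggered chargeback-style function might look like. The handler signature and event fields are assumptions made for the example, not OpenIO’s documented Grid for Apps API.

```python
# Hypothetical sketch of an event-triggered function in a Grid for Apps-style
# framework. The handler signature and event fields are illustrative
# assumptions, not OpenIO's actual API.
from collections import defaultdict

usage_bytes = defaultdict(int)  # running per-account byte counts

def on_object_event(event):
    """Update chargeback counters from a single object-storage event."""
    account = event["account"]
    size = event.get("size", 0)

    if event["type"] == "object.created":
        usage_bytes[account] += size
    elif event["type"] == "object.deleted":
        usage_bytes[account] -= size

# The framework would invoke the function for each event it catches, e.g.:
on_object_event({"type": "object.created", "account": "acme", "size": 1_048_576})
print(usage_bytes["acme"])  # 1048576 bytes now charged to 'acme'
```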

Summary

Grid for Apps is used to create new features and keep the core lightweight and efficient. The functions are, in effect, small applications that run on the object store’s cluster of servers. The object storage infrastructure handles the compute.

OpenIO SDS 18.10 is free to download from OpenIO’s website and GitHub repository.

Maxta beefs up storage systems management

Maxta this week added an analytics component to its core hyperconvergence software. The intent is to deliver better system management and operational improvements, including less downtime and reduced manual management.

The new reporting software, called MxIQ, collects operational data across the entire customer base to “create a collection of trends and issues that can then be applied to any of its customers,” Mike Leone, senior analyst at Enterprise Strategy Group, said.

Metadata is collected via pre-installed agents on customer servers and transmitted securely to the MxIQ cloud-based service where correlated issues are analysed and resolved.

“What this does,” Leone said, “is allow Maxta to predict an issue – whether it’s the need to add capacity or recognize a future failure of a specific drive – as well as recommend a solution before an outage occurs.”
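As a simple illustration of the kind of trend-based prediction Leone describes – not Maxta’s actual MxIQ algorithm – a capacity forecast can be as basic as fitting a straight line to recent usage samples:

```python
# Illustrative capacity-trend forecast - not Maxta's actual MxIQ algorithm.
# Fit a straight line to recent daily usage samples and estimate how many
# days remain before the cluster hits its capacity ceiling.
from typing import List, Optional

def days_until_full(daily_used_tb: List[float], capacity_tb: float) -> Optional[float]:
    n = len(daily_used_tb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(daily_used_tb) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, daily_used_tb))
             / sum((x - mean_x) ** 2 for x in xs))
    if slope <= 0:
        return None  # usage flat or shrinking: no exhaustion forecast
    return (capacity_tb - daily_used_tb[-1]) / slope

# Example: a 60TB cluster growing by roughly 1TB/day from a 40TB starting point.
samples = [40.0 + day for day in range(14)]
print(days_until_full(samples, capacity_tb=60))  # ~7 days of headroom left
```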

Yoram Novick, Maxta founder and CEO, issued a canned quote: “With MxIQ, we are using information in a new way that allows us to predict future trends and probable outcomes to help customers plan more accurately and provide extended peace of mind.”

Maxta founder and CEO Yoram Novick

MxIQ is an integrated component of Maxta Hyperconvergence software. The basic version is free and added extras come at an undisclosed cost. You can find out more from your reseller.

Nimble by name, Nimble by nature

With this launch Maxta follows in the footsteps of Nimble Storage, a trailblazer in storage systems management software.

Nimble changed the game with its InfoSight array sensor-based metrics. The software pumped operational data from the entire array base to a Nimble data centre in the cloud, where it was aggregated and analysed to find patterns of problems around components, performance and capacity. Customers were alerted to likely problems before they occurred.

HPE, which bought Nimble, is applying a similar analytics scheme to its 3PAR arrays and has ambitions to extend its scope to servers and networking.

Pure Storage and other suppliers have followed in Nimble’s footsteps and now Maxta is worshipping at the same system management analytics temple.

Soon, sending sensor data from deployed products to a cloud data centre, where it is aggregated and analysed for the benefit of the customer base, will become table stakes for every IT system supplier.