
Igneous: Bye-bye proprietary hardware. Hello data management services

Igneous Systems has seen the light and is becoming a data management supplier using commodity hardware.


In common with other data protection vendors selling proprietary hardware,  Igneous faces strong competition from players who treat data protection as an entry to the wider data management market. These rivals offer data analytics and insight, and use commodity hardware.

Igneous combines hybrid cloud services with a technically advanced proprietary on-premises hardware appliance. These are nanoservers –  disk drives with added ARM compute. The software includes replication-based tiering off to the public cloud (AWS), and features delivered as a service.

The company protects primary data-handling filers – basically offering NAS backup. But it has always had data services ambitions and now it is making its  move into this market. At the same time it is deepening integrations with three key filer players – Pure Storage, Dell EMC Isilon and Qumulo.

This is a competitive necessity. The company needs to separate itself from the pure-play pack of backup vendors and from fast-growing unstructured data management startups such as Cohesity and Rubrik.

Three new hardware integrations*, on top of the existing NAS filer integrations, help fulfil both needs:

  • Pure Storage FlashBlade – NFS, SMB and S3 object integration
  • Dell EMC Isilon – direct API integration with OneFS providing concurrent multi-protocol support for NFS and SMB, with switching of ACLs and permissions
  • Qumulo QF2 – direct API integration via a strategic alliance

Got to pick a data set or two

Igneous classifies the market into customers with structured data sets (databases and VMs) up to 100TB; unstructured data sets up to 10PB or so; and more scalable and larger needs.

For the first group,  Cohesity and Rubrik provide more modern and resource-efficient data protection than legacy backup software tools such as Veritas and CommVault, according to an Igneous spokesperson.

Traditional backup engines can capture NFS and SMB file permissions like Igneous but “they’re based on the NDMP backup protocol, which may take too long to complete and may impact filer performance,” he said.

“Neither Veritas nor Commvault supports Isilon multi-protocol (NFS+SMB) permission protection, API-level integration with Qumulo, or Object support with Pure FlashBlade.”

Igneous says it can protect customers with structured data out to the 100s of TBs and unstructured data beyond 10s of PBs, and do this more affordably and with simpler management. This is a vague boast. There is no simple structured-unstructured data capacity level crossover point and Igneous is claiming tricky-to-prove advantages.

The company is to rebrand its product set from the Hybrid Storage Cloud to something that better reflects the component data services it offers in this cloud. It will  extend public cloud support, possibly adding Azure alongside AWS.

Other NAS supplier integrations are coming too, such as, possibly, WekaIO.

Commodity exchange

As part of its new software focus, Igneous is developing a generic hardware strategy for its on-premises appliance. This will involve close relationships with some unnamed hardware vendors.  And it is not too much of a stretch to assume that the partners will provide popular capacity-focused file storage boxes.

Igneous is also exploring a web-crawler approach that trawls through NAS data sets, building up metadata such as an index and running analytics against that indexed data. It may also extend its data provisioning capability.
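As a rough sketch of the idea – not Igneous's actual implementation – such a crawler might walk a mounted NAS share and accumulate per-file metadata into an index that analytics can then query. A minimal Python illustration, with the mount point and metadata fields chosen purely as assumptions:

```python
import os
import time

def crawl_nas(mount_point):
    """Walk a mounted NAS share and build a simple per-file metadata index."""
    index = []
    for dirpath, _dirs, filenames in os.walk(mount_point):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # file vanished or is unreadable; skip it
            index.append({
                "path": path,
                "size_bytes": st.st_size,
                "mtime": st.st_mtime,   # last-modified time, seconds since epoch
                "owner_uid": st.st_uid,
            })
    return index

if __name__ == "__main__":
    # Hypothetical mount point for an NFS/SMB share.
    idx = crawl_nas("/mnt/nas_share")
    # Example analytic query: capacity consumed by files untouched for a year.
    year_ago = time.time() - 365 * 24 * 3600
    cold_bytes = sum(e["size_bytes"] for e in idx if e["mtime"] < year_ago)
    print(f"{len(idx)} files indexed; {cold_bytes / 1e12:.2f} TB cold for a year or more")
```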

Igneous is a work in progress, in product development mode as it pivots away from proprietary hardware. It is extending data services and adding NAS array integrations to counter strong data management services competition. Actifio, Cohesity, Druva and Rubrik are just some of the companies breathing down its neck. Object storage suppliers are also hastening into the NAS access and data management area.

Like molten lava, Igneous has to settle on and find its way across an existing landscape before it cools, solidifies, gets stuck and can flow no more. You gotta keep flowing, Igneous. Standing still is not allowed. ®

* Pure Storage customers must run Purity for FlashBlade version 2.1.3 or higher with API version 1.1 or higher to protect Object shares with Igneous.

Dell EMC Isilon OneFS 7.2.0 or higher can begin importing and protecting multi-protocol shares with Igneous.

Customers using Qumulo QF2 version 2.7.6 or higher can protect their system with Igneous.

StorageCraft builds Rubrik-esque system for SMBs

StorageCraft aims to provide a single system delivering Cohesity- and Rubrik-style enterprise converged scale-out storage and data protection for small and medium businesses.

The company bought SMB object storage startup Exablox in January 2017. Exablox built deduplicated scale-out filer arrays using object storage internally.

The OneXafe box from StorageCraft is claimed to be a converged scale-out storage and data protection system for small and medium enterprises’ physical and virtual servers. Exablox had combined StorageCraft’s ShadowProtect backup SW with its OneBlox nodes in October 2016.

At the time we said StorageCraft’s portfolio includes an integrated suite of backup, disaster recovery, system migration, virtualisation and data protection software running on Windows and Linux for small and medium-sized businesses.

Now, in effect the StorageCraft and Exablox technologies have been combined, with StorageCraft saying neither NetBackup, Veeam nor Veritas can offer such a complete or affordable system for SMBs. Roughly similar systems to OneXafe from Cohesity and Rubrik are said to be designed for larger enterprises and be more costly.

There are three models, with OneXafe 4412 and 4417 capacity-optimised systems and an all-flash 5412 version offering instant recovery and fast-access unstructured data to serve primary virtual server production applications.

OneXafe has a patented distributed object-based file system – the Exablox SW technology – that is integrated with data protection services. The file system delivers NFS and SMB data access for primary and secondary storage.

It scales from a single node to a multi-petabyte cluster. The management is delivered remotely as-a-service by a OneSystem facility.

OneXafe has both VMware and VSS integration and provides work flow and capacity usage analytics. The data protection is SLA-based and policy-driven, and data recovery can be pretty much instant; a patented VirtualBoot feature is claimed to deliver terabyte (TB) virtual machine recovery in less than a second regardless of whether deployed on-premises or in the cloud.


StorageCraft OneXafe

An on-premises OneXafe system’s policies can replicate data to StorageCraft Cloud Services, with claimed single-click recovery of data, compute, and network services in an orchestrated recovery workflow.

Will SMBs want a single scale-out storage box with cloud-based management to handle all their storage and data protection needs? If their current system is a complex and costly mess then OneXafe might be an appealing fix.

Datasheets can be found here. OneXafe disk-based 4400 starts at less than $14,000 for 144TB (c$0.095/GB) with the all-flash 5400 series starting at less than $30,000 for 38TB ($0.77/GB). How do these stack up against Rubrik and Cohesity?

A web source says Rubrik’s R334 (3-node, 36TB) backup appliance unit list price is around $100,000 MSRP. That is $2.71/GB, more expensive than StorageCraft’s OneXafe 5400 at $0.77/GB.

A Cohesity pricing web source says a 3-node Cohesity C2300 (48TB raw, using 4TB disk drives and 800GB PCIe flash cards per node) starts at $90,000. That’s $1.83/GB, again more expensive than OneXafe’s 4400 at $0.095/GB.
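For transparency, the back-of-envelope arithmetic behind those $/GB figures is below; the only assumption is that capacities are converted at 1TB = 1,024GB, which is what makes the quoted numbers line up:

```python
def dollars_per_gb(price_usd, capacity_tb):
    """Convert a system price and raw capacity into $/GB, treating 1TB as 1,024GB."""
    return price_usd / (capacity_tb * 1024)

print(f"OneXafe 4400:   ${dollars_per_gb(14_000, 144):.3f}/GB")   # ~$0.095/GB
print(f"OneXafe 5400:   ${dollars_per_gb(30_000, 38):.2f}/GB")    # ~$0.77/GB
print(f"Rubrik R334:    ${dollars_per_gb(100_000, 36):.2f}/GB")   # ~$2.71/GB
print(f"Cohesity C2300: ${dollars_per_gb(90_000, 48):.2f}/GB")    # ~$1.83/GB
```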

OneXafe products are available from StorageCraft’s channel.

Did NetApp over-pay buying SolidFire for $870 million?

Did NetApp pay too much for all-flash array vendor SolidFire?

Its latest results prompted financial analysts to update their clients on NetApp’s finances and one* included SolidFire revenues for the four fiscal 2018 quarters. They were $36.3m, $23.5m, $25m and $46m; a total of $130.8m for the year.

NetApp paid $870m for SolidFire in late 2015. A $130.8m run rate is 15 per cent of that, seemingly a not bad return. But that is not the actual return, because the actual return on the SolidFire investment is the profit on that $130.8m.

What is it?

We don’t know, as NetApp doesn’t reveal that information. However it does reveal its own revenues and profits, and the profit can be presented as a percentage of its revenues.

In NetApp’s fiscal 2018 quarters its profits as a percentage of its revenues were 9.92, 12.32, -33.29 and 16.52 per cent. The negative third quarter number was caused by US tax law changes.

Let’s ignore that number and use the other three to calculate NetApp’s average profit percentage for the year. It comes to 12.92 per cent.

We can apply that to the SolidFire revenues to work out a notional SolidFire profit for the year; $16.9m.

That is 1.94 per cent of the SolidFire acquisition cost. In other words, $870m was invested and is getting a 1.94 per cent return. Is this good or bad?
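Laying the calculation out explicitly – with the caveat that applying NetApp’s corporate average margin to SolidFire revenue is an assumption, as noted above:

```python
# Reported SolidFire revenues for NetApp's four fiscal 2018 quarters ($m)
solidfire_revenue = 36.3 + 23.5 + 25.0 + 46.0          # = 130.8

# NetApp profit as a percentage of revenue, ignoring the tax-distorted Q3
avg_margin = (9.92 + 12.32 + 16.52) / 3                 # ~12.92%

notional_profit = solidfire_revenue * avg_margin / 100  # ~$16.9m
return_on_deal = notional_profit / 870 * 100            # ~1.94% of the $870m price

print(f"Revenue ${solidfire_revenue:.1f}m, margin {avg_margin:.2f}%, "
      f"profit ${notional_profit:.1f}m, return {return_on_deal:.2f}%")
```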

We understand, simplistically, that a business can evaluate an internal rate of return on investment by comparing it to their internal cost of raising cash, the weighted average cost of capital (WACC). That would be the cost of raising the $870m in this case.

Another way of looking at it is to use an internal rate of return (IRR) and say that the return on an acquisition investment must be higher than an IRR threshold level.

If an acquisition brings in more than the WACC or IRR then it is a reasonable deal. Some business sources suggest a large and mature company could have a 7 or 8 per cent WACC.

A venture capitalist company might have a 50 per cent WACC while a private equity concern might have a 35 per cent one; both representing a risk premium, with the VC investment risk being higher than the private equity level.

We could class NetApp as a large and mature company. It seems unlikely that a SolidFire return on investment of 1.94 per cent would exceed NetApp’s WACC or IRR percentage.

Therefore, on the basis of the calculations and assumptions above, NetApp overpaid for SolidFire.

Does this argument stand up?

William Blair analyst Jason Ader said: “Your math makes sense. But I would note that a big reason in my mind for the SolidFire acquisition was to get their hands on its HCI tech, which at the time of the deal, was still under development. While NetApp HCI is still small revenue for NetApp today, it could be much more significant over time, which could help justify the purchase price.”

*Wells Fargo senior analyst Aaron Rakers.

NetApp swaggers over competitors with rampaging flash array results

NetApp is growing revenues solidly with a booming flash array business, competitors in perceived disarray, and its Data Fabric concept resonating with customers looking to the public cloud.

Its first fiscal 2019 quarter revenues were $1.47bn, 12 per cent more than a year ago*. There was a profit of $283m, 58.7 per cent higher than the year-ago $166m. Product revenue was $875m, up 20 per cent annually, software maintenance was $229m (up 3 per cent y-o-y) and $370m came in from hardware maintenance and other services. This was flat y-o-y. Gross margin was 66 per cent.

All-flash revenues exhibited an annual $2.2bn run rate, up 50 per cent year-on-year. Installed base flash array penetration is around 14 per cent; it was under 10 per cent a year ago; implying there is a lot of potential flash array demand inside its customers.

Financially it is in rude health, with free cash flow being 18 per cent of revenue and increasing 22 per cent year-on-year. Some $605m was spent on share repurchases and cash dividends. NetApp closed the quarter with $4.8bn in cash and short-term investments.

CEO George Kurian’s release quote said: “Enterprises are signalling strong confidence in NetApp by making long-term investments to enable the NetApp Data Fabric across their entire enterprise.”

NetApp CEO George Kurian.

NetApp said its cloud data services run rate was $20m.

SolidFire array revenues were $47m according to IDC. The all-flash EF series and ONTAP arrays brought in $433.4m according to senior Wells Fargo analyst Aaron Rakers. HCI revenues were not identified.

What did the earnings call tell us? Kurian said NetApp was starting the fiscal year: “in a stronger position than we’ve been in for years. Revenue, gross margin, operating margin and earnings per share were all above our guidance.”

Kurian said: “as businesses become more data-intensive and builds more data-intensive environments like machine learning, high performance storage and data services are benefiting from spending intention.” HCI product revenues were not separately identified in NetApp’s results. Asked by Rakers about this and NetApp’s HCI product progress, Kurian said: “With regard to whether we will breakout HCI or not, we’ll provide you that clarity when we do it.” Take that, Aaron.

He added: “At this point, HCI is a part of our product revenue. We have seen a broader number of competitive engagements and we are winning our share, right? … We’ve had good competitive wins. … we have some really exciting product announcements at NetApp Insight, and you’ll see how we build NetApp HCI into our compelling Data Fabric story there.”

NetApp Insight will be held in Las Vegas, October 22-24, and it should also contain news about the object storage StorageGRID product.

Kurian talked about the effect of lowering 3D NAND flash prices: “As prices ameliorate for 3D NAND, which we are starting to see clearly, that performance [disk] drive, the 10-K drive segment will concede to the all-flash array segment.”

Then he moved on to the competing suppliers, saying: “I think, the legacy competitors are in a variety of states of challenge. I think, if you look at the large players like EMC or HP, they’re still trying to rationalize their lead flash portfolio, because none of their products is complete.

“If you look at players like IBM and Hitachi and Fujitsu and Oracle, they basically conceded and are no longer in sort of new deployment considerations. They’re essentially defending the installed base and the start-ups are challenged.

“They have essentially been fast on product innovation, and they don’t have the market reach to compete. So it will be, I see some sense of consolidation coming up in the marketplace and we will benefit from that, because we’re very well-positioned.”

One questioner asked about NetApp’s view on composable infrastructure and HCI, to which Kurian replied: “Our solution has many of the elements of composable in it.”

We might hear more about that in October.

Next quarter revenues are expected to be between $1.45bn and $1.55bn, a 5.6 per cent rise on the year-ago Q2’s $1.42bn at the mid-point.

* Note: the fiscal 2017 first quarter revenues have been recalculated to $1.32bn based on NetApp’s adoption of the new accounting standard ASC 606. Originally they were $1.33bn, with profits of $136m, also retrospectively recalculated to be $131m.

Pavilion compares RoCE and TCP NVMe over Fabrics performance

Pavilion Data says NVMe over Fabrics using TCP adds less than 100µs of latency compared with RDMA over Converged Ethernet (RoCE), and is usable at data centre scale.

It is an NVMe-over-Fabrics (NVMe-oF) flash array pioneer and is already supporting simultaneous RoCE and TCP NVMe-oF transports.

Head of Products Jeff Sosa told B&F: “We are … supporting NVMe-over-TCP.  The NVMe-over-TCP standard is ready to be ratified any time now, and is expected to be before the end of the year.

“We actually have a customer who is deploying both NVMe-oF with RoCE and TCP from one of our arrays simultaneously.”

Pavilion says NVMe-oF provides the performance of DAS with the operational benefits of SAN. Its implementation has full HA and no single point of failure, and it says it offloads host processing with centralised data management.

That data management, when used for MongoDB for example, allows:

  • Centralized Storage allows writeable clones to be instantly presented to secondary hosts, avoiding copying data over the network
  • Dynamically increase disk space size on-demand in any host
  • Instantly back up the entire cluster using high-speed snapshots
  • Rapidly deploy a copy of the entire cluster for Test/Dev/QA by using Clones
  • Eliminate the need for log forwarding by having each node write log data directly to a shared storage location
  • Orchestrate and automate all operations using Pavilion REST APIs

Pavilion compared NVMe-oF performance over RoCE and TCP with between one and 20 client accessors, and found average TCP latency was 183µs against RoCE’s 107µs; TCP was 71 per cent slower.
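Those two averages are consistent with the sub-100µs claim at the top of this piece: the TCP penalty is the difference between the averages, and the relative slowdown is their ratio. A quick check:

```python
tcp_us, roce_us = 183, 107          # average latency per transport, microseconds

added_latency = tcp_us - roce_us               # 76µs of extra latency, under 100µs
slowdown_pct = (tcp_us / roce_us - 1) * 100    # ~71 per cent slower

print(f"TCP adds {added_latency}µs and is {slowdown_pct:.0f}% slower than RoCE")
```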

A Pavilion customer NVMe-oF TCP deployment was data centre rather than rack scale, with up to six switch hops between clients and storage. It was focused on random write latency, serving thousands of 10GbitE non-RDMA (NVMe-oF over TCP) clients and a few dozen 25GbitE RDMA (NVMe-oF with RoCE) clients.

The equipment included Mellanox QSA28 adapters enabling 10/25GbitE breakouts over optical fibre. Four switch ports were consumed to connect 16 physically-cabled array/target ports; eight of those ports were dedicated to RDMA and eight to NVMe/TCP, with both options equally “on the menu.”

There were no speed transitions between the storage array and the 10GbitE or 25GbitE clients, reducing the risk of overwhelming port buffers.

Early results put NVMe/TCP latency (~200µs) at twice that of RoCEv2 (~100µs) but half that of NVMe-backed iSCSI (~400µs). Ongoing experimentation and tuning is pushing these numbers, including iSCSI’s, lower.

It produced a table indicating how RoCE and TCP NVMe-oF strengths differed.

Suppliers supporting NVMe-oF using TCP, as well as Pavilion, include Lightbits, Solarflare and Toshiba (KumoScale). Will we see other NVMe-oF startups and mainstream storage array suppliers supporting TCP as an NVMe-oF transport? There are no signs yet, but it would seem a relatively easy technology to adopt.

Pavilion’s message here is basically that, unless you need the absolute lowest possible access latency, deploying NVMe-oF over standard Ethernet looks quite feasible and more affordable than alternative NVMe-oF transports – unless perhaps you run NVMe-oF over Fibre Channel.

B&F wonders how FC and TCP compare as transports for NVMe-oF. If any supplier knows please do get in touch. B&F

 

What the Dell? Qumulo gets on to PowerEdge servers

Qumulo’s QF2 (Qumulo File Fabric) software is now available on Dell’s PowerEdge R740xd storage server, a 2U 2-socket Xeon Skylake, all-NVMe flash drive system.

The QF2 SW is also available on Qumulo’s own storage server HW, HPE Apollo HW and in the AWS cloud.

It will compete, as a scale-out file system, with Dell EMC’s own Isilon gear, as well as with IBM’s Spectrum Scale SW, Panasas parallel file system SW, WekaIO, and also Elastifile. 

Qumulo says the Dell QF2 incarnation can scale out to 100s of petabytes and tens of billions of files. It claims QF2 is the highest performance file storage system in the data centre and the public cloud, being the most scalable and most efficient. 

There is no independent, industry-standard validation of these claims as, so far, it is absent from industry-standard benchmarks, such as the one which features Spectrum Scale, WekaIO and NetApp amongst others.

Having Dell server support adds a HW platform and makes it easier for Qumulo to sell its SW into Dell’s server customer base, as well as enabling it to broadcast an anti-lock-in message.

Veeam gets taped up by Quantum in anti-ransomware deal

In the storage Game of Thrones a top contender for the modern data protection throne has forged an alliance with one of the oldest data protection technologies of all: tape.

Quantum and Veeam say Veeam’s backup software can send data to tape via a dedicated external physical server, which hosts Veeam’s tape server. This physical server has to be sized, configured, procured and set up.

(Image above: Quantum Scalar library products, with the Scalar i3 in front.)

What Quantum has done is stick a blade server inside its Scalar i3 tape library and run Veeam’s tape server on that. The resulting box is called a converged tape appliance and is available to Quantum distributors and resellers as a single line item (SKU).

The i3 has from 25 to 200 tape cartridge slots, scaled in 25-slot increments, and from one to 12 tape drives. It has Capacity-on-Demand (CoD) software licensing, and compressed LTO-8 tape capacity runs from 750TB up to 6PB.
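Those capacity endpoints follow directly from LTO-8’s nominal 30TB compressed capacity per cartridge (12TB native, assuming the usual 2.5:1 compression ratio):

```python
LTO8_COMPRESSED_TB = 30                    # nominal compressed capacity per LTO-8 cartridge

min_capacity = 25 * LTO8_COMPRESSED_TB     # 25 slots  -> 750TB
max_capacity = 200 * LTO8_COMPRESSED_TB    # 200 slots -> 6,000TB = 6PB

print(f"Scalar i3 compressed capacity: {min_capacity}TB to {max_capacity // 1000}PB")
```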

The control module is a 3U enclosure and there can be up to three expansion modules, each taking up 3U. Get a datasheet here.

Quantum plays the anti-ransomware card, saying that tape cartridges are stored offline and therefore provide an effective barrier against ransomware and malware.

Veeam users can store backup data on Quantum’s DXi deduplicating backup-to-disk arrays, which is quicker than writing to tape but the arrays are online, unlike stored tape cartridges.

Quantum’s converged tape appliances for Veeam environments are available today, beginning at $17,000 MSRP.

Latticeworks reinvents personal NAS

Latticeworks claims to have reinvented personal NAS with its WiFi-connected Amber product.

The company was founded in 2014 by Dr Pantas Sutardja, a co-founder of Marvell and an ex-CTO of that company. Coincidentally, Western Digital’s MyCloud product has used Marvell controllers.

Amber is about the size of a smallish home speaker – think Sonos One – and contains (tech specs) a dual-core Intel Gemini Lake CPU (1.1GHz – 2.6GHz), a built-in AC2600 Wi-Fi router, and a pair of 1TB disk drives in a mirrored (RAID 1) configuration. Maximum capacity is 4TB.

It has a WAN port, an HDMI port and a pair of LAN ports, but no USB connectivity.

Mac and Windows notebooks/desktops and iOS/Android smartphones/tablets run a LiFE app which can automatically send photos, music, videos and other files to the Amber box. The box can stream videos to TVs and be accessed remotely to work with files, such as streaming or sharing them.

Latticeworks says it streams videos with no buffering and photos can be facially-indexed. There is no storage of data in third-party clouds and a proprietary LattisNet cloud service is used only for user ID management and data routing verification.

This reminds us of Connected Data’s Transporter box for businesses, which also provided private cloud file-sharing.

The Amber product costs $399.99 with a time-limited discount from $549.99 and is only available in the USA.

The product is said to perform better if it is installed in front of a home router, rather than behind it. The FAQ says: “Being behind another router could in some cases make for poorer connection when connecting from the outside.”

Its claimed main superiority over a home NAS product is cloud security: “Other solutions rely on public third-party clouds, so you never really know who has control of your data. When you store your digital life in Amber – your data is yours and yours alone.”

Western Digital’s MyCloud personal NAS runs up to 20TB in capacity, five times larger. It has pretty much the same feature set and costs less. A 2TB My Cloud Home has a promo price of $139.99, discounted from $159.99.

Is the Amber product worth $260 more? What do you think?

Overland loan default throws spanner in Sphere 3D spin-out works

Overland Storage has missed making an interest payment due on August 1,  which means the creditor could call in  repayment of the entire debt.

Overland’s parent company Sphere 3D said in a statement it is trying to resolve the issue, but “there can be no assurance that the Lenders will ultimately agree to any such resolution.”

How will this affect Sphere 3D’s proposed $45m spin out of Overland Storage and Tandberg Data? This was agreed in July subject to closing conditions and the buyer is Silicon Valley Technology Partners, an entity set up and controlled by Sphere 3D CEO and chairman Eric Kelly. 

Flash price crashing leading to disk drive trashing

A flash price crash is coming and should increase disk cannibalisation rates as SSDs become more affordable.

Objective Analysis’ Jim Handy, presenting at last week’s Flash Memory Summit, thinks flash over-supply will result in massive price falls – to near the product cost of 64-layer 3D NAND, meaning $0.08/GB in 2019. Handy characterises it as the largest-ever price correction in the history of semiconductor products.

Wells Fargo senior analyst Aaron Rakers, using IDC and DRAMeXchange data, currently estimates total NAND Flash pricing at ~$0.30/GB. Rakers notes Objective Analysis’ current expectation is for a 45 per cent per annum growth in NAND flash capacity shipped.

Some 70 per cent of total industry flash is currently 3D NAND, with the remainder the older 2D or planar NAND. Handy believes that planar manufacturing capacity could be migrated to making DRAM instead.

Handy thinks this could result in DRAM capacity over-supply in the future.

DeepStorage chief scientist Howard Marks, also presenting at FMS 2018, suggested that a 5x $/GB premium for enterprise SSDs over HDDs should be considered the crossover point at which SSD cannibalisation of disk takes off.

Rakers notes enterprise SSDs are currently at a ~3-4x $/GB premium relative to mission-critical HDDs, and enterprise SSDs currently stand at ~15-17x $/GB premium relative to nearline / high-cap enterprise HDDs.

Recently-announced QLC (4bits/cell) SSDs from Intel and Micron are aimed at taking share from nearline disk drives for read-intensive applications.

If Handy’s  prediction is accurate then SSD pricing will go down too. He sees a trend for NAND prices to fall to roughly 25 per cent of their current prices. If SSD prices go down in lock step then we are looking at a 75 per cent cost reduction.

Enterprise SSDs currently cost around $0.30/GB, compared with roughly $0.02/GB for nearline/high-cap disk drives. Assume a 75 per cent price cut takes those SSDs to about $0.08/GB and they would then be only around 4x more expensive per GB than disk drives; underneath Marks’ 5x crossover point.
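The arithmetic behind that crossover claim, using the rounded figures above and Marks’ 5x threshold from earlier in the piece:

```python
ssd_now = 0.30        # enterprise SSD, $/GB today
hdd_now = 0.02        # nearline/high-cap HDD, $/GB (rounded)
marks_threshold = 5   # Howard Marks' suggested $/GB crossover ratio

ssd_after_cut = ssd_now * (1 - 0.75)      # 75% price cut -> ~$0.075/GB
premium = ssd_after_cut / hdd_now          # ~3.75x, i.e. roughly 4x

print(f"Post-crash SSD premium: {premium:.1f}x "
      f"({'below' if premium < marks_threshold else 'above'} the 5x threshold)")
```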

Logic would then suggest significant cannibalisation of nearline/high-cap disk drive sales by SSDs, at least for read-intensive work.

Handy’s price correction will take several quarters, leading us to suppose there could be a quite severe contraction in nearline/high-cap disk drive sales.

Coincidentally Mark Delaney, a Goldman Sachs analyst, thinks Seagate’s share price – $50.88 at time of writing – will fall to $44 and has downgraded the stock to sell from neutral. Delaney’s rationale is that “HDDs [hard disk drives] remain a cyclical industry, and one facing secular challenges in many parts of the market from the growth of SSDs [solid-state drives] … NAND is oversupplied and SSD prices are falling (and in some cases pricing is down by as much as 30-40 per cent from the peak).”

The bulk of Seagate’s revenues come from disk drives, and increasingly from nearline/high-cap drives, whereas competitor Western Digital gets more than half of its revenues from flash and is less exposed to a disk downturn.

However, it is more exposed than Seagate to a NAND pricing collapse. Will Seagate’s swing slow more than WD’s roundabout? Who knows? Better call Saul!

Kaminario supports WD’s composable infrastructure gear

All-flash array supplier Kaminario is supporting Western Digital’s composable infrastructure products, announced last week.

These are:

  • NVMe-over-Fabrics-connected OpenFlex hardware:
    • F3000 proprietary flash drives in quasi 3.5-inch form factor.
    • E3000 3U enclosure housing ten F3000s.
    • D3000 1U disk drive enclosure.
  • OpenFlex architecture.
  • Kingfish open APIs to orchestrate and manage software-composable infrastructure (SCI) systems.
  • OpenFlex ecosystem partners.

Kaminario has its own composable K2 storage array and its collaboration with WD encompasses this product and WD’s OpenFlex and Ultrastar storage platforms. The company will support WD’s Ultrastar NVMe and SAS storage products for its K2 appliance and Cloud Fabric software-defined storage offerings. 

Kaminario’s software stack is compatible with WD’s OpenFlex series of NVMe-oF platforms that provide disaggregated, fabric-connected flash and disk building blocks for scaling at the rack level and beyond.

When combined, these products deliver an NVMe/NVMe-oF offering with the capability to dynamically compose, control, reconfigure, and manage storage resources at the software layer using Kaminario Flex.

According to the company,  this composable combo enables enterprises to emulate the efficiency and agility of hyperscale providers.

Kaminario is offering the K2 appliance for purchase as a pre-integrated appliance or as a software-only product under its Cloud Fabric program. The appliance will also be available via channel partners. 

NGD computational flash wins best-of-show award at FMS 2018

NGD, a maker of computational processing storage drives, has won a best-of-show award at the Flash Memory Summit in Santa Clara.

With its Newport product, the company aims to solve the issues associated with moving petabytes of data from storage devices into server RAM for processing. The movement of a petabyte of data can take significant time, delaying results and solutions. Computational storage can eliminate the need for this data movement by embedding processing capabilities within storage devices such as SSDs. 

The result can reduce the time to process a petabyte of data to a few seconds for highly parallel, read-intensive analytic applications. Here’s the rub: it is application-specific, and the software for the drive’s processor has to be written. But Newport looks a great fit if you have such data.
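As a conceptual illustration only – not NGD’s actual programming model – the win comes from shipping a small query to each drive and returning only the results, rather than shipping all the raw data to the host. A minimal Python sketch, with the drive-side function standing in for code that would really run on the SSD’s embedded processor:

```python
def drive_side_filter(records, predicate):
    """Runs on the drive's embedded processor (simulated here):
    scans local data and returns only matching results."""
    return [r for r in records if predicate(r)]

def host_query(drives, predicate):
    """Host fans the query out to every drive and merges the small result sets,
    instead of pulling every record across PCIe/the network into host RAM."""
    results = []
    for drive in drives:
        results.extend(drive_side_filter(drive, predicate))
    return results

if __name__ == "__main__":
    # Toy stand-in for petabytes of records spread across many computational drives.
    drives = [
        [{"id": 1, "temp": 70}, {"id": 2, "temp": 99}],
        [{"id": 3, "temp": 42}, {"id": 4, "temp": 101}],
    ]
    hot = host_query(drives, lambda r: r["temp"] > 90)
    print(hot)   # only the matching records ever reach the host
```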