Kevin McNamara, now ex-CTO and co-founder at HiveIO.
McNamara was rumoured to have left in February 2019, along with the VP for sales. Six days and many emails after Blocks & Files asked the company if they had left, we received a reply on Feb 14: “HiveIO is not commenting on this.”
Fast forward to March 19 and George Nealon has joined HiveIO as VP for Global Sales and Chief Revenue Officer. Nealon is a 20-year sales vet with time at IBM and Apigee in his resume.
A check on HiveIO’s leadership web page now shows no mention of Kevin McNamara as CTO and co-founder, and he is not listed as a board director either. So long, Kevin.
HiveIO said it grew its business in 2018 with a 35 per cent increase in use of its Hive Fabric by customers, an 18 per cent increase in new channel partners, and a 30 per cent increase in overall staff with a 60 per cent increase in sales staff. Note that no actual figures were used in the making of these unverifiable claims.
CTERA Networks, the Edge and cloud file services specialist, has added a cloud storage gateway product to its lineup that builds upon HPE’s SimpliVity systems. It claims this is the first hyperconverged solution to offer multi-cloud data management capabilities.
Available immediately, the CTERA Edge X Series integrates
HPE’s SimpliVity with CTERA’s global file system to provide all-in-one edge
storage and compute capabilities with triple redundancy and multi-site
collaboration.
CTERA Edge X Series filers come in a 2-node cluster configuration
CTERA’s earlier hardware was able to replace file servers,
tape backup and other systems with a single, cloud-integrated platform that can
offload non-critical files and backups to the cloud. The new flagship model
effectively runs CTERA’s platform atop a cluster of HPE SimpliVity nodes, which
should offer greater resiliency, among other things.
HPE acquired SimpliVity two years ago for its hyperconverged software platform and now offers it ready integrated on HPE servers such as the ProLiant DL380 seen in the image above.
The new Edge X Series is aimed primarily at large organisations,
while the existing C Series and H Series target small and medium businesses,
respectively.
The great migration
In a statement, CTERA Chief Strategy Officer Oded Nagel said the firm sees a major market shift towards convergence of infrastructure at the enterprise edge on the one hand, and the continuing migration of workloads and data to the cloud on the other.
Highlights of the CTERA
Edge X Series include:
Global file system with cross-site file sharing and data protection and seamless multi-cloud tiering, supporting all major public and private clouds
Managed VMware hypervisor for running multiple tier-1 applications
Elastic scaling: frequently accessed data is stored on high-performance local flash drives, while infrequently accessed data is automatically migrated to ‘cheap and deep’ cloud storage
All-flash system with up to 23TB of raw local storage, 192GB memory and 10Gb/s Ethernet
FIPS-certified cryptography with full control over key management and data residency
Global 24×7 support with onsite service and four-hour response time
CTERA Edge X Series filers are delivered in a 2-node cluster
configuration, with options for 9 or 12 SSD drives per node. Pricing starts at $100,000.
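The ‘cheap and deep’ tiering model CTERA describes can be pictured as a simple access-age policy. This is an illustrative sketch only; the 30-day threshold, function names and file list are assumptions, not CTERA’s implementation.

```python
# Illustrative hot/cold tiering policy: recently accessed files stay on
# local flash, stale files become candidates for cloud object storage.
# COLD_AFTER_DAYS and the example files are hypothetical values.
COLD_AFTER_DAYS = 30
SECONDS_PER_DAY = 86_400

def tier(files, now, last_access):
    """Split files into local (hot) and cloud (cold) sets by access age."""
    hot, cold = [], []
    for name in files:
        age_days = (now - last_access[name]) / SECONDS_PER_DAY
        (hot if age_days < COLD_AFTER_DAYS else cold).append(name)
    return hot, cold

now = 1_000_000_000  # arbitrary epoch timestamp for the example
hot, cold = tier(
    ["report.docx", "archive.iso"], now,
    {"report.docx": now - SECONDS_PER_DAY,        # touched yesterday
     "archive.iso": now - 90 * SECONDS_PER_DAY})  # idle for 90 days
print(hot, cold)   # → ['report.docx'] ['archive.iso']
```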
Formulus Black has developed a way to make X86 servers run faster through a data reduction method that effectively enlarges server memory.
Its software runs on bare metal servers, including bare metal server instances in public clouds such as Amazon and Azure.
Application code is mathematically processed to remove repetitive bit patterns, which are replaced with ‘Forsa’ bit markers (FbMs). ForsaOS is the name of the Formulus Black operating system, and the technique amounts to a new form of deduplication.
Processing involves patented algorithms that analyse data in real time. The FbMs represent lossless versions of the raw code and data and there is no need for compression or classic deduplication, as used in backup-to-disk target arrays. Nor is there any thin provisioning or page swapping.
The advantages are that applications are reduced in size and so memory is effectively amplified – up to 3.85 times in a demo system with amplification factors as high as 24x in some circumstances. Basically, your mileage may vary.
The FbMs are sent to the server CPU which decodes them into raw normal form – the usual application code form – processes the instructions and data, re-encodes them and sends them back to memory, ready to fetch the next FbM.
With FbMs the CPU has to do more work and this adds extra time to an application’s run-time. However, Formulus says overall execution time is reduced because the number of transfers across memory-CPU channels is reduced with its FbM instantiation of an application’s code.
Also, an FbM version of an application is reduced in size compared to its normal form and so more of them can fit in memory. As a result a ForsaOS server can run more VMs in a set amount of DRAM than an ordinary server running Linux or Windows.
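Formulus Black has not published its algorithms, but the general idea of replacing repeated patterns with short markers can be sketched with a toy dictionary encoder. The chunk size, function names and sample data below are assumptions for illustration only:

```python
# Toy sketch of bit-marker-style deduplication (illustrative only;
# Formulus Black's patented real-time algorithms are not public).
def encode(data: bytes, chunk: int = 8):
    """Replace repeated fixed-size chunks with small integer markers."""
    table = {}       # unique chunk -> marker id
    markers = []     # encoded stream of marker ids
    for i in range(0, len(data), chunk):
        piece = data[i:i + chunk]
        if piece not in table:
            table[piece] = len(table)
        markers.append(table[piece])
    return table, markers

def decode(table, markers):
    """Rebuild the original bytes from markers (lossless round trip)."""
    rev = {v: k for k, v in table.items()}
    return b"".join(rev[m] for m in markers)

data = b"ABABABABCDCDCDCD" * 64   # highly repetitive, like much app code
table, markers = encode(data)
assert decode(table, markers) == data   # lossless, no data discarded
# Unique chunks are stored once; the stream becomes small marker ids
print(len(data), len(table) * 8 + len(markers))   # → 1024 144
```

The “memory amplification” effect falls out of the same arithmetic: the more repetitive the data, the smaller the marker table and stream relative to the raw bytes.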
There is a GUI that works with any browser, and a RESTful API set for all system functions. ForsaOS works with Haswell or later X86 servers and has been tested on Dell EMC, HPE and Supermicro servers. The minimum configuration is a 2-socket server with 384GB of DRAM.
ForsaOS GUI photographed during a demo via a Zoom briefing.
Memory storage
Formulus aims to reduce IO to external storage drives and it backs up the server DRAM with a flash store and UPS. Memory contents are written to a flash drive if power is cut, intentionally or not.
DRAM contents are flushed to SSD periodically in a Blink process as well. This is a system-level process, a memory state capture, and not a VM-level one. ForsaOS retains at least one Blink and it cannot be deleted.
According to Rob Peglar, CTO of Formulus Black, this scheme turns server DRAM into a single giant non-volatile DIMM. This means an FbMed app can fit entirely in memory and never has to write to external storage, thus removing all storage IO time from its run-time.
Formulus will have access to true memory storage aka storage-class memory when Intel Cascade Lake CPU systems arrive with Optane storage, Peglar said.
Unchanged application code
Existing applications run unchanged in FbM form on a ForsaOS server.
ForsaOS uses the Linux kernel and has a built-in hypervisor, based on KVM, which presents the amplified memory to each guest OS as virtual storage, accessible at memory channel speeds.
Applications “think” they are doing IO to a storage volume but the transfer of information is to a virtual LUN in DRAM whose access has no storage IO penalty. Peglar said disk latency can be 7ms whereas DRAM latency, at 7ns, is a million times lower.
These virtual LUNs are called LEMs – logical extensions of memory – and they are assigned to virtual machines.
Multiple virtual servers can share the same virtual storage space.
Peglar said a SQL Server VM runs well in a ForsaOS system with 8 cores and 12GB of memory. It can pump out 952,000 transactions per minute in a TPC-C style benchmark. This is a very small virtual system for SQL Server, according to Peglar.
Symbolic IO and bit markers
If the term ‘bit markers’ sounds familiar it is because it was used by Symbolic IO, a precursor company to Formulus Black, which went under in 2017, losing its founder and CEO.
The New Jersey company was re-organised and re-funded with venture capital. It is led by Chairman and CEO Dr Carl Bettis, and ex-Symbolic IO execs Steve Sicola and Rob Peglar work at Formulus Black as Senior Fellow and CTO respectively. There are some 50 employees in total.
Blocks & Files thinks that Symbolic IO represented a kind of false dawn for bit-marker-style application servers. Here we have Formulus Black, a phoenix from the Symbolic IO ashes.
Will the bird fly? Give it a POC test and find out.
Dell next month will refresh the Unity mid-range storage array line with Skylake Xeon processors and NVMe – the latter as expected after I revealed the firm’s plan in November last year.
According to our sources, data reduction capability is “vastly improved” through better deduplication and compression.
The new products will make their public debut at Dell Technologies World in April at Las Vegas and we understand the numbering scheme for the new arrays is 380, 480, 580 and 680. Other details are scant – for now.
We learned in November 2018 that persistent memory in the form of SCM was an incoming technology on PowerMAX, as was NVMe (both drives and fabric), and we also anticipated back then that both mid-range arrays – SC and Unity – would therefore have SCM support added to their operating systems, and that it would happen before the end of 2019.
We also revealed at the time that the firm – which is currently in the middle of a massive multi-year storage line slimdown after the $67bn borg of Dell and EMC in 2016 – was adding NVMe drives and fabric across its entire storage portfolio.
Once the full specs are unveiled we should have better insight into whether the shift will help Dell EMC fend off encroaching attacks from other suppliers.
Micron Technology has introduced a portfolio of SSD drives with NVMe support in the space-saving M.2 form factor, aimed at delivering accelerated read/write performance with lower latency for client devices like laptops.
The Micron 2200 NVMe SSD range uses Micron’s own 64-layer TLC 3D NAND technology and is based on an internally designed ASIC drive controller and firmware. It is sold initially in capacities of 1TB, 512GB and 256GB.
Micron 2200 M.2 SSD
A glance at Micron’s product brief shows that the new NVMe SSDs have a sequential read speed of up to 3,000 MB/s, a sequential write speed of up to 1,600MB/s, while random access speeds are up to 240K IOPS for reads and 210K IOPS for writes.
Micron says the 2200 series offers power efficiency enhancements that can deliver longer battery life thanks to device sleep (DEVSLP) low-power modes in which the SSD consumes less than 5mW of power.
The new NVMe SSD line-up is available immediately and comes in versions with or without self-encrypting drive (SED) support. Those with the SED support use a built-in AES XTS-256-bit hardware engine compliant with TCG Opal 2.0 standards to ensure that encryption has no impact on performance.
IDC has tweaked its NAND flash supply vs demand forecast, estimating that 2019 and 2020 bit volumes will now grow at 39 per cent and 38 per cent year on year. The analyst firm also expects that reduction in flash prices is likely to slow.
According to IDC, the average selling price of flash NAND in $/GB will fall 45 per cent year on year in the second half of 2019 compared with -54 per cent in the first half of the year.
Supply and demand are the two factors affecting the price that customers pay for their flash storage, with increased demand pushing prices up, prompting the vendors to increase production. This in turn can lead to a glut and falling prices if they overestimate demand.
IDC’s estimates reflect the expectation for 2019 and 2020 bit demand to grow at 39 per cent and 38 per cent, with average selling prices down 50 per cent in 2019 and another 24 per cent in 2020.
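As a back-of-envelope check, the two forecast annual declines compound. The $1/GB starting price below is purely hypothetical, chosen to make the percentages easy to follow:

```python
# Compounding IDC's forecast ASP declines: -50% in 2019, then -24% in 2020.
start = 1.00                        # hypothetical $/GB at the end of 2018
end_2019 = start * (1 - 0.50)       # down 50 per cent in 2019
end_2020 = end_2019 * (1 - 0.24)    # down another 24 per cent in 2020
print(round(end_2020, 2))           # → 0.38, a ~62% cumulative decline
```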
In terms of revenue, IDC now estimates the total NAND flash industry worth $54.6bn for 2018 – four per cent less than its earlier estimate. In 2017 total revenue was $48.6bn.
On a quarterly basis, revenue for 4Q18 fell 19 per cent, compared with an earlier five per cent estimate. Total NAND flash bit demand for the same quarter grew 3.7 per cent, against previous estimates of 15 per cent growth.
SSDs are a significant part of the flash market and are now starting to plateau at 54 per cent of total NAND flash bit demand, according to IDC. Smartphones’ share is expected to rise gradually from 39 per cent to 41 per cent of total bit demand from 2020 onwards, following a slight slowdown in 2019 and 2020.
Earlier this month we asked Dell EMC to discuss the features of its upcoming unified mid-range storage server. The company declined our request, saying it was too soon to talk.
But what’s this? Check out the February 7 CRN interview with Dell Technologies Vice Chairman Jeff Clarke, who said the new box would have:
A container-based architecture using micro-services
A new software stack supporting:
NVMe drives,
NVMe over Fabrics
Future storage-class, persistent memory products
A new filesystem
AI-based management
Compare this with our own suggestions that we helpfully shared with Dell to try and get the conversation started.
Controllers based on Dell servers
A new file system to cope with burgeoning file data growth
A new software stack to cope with new storage media
SSDs for fast access data
Disk drives for bulk capacity data
NVMe drive support
NVMe-oF support (Ethernet, Fibre Channel and TCP versions)
Intel thinks servers will require greater Optane DIMM memory capacity to enable them to run more virtual machines. And it expects systems with this capacity to enter the market from July onwards.
Jason Waxman, the general manager of Intel’s Cloud Platforms Group, presenting yesterday at the Open Compute Project Summit, said the company sees a requirement for 4-socket servers with up to 112 cores – 28 per socket (processor) – and 48 DIMMs – 12 per processor.
The DIMMs would support Apache Pass Optane with up to 12TB of Optane capacity. Apache Pass is Intel’s codename for Optane DC Persistent Memory.
Intel expects 4-socket Optane DIMM-capable systems to appear from July onwards from suppliers such as Dell EMC, HPE, Hyve, Lenovo, Inspur, Supermicro and Quanta.
Inspur showcases Crane Mountain
Chinese server supplier Inspur is teaming up with Intel to contribute “Crane Mountain” to the OCP community. It is an NF8260M5 4-socket 2U server, validated for Cascade Lake and Optane DC persistent memory. A single NF8260M5 2U can support up to 18 TB of memory.
Inspur NF8260M5 4-socket Optane DIMM-capable server
Inspur says a 4-socket server provides up to double-digit TCO savings in cloud workloads, compared with two 2-socket systems.
Microsoft has developed Project Zipline, a compression algorithm, and optimised hardware implementation to suit data in the Azure cloud.
Compressed data occupies less storage capacity and needs less bandwidth when transmitted across networks.
In a keynote yesterday at the OCP Summit, Microsoft said the Zipline data squeezing code works faster, squeezes more and operates with a lower latency than other algorithms.
Zipline is up to two times better at compression than the Zlib-L4 64KB scheme. It provides up to 92 to 96 per cent data reduction for application services, IoT text files and system logs in Azure.
Microsoft’s Zipline ecosystem
Microsoft thinks Zipline is suited for network data processing, smart SSDs, archival systems, cloud appliances, general purpose microprocessors, IoT, and edge devices.
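Zipline itself is not generally available as a library, but the way such reduction percentages are computed can be illustrated with zlib as a stand-in compressor. The log line below is invented sample data:

```python
import zlib

# zlib stands in for Zipline here purely to show how figures like
# "92 to 96 per cent data reduction" are calculated on repetitive data.
sample = b"2019-03-15 10:00:01 INFO request ok status=200 latency_ms=12\n" * 1000
packed = zlib.compress(sample, 6)
reduction = 100 * (1 - len(packed) / len(sample))
print(f"{reduction:.1f}% reduction")   # repetitive log data squeezes hard
```

System logs and IoT text files, the workloads Microsoft cites, are exactly this kind of highly repetitive input, which is why the quoted percentages are so high.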
A Pure Storage spokesperson said: “We are currently evaluating Zipline alongside our existing range of compression algorithms to deal with the rising tide of data in every data centre.”
Microsoft is open sourcing Project Zipline and associated Verilog source code for register transfer language (RTL). Some of this is available now to Open Compute Project members and a Github spec is coming soon.
Igneous Systems has slurped in $25m investment to fund sales efforts and technology development. Total Igneous funding is now $66.7m.
Seattle-based Igneous was founded in 2013 and took in $12m in a B-round in January 2018. At that time it was developing an Arm-powered and petabyte-scale unstructured data backup, archive and storage system with a public cloud backend.
Seven months later it had evolved into a data management supplier based on commodity hardware. A core NAS backup feature morphed into a data management services offering. At the same time the company developed deeper API-level integration with Pure Storage’s FlashBlade, Isilon OneFS systems and Qumulo Core filers. Igneous software can back up generic NAS file systems with parallel, latency-aware data movement.
Now Igneous says it offers UDMaaS – Unstructured Data Management as a service – covering file and object data, and providing data protection, movement and discovery. Cloud backends for tiering off data include the big three: AWS, Azure and Google Cloud platform.
Igneous software is API-enabled and cloud-native and the company says it handles petabyte scaling. Competitors include Actifio, Cohesity, Komprise, Rubrik and IBM’s Spectrum Discover.
Anthony Bontrager, managing director at lead investor WestRiver Group, said: “Igneous is uniquely positioned to enable enterprises to unlock the value of their datasets and simultaneously reduce their risk profile. This is a complex problem that Igneous has tackled with impressive technology services.”
Komprise is developing a deep analytics function underpinned by an index of all the files in a global namespace that whirs away in the background while it operates as a file management facility for accessing migrated files through dynamic links.
The single virtual index of file locations and the metadata for that file is distributed across multiple virtual machines (VMs). At least one VM must be on a separate server.
At a briefing today Komprise COO, President and co-founder Krishna Subramanian said the index will not be used to access files for read and write operations. That is done through the normal file:folder structure.
Krishna Subramanian
Instead the index is used to search through the file estate to find a subset using any combination of metadata items. For example, find every file created by Tom Blenkinsop in the last five years with “credit” as a keyword.
Actions for this subset include moving to Hadoop for analytics, migration to another data centre or transfer to a public cloud for archiving.
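A metadata query of that kind can be pictured as a filter over the index. The field names, entries and `find` helper below are hypothetical, invented for illustration, and are not Komprise’s actual API:

```python
from datetime import datetime, timedelta

# Hypothetical index entries and query helper illustrating the example
# above: files created by one user in the last five years, tagged "credit".
index = [
    {"path": "/nas1/q1.docx", "owner": "tblenkinsop",
     "created": datetime(2017, 4, 2), "keywords": {"credit", "q1"}},
    {"path": "/nas2/old.docx", "owner": "tblenkinsop",
     "created": datetime(2009, 1, 5), "keywords": {"credit"}},
]

def find(index, owner, keyword, years, now):
    """Return paths matching owner, keyword and a created-since cutoff."""
    cutoff = now - timedelta(days=365 * years)
    return [entry["path"] for entry in index
            if entry["owner"] == owner
            and entry["created"] >= cutoff
            and keyword in entry["keywords"]]

print(find(index, "tblenkinsop", "credit", 5, datetime(2019, 3, 20)))
# → ['/nas1/q1.docx']
```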
Komprise is developing the software with some of the cash it scooped up last month in a $24m C-round. Seven customers are beta-testing this deep analytics function.
Pure Storage has enlisted the help of Thales to build an end-to-end (E2E) encryption facility with no deduplication blocking for the company’s FlashArray//X.
The technology, called Vormetric Transparent Encryption (VTE), was introduced at the 2019 RSA Conference. It is “transparent” in the sense that encryption takes place on the host and is invisible to users or the application.
VTE resolves the difficulty of encrypting data on deduplicating storage arrays. Normally a deduplicating storage array is baffled when a stream of encrypted data comes its way. Both compression and deduplication can be rendered ineffective, such that there are few or no space savings.
So how does VTE perform? A FlashArray//X was asked to store the publicly available 5.3GB Enron email corpus. The array reduced that 79.1 per cent to 1.11GB, a 4.8:1 reduction ratio. It was then encrypted using VTE and stored on a volume in the array with no VTE integration. Result: no data reduction at all.
The data was then written to the Pure array with VTE integration, and reduced to 1.11GB again – the same 4.8:1 reduction ratio.
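The two quoted figures are consistent with each other, as a quick check shows:

```python
# Cross-checking the quoted Enron-corpus numbers: 5.3GB stored as 1.11GB.
raw_gb, stored_gb = 5.3, 1.11
reduction_pct = 100 * (1 - stored_gb / raw_gb)   # percentage saved
ratio = raw_gb / stored_gb                       # reduction ratio
print(round(reduction_pct, 1), round(ratio, 1))  # → 79.1 4.8
```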
How is it done?
The Vormetric File System agent is installed on a Linux host
The host checks out an encryption key from the Vormetric Data Security Manager (DSM)
The FlashArray registers as a KMIP client with the DSM and checks out the host encryption key
The host writes encrypted data to the FlashArray
The FlashArray decrypts the data using the host key, reduces it, and re-encrypts it with the FlashArray key before writing it to flash. The decryption of data with the host key is an added step introduced with the integration.
When the host reads the data, the FlashArray decrypts the data using the FlashArray key and re-encrypts with the host key prior to sending the data to the host. The re-encryption of data is an added step introduced with the integration.
Note there are two added storage steps, which will add some time to operations.
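The key-swap flow described above can be sketched with a toy cipher. XOR stands in for AES purely to show the sequence of operations, never use it for real encryption, and the key values below are invented:

```python
def xor_cipher(data: bytes, key: int) -> bytes:
    """Toy symmetric 'cipher' (XOR with one key byte) standing in for AES."""
    return bytes(b ^ key for b in data)

HOST_KEY, ARRAY_KEY = 0x5A, 0xC3    # hypothetical keys issued by the DSM

block = b"duplicate block " * 4
# Host writes host-key-encrypted data to the FlashArray
wire = xor_cipher(block, HOST_KEY)
# Write path: array decrypts with the checked-out host key...
plain = xor_cipher(wire, HOST_KEY)
assert plain == block               # ...so identical blocks deduplicate
# ...then re-encrypts with its own key before writing to flash
at_rest = xor_cipher(plain, ARRAY_KEY)
# Read path: decrypt with the array key, re-encrypt with the host key
returned = xor_cipher(xor_cipher(at_rest, ARRAY_KEY), HOST_KEY)
assert xor_cipher(returned, HOST_KEY) == block   # host recovers its data
```

Because the array briefly holds plaintext between the two keys, it can run its normal reduction pipeline, which is what restores the 4.8:1 ratio on encrypted workloads.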
VTE integration also provides granular access control, privileged user access policies and audit logs.
FlashArray//X requires v.5.3 of the Purity OS for all this to work. The upgrade is coming soon.