
Akamai launches managed database service


Security and content delivery network provider Akamai is launching a managed database service using Linode technology.

Akamai, which supplies high-availability services to enterprises, bought Linode in February this year for $900 million. Linode is a cloud hosting supplier with 11 datacenters in the US, Europe, and the Far East.

It provides a compute (virtual servers) and storage public cloud, with the storage component including Linode Block Services, Linode Backup, and S3-compatible object storage. Linode had a managed database service in beta at the time of the Akamai buy and this is now coming to market.

Will Charnock, Senior Director of Engineering for Akamai’s compute line of business, said: “Every web application needs a database. Being able to automate aspects of database management is critical for applications that need to be scalable, highly performant, and resilient.

“With the click of a button, developers can have a fully managed database deployed and ready to be populated.” It’s billed as a cloud operations (CloudOps) type service. 

Akamai Linode portfolio

This Linode Managed Database (LMDB) service is the first product launched by Akamai’s compute line of business, which was built around Linode’s portfolio. It supports MySQL now and will add PostgreSQL, Redis, and MongoDB by the end of June. LMDB offers flat-rate pricing, security and recovery measures, various deployment options, and high-availability cluster options.
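For developers who prefer automation to the console button, provisioning would typically go through Linode’s v4 API. The sketch below is a hedged illustration only: the endpoint path, plan ID, and engine string are assumptions based on Linode API conventions rather than details given by Akamai, so check Linode’s documentation before relying on them.

```python
# Hypothetical sketch: create a Linode managed MySQL database via the v4 API.
# Endpoint, field names, and values are assumptions, not confirmed by the article.
import os
import requests

token = os.environ["LINODE_TOKEN"]          # a Linode personal access token
resp = requests.post(
    "https://api.linode.com/v4/databases/mysql/instances",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "label": "example-db",              # hypothetical instance name
        "region": "us-east",
        "type": "g6-dedicated-2",           # hypothetical plan ID
        "engine": "mysql/8.0",              # hypothetical engine version string
        "cluster_size": 3,                  # one of the high-availability cluster options
    },
)
resp.raise_for_status()
print(resp.json())                          # returns the new instance's details and status
```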

Akamai claims users can move common database deployment and maintenance tasks to Linode and save management time and effort. They can select high-availability configurations to ensure that database performance and uptime are never affected by server failure, Akamai added. Customers need less database administration expertise to deploy applications, and there is a decreased risk of downtime compared to setting up and deploying a highly available database themselves, the company claimed.

Akamai delivers content to edge sites and can now deliver database services to the same.

It envisages e-commerce applications doing dynamic personalization using the managed DB along with Kubernetes and FaaS (Function-as-a-Service). Another use case Akamai cited is healthcare, where trainers archive surgery videos to help train new doctors. The object storage service would play a part here.

Akamai intends to expand into new managed services using Linode. There is obvious potential for adding a Backup-as-a-Service offer and perhaps also one for file services.

SingleStore will measure your app’s ‘data intensity’

SingleStore is introducing a “Data Intensity Index” to help companies understand how data-intensive their RDBMS applications are and how SingleStore’s database can cope with high data intensity levels.

The company provides a relational SQL database that can handle transaction and analytical workloads, and can run on-premises or in the public cloud. Akamai, Comcast, Royal Bank of Canada, and Uber are customers. When a customer requests a ride during surge periods, such as New Year’s Eve, Uber presents a real-time surge price within milliseconds using SingleStore. 

A blog by SingleStore principal solutions engineer Ian Gershuny discusses the firm’s data intensity concept and says: “The truth is, a lot of your existing applications should be data intensive,” but they are not because “we limit our designs around bottlenecks. We don’t bring in too much data so our load process doesn’t choke. We report on stale data since our processes are batched to our analytics and reporting databases. We limit access so as not to overload the database.”

Gershuny says: “Even if you’re able to work around these limitations, our need for data-intensive applications is in our future.” An example is delivery trucking: “Five years ago it would have been inconceivable to track a UPS truck driving through a neighborhood. Now, I watch it right on my phone.”

This data intensity concept is a neat marketing way to encapsulate SingleStore’s message; its combined transaction and analytics architecture can cope with more data intensity than separate databases. The Data Intensity Index is a SingleStore notion for an index value based on five variables: data size for an application, ingest speed, query complexity, query latency (completion time), and concurrency (number of users).


Companies can enter their application’s values for the variables into a SingleStore website page by answering 10 questions:

  • Dataset size (1TB, 1-10TB, 10-50TB, 50-100TB, >100TB)
  • Data growth rate (<10%, 10-30%, 30-60%, 60-100%, >100%)
  • Data ingest speed in rows/sec (<1k, 1k-10k, 10k-100k, 100k-1m, >1m)
  • Query complexity in joins (1-2, 3-5, >6)
  • Query completion time need (Minutes, 1-10 secs, 100ms-1 sec, 10-100ms, 0-10ms)
  • Number of users (<100, <1,000, <10,000, <100,000, >100,000)

The assessment is reckoned to take three minutes and results in an index score between zero and 100. The higher the score, the more data-intensive your application, but SingleStore has not said where the boundary lies between high-intensity apps and moderately or low-intensity ones. It’s all relative.
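SingleStore has not published the formula behind the score, but a toy version shows how a handful of bucketed answers can collapse into a single zero-to-100 number. The equal weighting and bucket counts below are assumptions made for illustration, not SingleStore’s actual method.

```python
# Hypothetical data intensity scorer: each answer is the 0-based index of the
# bucket chosen for that question (e.g. dataset_size 3 means "50-100TB").
def intensity_score(answers):
    buckets_per_question = {
        "dataset_size": 5, "growth_rate": 5, "ingest_speed": 5,
        "query_complexity": 3, "latency_need": 5, "concurrency": 5,
    }
    total = 0.0
    for question, buckets in buckets_per_question.items():
        total += answers[question] / (buckets - 1)   # normalise each answer to 0..1
    return round(100 * total / len(buckets_per_question))

example = {"dataset_size": 3, "growth_rate": 2, "ingest_speed": 4,
           "query_complexity": 2, "latency_need": 4, "concurrency": 1}
print(intensity_score(example))              # higher means more data-intensive
```

The real assessment presumably weights ingest speed, latency, and concurrency differently; that weighting is exactly the detail SingleStore has kept to itself.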

The report also provides an assessment of what kind of data infrastructure SingleStore reckons an application will need to deliver the best user experience.

Read Gershuny’s blog for a look at SingleStore’s concept and see if it resonates with you. A downloadable white paper is available here. Note that the assessment is not live yet, and will be available on May 3.

Erasure Coding

Erasure Coding – Redundant coding added to a data set so that a storage system can recover from the loss (erasure) of parts of that data set caused by, for example, a storage drive failure. A piece of data is split into sectors or chunks, codes are generated from those chunks and added to them, and the resulting chunks are stored across different drives. Should a drive fail, the remaining data sectors, together with the added erasure-coded data, can be used to recover the lost (erased) data.

Erasure coding generally requires less overhead (added redundant data codes) than a RAID data protection scheme. Erasure coding schemes are described as 4+2P (4 data and 2 parity), 4+3P, 8+2P, 8+3P, and so on.
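To make the idea concrete, here is a minimal sketch of the simplest possible case: four data chunks protected by one XOR parity chunk, from which any single lost chunk can be rebuilt. Real 4+2P or 8+3P schemes use Reed-Solomon or similar codes so that two or three simultaneous erasures can be survived; the layout below is purely illustrative.

```python
# Minimal single-parity erasure sketch: 4 data chunks + 1 XOR parity chunk.
def xor_chunks(chunks):
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, byte in enumerate(chunk):
            out[i] ^= byte
    return bytes(out)

data = b"ABCDEFGHIJKL"                              # 12 bytes of source data
chunks = [data[i:i + 3] for i in range(0, 12, 3)]   # split into 4 equal chunks
parity = xor_chunks(chunks)                         # stored on a fifth drive

lost = 2                                            # simulate the drive holding chunk 2 failing
surviving = [c for i, c in enumerate(chunks) if i != lost]

# XOR of the surviving chunks and the parity reconstructs the missing chunk.
recovered = xor_chunks(surviving + [parity])
assert recovered == chunks[lost]
print("recovered:", recovered)
```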

ePMR

ePMR – Enhanced Perpendicular Magnetic Recording. A Western Digital disk drive recording technology which applies an electrical bias current to the main pole of the write head throughout the write operation. This current generates an additional magnetic field which creates a preferred path for the magnetisation flip of media bits. This, in turn, produces a more consistent write signal, significantly reducing jitter.

The bias current is needed because disk recording heads can apply an inconsistent magnetic field to bits when their write currents are distorted – so-called “jitter”. This effect makes bit value signal recognition more difficult, and it worsens as bits decrease in size and are placed closer together.

Endurance

Endurance – The working life of an SSD before its flash cells wear out. The wear-out rate is related directly to the number of write cycles a cell has to endure. The smaller a flash cell is physically, the fewer electrons it contains to indicate its charge and hence its bit value, and the same relationship holds as more bits are stored per cell. That means that as flash cells become smaller and/or their bit count increases, their working life shortens.

Here is an indicative table of representative raw endurance numbers for flash with different cell levels and different process sizes:

These are pre-3D NAND flash technology numbers. An SSD’s endurance can be extended by over-provisioning: having spare cells ready to be used to replace existing cells as they wear out.

Low-endurance flash can be used for read-centric applications such as archiving, where data, once written, is not changed.
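Endurance is usually quoted as terabytes written (TBW) or drive writes per day (DWPD). The arithmetic below shows how those figures relate to raw cell cycle ratings; the capacity, cycle count, and write amplification values are illustrative assumptions, not figures from this article.

```python
# Illustrative endurance arithmetic with assumed, placeholder values.
capacity_tb = 1.0        # drive capacity in TB
pe_cycles = 3000         # rated program/erase cycles per cell (assumed TLC figure)
waf = 2.0                # write amplification factor (assumed)
warranty_years = 5

tbw = capacity_tb * pe_cycles / waf                  # host terabytes writable over the drive's life
dwpd = tbw / (warranty_years * 365 * capacity_tb)    # full drive writes per day over the warranty

print(f"TBW = {tbw:,.0f} TB, DWPD = {dwpd:.2f}")     # 1,500 TB and 0.82 with these assumptions
```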

For sale: Toshiba considering a buyout


Japanese conglomerate Toshiba is actively soliciting proposals from potential investors and sponsors about the future of the company, including taking the business private in order to increase its corporate value.

The board’s April 21 communication represents a capitulation to demands from large hedge fund investors to explore options – up to and including a company sale – to realise a greater value from their investment.

Toshiba has endured difficulties ever since its Westinghouse Electric nuclear power station building division ran into trouble, and then headed into bankruptcy in 2017. One of Toshiba’s actions to reduce the loss was to sell off part of the Memory Business to a Bain Capital-led consortium. The Bain Group then reorganized the business, which had a joint-venture foundry operation with Western Digital, and it became Kioxia in October 2019.

Kioxia is 40 percent owned by Toshiba and realizing the value of that stake is important to Toshiba’s future.

Chairman Osamu Nagayama was voted out by shareholders in June 2021 after a corporate governance scandal. Nagayama had already rejected suggestions he resign after major investor Effissimo, which had alleged irregularities with appointments, said Toshiba’s removal of two of the three members of its audit committee was not enough.

Toshiba’s management, run by interim CEO Satoshi Tsunakawa, then proposed dividing Toshiba into three separate companies in November last year, with Kioxia being sold and the cash given to shareholders. 

Three-way split

This tripartite idea was actually put forward by a Strategic Review Committee (SRC), appointed in May 2021. This followed Toshiba rejecting a takeover bid by CVC, another private equity business.

But the three-way split was rejected for not going far enough. Toshiba’s management subsequently proposed separating Toshiba into two business entities in March this year. Effissimo rejected that idea as well, as did Toshiba’s second largest shareholder, 3D Investment Partners.

Tsunakawa resigned in March and corporate SVP Taro Shimada was appointed as Toshiba CEO. Earlier this month Effissimo agreed to sell its stake to Bain Capital, were Bain to make a formal takeover bid for Toshiba. 

Faced with unrelenting opposition from its largest shareholders, Toshiba’s management is considering its options. Toshiba has retained Nomura Securities as its financial advisor and wants to receive confidentiality pledges from “potential investors and sponsors as our potential partners.”

The business will then provide “detailed company financial and business information” and “hold discussions in a timely manner with a view to receiving non-binding proposals on strategic alternatives.”

Toshiba will publicly announce the number of non-binding proposals received and the structures of the deals offered ahead of a Toshiba Annual General Meeting scheduled for the second half of June. The best offer will be revealed after the AGM.

Should the business be broken up, Toshiba’s disk drive business could be attractive both to Western Digital, Kioxia’s NAND fab joint-venture partner, and Seagate. And Toshiba’s share in the Kioxia NAND business could be sold in part through a Kioxia IPO. If Western Digital could find the cash, buying Toshiba’s share of Kioxia would be in its long-term flash chip supply interests.

Kasten founders involved in another startup – sources

Blocks & Files’ sources in the storage industry say that Kasten’s founders are brewing a startup connected to data management, just over a year after Veeam acquired their container data protection operation.

Update, 25 August 2022: Alcion.ai is the Kasten founders’ new company.

Niraj Tolia and Vaibhav Kamra co-founded Kasten in January 2017 to provide data protection and disaster recovery to Kubernetes-orchestrated cloud-native applications. That startup was bought by Veeam in October 2020. Business results appear to have been good; the Kasten business unit said it grew bookings 900 percent year-on-year in the final 2021 quarter, albeit without revealing the base number of bookings.

B&F has heard they are clambering aboard the startup train again, as Veeam works to scale Kasten. Seemingly they haven’t lost their appetite for risk, multi-year buildouts, and having to sell a new concept.

Kasten founders Niraj Tolia (left) and Vaibhav Kamra

We don’t know the precise technology focus of their new venture, but we’ve heard it involves data management, security, software-as-a-service (SaaS), machine learning (ML), and the public cloud. This could include current all-in public cloud users and hybrid on-premises/public cloud users.

Their new company is called Alcion and it has a website.

Before Kasten, Tolia and Kamra were instrumental in the success of Maginatics, a startup that built distributed file systems and was subsequently acquired by Dell EMC. The founders transformed the Maginatics product into the CloudBoost product family that became the public cloud enabler for several of EMC’s data protection products.

BEOL

BEOL – Back End of Line. The final stages of a semiconductor fabrication process, in which a wafer is completed with its component chips. ReRAM developers Intrinsic Semiconductor and Weebit Nano say their ReRAM technologies are BEOL-compatible and can be added to microcontroller semiconductor fabrication processes to provide embedded ReRAM memory on the microcontroller chips. This is an alternative to having a separately fabricated external memory module, which would be more expensive and slower to access than on-chip memory.

Ei

Ei – Exbibyte or 1,024 pebibytes. See Decimal and Binary Prefix entry.

ECC

ECC – Error Checking and Correction. ECC is a mechanism used to detect and correct errors in memory data due to environmental interference and physical defects. Most memory errors are single (1-bit) errors caused by soft errors (e.g. cosmic rays, alpha rays, electromagnetic interference) but some can be due to hardware faults (e.g. a row hammer fault). Single-bit errors can be corrected by ECC memory systems. Multi-bit errors may also be detected and/or corrected, depending on the number of symbols in error.

ECC is implemented by generating and storing an encoded, parity-like check code used not only to identify the bit in error but to correct it as well. This implementation-dependent ECC code is generated and stored on writes, and verified on reads. The most common implementations use Hamming codes for single-bit correction and double-bit detection. Hamming codes define parity bits which each cover a pre-defined set of data bits. Typically, an 8-bit Hamming code is used to protect 64-bit data. A Hamming code can either detect one-bit and two-bit errors, or correct one-bit errors without being able to flag uncorrectable errors.
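As a simplified illustration of the single-bit correction described above, here is a Hamming(7,4) sketch: four data bits, three parity bits, and a syndrome that points at the flipped bit. Real ECC DIMMs use wider SECDED codes over 64-bit words, so treat this only as a model of the principle.

```python
# Hamming(7,4): protect 4 data bits with 3 parity bits; correct any single-bit flip.
def encode(d):                        # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # positions 1..7: p1 p2 d1 p3 d2 d3 d4

def correct(c):
    # Recompute each parity over the positions it covers; the syndrome is the
    # 1-based position of a single flipped bit (0 means no error detected).
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3
    if pos:
        c[pos - 1] ^= 1               # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]   # recover d1..d4

word = encode([1, 0, 1, 1])
word[5] ^= 1                          # simulate a single-bit soft error
assert correct(word) == [1, 0, 1, 1]
```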

ECC memory is a type of DRAM used in workstations and servers. ECC memory differs from non-ECC memory as it has nine memory chips instead of the usual eight, with the ninth chip being used for error detection and correction among the other eight memory chips.

EB

EB – Exabyte – one thousand petabytes. See Decimal and Binary Prefix entry.
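The difference between the Ei and EB entries above is just the base of the multiplier, as this quick arithmetic shows.

```python
# Decimal vs binary prefixes: an exbibyte is about 15 percent larger than an exabyte.
EB = 10 ** 18     # exabyte: one thousand petabytes
EiB = 2 ** 60     # exbibyte: 1,024 pebibytes
print(f"1 EiB / 1 EB = {EiB / EB:.3f}")   # ≈ 1.153
```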

EAMR

EAMR – Energy-Assisted Magnetic Recording – a disk drive recording technology in which energy, in the form of microwaves or heat, is applied to a magnetic recording medium which is resistant to having its polarity changed at room temperature. This is known as having high coercivity. The applied energy lowers the coercivity while it is applied, making it easier to write data to the medium – in other words, to change its polarity. Once the energy is withdrawn, the coercivity returns to its previous high state and the polarity, and hence the bit value, remains stable. See MAMR and HAMR.