Google has announced that database-backed applications can run faster in its cloud via a managed service speaking the Memcached protocol.
The beta of Memorystore for Memcached is available in major Google regions across the US, Asia and Europe, and will roll out globally in the coming weeks.
Google already supports the Redis in-memory caching system, which it suggests is applicable for use cases such as session stores, gaming leaderboards, stream analytics, API rate limiting, and threat detection.
Both caching systems are popular, so Google has announced Memorystore for Memcached as a fully managed service. On-premises apps that access Memcached can also use the service in Google Cloud Platform. Google is responsible for deployment, scaling, managing node configuration, setting up monitoring and patching the Memcached code.
Memcached is popular for database caching. It provides an in-memory key-value store and is multi-threaded, enabling a single instance to scale up across cores. The Redis in-memory caching system, by contrast, is single-threaded and scales by adding nodes in a cluster.
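Conceptually, Memcached exposes a simple set/get interface over keys and values held in memory. A minimal sketch of that model in Python (the class name and API here are illustrative, not the real Memcached client library):

```python
import threading
from typing import Optional


class KeyValueCache:
    """Illustrative in-memory key-value store, sketching the model
    Memcached provides. Not the actual Memcached implementation."""

    def __init__(self) -> None:
        self._data: dict = {}
        # A lock guards concurrent access; Memcached itself is
        # multi-threaded and handles this internally.
        self._lock = threading.Lock()

    def set(self, key: str, value: str) -> None:
        with self._lock:
            self._data[key] = value

    def get(self, key: str) -> Optional[str]:
        with self._lock:
            return self._data.get(key)


cache = KeyValueCache()
cache.set("user:42", "alice")
print(cache.get("user:42"))  # alice
```

A real deployment would instead talk to a Memcached server over the network via a client library, but the set/get contract is the same.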
However, strings are the only data type Memcached supports, whereas Redis supports several kinds of data structures, such as lists, sets, sorted sets, HyperLogLogs, bitmaps and geospatial indexes. Redis also has more features. For example, Memcached evicts old data from its cache via a Least Recently Used (LRU) algorithm, while Redis has six different eviction policies to choose from, allowing finer-grained control.
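LRU eviction means that when the cache is full, the entry that has gone longest without being read or written is discarded first. A small Python sketch of the policy (the capacity and key names are hypothetical; Memcached's actual implementation is more sophisticated):

```python
from collections import OrderedDict


class LRUCache:
    """Sketch of Least Recently Used eviction, the policy Memcached
    applies when its cache fills. Illustrative only."""

    def __init__(self, capacity: int) -> None:
        self.capacity = capacity
        self._data: OrderedDict = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def set(self, key, value) -> None:
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            # Evict the least recently used entry (front of the dict)
            self._data.popitem(last=False)


cache = LRUCache(capacity=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")         # touching "a" makes "b" the least recently used
cache.set("c", 3)      # over capacity: "b" is evicted
print(cache.get("b"))  # None
print(cache.get("a"))  # 1
```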
Memorystore for Memcached can be accessed from applications running on Google’s Compute Engine, Google Kubernetes Engine (GKE), App Engine Flex, App Engine Standard, and Cloud Functions.
Memcached instances can be scaled up and down to optimise the cache-hit ratio and price. Detailed open-source Memcached monitoring metrics are available in a dashboard to help with those decisions. The maximum instance size is a hefty 5TB.
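Sizing decisions like these are typically driven by the cache-hit ratio: the fraction of lookups served from the cache rather than the backing database, i.e. hits / (hits + misses). A quick sketch of the arithmetic, using hypothetical counter values of the sort a monitoring dashboard would report:

```python
def cache_hit_ratio(hits: int, misses: int) -> float:
    """Fraction of lookups served from the cache."""
    total = hits + misses
    return hits / total if total else 0.0


# Hypothetical counters, as might be read from monitoring metrics
print(cache_hit_ratio(hits=9_500, misses=500))  # 0.95
```

A ratio that stays low as traffic grows suggests the instance is undersized and scaling up may pay for itself in reduced database load.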
You can read a quick start guide and other documentation on the Google Cloud website.