This is the second part of our introduction to LucidLink Filespaces technology and where it’s going. Part one gave an account of how and why its product was built. This second part takes a look at egress charges, IBM COS, direct object access and Lucid’s competition.
LucidLink can work with a customer’s existing public cloud storage – AWS, Azure or Google Cloud Platform backends. They all charge for data egress. CEO Peter Thompson said: “We believe that egress is evil.” His view about what Filespaces is doing when remote users get file data from the public cloud is this: “We’re not actually taking the data out. We’re providing access to it, but keeping it in your system.”
“The biggest problem we found with egress was not so much the cost. More importantly, it was the fact that it was unpredictable. And it was charged after the fact.” LucidLink wants to eliminate egress charges, or at least reduce them.
Thompson describes how: “The first thing we did is we pooled all of our customers and created almost an insurance model, where we could pass on artificially low levels of egress to the customer. That was pretty well received.”
LucidLink talked to the IBM cloud people and negotiated a zero-egress-charge deal with IBM COS, which in turn gave IBM “access into large accounts owned by some of the other hyperscalers.”
Thompson wants to do more, though. “The next thing that we want to do – again, in partnership with IBM – is to allow for different tiers of storage.” As a project ages, the access rate of its data goes down and it can be moved to lower-cost storage. He thinks Glacier archives take too long to access. “If they put it in Glacier or put it on tape or something like that, they may never get it back. We’d say that’s where data goes to die.”
He wants LucidLink to keep the cooled data in its file system, presented as live but actually held in a lower-cost nearline tier that still offers good access. “It’s not as cheap as Glacier. But it would be much, much more cost-effective for them to be able to go back and visit it … This is forward thinking – we haven’t implemented this just yet.”
But this nearline tier is “just absolutely on our roadmap.”
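LucidLink has not published how the planned nearline tier would decide when data has cooled, but the idea Thompson describes – demote data once its access rate drops with project age – can be sketched as a simple policy. The tier names and the 90-day threshold below are illustrative assumptions, not LucidLink's design:

```python
from datetime import datetime, timedelta

# Hypothetical tier names for illustration only.
HOT = "hot"            # frequently accessed, standard object storage
NEARLINE = "nearline"  # cooled data: cheaper than hot, still online,
                       # unlike a Glacier/tape archive

def choose_tier(last_access: datetime, now: datetime,
                cool_after_days: int = 90) -> str:
    """Pick a storage tier based on time since last access."""
    if now - last_access > timedelta(days=cool_after_days):
        return NEARLINE
    return HOT

# A file untouched for six months cools to nearline;
# one touched last week stays hot.
now = datetime(2024, 1, 1)
print(choose_tier(datetime(2023, 6, 1), now))    # nearline
print(choose_tier(datetime(2023, 12, 20), now))  # hot
```

The point of such a policy is that demoted files stay addressable in the file system, so revisiting an old project is a read, not a restore.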
Direct object storage access
Customers tell LucidLink that “they’ve got these massive amounts of assets that are already in object storage. And to use LucidLink, of course, you have to copy them into our file system.”
This takes a transfer processing step. Thompson said: “They want to point and shoot. They say ‘I want to point LucidLink at this existing bucket and be able to utilize those assets’.”
That means LucidLink accesses the customer’s object buckets, scans them and ingests the metadata. According to Thompson: “Then it would be a read-only copy. But in the world of video editing, your raw assets aren’t changed anyway. You create a project file, and then you render it to a new file. And through that rendering process in that editing process, you reference existing assets. But those are WORM files.”
“We’re referring to this as our native layout feature, where we would be able to read the S3 native layout in a LucidLink file system.”
“What we’re trying to do is prevent having to move data around, and move files around, because that’s where you get killed. So if we can access those, and provide access to those in place within your file system, then we’ve managed to stop that.”
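The native layout feature as Thompson describes it – scan an existing bucket, ingest the metadata, expose the assets read-only – can be sketched as building an immutable index over an object-store listing. The class and function below are hypothetical illustrations, not LucidLink code; the listing's field names mirror the shape of an S3 `ListObjectsV2` response:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: entries are read-only, matching WORM semantics
class AssetEntry:
    key: str
    size: int
    etag: str

def ingest_bucket_listing(listing):
    """Build a read-only metadata index from an object-store listing.

    `listing` stands in for the contents of an S3 ListObjectsV2
    response; no data is copied, only metadata is ingested.
    """
    return {obj["Key"]: AssetEntry(obj["Key"], obj["Size"], obj["ETag"])
            for obj in listing}

# Example listing, shaped like an S3 response.
listing = [
    {"Key": "raw/shot_001.mov", "Size": 7_340_032, "ETag": "abc123"},
    {"Key": "raw/shot_002.mov", "Size": 9_437_184, "ETag": "def456"},
]
index = ingest_bucket_listing(listing)
print(sorted(index))  # ['raw/shot_001.mov', 'raw/shot_002.mov']
```

Freezing the entries reflects the video-editing workflow Thompson describes: raw assets are referenced by project files but never modified, so a metadata-only, read-only view is enough.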
We asked Thompson how he would position LucidLink against CTERA, Panzura and Nasuni. He said: “We classify those more as a cloud gateway device – a cloud gateway caching device where the intent originally was to tie together branch offices.” They would use a cloud server to bridge between the cloud and on-premises. Home workers would then VPN into the office, leaving them still remote from the data.
“We’re actually taking the technology and the caching and moving it out further, we’re actually putting it on the user’s device – whether that’s their laptop, or workstation, at home, or wherever. That’s really the difference.”
With the cloud gateway technology moving files to branch offices – say 10 or 20 – Thompson said: “All of those have to remain in sync, and that’s a pretty high overhead. If I make a change to a file, that change may not show to all the other users for an hour, until it propagates through the entire thing.”
“Whereas with LucidLink that will be instantaneous. And that’s just because the source of truth is in the cloud. We all have metadata. That immediately gets updated on the files that we’re working on. That’s really the difference between all three of those, and then on the file sync and share side of things, your Dropbox and Google Drive and those types of technologies.”
How about Quantum? “We replace them a lot.”
He says his customers aren’t typically CTERA, Nasuni, Panzura customers. “When we talked to investors, one of the first questions they asked us was: who is your competitor? Is it one of CTERA, Nasuni, Panzura? Is it Dropbox? And we’d chuckle and say, well, actually what we find is that our true competitor is FedEx. Because that’s what they’re doing – they’re putting datasets on drives and shipping it out to their people. And then they have to wait for them to do their work and then ship it back and then reintegrate all those assets back into the program. What we hear is these types of workflows can go from days to hours.”
The cloud-based file collaboration space has several suppliers: Box, CTERA, Dropbox, Egnyte, Hammerspace, LucidLink, Nasuni and Panzura. This two-part account of Lucid’s technology and market positioning shows that it has a firm grasp on what its users need to make their large-file, remote-worker workflows operate more efficiently. Now we’ll see how it capitalizes on its pandemic boost to grow further.