How is self-hosted SaaS backup service business Keepit going to back up hundreds of different SaaS apps by 2028, starting from just seven this year?
We asked its CTO, Jakob Østergaard, three questions to find out more, and this is what he said:

Blocks & Files: Could Keepit discuss how its SaaS app connector production concept differs from that of HYCU (based around R-Cloud)?
Jakob Østergaard: While we lack detailed insight into the concrete mechanics of how HYCU is adding workload support to its R-Cloud, we can at least offer some perspective on how Keepit approaches the problem.
In the early days, everyone started out the same way, implementing direct support for each workload using traditional software development methodologies: writing one line of code at a time in their general-purpose programming language of choice. We believe HYCU, Keepit and others in the industry started very similarly in this respect.
HYCU announced a push in 2023 to support new workloads with generative AI. From an engineering standpoint, this is an interesting idea. If it can be made to work, it would potentially be a major productivity boost, allowing the vendor to add new workloads more quickly.
However, the real-world challenges of supporting a new workload go far beyond the (potentially AI-supported) implementation of API interactions. A vendor will need to understand the workload’s ecosystem and, more importantly, the workload’s users.
To back up, say, Miro, merely interacting with the Miro API is only a small piece of the puzzle. One needs to understand how an enterprise uses Miro in order to build a solution that properly addresses the customers’ needs. This, along with many other equally complex considerations, is not easily solved with AI today, so while the idea is interesting, the reality is more complicated.
At Keepit, we have been focusing on improving the “developer ergonomics” of workload creation – so that in the future, we could allow second or third parties to develop new workload support. Our focus is on removing the need for creating complicated code, rather than automating its creation.
To illustrate the approach Keepit has taken, it is perhaps most useful to compare it to SQL. A relatively simple SQL statement is developed and sent to the database server – the server’s advanced query planner then devises an actual executable plan, a piece of software, if you will, that will produce the results described in the original SQL statement.
The benefit of this approach is that the amount of code that needs to be maintained (the “SQL statement” in the example) is minimal, and that the execution engine (the “query planner” in the example) can be upgraded and improved without the need to rewrite the workload code.
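To make the analogy concrete, here is a minimal sketch in Python using the standard-library sqlite3 module – it is not Keepit code, just an illustration of how a short declarative statement is handed to an engine whose query planner works out the executable plan on the developer’s behalf:

```python
import sqlite3

# Illustration only: the developer maintains just the short declarative
# statement below; the engine's query planner turns it into an executable
# plan, and that planner can be improved without the statement changing.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, modified TEXT);
    CREATE INDEX idx_items_modified ON items (modified);
""")

query = "SELECT id, name FROM items WHERE modified > ?"

# Ask the engine how it intends to execute the statement.
for row in conn.execute("EXPLAIN QUERY PLAN " + query, ("2024-01-01",)):
    print(row)  # e.g. (..., 'SEARCH items USING INDEX idx_items_modified ...')

# Running the same statement delegates the "how" entirely to the engine.
rows = conn.execute(query, ("2024-01-01",)).fetchall()
```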
It is clear that creating a workload can neither be fully automated nor built with zero code. No matter the approach, there is no completely free lunch when it comes to adding workload support. There are many possible ways to improve how more workloads can be supported by a platform; HYCU and Keepit have picked two of them.
The future is certainly interesting – we will be watching which strategies players in the industry undertake to broaden workload support. Keepit has been following its own strategy to more effectively add serious support for a broader set of workloads.
Blocks & Files: Who produces the SaaS app connectors using DSL – Keepit or the SaaS app supplier?
Jakob Østergaard: With our DSL [Domain-Specific Language] technology, Keepit is currently responsible for the development of new connectors. As the technology and tooling around it matures, there is a lot of future potential in allowing second- or third-party development of connectors, and there are a number of interesting business models that could support this. For the time being, however, Keepit does the development.
Blocks & Files: What’s involved in writing a DSL-based SaaS app connector?
Jakob Østergaard: Where “classical” connector development involves writing a lot of code in a general-purpose programming language, the DSL, being a “domain-specific language”, lends itself better to the specific job of connector development.
For example, where typical programming languages (like C++ or Python) are strictly imperative (“if A then do B”) and other languages (like SQL or Prolog) are declarative (“find all solutions satisfying X criteria”), we have been able to mix and match the paradigms we needed most into our DSL.
Therefore, there are places where we wish to describe relationships (in a declarative fashion) and have the system “infer” the appropriate actions to take during backup, and there are other places where we write more classic imperative code. Having a language that naturally caters to the problem domain at hand has the potential to offer significant productivity benefits in developing new connectors.
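As a rough illustration of the difference – written in plain Python rather than Keepit’s unpublished DSL, with every client, store and resource name invented – the imperative style spells out each step, while the declarative style describes the resources and their relationships and leaves an engine to work out the concrete backup actions:

```python
# Hypothetical sketch only: the client, store and resource names are invented
# for illustration and do not reflect Keepit's actual DSL or any real SaaS API.

# Imperative style: "if A then do B" - every step is written out explicitly.
def backup_boards_imperative(client, store):
    for board in client.list_boards():            # fetch every board
        data = client.export_board(board["id"])   # pull its contents
        store.save(board["id"], data)             # write it to the backup

# Declarative style: describe what exists and how the pieces relate; an engine
# infers the concrete actions (ordering, batching, parallelism) at run time.
BACKUP_SPEC = {
    "board":      {"list": "list_boards", "fetch": "export_board"},
    "attachment": {"list": "list_attachments", "fetch": "download_attachment",
                   "belongs_to": "board"},        # a relationship, not a step
}

def run_spec(spec, client, store):
    # A toy "engine": it walks the declared resources and performs the fetches.
    # A real engine would also use the declared relationships to decide order.
    for resource, rules in spec.items():
        for item in getattr(client, rules["list"])():
            data = getattr(client, rules["fetch"])(item["id"])
            store.save(f"{resource}/{item['id']}", data)
```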
This is pioneering work that we started more than a year ago. New technology takes time to mature, and we are currently getting ready to release the first workloads built using this new technology. It will help us on our journey to support the hundreds of workloads that the average business is already using in the cloud today, and we are very excited to launch the first of these new workloads a little later this year.