If you look at the trending topics in information technology, one theme keeps surfacing: isolation!

What is the problem we are trying to solve?

While camping this year, I rebuilt the process of how the documentation for a particular piece of software gets written. Instead of a bulky, non-diffable proprietary binary file, I wanted to write in Markdown and manage changes with Git alongside the source code.

So here is what I came up with:

  • A script that holds knowledge about the directories representing chapters, merges them using cat, and calls…
  • Pandoc to convert from markdown to PDF using…
  • LaTeX utilizing…
  • various TeX packages.
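
A minimal sketch of such a build script, with made-up directory and file names, could look like this:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical layout: chapters/01-intro/, chapters/02-setup/, ...
# each holding numbered Markdown files; cat merges them in order.
cat chapters/*/*.md > book.md

# Pandoc converts the merged Markdown to PDF via LaTeX.
pandoc book.md \
  --from markdown \
  --pdf-engine=xelatex \
  --toc \
  -o documentation.pdf
```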

This works like a charm, provided you have each component in place with the correct version; otherwise everything just implodes.

Having all the dependencies and their versions defined, reproducible to deploy, and isolated at runtime would take away a lot of that pain.

Containerization

One way to achieve isolation is containers. Compared to a virtual machine (VM), which additionally isolates the operating system itself, a container can be seen as the lightweight variant that only virtualizes a process. But there is more. With containers comes a registry, which holds a certain baseline, like an operating system (OS) image or some scratch/distroless files. This allows a container to be described in a plain text file: at build time, dependencies are installed on top of that baseline; when run, the container image is pulled from the registry, if not in place yet, and the desired application, just another dependency to the container, is started.
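
That plain text file is, in Docker's case, a Dockerfile. A minimal sketch for the documentation pipeline above, with an assumed base image and paths, might look like this:

```dockerfile
# Baseline pulled from a registry (base image and tag are assumptions).
FROM debian:bookworm-slim

# Dependencies are installed into the image, isolated from the host.
RUN apt-get update && \
    apt-get install -y --no-install-recommends pandoc texlive-xetex && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /docs
COPY chapters/ ./chapters/

# The application is just another layer: merge the chapters, render the PDF.
CMD ["sh", "-c", "cat chapters/*/*.md > book.md && pandoc book.md --pdf-engine=xelatex -o documentation.pdf"]
```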

Wikipedia describes it as:

…operating system-level virtualization or application-level virtualization over multiple network resources so that software applications can run in isolated user spaces called containers in any cloud or non-cloud environment, regardless of type or vendor…

Wikipedia: Containerization_(computing)

But containers are not the only front on which this issue is being tackled.

Self-contained applications

Presumably inspired by Go, dotnet, for instance, has for a while now allowed building a self-contained single executable. This means that the application and all of its dependencies are packaged into one file.
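
A minimal sketch of such a publish, assuming a hypothetical project called MyApp and a Linux target:

```bash
# Self-contained, single-file publish (project name and runtime
# identifier are assumptions).
dotnet publish MyApp.csproj \
  -c Release \
  -r linux-x64 \
  --self-contained true \
  -p:PublishSingleFile=true
```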

And even before that, the classic .NET Framework shipped with abilities for dependency isolation. The runtime came with the base class library, which mostly got installed into the Global Assembly Cache (GAC). Dependencies could be deployed there for global usage. Conversely, you could deploy dependencies along with your application, namely into its bin directory. When resolving a strong-named assembly, the runtime checked the GAC first and preferred that copy; everything else was probed for in the local bin directory.
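
Deploying into the GAC was done with tools like gacutil; a sketch with a hypothetical assembly name:

```bash
# Install a strong-named assembly into the GAC (classic .NET Framework;
# the assembly name is hypothetical).
gacutil /i MyCompany.Logging.dll

# List the registered versions under that name.
gacutil /l MyCompany.Logging
```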

Because xcopy deployment was and still is a thing, and becomes much easier the fewer files there are, there were approaches even back in the day to produce a single executable by simply embedding all dependencies as resources.
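
A related classic technique was merging the assemblies with ILMerge rather than embedding them as resources; a sketch with hypothetical file names:

```bash
# Merge the main executable and its dependencies into one file
# (ILMerge, a classic .NET Framework tool; file names are hypothetical).
ilmerge /out:MyApp.Standalone.exe MyApp.exe Newtonsoft.Json.dll
```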

Ahead-of-time compilation (AOT)

…is the act of compiling an (often) higher-level programming language into an (often) lower-level language before execution of a program, usually at build-time, to reduce the amount of work needed to be performed at run time…

Wikipedia: Ahead-of-time compilation

As of 2023, dotnet even offers AOT compilation to platform-native code, including linked dependencies.
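
Again a minimal sketch, assuming the same hypothetical MyApp project:

```bash
# Native AOT publish (requires a recent .NET SDK; the runtime
# identifier is an assumption).
dotnet publish MyApp.csproj \
  -c Release \
  -r linux-x64 \
  -p:PublishAot=true
```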

This means even more isolation and, in comparison to a just-in-time (JIT) compile, of course performance advantages, especially at startup.

Flipping the coin

If you followed the information security news over the last few weeks, there were two events that you did not miss.

  1. CVE-2023-4863, a heap buffer overflow in libwebp, a library to encode and decode images in the WebP format, used by Google Chrome, all other Chromium-based browsers (so all but Mozilla?), all Electron-based applications (yes, that is 1Password, Signal, Postman, Slack, Microsoft Teams, Microsoft Visual Studio Code, Logitech Options+ and so on… ), and a handful of other apps.
  2. CVE-2023-38545, a heap overflow in the SOCKS5 implementation of curl, which is used almost everywhere and by everyone.

Happily, the curl bug is in an area where most users weren't affected; SOCKS5 is used far more rarely than HTTP.

But both show how widespread the impact of such a security event is. Millions of applications and devices need to be patched, fast.

In 2021, the CVSS 10.0 critical CVE-2021-44228 (Log4Shell) was discovered in Apache Log4j2, and it taught us that patching takes time. A lot of time. Maybe forever?

Log4j is one of the most serious software vulnerabilities in history. Log4j is an “endemic vulnerability” and unpatched versions of Log4j will remain in systems for years to come, perhaps a decade or longer. The event is not over. Risk remains and network defenders must stay vigilant

U.S. Department of Homeland Security

They really call it an “endemic vulnerability”!

Reduced patching effort - shorter time vulnerable

If an application does not live in isolation (we just discussed the issues and risks of isolation) and uses a shared library, multiple applications running on the same host, or components running in the same container, can profit from patching in just one place, as the sketch after the following list shows.

  • As an ISV, you could avoid creating a new version just to ship an updated dependency with no changes to your code base, and skip the ceremony that comes with releasing a new build. An informational e-mail to customers running your software, with the advice to update the library, will probably do.

  • As a customer, this most likely means shorter windows of vulnerability, as you do not have to wait until the vendor has built and shipped its update; you can update the dependency yourself.
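
On a typical Linux host, a distribution package update is exactly this kind of single-place patch; a sketch for Debian/Ubuntu (package names and library paths vary by distribution):

```bash
# One patched shared library fixes every application linked against it
# (libwebp7 is the Debian/Ubuntu package name; others differ).
sudo apt-get update
sudo apt-get install --only-upgrade libwebp7

# Processes that still map the old library need a restart
# (the path is distribution-specific).
sudo lsof /usr/lib/x86_64-linux-gnu/libwebp.so.7
```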

Sounds like some pros on the list…

There are always two sides to the same coin! Nothing is free. There is no one-size-fits-all solution. Think about which toll you are willing to pay, and why.