What’s CDN

CDN is a service dedicated to receiving and storing CDS logs and artifacts.

CDN stores the list of all known log and artifact items in a PostgreSQL database and communicates with storage backends to store the contents of those items. These backends are called units, and there are two types of units in CDN:

Buffer units, which hold incoming items temporarily until they are complete.
Storage units, which hold completed items for the long term.

When CDN receives logs or files from a CDS worker, it first stores them in its buffer. Once an item has been fully received, it is moved to one of the configured storage units. If the CDN service is configured with multiple storage units, each unit periodically checks for missing items and synchronizes them from the other units.

The CDS UI and CLI communicate with CDN to retrieve entire logs or to stream them.

Supported units

Buffer units: Redis (type log), Local and NFS (type file).
Storage units: Local, Swift, S3, Webdav.

Like any other CDS service, CDN must authenticate against the CDS API with a consumer. The required scopes are Service, Worker and RunExecution.
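A minimal sketch of the corresponding service configuration, assuming the usual CDS service layout as generated by `engine config new cdn` (the token and URL values are placeholders):

```toml
[cdn.api]
    # consumer token carrying the Service, Worker and RunExecution scopes
    token = "your-signin-token"

    [cdn.api.http]
      # URL of the CDS API
      url = "http://localhost:8081"
```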

You can generate a configuration file with the engine binary:

$ engine config new cdn > cds-configuration.toml

You must have at least one storage unit, one file buffer and one log buffer to be able to run CDN.
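Putting that together, a sketch of the minimal unit layout (the unit names `redis`, `local-buffer` and `local-storage` are arbitrary; backend-specific sub-tables are omitted here and detailed in the examples that follow):

```toml
[cdn.storageUnits.buffers.redis]
    bufferType = "log"     # required log buffer

[cdn.storageUnits.buffers.local-buffer]
    bufferType = "file"    # required file buffer

[cdn.storageUnits.storages.local-storage]
    # at least one storage unit, with its backend-specific sub-table
```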

CDN artifact configuration

Storage Unit Buffer

You must configure storageUnits.buffers entries: one of type log and another of type file.

Type log:

    [cdn.storageUnits.buffers.redis]
        bufferType = "log"

        [cdn.storageUnits.buffers.redis.redis]
          host = "aaa@instance0,instance1,instance2"
          password = "your-password"

Type file:


    [cdn.storageUnits.buffers.local-buffer]
        # it can be 'log' to receive logs or 'file' to receive artifacts
        bufferType = "file"

        [cdn.storageUnits.buffers.local-buffer.local]
          path = "/var/lib/cds-engine/cdn-buffer"

To run multiple instances of the CDN service, you can use an NFS mount for the file buffer, for example:

    [cdn.storageUnits.buffers.buffer-nfs]
        bufferType = "file"

        [cdn.storageUnits.buffers.buffer-nfs.nfs]
          host = "w.x.y.z"
          targetPartition = "/zpool-partition/cdn"
          userID = 0
          groupID = 0

          [[cdn.storageUnits.buffers.buffer-nfs.nfs.encryption]]
            Cipher = "aes-gcm"
            Identifier = "nfs-buffer-id"
            # enter a 32-character key here
            Key = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
            Sealed = false

Storage Units Storage

A storage unit stores the completed items (logs and artifacts). You can use the Local, Swift, S3 or Webdav backends.

Example of storage unit local:



    [cdn.storageUnits.storages.local]
        # flag to disable backend synchronization
        disableSync = false

        # global bandwidth shared by the sync processes (in Mb)
        syncBandwidth = 128

        # number of parallel sync processes
        syncParallel = 2

        [cdn.storageUnits.storages.local.local]
          path = "/tmp/cds/local-storage"

          [[cdn.storageUnits.storages.local.local.encryption]]
            Cipher = "aes-gcm"
            Identifier = "cdn-storage-local"
            LocatorSalt = "xxxxxxxxx"
            SecretValue = "xxxxxxxxxxxxxxxxx"
            Timestamp = 0

Example of storage unit swift:


    [cdn.storageUnits.storages.swift]
        syncParallel = 6
        syncBandwidth = 1000

        [cdn.storageUnits.storages.swift.swift]
          address = "https://xxx.yyy.zzz/v3"
          username = "foo"
          password = "your-password-here"
          tenant = "your-tenant-here"
          domain = "Default"
          region = "XXX"
          containerPrefix = "prod"

          [[cdn.storageUnits.storages.swift.swift.encryption]]
            Cipher = "aes-gcm"
            Identifier = "swift-backend-id"
            LocatorSalt = "XXXXXXXX"
            SecretValue = "XXXXXXXXXXXXXXXX"
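No S3 example is shown above; the following is only a sketch modeled on the other backends. The S3-specific field names (`bucketName`, `region`, `accessKeyId`, `secretAccessKey`) are assumptions — generate the authoritative template with `engine config new cdn` and check the field names before use:

```toml
[cdn.storageUnits.storages.s3]
        syncParallel = 6
        syncBandwidth = 1000

        [cdn.storageUnits.storages.s3.s3]
          # field names below are illustrative; verify against
          # the template generated by `engine config new cdn`
          bucketName = "cdn-prod"
          region = "eu-west-1"
          accessKeyId = "XXXX"
          secretAccessKey = "XXXX"

          [[cdn.storageUnits.storages.s3.s3.encryption]]
            Cipher = "aes-gcm"
            Identifier = "s3-backend-id"
            LocatorSalt = "XXXXXXXX"
            SecretValue = "XXXXXXXXXXXXXXXX"
```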