The pre-integration checks we gained from the migration to Gitlab were a big step forward for our CI infrastructure. One weak spot remains though: monitoring the state of a large set of repositories. Here’s a possible way around that.

CI status overview

One thing we lost in the move away from Jenkins is the CI status overview dashboards for entire repository or product groups. Those were quite handy for release management, but also for mass edits or CI environment changes, such as those needed during the transition to Qt 6.

For the Qt 6 work we therefore ended up with various tracker tasks containing large, manually maintained tables about what is already building and what state the unit tests are in. That’s far from ideal.

Gitlab API access

When faced with an annoying manual task, the solution is of course obvious: automate it!

Gitlab has an extensive REST API, and there’s a readily available Python module that already implements all of that. Listing all projects, filtering them down to the ones I care about, retrieving their respective pipeline status and test reports, and printing out a summary of that is done in less than 30 lines of code.
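
For illustration, here’s a minimal sketch of that REST approach using the python-gitlab module. The instance URL, group name and branch are assumptions for the example, not necessarily what my actual script uses:

import gitlab

# Anonymous access is enough for reading public project data.
gl = gitlab.Gitlab("https://invent.kde.org")

group = gl.groups.get("frameworks")
for group_project in group.projects.list(get_all=True):
    # Group listings return lightweight objects; fetch the full project
    # to get access to its pipelines.
    project = gl.projects.get(group_project.id)
    # The newest pipeline on the given branch comes first.
    pipelines = project.pipelines.list(ref="master", per_page=1)
    status = pipelines[0].status if pipelines else "no pipeline"
    print(f"{project.path_with_namespace}: {status}")

Test reports can be retrieved per pipeline in a similar way. The several follow-up requests needed per project are also the reason this approach is slow.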

That takes a bit to run but produces exactly what I need, much faster and more reliably than going through the almost 80 KDE Frameworks repositories manually. I then proudly showed my work to Ben, who wasn’t exactly impressed…

Be kind to the infrastructure

So, before we continue: just because something is technically possible doesn’t mean it’s also a good idea to do it. Automated access to KDE’s infrastructure in any form can cause unwanted and unexpected costs or side-effects, so please always consult the KDE sysadmin team before doing something like this.

If you are producing hundreds of REST requests or a query that takes half a minute to complete, that is a strong indicator of something you shouldn’t be doing, but there are less obvious ways to cause unreasonable infrastructure load as well, such as raw file access.

So, always ask first.

Gitlab GraphQL access

Ben however pointed me to an interesting alternative to the REST interface, Gitlab’s GraphQL API. GraphQL lets you formulate elaborate queries for the server to run and specify exactly which data you want in the result. That’s perfect for what we need here: it ends up being just a single request to the server, with overall much faster results.

Let’s look at a basic example to list the status of the last CI pipeline run for an entire repository group in the release branch. You can run this directly in Gitlab’s built-in GraphQL explorer.

query {
  group(fullPath:"games") {
    projects {
      nodes {
        name
        fullPath
        pipelines(ref: "release/22.04", first: 1) {
          nodes { status }
        }
      }
    }
  }
}

This is actually not as hard to come up with as it might look at first sight:

  • The GraphQL explorer lists all functions and data types in the “Docs” panel on the right.
  • Properties you can query for a given object (like name and fullPath for a project object) are offered via Ctrl+Space auto-completion. Same goes for the sub-object iteration boilerplate (nodes).
  • Looking at the Gitlab API documentation can be useful to get an overview of Gitlab’s data model. That’s the same for both REST and GraphQL access.

The result of running this is somewhat deeply nested JSON, which follows the same structure as specified in the query. Iterating over that and transforming it into the desired output format can then be done by anything able to process JSON.
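
As an example, a minimal Python sketch for running the above query outside the GraphQL explorer and flattening the reply could look like this (the instance URL is an assumption, everything else matches the query above):

import requests

QUERY = """
query {
  group(fullPath: "games") {
    projects {
      nodes {
        name
        fullPath
        pipelines(ref: "release/22.04", first: 1) {
          nodes { status }
        }
      }
    }
  }
}
"""

# Gitlab's GraphQL endpoint takes the query as a JSON POST body.
reply = requests.post("https://invent.kde.org/api/graphql", json={"query": QUERY})
reply.raise_for_status()

for project in reply.json()["data"]["group"]["projects"]["nodes"]:
    pipelines = project["pipelines"]["nodes"]
    status = pipelines[0]["status"] if pipelines else "no pipeline"
    print(f"{project['fullPath']}: {status}")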

Putting this to use

Reality is slightly more complicated than this example though (e.g. skipped or still running pipelines have no status); you’ll find more complete scripts for this here.

These scripts aren’t meant as universal, ready-to-use tools; they merely solve two specific use-cases I have:

  • Identify repositories that have/don’t have a specific CI job set up, which is useful to find repositories with(out) Qt 6 support (see the sketch after this list).
  • Find build or test failures in entire repository groups, which is useful to monitor the impact of mass or environment changes.
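
For the first use-case, a hypothetical sketch could extend the query with the pipelines’ jobs and check for a specific job name. Group, branch, job name and instance URL here are made up for illustration:

import requests

QUERY = """
query {
  group(fullPath: "frameworks") {
    projects {
      nodes {
        name
        pipelines(ref: "master", first: 1) {
          nodes { jobs { nodes { name } } }
        }
      }
    }
  }
}
"""

WANTED_JOB = "qt6-build"  # hypothetical job name to look for

reply = requests.post("https://invent.kde.org/api/graphql", json={"query": QUERY})
reply.raise_for_status()

for project in reply.json()["data"]["group"]["projects"]["nodes"]:
    pipelines = project["pipelines"]["nodes"]
    jobs = pipelines[0]["jobs"]["nodes"] if pipelines else []
    present = any(job["name"] == WANTED_JOB for job in jobs)
    print(f"{project['name']}: {'has' if present else 'missing'} {WANTED_JOB}")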

If you have similar needs, this might at least provide some ideas or starting points. Nevertheless, the above warning still applies: this is for individual and manual use only, not for automated short-interval access or mass deployment.