@Dhruv Agarwal I'm actually super glad you're real and not just slack spam (which has been hitting a number of slacks I'm in, in clearly automated ways) because the project you're working on is one I care a lot about.
Are there ways to add additional scoring metrics?
The DORA metrics are obviously the right place to start, but I find having some additional team- and organization-focused metrics gives me a better understanding of where to look to improve my DORA metrics:
(1) Mean time (or distribution of times) between raising a PR and starting code review on the PR.
In my experience, healthy teams talk internally about what they are doing, and are aware of what PRs are coming and are ready to jump on them. Unhealthy teams neither know nor care about each other's PRs.
(2) Mean time (or distribution of times) between code review approved and first deployment to customer.
Having watched a lot of companies, latency downstream from engineering is often huge and poorly tracked. Healthy organizations get merged code into customer hands quickly (even if that's a beta or early-access branch). Unhealthy organizations put working code through slow, human, interdisciplinary processes before it can reach a customer (e.g. marketing, customer support, and documentation teams don't start thinking about the deployment until engineering is "done").
(3) Fraction of commits that never PR and fraction of commits that never reach a customer.
We're all supposed to be doing lots of experiments, learning from customer feedback, and pivoting rapidly, so it's OK (in my experience) to have code that doesn't deploy to the true production branch. That said, in my experience organizations where lots of code never makes it to a customer aren't actually doing real experiments; they're just waffling and failing to make decisions, wasting resources in the process.
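For concreteness, here's a minimal sketch of how I'd compute all three from event timestamps — every field name and record here is made up for illustration; in practice you'd pull the real events out of your VCS, review, and deploy tooling:

```python
from datetime import datetime
from statistics import mean

# Hypothetical lifecycle events per PR; None means the event never happened.
prs = [
    {"opened": datetime(2024, 5, 1, 9), "first_review": datetime(2024, 5, 1, 11),
     "approved": datetime(2024, 5, 1, 15), "deployed": datetime(2024, 5, 2, 10)},
    {"opened": datetime(2024, 5, 3, 14), "first_review": datetime(2024, 5, 4, 9),
     "approved": datetime(2024, 5, 4, 16), "deployed": None},
]
# Hypothetical per-commit flags.
commits = [
    {"sha": "a1", "in_pr": True, "reached_customer": True},
    {"sha": "b2", "in_pr": True, "reached_customer": False},
    {"sha": "c3", "in_pr": False, "reached_customer": False},
]

def mean_hours(pairs):
    """Mean gap in hours over (start, end) pairs where both events happened."""
    gaps = [(end - start).total_seconds() / 3600
            for start, end in pairs if start and end]
    return mean(gaps) if gaps else None

# (1) PR opened -> first code review
time_to_review = mean_hours((p["opened"], p["first_review"]) for p in prs)
# (2) review approved -> first customer deployment
time_to_deploy = mean_hours((p["approved"], p["deployed"]) for p in prs)
# (3) fraction of commits that never PR / never reach a customer
never_pr = sum(not c["in_pr"] for c in commits) / len(commits)
never_customer = sum(not c["reached_customer"] for c in commits) / len(commits)
```

Swapping `mean` for `statistics.quantiles` gets you the distribution view, which I find more honest than a mean since one stuck PR can dominate the average.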
The main thing I like about these metrics is they are 100% team health/organizational health measures, and if you try to "game the numbers" you're almost certainly going to end up genuinely improving the health of your team and organization.
The challenge with them is they can be tricky to measure: you often need to add extra tags or instrumentation just to capture when some of these events happen.
Add in some of the statistical "high performing teams/low performing teams" industry-wide numbers like the ones LinearB has started publishing, and you've got some really useful insights into team and organization health, which I find is the best way to think about improving DORA metrics.