The recent Docker security issues have made us think twice about our use of centralised repositories. Although we have tools in place to screen open source packages and were unaffected, the incident prompted us to reflect on how we have chosen and adopted libraries in the past.
In the open source world, sites like GitHub, npm, and Packagist have made certain metrics very visible: you can’t help but notice the number of commits, releases, stars, or downloads at the top of the page. When picking a library, a higher number for each of them seems like a good thing. On closer inspection, though, they are all vanity metrics, because:
- They don’t decrease: The total number of commits against a package can never go down.
- They don’t evolve: Rates of growth (or deprecation) are not obvious – you just see the current number.
- They can be gamed: Test scripts and build processes may amplify a library’s download count dramatically, especially on more popular packages.
- They are subjective: Whilst “stars” seem positive, people may award them for different reasons, e.g. as virtual bookmarks rather than as a signal of quality.
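The “they don’t evolve” point is partly fixable by recording the numbers yourself and deriving a trend. A minimal sketch of the idea (the dates and star counts below are hypothetical, not real data for any package):

```python
from datetime import date

def weekly_growth_rate(snapshots):
    """Estimate average weekly growth from (date, count) snapshots.

    snapshots: list of (date, count) tuples in chronological order,
    e.g. star or download counts recorded periodically.
    Returns the average increase per week between the first and
    last snapshot.
    """
    (d0, c0) = snapshots[0]
    (d1, c1) = snapshots[-1]
    weeks = (d1 - d0).days / 7
    return (c1 - c0) / weeks

# Hypothetical star counts for a library, recorded two months apart:
history = [(date(2018, 1, 1), 1200), (date(2018, 3, 1), 1350)]
print(round(weekly_growth_rate(history), 1))  # ~17.8 stars/week
```

A flat or declining rate between snapshots tells you far more about a package’s trajectory than the headline total ever can.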
And yet, metrics and data are incredibly important in guiding better decisions. How does your team decide which libraries to use? Stars, downloads, a combination of both, or something else entirely?