
R Validation Hub’s Risk Metric Application and Risk Score – Mini Series Part 1

September 8, 2023 | Blog

The R Validation Hub – a working group established within the R Consortium to support the adoption of R within a biopharmaceutical regulatory setting – held a two-part mini-series about their {riskmetric} package and {riskassessment} application. 

The full talk is available here. Part 2 is available here.

In Part 1, the R Validation Hub team discussed how to define risk in the context of software quality, and why understanding the intended use of the software is equally important. The {riskmetric} package addresses the need to assess the quality of R packages, helping teams judge whether a package meets the standards required in a regulated setting.

{riskmetric} isn’t just a single score; it provides a well-defined workflow (reference a package, assess it against a set of metrics, then score the results) and offers insight into its internals so users can understand how each score is derived.
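The workflow above can be sketched with {riskmetric}'s documented `pkg_ref()` / `pkg_assess()` / `pkg_score()` pipeline; this is a minimal sketch assuming the package is installed from CRAN and network access is available, not output from the talk:

```r
# Minimal sketch of the {riskmetric} workflow: reference, assess, score.
library(dplyr)       # provides the pipe used below
library(riskmetric)

pkg_ref("riskmetric") %>%   # 1. create a reference to the package
  pkg_assess() %>%          # 2. run each metric's assessment
  pkg_score()               # 3. convert assessments into numeric scores
```

The resulting scores can then be weighted and combined according to an organization's own risk policy.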

Mapping the Future – Roadmap:

The {riskmetric} package is under active development. The major features on the upcoming roadmap include:

  • Ease of Use: The focus is on enhancing user experience. A more intuitive interface coupled with informative messages and functions to generate straightforward reports is on the horizon.
  • Metric Completion: Broadening the set of metrics derived from the various sources of package metadata.
  • Optional Third-party Metric Inclusion: An API supporting metrics that rely on additional packages, so users can opt in to them.
  • Cohorts: Evaluating the risk associated with a group of packages, treating them as a unified entity.
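Cohort-level scoring is still a roadmap item, but a rough approximation is possible today by scoring each package in a group and inspecting the results together. The sketch below assumes {riskmetric} accepts a character vector of package names in `pkg_ref()`; the cohort membership and any aggregation policy are illustrative, not the package's method:

```r
# Hedged sketch: score several packages as a group (assumes {riskmetric}
# and {dplyr} are installed and CRAN metadata is reachable).
library(dplyr)
library(riskmetric)

cohort <- c("dplyr", "ggplot2", "tidyr")   # example cohort of packages

scores <- pkg_ref(cohort) %>%   # references for each package in the cohort
  pkg_assess() %>%
  pkg_score()

scores   # one row of metric scores per package; aggregate per your own policy
```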

Metrics aren’t just numbers; they have to be relevant and well designed. In the talk, the team shared guidelines and best practices for proposing or designing package metrics, illustrated with examples.

Introduction of {riskscore} 

The team introduced {riskscore}, a repository that stores the results of {riskmetric} runs across CRAN. It is envisioned as a community resource with several aims:

  • Contextual Scoring: Helping users decipher scores, distinguishing between what’s deemed “good” or “bad.”
  • Benchmarking: Enabling development teams to benchmark scoring weight algorithms with historical results.
  • Trend Analysis: Building a longitudinal dataset for analyzing package quality and risk over time.