Bug #20998

closed

Compliance percentage computation in ComplianceLevel is not correct, and its performance is poor

Added by Nicolas CHARLES over 2 years ago. Updated over 2 years ago.

Status: Released
Priority: N/A
Category: Web - Compliance & node report
Target version:
Severity:
UX impact:
User visibility:
Effort required:
Priority: 0
Name check: To do
Fix check: Checked
Regression:
Description

Compliance percentage computation in ComplianceLevel is not correct, which causes several surprising results in the Rules page and in API responses.
A rule with 200 compliant, 199 repaired, 1 error, 1 non compliant and 1 missing reports will show 47% success, 50% repaired, and 1% for each of the others (with a precision of 0 digits after the decimal point), whereas the exact values are 49.75%, 49.50% and 0.25% each.
As we can see here ( https://stackoverflow.com/questions/13483430/how-to-make-rounded-percentages-add-up-to-100 ), rounding percentages so that they add up to 100 is a complex topic.

Our constraints

We must not drop small values (one error in 10,000 should still show up), so we need a minimal value based on the precision
We want to keep the ordering of the compliance values: showing more repaired than success when the opposite is true is really surprising
The result must sum to 100

It seems there is no ideal solution for this; maybe a variant of one of the approaches described in https://stackoverflow.com/questions/13483430/how-to-make-rounded-percentages-add-up-to-100#34959983 would do the trick
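As an illustration, here is a minimal sketch of a largest-remainder style rounding that tries to satisfy the three constraints above (roundPercents and its shape are illustrative only, not Rudder's API): it works in integer units of 10^-precision percent, gives every non-zero count at least one unit so that small values stay visible, hands the missing units to the largest fractional remainders, and, if the one-unit floor pushed the total above 100, takes units back from the smallest remainders.

```scala
object PercentRounding {

  // Round raw counts to percentages that sum to 100, keep small non-zero values
  // visible, and preserve the relative order of the inputs.
  def roundPercents(counts: Seq[Long], precision: Int = 0): Seq[Double] = {
    val total = counts.sum
    if (total <= 0) counts.map(_ => 0d)
    else {
      val scale     = math.pow(10, precision)
      val unitTotal = math.round(100 * scale)        // e.g. 100 units when precision = 0
      val exact     = counts.map(c => c.toDouble * unitTotal / total)

      // floor each part, but keep at least one unit for any non-zero count
      val parts = counts.zip(exact).map { case (c, e) => if (c > 0) math.max(1L, e.toLong) else 0L }.toArray

      // indexes sorted by fractional remainder, largest remainder first
      val byRemainder = exact.zip(parts).zipWithIndex
        .sortBy { case ((e, p), _) => -(e - p) }
        .map { case (_, i) => i }

      var delta = unitTotal - parts.sum              // > 0: units to add, < 0: units to remove

      // classic largest-remainder step: give the missing units to the largest remainders
      for (i <- byRemainder) { if (delta > 0) { parts(i) += 1; delta -= 1 } }

      // if the one-unit floor pushed us above 100, take units back from the smallest
      // remainders, never going below the one-unit floor
      for (i <- byRemainder.reverse) { if (delta < 0 && parts(i) > 1) { parts(i) -= 1; delta += 1 } }

      parts.map(_ / scale).toSeq
    }
  }
}
```

On the example above, roundPercents(Seq(200, 199, 1, 1, 1)) gives 49, 48, 1, 1 and 1: the parts sum to 100, success stays above repaired, and the three small statuses remain visible.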

Side note on performance

The current implementation creates many small objects.
Most of the time, in API requests (so for the UI as well), we do withoutPending.computePercent().compliance:
withoutPending copies the object
computePercent creates lists and a CompliancePercent
and finally compliance only reads success + repaired + notApplicable + compliant + auditNotApplicable

We could skip the middlemen and do the computation directly, zeroing the pending and not creating the percent objects (see the sketch below), but in the end it doesn't make things much faster.
Indeed, computing compliance for 6500 compliance levels takes a whopping 149 ms, and cutting out the middlemen saves almost nothing.
Tuning a bit around the BigDecimal (having a default precision of 2 that we don't recompute) saves nearly 30%.

So in the end, it's really the BigDecimal that is expensive
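
For reference, here is a rough sketch of what the "skip the middlemen" variant could look like (ComplianceCounters and directCompliance are illustrative names, not Rudder's actual types): pending is left out of the total and the percentage is computed in one pass, with plain doubles here for brevity, without building lists or a CompliancePercent.

```scala
// Illustrative counter holder; the field names follow the statuses mentioned
// above, but this is not Rudder's actual ComplianceLevel class.
final case class ComplianceCounters(
    success:            Long,
    repaired:           Long,
    error:              Long,
    nonCompliant:       Long,
    missing:            Long,
    notApplicable:      Long,
    compliant:          Long,
    auditNotApplicable: Long,
    pending:            Long
)

// One-pass equivalent of withoutPending.computePercent().compliance: pending is
// zeroed by simply leaving it out of the total, and no intermediate object is built.
def directCompliance(c: ComplianceCounters): Double = {
  val total = c.success + c.repaired + c.error + c.nonCompliant + c.missing +
    c.notApplicable + c.compliant + c.auditNotApplicable
  if (total == 0) 0d
  else {
    val ok = c.success + c.repaired + c.notApplicable + c.compliant + c.auditNotApplicable
    ok.toDouble * 100 / total
  }
}
```

As the measurements above show, skipping the intermediate objects alone saves almost nothing; the bulk of the 149 ms comes from the BigDecimal arithmetic, which this sketch sidesteps only by using plain Double.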
