Architecture #12621 (closed)

Explore alternative format for compliance table

Added by François ARMAND over 6 years ago. Updated over 6 years ago.

Status: Released
Priority: N/A
Category: Performance and scalability

Description

Today, we use a `nodecompliance` table where the compliance details are stored in a `text` column containing actual JSON. The idea was to store as much information as possible in a format that is very amenable to evolution.

Now that we actually need to use that data, we see three problems:

- 1/ the least serious one is that we store intermediate compliance as `CompliancePercent` plus the total number of reports. The idea was that we would be able to reconstruct the source data, since we store everything down to component values and messages, so it seemed better to have a preprocessed format in the middle, usable as-is and human readable. In fact, it is better to have precise data at each level: if you want to aggregate several compliance entries, you either need to process the whole data structure (inefficient and against optimisation / server-side aggregation), or you need to make approximations from the percentage and the total number of nodes.

- 2/ the second one is that compliance details take a lot of space, in the range of 100-500kB per entry. A node generates around 300 entries (288) per day, so for a week on 100 nodes (quite small), at 150kB per compliance entry, we have to deal with roughly 30GB of data (288 × 7 days × 100 nodes × 150kB). It takes a lot of space, and moving that much data between PostgreSQL and Rudder for processing takes a lot of time (even if it were only bounded by the network).

- 3/ we are stuck with PostgreSQL 9.2 by default for quite some time, since it's what comes with CentOS/RHEL 7, and there is no simple way to upgrade to a more recent PostgreSQL (simple as in "no need to hand-migrate data/config files/etc. and have everything work in Rudder").

All these considerations lead us to think that we may need to change the data format of node compliance: perhaps have two tables, one for "archiving" (the current one with all data), and one for querying, in a different format. Both would have separately configurable clean-up periods.

These problems are related to performance bottlenecks discovered in the "reporting" plugin (for ex: #12620 and related tickets).


Subtasks 2 (0 open, 2 closed)

Architecture #12648: Add migration script for table nodecompliancelevels (Released, François ARMAND)
Bug #12661: Rudder fails to boot because cleaning TTL property is added after rudder reboot by migration script (Released, Vincent MEMBRÉ)

Related issues 1 (0 open, 1 closed)

Related to Rudder - Bug #14007: Indexes on nodecompliancelevels table are not valid and table ArchivedReportsExecution is never used (Released, François ARMAND)
#2

Updated by François ARMAND over 6 years ago

So, I explored two alternatives to see what could work.

1/ The first one is just a variation of JSON, and would perhaps allow us to keep just one table. In place of the JSON details, I used a new data type:

```
-- one compliance entry per (rule, directive), levels packed in an int[]
CREATE TYPE Compliance AS (
    ruleId          text
  , directiveId     text
  , complianceLevel int[]
);

-- one row per node run, all details in a single composite array
CREATE TABLE nodeComplianceComposite (
    nodeId       text NOT NULL CHECK (nodeId <> '')
  , runTimestamp timestamp with time zone NOT NULL
  , details      Compliance[] NOT NULL
  , PRIMARY KEY (nodeId, runTimestamp)
);
```
So here, for the details we have an array of (ruleId, directiveId, compliance levels as an int[]). It's much more compact than JSON, and the performance is in the domain of "not amazing, but we will be able to do something with it".
But there are some caveats (see the query sketch below):

- JDBC does not know about arrays of custom types. So we need to expand the custom type (with `unnest`) to make the data go through the JDBC driver. That's cumbersome and removes a lot of the appealing simplicity of the solution.
- I wasn't able to unnest data with a nested custom type in it, meaning I was forced to use an int[] where a `ComplianceLevel` data type would have been better.
- Everything in the SQL requests, even with PostgreSQL internal functions, cries "don't do things like that". Any request quickly grows quite complicated, with a lot of subqueries.
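
For illustration, a minimal sketch of the kind of query this implies, assuming the `nodeComplianceComposite` table above and PostgreSQL 9.2 (hence no LATERAL); the per-rule aggregation itself is hypothetical:

```
-- Sum all compliance level counts per rule: the Compliance[] array must be
-- expanded twice (once for the composite, once for the int[]), each time
-- in its own subquery, before any aggregation can happen.
SELECT ruleId, sum(level) AS total_reports
FROM (
  SELECT (c).ruleId AS ruleId, unnest((c).complianceLevel) AS level
  FROM (SELECT unnest(details) AS c FROM nodeComplianceComposite) t1
) t2
GROUP BY ruleId;
```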

2/ The second solution just uses a good ol' normalized table, with everything expanded:

```
CREATE TABLE nodecompliancedata (
    nodeId       text NOT NULL CHECK (nodeId <> '')
  , runTimestamp timestamp with time zone NOT NULL
  , ruleId       text NOT NULL CHECK (ruleId <> '')
  , directiveId  text NOT NULL CHECK (directiveId <> '')
  -- one counter column per report status
  , pending            int DEFAULT 0
  , success            int DEFAULT 0
  , repaired           int DEFAULT 0
  , error              int DEFAULT 0
  , unexpected         int DEFAULT 0
  , missing            int DEFAULT 0
  , noAnswer           int DEFAULT 0
  , notApplicable      int DEFAULT 0
  , reportsDisabled    int DEFAULT 0
  , compliant          int DEFAULT 0
  , auditNotApplicable int DEFAULT 0
  , nonCompliant       int DEFAULT 0
  , auditError         int DEFAULT 0
  , badPolicyMode      int DEFAULT 0
  , PRIMARY KEY (nodeId, runTimestamp, ruleId, directiveId)
);
```

That's not pretty, and any query/update is a horror in terms of the number of fields. Plus, we are going to have a lot of rows - again. Updates for one node's compliance are slower, too.
On the bright side, that format is very well suited to doing the whole calculus on the PostgreSQL side, so we just get back in Rudder 30 `ComplianceLevel` values. That's quick, even on a slow network. And the whole query is quite fast (on the order of 100-400 ms for 100 runs on 100 nodes).

With that solution, we will for sure also need a dedicated table. We could keep the other (current) nodecompliance one to get detailed information for a given node/run (i.e. to "zoom in" to understand some compliance value reported in the aggregation graph).
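
A sketch of the kind of server-side aggregation this format enables, using the table above (the time window and grouping are hypothetical):

```
-- Aggregate a week of runs down to one ComplianceLevel per (rule, directive):
-- only the summed counters cross the network, not the raw rows.
SELECT ruleId, directiveId,
       sum(pending)  AS pending,
       sum(success)  AS success,
       sum(repaired) AS repaired,
       sum(error)    AS error
       -- ...same pattern for the ten remaining status columns
FROM nodecompliancedata
WHERE runTimestamp >= now() - interval '7 days'
GROUP BY ruleId, directiveId;
```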

#3

Updated by Vincent MEMBRÉ over 6 years ago

First remark: in both ideas, I think you can remove the directiveId, since we only filter on rules for the compliance graph (this would reduce the number of array entries to n rules instead of n rules × x directives)

In the first solution, where we would replace the current details with a more efficient format, you lose the details of the compliance which go down to component and value (Rule > Directive > Component > Value), so I don't think we should choose that solution.

I would be OK with doing this if it was another column ("details" would still be a huge JSON, and we would have a new column "summary" containing only the compliance levels, as an array of type Compliance).

But as you said above, this solution has quite some drawbacks and may not be very interesting.

The second solution is fine by me, but I would not make a column for each compliance level; I would rather use either an array of int like in solution one, or keep a text column containing the JSON array of compliance that we could send directly to the interface.

So I would suggest a table like this:

```
CREATE TABLE nodecompliancedata (
    nodeId       text NOT NULL CHECK (nodeId <> '')
  , runTimestamp timestamp with time zone NOT NULL
  , ruleId       text NOT NULL CHECK (ruleId <> '')
  , compliance   int[]  -- or text
  , PRIMARY KEY (nodeId, runTimestamp, ruleId)
);
```

#4

Updated by Vincent MEMBRÉ over 6 years ago

One other question: should we have a batch that initializes the new table by translating data from nodeCompliance to it?

#5

Updated by François ARMAND over 6 years ago

Vincent MEMBRÉ wrote:

> First remark: in both ideas, I think you can remove the directiveId, since we only filter on rules for the compliance graph (this would reduce the number of array entries to n rules instead of n rules × x directives)

I don't think so. Wanting a report on a specific directive seems like something everybody would want, because for most people, especially with techniques from the technique editor, the unit of "work" is the directive. A rule is a little too big in most cases.
As you said, there is a tradeoff, and we are growing the table by a huge factor. So I'm interested in feedback here (@Benoit ?)

> In the first solution, where we would replace the current details with a more efficient format, you lose the details of the compliance which go down to component and value (Rule > Directive > Component > Value), so I don't think we should choose that solution.

> I would be OK with doing this if it was another column ("details" would still be a huge JSON, and we would have a new column "summary" containing only the compliance levels, as an array of type Compliance).

Do you see a reason to keep that level of detail? In the general case, and in the reporting plugin case? For now, I don't know of any, apart from the specific case where someone wants to look at the details of a past compliance result (most likely for debugging). But perhaps you see a use case in the reporting plugin? Again, Benoit, I welcome feedback here :)

> But as you said above, this solution has quite some drawbacks and may not be very interesting.

> The second solution is fine by me, but I would not make a column for each compliance level; I would rather use either an array of int like in solution one, or keep a text column containing the JSON array of compliance that we could send directly to the interface.

> So I would suggest a table like this: [...]

I did that at first (the array), but I thought that it would become quite hard to add/remove a field at some point without messing things up (it is quite easy to switch the order of elements, for example).
Moreover, it becomes harder to run aggregate functions on the array. It may be possible to do it (either via native PostgreSQL functions or unnest(array)), but it's just more tedious (see the sketch below).
I also thought of using a custom data type (as long as it's not in an array, it should be OK for the JDBC driver).
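
To illustrate the tedium, a minimal sketch of element-wise aggregation over an int[] column, assuming Vincent's table above (PostgreSQL 9.2 compatible; the two set-returning functions iterate in lockstep since they have the same length):

```
-- Sum each compliance counter (by array index) per rule.
SELECT ruleId, idx, sum(val) AS total
FROM (
  SELECT ruleId,
         generate_subscripts(compliance, 1) AS idx,  -- 1, 2, 3, ...
         unnest(compliance)                 AS val   -- matching values
  FROM nodecompliancedata
) t
GROUP BY ruleId, idx
ORDER BY ruleId, idx;
```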

But in fact, in each case, I'm not sure of the gain compared to plain old standard columns. OK, that's a lot of columns, but internally that doesn't change anything for PostgreSQL (it's really just the same thing as a custom data type, and almost the same thing as an array, but with fewer optimisations because of the dynamic length that we don't even use). So why not stick with the conventional SQL way?

The text/JSON solution is not possible, because it forbids any PostgreSQL-side processing before at least 9.4, so not before CentOS 8.

#6

Updated by François ARMAND over 6 years ago

With some more discussion on the subject, we will:

- try to minimize changes in 4.1 (of course),
- keep the nodecompliance table as it is, with the same logic and all,
- create a new table, nodecomplianceSOMETHING (data? levels?), with the interesting data in it,
- create a dedicated cleaner for that table, separate from the other compliance one, so that people are able to keep 100 days of nodecomplianceSOMETHING but only 1 day of details.

For now, we don't aggregate anything.

#7

Updated by Nicolas CHARLES over 6 years ago

I'm a bit concerned by the size of the entries you mention: 100-500kB per node per run.
It seems too big; PostgreSQL compresses these entries (in the table size, they appear as TOAST, and should not be very big). If compression cannot really shrink the compliance data, then we have a real issue: this table is significantly larger than ruddersysevents (which is around 150kB/node/directive), and we should fix this.
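
A way to check how much TOAST compression actually helps, using standard PostgreSQL size functions (table name from this ticket):

```
-- Compare the full on-disk footprint (heap + indexes + TOAST)
-- with the main heap alone; the difference is mostly TOASTed JSON.
SELECT pg_size_pretty(pg_total_relation_size('nodecompliance')) AS total_size,
       pg_size_pretty(pg_relation_size('nodecompliance'))       AS heap_only;
```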

#8

Updated by François ARMAND over 6 years ago

Nicolas CHARLES wrote:

> I'm a bit concerned by the size of the entries you mention: 100-500kB per node per run.
> It seems too big; PostgreSQL compresses these entries (in the table size, they appear as TOAST, and should not be very big).

It's not the size on disk, it's the size that goes through the network and arrives as text in Rudder to be parsed as JSON. I don't know the size on disk (but we already know it's big).

#9

Updated by Nicolas CHARLES over 6 years ago

François ARMAND wrote:

> Nicolas CHARLES wrote:
>
> > I'm a bit concerned by the size of the entries you mention: 100-500kB per node per run.
> > It seems too big; PostgreSQL compresses these entries (in the table size, they appear as TOAST, and should not be very big).
>
> It's not the size on disk, it's the size that goes through the network and arrives as text in Rudder to be parsed as JSON. I don't know the size on disk (but we already know it's big).

Ha, yes, the data over the network is awfully large.
Poor man's solution: we could reduce the text size of the JSON.
Excerpt from an existing entry:

   "rules":[
     {
       "ruleId":"hasPolicyServer-root",
       "serial":13,
       "compliance":{
         "success":87.5,
         "notApplicable":12.5
       },
       "numberReports":8,
       "directives":[
         {
           "directiveId":"common-root",
           "compliance":{
             "success":87.5,
             "notApplicable":12.5
           },
           "numberReports":8,
           "components":[
             {
               "componentName":"Log system for reports",
               "compliance":{
                 "success":100.0
               },
               "numberReports":1,
               "values":[
                 {
                   "value":"None",
                   "compliance":{
                     "success":100.0
                   },
                   "numberReports":1,
                   "unexpanded":"None",
                   "messages":[
                     {
                       "message":"Logging system for report centralization is already correctly configured",
                       "type":"Success" 
                     }
                   ]
                 }
               ]
             },
             {
               "componentName":"CRON Daemon",
               "compliance":{
                 "success":100.0
               },
               "numberReports":1,
               "values":[
                 {
                   "value":"None",
                   "compliance":{
                     "success":100.0
                   },
                   "numberReports":1,
                   "unexpanded":"None",
                   "messages":[
                     {
                       "message":"Cron daemon status was correct",
                       "type":"Success" 
                     }
                   ]
                  }
```

None of the keys in the JSON need to be so long, and we do not need the spacing (well, at least a computer wouldn't need it) - I suspect they make up more than 80% of the data sent.
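
For a sense of scale, the same kind of fragment with shortened keys and no whitespace (the short key names here are purely illustrative, not a proposed schema):

```
{"r":[{"id":"hasPolicyServer-root","s":13,"c":{"ok":87.5,"na":12.5},"n":8,"d":[...]}]}
```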

#11

Updated by François ARMAND over 6 years ago

Some quantification of the table:

- (3 uuids + date + 14 ints) per row * 20 runs per hour * 24 hours ≈ 60kB per day for one (node, rule, directive);
- for 100 nodes * 15 rules with 10 directives each, that is 900MB per day;
- for 1 month, 26GB.

If we aggregate by hour, we need ~1.3GB per month. And by day, 55MB per month.

If we had decided not to keep the directive level (only rule granularity), all numbers would be divided by 10 (so 90MB per day for 100 nodes, 2.6GB for a month).
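
As an illustration of the "aggregate by hour" option, a hedged sketch over the new table (the nodecompliancelevels name comes from the subtask; the column list is abridged):

```
-- Collapse the ~20 runs per hour into a single row per hour:
SELECT nodeId, ruleId, directiveId,
       date_trunc('hour', runTimestamp) AS runHour,
       sum(success) AS success, sum(repaired) AS repaired, sum(error) AS error
       -- ...same for the remaining status columns
FROM nodecompliancelevels
GROUP BY nodeId, ruleId, directiveId, date_trunc('hour', runTimestamp);
```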

#12

Updated by François ARMAND over 6 years ago

  • Status changed from In progress to Pending technical review
  • Assignee changed from François ARMAND to Vincent MEMBRÉ
  • Pull Request set to https://github.com/Normation/rudder/pull/1934
#13

Updated by Rudder Quality Assistant over 6 years ago

  • Assignee changed from Vincent MEMBRÉ to François ARMAND
#14

Updated by François ARMAND over 6 years ago

  • Status changed from Pending technical review to Pending release
#16

Updated by Benoît PECCATTE over 6 years ago

  • Status changed from Pending release to Released

This bug has been fixed in Rudder 4.1.12, 4.2.6 and 4.3.1 which were released today.

#17

Updated by Nicolas CHARLES almost 6 years ago

  • Related to Bug #14007: Indexes on nodecompliancelevels table are not valid and table ArchivedReportsExecution is never used added
