Bug #5965
LDAP configuration is not optimized for Rudder use case (closed)
Description
The LDAP indexes were defined a long time ago. We need to check that they are still relevant to the way they are actually used.
Here, we want to check what configuration we should use to get the best performance with OpenLDAP.
Updated by Vincent MEMBRÉ almost 10 years ago
- Target version changed from 2.10.8 to 2.10.9
Updated by François ARMAND almost 10 years ago
- Assignee changed from François ARMAND to Benoît PECCATTE
Updated by Benoît PECCATTE almost 10 years ago
- Status changed from 8 to Pending technical review
- Assignee changed from Benoît PECCATTE to François ARMAND
- Pull Request set to https://github.com/Normation/ldap-inventory/pull/54
Updated by Benoît PECCATTE almost 10 years ago
More info :
I found 2 attributes that are frequently used (in every group query) but have no index defined: osName and nodeHostName.
Locks can limit the ability to insert in the db : https://web.stanford.edu/class/cs276a/projects/docs/berkeleydb/ref/lock/max.html
Since the only downside is memory consumption, I tried using 4k, 40k, 400k and 4M locks. With 4k and 40k, the memory consumption increase is not measurable. With 400k it goes up, and with 4M there is a 400 MB increase, giving an approximate lock memory cost of about 100 B per lock.
Thus I recommend using 40k locks.
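In DB_CONFIG terms, the recommendation would look like the sketch below (the same directives Nicolas confirms further down in this ticket; lockers and objects are raised alongside locks, as is usual for BerkeleyDB):

    # DB_CONFIG, in the OpenLDAP database directory
    # Raise the BerkeleyDB lock limits; at ~100 B per lock,
    # 40 000 locks only cost ~4 MB of RAM.
    set_lk_max_locks   40000
    set_lk_max_lockers 40000
    set_lk_max_objects 40000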
Updated by Benoît PECCATTE almost 10 years ago
Slapd.conf entries specific to performance.
All performance-related entries are specific to the backend. We are using the hdb backend; mdb (or lmdb) seems to perform better in every case, without any tuning.
For hdb, here are the performance-related values that can be tuned ( http://linux.die.net/man/5/slapd-bdb ). Note that we already have a 1 GB cache for the hdb engine.
- cachesize : on a DB with 1000 machines we have 15k entries in LDAP; ideally this should be something like 15k or more.
- cachefree : can be increased to reduce the time spent freeing cache entries, but not to very high values; we may use 10.
- checkpoint : the impact is not clear; it allows flushing the transaction log and seems to be recommended.
- dirtyread : we do not cancel transactions, so we won't get inconsistencies, and this may improve performance.
- idlcachesize : use 3 times cachesize, especially since we do frequent LDAP searches.
- linearindex : not necessary since the database fits in the engine cache.
- shm_key : no, on Linux it is better to use mmap than shm.
Some more information at http://www.zytrax.com/books/ldap/ch6/bdb.html
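Putting those notes together, the tuning section of slapd.conf could look like the sketch below (illustrative values; the checkpoint figures are an assumption, not something measured in this ticket):

    # slapd.conf, hdb backend section (illustrative values)
    cachesize    15000    # entry cache >= number of entries (~15k for 1000 nodes)
    idlcachesize 45000    # 3x cachesize, for frequent searches
    cachefree    10       # entries freed at a time when the cache is full
    checkpoint   1024 15  # flush the transaction log every 1 MB or 15 min (assumed values)
    dirtyread             # we never roll back transactions, so dirty reads are safe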
Updated by Benoît PECCATTE almost 10 years ago
With cachesize 15k I got faster node listing and new-node listing.
With idlcachesize I couldn't measure any performance difference.
And even with both set, there was no difference in memory consumption.
Updated by François ARMAND almost 10 years ago
- Assignee changed from François ARMAND to Jonathan CLARKE
Jon, I would like you to take a look at this one. Everything seems fine, but you have more background on the subject.
Also, it seems this can go into 2.10, as reverting is trivial and the benefits may be huge. Do you agree?
Updated by François ARMAND almost 10 years ago
- Subject changed from LDAP index are not optimized for Rudder use case to LDAP configuration is not optimized for Rudder use case
- Description updated (diff)
Updated by François ARMAND almost 10 years ago
For information, with ~2000 nodes / 30 directives / 30 rules / 10 groups, we get ~55 000 entries, the vast majority of which (54 000) are for inventories and nodes.
Note that in my example software is highly consistent between nodes; otherwise we could have ~500 entries per node for software alone.
Updated by François ARMAND almost 10 years ago
After a lot of testing, it seems that in our context indexes are detrimental to performance.
That is quite counter-intuitive, but searching for nodes by modificationTimestamp goes from ~500 ms to ~1.1 s with an index on that attribute.
Along the same lines, removing all indexes (save objectClass) gives performance comparable to keeping them.
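For reference, this is the kind of search that was timed; a sketch with an illustrative bind DN, base and filter value (all assumptions, to be adjusted to the actual install):

    # time a node search filtered by modificationTimestamp (illustrative DN/base)
    time ldapsearch -x -H ldap://localhost:389 \
      -D "cn=manager,cn=rudder-configuration" -w "$LDAP_PASSWORD" \
      -b "ou=Nodes,cn=rudder-configuration" \
      "(modificationTimestamp>=20150101000000Z)" dn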
Of course, keeping indexes does have bad consequences: writes are less efficient, and RAM is needed to keep the indexes at hand.
On the other hand, as explained before, our directory is tiny: 60k entries for 2000 nodes.
Even with a large overestimate of 500 entries per node, for 10 000 nodes (rules/directives/groups are negligible) we get 10 000 × 500 = 5 000 000 entries, so even at 1 kB per entry (which is HUGE), 5 GB of RAM is enough to have EVERYTHING in RAM.
So the best optimisation we can make is simply to keep everything in the LDAP cache: set cachesize to, say, 1 000 000 by default (enough for up to 1000 nodes), set idlcachesize to 3 000 000, remove all indexes, and document the requirements for big installations (>1000 nodes: run OpenLDAP on its own server with at least 1 GB of RAM plus 500 MB per 1000 nodes, and set cachesize to 1 000 000 × the number of thousands of nodes).
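In slapd.conf terms, that proposal boils down to something like the sketch below (the idea, not the final merged values):

    # slapd.conf sketch: rely on the entry cache instead of attribute indexes
    cachesize    1000000   # hold the whole directory in cache (default, up to ~1000 nodes)
    idlcachesize 3000000   # 3x cachesize
    index objectClass eq   # the only index kept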
Updated by François ARMAND almost 10 years ago
Some more data to help choose the default cachesize correctly:
In our tests, with highly homogeneous software, we have:
- 2000 nodes, 55k entries => ratio = 27.5
User 1 (server park):
- 278 nodes, 53k entries => ratio = 190
User 2 (desktop park):
- 87 nodes, 26k entries => ratio = 300
So by counting 500 entries per node, we are on the safe side.
Moreover, we don't want a default that consumes huge amounts of memory for nothing (OpenLDAP preallocates memory for its cache at boot, something like 256 B per entry).
So I would propose as default: cachesize = 250 000 (×3 for idlcachesize), so that we can certainly handle 500 nodes without changing any configuration, and with good confidence up to 1000-1500 nodes - the kind of size where users will start looking into what they should do to keep performance good.
Of course, all that should be documented in Rudder documentation.
With such a default, OpenLDAP will consume around 250 MB at boot time and grow up to what it needs (in my tests, around 550 MB for 55k entries - but that does not mean we should count 10 kB per entry; the computation is not that simple).
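As a sketch, the proposed defaults would read:

    # slapd.conf proposed defaults (sketch)
    cachesize    250000   # safe up to ~500 nodes, with good confidence to 1000-1500
    idlcachesize 750000   # 3x cachesize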
Updated by François ARMAND almost 10 years ago
- Pull Request changed from https://github.com/Normation/ldap-inventory/pull/54 to https://github.com/Normation/ldap-inventory/pull/59
The proposed, updated PR is here: https://github.com/Normation/ldap-inventory/pull/59
Updated by François ARMAND almost 10 years ago
Some more thought: we really want a small install of Rudder to fit in 1 GB of RAM. So perhaps we need to use cachesize 100 000 and explain to users how to scale it up.
Updated by Nicolas CHARLES almost 10 years ago
On a relatively large install (150 nodes), with large inventories and a long history, changing DB_CONFIG to
set_lk_max_locks 40000
set_lk_max_lockers 40000
set_lk_max_objects 40000
improves the use of Rudder significantly. The nodes page displays instantly rather than after a couple of seconds, which greatly improves the experience.
Updated by François ARMAND almost 10 years ago
- Status changed from Pending technical review to Discussion
- Target version changed from 2.10.9 to 2.11.6
Some more thinking/discussion on this one:
- we are going to merge it in 2.11 only.
- we need something to automatically adapt the cache size to the available RAM; as it stands, users with a lot of free RAM would never use it.
- at the very least, the defaults should allow up to 1000 nodes without having to think about cache optimization.
Updated by Jonathan CLARKE almost 10 years ago
We think we should auto-adjust OpenLDAP's cachesize to use about 10% of total RAM on each machine.
This gives us a non-intrusive consumption of RAM and covers most use cases. We estimate that the number of entries depends mostly on the number of different OS types/versions (on a park where all nodes run the same OS and version, this is 1), at approx 500 entries per OS, plus approx 100 entries per node.
Some examples:
1 GB of RAM => cachesize about 130 000 entries => with 10 different OS types/versions, roughly up to 1290 nodes
2 GB of RAM => cachesize about 260 000 entries => with 10 different OS types/versions, roughly up to 2600 nodes
4 GB of RAM => cachesize about 530 000 entries => with 10 different OS types/versions, roughly up to 5300 nodes
8 GB of RAM => cachesize about 1 000 000 entries => with 10 different OS types/versions, roughly up to 10500 nodes
According to our calculations, this is sufficient for most use cases.
We propose to add this automatic calculation to the rudder-slapd init script, so that it runs every time slapd is (re)started.
We could use a calculation similar to this (this deliberately uses integer division to round down):
echo $(($(cat /proc/meminfo | grep MemTotal | sed "s/[^0-9]//g") * 1024 / 800 / 100000 * 10000 ))
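For example, on a 1 GB machine MemTotal is about 1 048 576 kB, so this gives 1 048 576 × 1024 / 800 = 1 342 177, which the integer divisions round down to 130 000 - matching the first example above.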
To keep things configurable, I propose to add this to /etc/default/rudder-slapd:
# Specify the cachesize to set on the Rudder database for OpenLDAP
# "auto" means choose the best value depending on number of entries and total machine RAM
# "noauto" means don't touch it
# a number means use this value
RUDDER_CACHESIZE="auto"
And then of course implement in rudder-slapd init script to:
if [ "${RUDDER_CACHESIZE}" != "noauto" ]; then
  if [ "${RUDDER_CACHESIZE}" != "auto" ]; then
    # use the value provided
    CACHESIZE=${RUDDER_CACHESIZE}
  else
    # calculate the value we want from total RAM
    CACHESIZE=$(($(cat /proc/meminfo | grep MemTotal | sed "s/[^0-9]//g") * 1024 / 800 / 100000 * 10000))
  fi
  # set the cachesize; the patterns are anchored with ^ so that the first
  # substitution cannot also match the idlcachesize line
  sed -i "s/^cachesize[ \t].*$/cachesize ${CACHESIZE}/" /opt/rudder/etc/openldap/slapd.conf
  sed -i "s/^idlcachesize[ \t].*$/idlcachesize $((CACHESIZE * 3))/" /opt/rudder/etc/openldap/slapd.conf
fi
Obviously, this needs testing (I just wrote this in the ticket...) but it's a start.
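A quick sanity check after a restart could be (sketch, using the path from the snippet above):

    # verify the values the init script wrote
    grep -E "^(idl)?cachesize" /opt/rudder/etc/openldap/slapd.conf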
Updated by François ARMAND almost 10 years ago
This ticket only covers DB_CONFIG and slapd.conf.
Updated by Benoît PECCATTE almost 10 years ago
- Status changed from Discussion to In progress
- Assignee changed from Jonathan CLARKE to Benoît PECCATTE
Updated by Benoît PECCATTE almost 10 years ago
- Status changed from In progress to Pending technical review
- Assignee changed from Benoît PECCATTE to Matthieu CERDA
- Pull Request changed from https://github.com/Normation/ldap-inventory/pull/59 to https://github.com/Normation/rudder-packages/pull/600
Updated by Benoît PECCATTE almost 10 years ago
- Status changed from Pending technical review to Pending release
- % Done changed from 80 to 100
Applied in changeset packages:rudder-packages|commit:329c6af6e014471ea776ca94c1b01aa619bade29.
Updated by Matthieu CERDA almost 10 years ago
Applied in changeset packages:rudder-packages|commit:33f60e8f85bf5fa6c5cef48fa4109f03e7adf286.
Updated by Vincent MEMBRÉ almost 10 years ago
- Target version changed from 2.11.6 to 2.11.7
Updated by Vincent MEMBRÉ almost 10 years ago
- Status changed from Pending release to Released
This bug has been fixed in Rudder 2.11.7, which was released recently.
- Announcement 2.11
- Changelog 2.11
- Download information: https://www.rudder-project.org/site/get-rudder/downloads/
Updated by Florian Heigl about 9 years ago
Adjusting the cache size is so far not documented! :)
I had looked at the hit rate, which is still around 100%, but according to your numbers it is not guaranteed to be sufficient, or at least has little headroom (1GB @ 5300).
Updated by Jonathan CLARKE almost 9 years ago
- Related to Bug #7295: slapd core dumps on 1TB RAM added
Updated by Vincent MEMBRÉ over 8 years ago
- Related to User story #6106: Missing documentation about openldap performance added
Updated by Nicolas CHARLES about 8 years ago
- Related to Bug #6197: Spurious slapd.confe in folder /opt/rudder/etc/openldap/ added
Updated by Nicolas CHARLES over 4 years ago
- Related to Architecture #17128: review index for LDAP added