Bug #8420
Status: Closed
rudder-slapd leaks memory on RedHat systems
Description
On RedHat systems (at least), rudder-slapd leaks memory:
18:37:01 root 30372 1 2.9 7694764 22-09:38:07 15:51:51 ? /opt/rudder/libexec/slapd -h ldap://0.0.0.0:389 -n rudder-slapd -f /opt/rudder/etc/openldap/slapd.conf
20:37:02 root 30372 1 2.9 8284588 24-11:38:08 17:31:56 ? /opt/rudder/libexec/slapd -h ldap://0.0.0.0:389 -n rudder-slapd -f /opt/rudder/etc/openldap/slapd.conf
07:22:01 root 30372 1 3.1 9071444 28-22:23:07 21:32:07 ? /opt/rudder/libexec/slapd -h ldap://0.0.0.0:389 -n rudder-slapd -f /opt/rudder/etc/openldap/slapd.conf
It doesn't seem to occur on Debian or SLES.
Happens on 2.11.19
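For the record, samples like the ones above can be produced by polling ps periodically; a minimal sketch follows (the script name, log path and interval are assumptions, not what was actually used here):

#!/bin/sh
# check_slapd_mem.sh: hypothetical helper (not shipped with Rudder); samples
# rudder-slapd memory once an hour. The log path is an assumption.
while true; do
    # timestamp first, without a newline, so the ps line lands on the same line
    printf '%s ' "$(date '+%H:%M:%S')" >> /var/log/rudder/slapd-mem.log
    # user, pid, ppid, %mem, VSZ, elapsed time, CPU time, tty, full command line
    ps -C slapd -o user=,pid=,ppid=,pmem=,vsz=,etime=,time=,tty=,args= \
        >> /var/log/rudder/slapd-mem.log
    sleep 3600
done

The growth in the VSZ column across days (7.6 GB, then 8.2 GB, then 9.0 GB above) is what suggests the leak.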
Updated by Nicolas CHARLES over 8 years ago
- Tag list set to Sponsored
Updated by Nicolas CHARLES over 8 years ago
Updated by Vincent MEMBRÉ over 8 years ago
- Target version changed from 2.11.22 to 2.11.23
Updated by Nicolas CHARLES over 8 years ago
Possible causes of the issue are:
- Compilation options: we compile with the RPM defaults plus some extras. Maybe removing the extras would solve the issue?
- The versions of the libraries used as dependencies (see the comparison sketch after the ldd output below)
Dependencies on an impacted system are:
ldd /opt/rudder/libexec/slapd
    linux-vdso.so.1 => (0x00007fff319ff000)
    libldap_r-2.4.so.2 => /opt/rudder/lib/ldap/libldap_r-2.4.so.2 (0x00007f072f47d000)
    liblber-2.4.so.2 => /opt/rudder/lib/ldap/liblber-2.4.so.2 (0x00007f072f26e000)
    libltdl.so.7 => /usr/lib64/libltdl.so.7 (0x00007f072f05b000)
    libdb-5.1.so => /opt/rudder/lib/libdb-5.1.so (0x00007f072ecdb000)
    libpthread.so.0 => /lib64/libpthread.so.0 (0x0000003e8da00000)
    libssl.so.10 => /usr/lib64/libssl.so.10 (0x0000003e94a00000)
    libcrypto.so.10 => /usr/lib64/libcrypto.so.10 (0x0000003e93a00000)
    libresolv.so.2 => /lib64/libresolv.so.2 (0x0000003e8fa00000)
    libc.so.6 => /lib64/libc.so.6 (0x0000003e8d600000)
    libdl.so.2 => /lib64/libdl.so.2 (0x0000003e8d200000)
    /lib64/ld-linux-x86-64.so.2 (0x0000003e8ce00000)
    libgssapi_krb5.so.2 => /lib64/libgssapi_krb5.so.2 (0x0000003e92600000)
    libkrb5.so.3 => /lib64/libkrb5.so.3 (0x0000003e93e00000)
    libcom_err.so.2 => /lib64/libcom_err.so.2 (0x0000003e8f200000)
    libk5crypto.so.3 => /lib64/libk5crypto.so.3 (0x0000003e93600000)
    libz.so.1 => /lib64/libz.so.1 (0x0000003e8e600000)
    libkrb5support.so.0 => /lib64/libkrb5support.so.0 (0x0000003e93200000)
    libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x0000003e92e00000)
    libselinux.so.1 => /lib64/libselinux.so.1 (0x0000003e8ee00000)
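To check the library-version hypothesis, one could record which package provides each resolved dependency and diff the result between an impacted RedHat host and a Debian/SLES host where the leak does not occur. A minimal sketch, assuming the hypothetical script name and the simple rpm/dpkg detection below:

#!/bin/sh
# compare_slapd_deps.sh: hypothetical helper, not part of Rudder. Prints the
# package providing each library that slapd links against, so the output from
# an impacted host can be diffed against a non-impacted one.
ldd /opt/rudder/libexec/slapd | awk '/=>/ { print $3 }' | while read -r lib; do
    # skip pseudo-entries such as linux-vdso.so.1 that resolve to no file
    [ -e "$lib" ] || continue
    if command -v rpm >/dev/null 2>&1; then
        printf '%s -> %s\n' "$lib" "$(rpm -qf "$lib")"              # RedHat/SLES
    else
        printf '%s -> %s\n' "$lib" "$(dpkg -S "$lib" 2>/dev/null)"  # Debian
    fi
done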
Updated by Vincent MEMBRÉ over 8 years ago
- Target version changed from 2.11.23 to 2.11.24
Updated by Vincent MEMBRÉ about 8 years ago
- Target version changed from 2.11.24 to 308
Updated by Vincent MEMBRÉ about 8 years ago
- Target version changed from 308 to 3.1.14
Updated by Vincent MEMBRÉ about 8 years ago
- Target version changed from 3.1.14 to 3.1.15
Updated by Vincent MEMBRÉ about 8 years ago
- Target version changed from 3.1.15 to 3.1.16
Updated by Vincent MEMBRÉ about 8 years ago
- Target version changed from 3.1.16 to 3.1.17
Updated by Vincent MEMBRÉ almost 8 years ago
- Target version changed from 3.1.17 to 3.1.18
Updated by Vincent MEMBRÉ almost 8 years ago
- Target version changed from 3.1.18 to 3.1.19
Updated by Benoît PECCATTE over 7 years ago
- Severity set to Minor - inconvenience | misleading | easy workaround
- User visibility set to Infrequent - complex configurations | third party integrations
Updated by Vincent MEMBRÉ over 7 years ago
- Target version changed from 3.1.19 to 3.1.20
Updated by Jonathan CLARKE over 7 years ago
- Status changed from New to Rejected
- Priority changed from 27 to 26
We have tried to reproduce this, including on a real Red Hat Enterprise Linux system, but without success: there was no memory leak.
I can't figure out the cause of this without seeing it in action. Since we have since changed the backend from hdb to mdb (in 4.1), it is highly likely this no longer happens, but it is not certain.
Since there are no actions possible on this for now, and we have had no further reports in almost a year, I'm going to close this as not reproducible. I don't doubt the bug exists; I just can't see a way forward without a machine to reproduce it on, and the original reporter can't give us access to the machine where it used to happen (and it no longer happens there) for us to investigate.
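If someone does hit this again, a leak check along these lines might help narrow it down. This is only a suggestion, not something that was run for this ticket; running slapd in the foreground with -d 1 is an arbitrary choice for illustration.

# Start slapd in the foreground under valgrind, exercise it for a while,
# then stop it and read the leak summary it prints on exit.
valgrind --leak-check=full --show-reachable=yes \
    /opt/rudder/libexec/slapd -d 1 \
    -h ldap://0.0.0.0:389 \
    -f /opt/rudder/etc/openldap/slapd.conf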
If ever this bug reappears, please reopen this ticket and we will do our best to investigate.