Bug #6287

closed

Rudder memory usage for a small installation must hold in a 4G server

Added by Dennis Cabooter almost 10 years ago. Updated about 8 years ago.

Status: Rejected
Priority: N/A
Category: Performance and scalability

Description

I have 4 GB of RAM configured. PostgreSQL is configured (on the advice of ncharles) with shared_buffers set to 800MB.
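
For reference, the value currently in effect can be checked directly from PostgreSQL; this is a sketch assuming a standard installation where psql is run as the postgres system user:

# Show the shared_buffers value currently in effect (sketch; adjust to your setup).
su - postgres -c "psql -c 'SHOW shared_buffers;'"
# To change it, edit shared_buffers in postgresql.conf (typically under
# /etc/postgresql/<version>/main/ on Debian/Ubuntu) and restart PostgreSQL.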

The output of top for memory usage is:

# top -b -n 1 | grep -i postgres
 5870 postgres  20   0  953372  27496  27232 S   0.0  0.7   1:22.79 postgres
 5872 postgres  20   0  954036 796240 795264 S   0.0 19.7  16:33.60 postgres
 5873 postgres  20   0  953776 755616 754952 S   0.0 18.7   4:00.80 postgres
 5874 postgres  20   0  953776   8892   8660 S   0.0  0.2   3:36.56 postgres
 5875 postgres  20   0  954596   1480    992 S   0.0  0.0   0:54.97 postgres
 5876 postgres  20   0  102420    836    360 S   0.0  0.0   5:55.41 postgres
 6687 postgres  20   0  954968 801124 799644 S   0.0 19.8 175:37.15 postgres
18559 postgres  20   0  956636 816872 813944 S   0.0 20.2  16:33.29 postgres
18562 postgres  20   0  957768 828240 823896 S   0.0 20.5  19:53.75 postgres
18563 postgres  20   0  957732 791764 787456 S   0.0 19.6   8:28.75 postgres
18615 postgres  20   0  961124 817096 810044 S   0.0 20.2   9:46.15 postgres
18616 postgres  20   0  960116 828488 822224 S   0.0 20.5  25:43.11 postgres
# top -b -n 1 | grep -i slapd
5834 root      20   0 2669604 292584  17892 S   0.0  7.2  32:43.12 slapd
# top -b -n 1 | grep -i java
18190 root      20   0 2596388 1.226g   5048 S   0.0 31.8 242:36.77 java
# top -b -n 1 | grep -i rsyslog
 1074 syslog    20   0  769460   7280   1924 S  31.7  0.2 160:58.41 rsyslogd
# top -b -n 1 | grep -i cf-serverd
 2583 root      20   0  310312   9460   1532 S   0.0  0.2   6:35.29 cf-serverd

Edit (FAR): for more context, the installation is small, with around 100 nodes managed.
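
As a side note (not part of the original report), a rough way to total the resident memory (the RES column above) per component is to sum RSS by process name; keep in mind that PostgreSQL's shared_buffers is shared memory, so it is counted again in the RES of every backend that touches it, and a plain sum over-estimates real usage:

# Sketch: sum resident set size (RSS, reported in KiB by ps) per process name
# and print it in MB; process names are assumed from the top output above.
ps -C postgres,slapd,java,rsyslogd,cf-serverd -o comm=,rss= \
  | awk '{sum[$1] += $2} END {for (p in sum) printf "%-12s %6.0f MB\n", p, sum[p]/1024}'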


Related issues: 1 (0 open, 1 closed)

Related to Rudder - Bug #6294: Slapd uses 259888kB swap (Rejected)
Actions #1

Updated by Vincent MEMBRÉ almost 10 years ago

  • Target version changed from 3.0.1 to 3.0.2
Actions #2

Updated by François ARMAND almost 10 years ago

  • Subject changed from Rudder memory usage to Rudder memory usage for a small installation must hold in a 4Go server
  • Description updated (diff)
Actions #3

Updated by Nicolas CHARLES almost 10 years ago

Dennis,

You said that your system is swapping, but is the swap actually consuming I/O, or does it only hold inactive memory - memory that never changes and is kept in swap so that it can be freed from RAM more quickly if needed?

As a remark, 800MB of shared_buffers is quite generous (the default is 32MB), so if this causes an issue, this value could and should be lowered.
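
One way to answer the swap question above (a sketch, not part of the original exchange) is to watch the swap-in/swap-out columns of vmstat for a while:

# si/so are KiB swapped in from / out to disk per second; sustained non-zero
# values mean the swap is actively generating I/O, while zeros mean the pages
# are merely parked in swap. Sample every 5 seconds, 12 times (about a minute).
vmstat 5 12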

Actions #4

Updated by Dennis Cabooter almost 10 years ago

  • Subject changed from Rudder memory usage for a small installation must hold in a 4Go server to Rudder memory usage for a small installation must hold in a 4G server

AFAIK the swap is not consuming I/O. I now have 512MB configured as shared_buffers for PostgreSQL.

Actions #5

Updated by Vincent MEMBRÉ almost 10 years ago

  • Target version changed from 3.0.2 to 3.0.3
Actions #6

Updated by Vincent MEMBRÉ almost 10 years ago

  • Target version changed from 3.0.3 to 3.0.4
Actions #7

Updated by Nicolas CHARLES over 9 years ago

Dennis, since you changed the PostgreSQL config, could you post the output of the "free" command?
Thank you!
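
For reference, a minimal way to capture that information (the exact columns vary with the procps version):

# Report memory in MB; on older procps the "-/+ buffers/cache" line is the
# meaningful figure, on newer versions look at the "available" column instead.
free -m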

Actions #8

Updated by Vincent MEMBRÉ over 9 years ago

  • Target version changed from 3.0.4 to 3.0.5
Actions #9

Updated by Vincent MEMBRÉ over 9 years ago

  • Target version changed from 3.0.5 to 3.0.6
Actions #10

Updated by Vincent MEMBRÉ over 9 years ago

  • Target version changed from 3.0.6 to 3.0.7
Actions #11

Updated by Vincent MEMBRÉ over 9 years ago

  • Target version changed from 3.0.7 to 3.0.8
Actions #12

Updated by Vincent MEMBRÉ over 9 years ago

  • Target version changed from 3.0.8 to 3.0.9
Actions #13

Updated by Vincent MEMBRÉ about 9 years ago

  • Target version changed from 3.0.9 to 3.0.10
Actions #14

Updated by Vincent MEMBRÉ about 9 years ago

  • Target version changed from 3.0.10 to 3.0.11
Actions #15

Updated by Vincent MEMBRÉ about 9 years ago

  • Target version changed from 3.0.11 to 3.0.12
Actions #16

Updated by Vincent MEMBRÉ about 9 years ago

  • Target version changed from 3.0.12 to 3.0.13
Actions #17

Updated by Vincent MEMBRÉ almost 9 years ago

  • Target version changed from 3.0.13 to 3.0.14
Actions #18

Updated by Vincent MEMBRÉ almost 9 years ago

  • Target version changed from 3.0.14 to 3.0.15
Actions #19

Updated by Vincent MEMBRÉ over 8 years ago

  • Target version changed from 3.0.15 to 3.0.16
Actions #20

Updated by Vincent MEMBRÉ over 8 years ago

  • Target version changed from 3.0.16 to 3.0.17
Actions #21

Updated by Vincent MEMBRÉ over 8 years ago

  • Target version changed from 3.0.17 to 302
Actions #22

Updated by Alexis Mousset over 8 years ago

  • Target version changed from 302 to 3.1.12
Actions #23

Updated by Vincent MEMBRÉ over 8 years ago

  • Target version changed from 3.1.12 to 3.1.13
Actions #24

Updated by Vincent MEMBRÉ over 8 years ago

  • Target version changed from 3.1.13 to 3.1.14
Actions #25

Updated by Vincent MEMBRÉ about 8 years ago

  • Target version changed from 3.1.14 to 3.1.15
Actions #26

Updated by Vincent MEMBRÉ about 8 years ago

  • Target version changed from 3.1.15 to 3.1.16
Actions #27

Updated by Vincent MEMBRÉ about 8 years ago

  • Target version changed from 3.1.16 to 3.1.17
Actions #28

Updated by François ARMAND about 8 years ago

  • Status changed from New to Rejected

This ticket hasn't moved for a long time, and we now have test environments with that amount of memory, so perhaps RAM management has improved since then.

All in all, if you want to provide the details needed, please feel free to reopen the ticket.
