Bug #3909
Status: Closed
With OpenVZ, cf-agent on the host sees all other cf-agent executions and kills them
Description
In OpenVZ, the host sees all guest processes, including cf-agent executions. Its own cf-agent therefore believes it has gone wild, and kills all the other cf-agent processes.
Information: http://openvz.org/Processes_scope_and_visibility
Updated by Nicolas CHARLES over 11 years ago
The problem is more general, as promises relying on process checks probably do not return what we expect (for example, service management).
Updated by Nicolas CHARLES over 11 years ago
This is a really old bug: https://cfengine.com/bugtracker/view.php?id=921 .. which hasn't been fixed :(
Updated by Nicolas CHARLES over 11 years ago
We could add a script that returns the proper values, and have CFEngine use it.
However, I'm not sure how to identify the local system ... Olivier, do you know if we must use 0, or a specific number per node?
Updated by Andrew Cranson over 11 years ago
Hi Nicolas,
If this is the same as Virtuozzo (which is based on OpenVZ), you can run this to see node-level processes:
vzps -E 0
for example:
vzps -E 0 aux
The -E flag specifies which container ID (CTID) to show processes for, and CTID=0 is always the node itself on Virtuozzo/OpenVZ.
Hope this helps. For any Virtuozzo questions in future feel free to ping me.
Updated by Olivier Mauras over 11 years ago
While 0 is the host, the vzprocps tools are not necessarily installed by default, so you may not have the official tools available.
Here are some examples to work this out without those tools.
Detect an OpenVZ host: fileexists("/proc/vz/version")
This file should only exist on a host...
- egrep '(Name|envID)' /proc/215706/status
Name: rsyslogd
envID: 0
- egrep '(Name|envID)' /proc/930944/status
Name: httpd
envID: 11
Everything other than envID: 0 should be ignored.
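Editor's note: Olivier's envID trick can be turned into a small script. This is a minimal illustrative sketch, not part of any patch in this ticket; it reads /proc/<pid>/status directly, so it works even when the vzprocps tools are not installed. The proc-root argument is an assumption added purely so the function can be exercised outside an OpenVZ host.

```shell
#!/bin/sh
# Sketch: list the PIDs that belong to the host (envID 0) without vzps.
# On a real OpenVZ host, call it with no argument so it reads /proc.
host_pids() {
    proc_root="${1:-/proc}"
    # grep -l prints the names of the status files whose envID line is 0;
    # sed then keeps only the PID component of each path.
    grep -l '^envID:[[:space:]]*0$' "$proc_root"/[0-9]*/status 2>/dev/null \
        | sed -e 's=.*/\([0-9][0-9]*\)/status$=\1='
}
```

Piping the result into something like `ps -p` would then give a host-only process listing, which is what cf-agent needs here.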
Updated by Nicolas CHARLES over 11 years ago
I've opened bugs on Cfengine bug tracker:
https://cfengine.com/dev/issues/3395
https://cfengine.com/dev/issues/3394
Updated by Nicolas CHARLES over 11 years ago
I've dug into the CFEngine code. Unfortunately, there is no detection of OpenVZ in it, nor any hard classes for this.
So we'd need to change:
- sysinfo.c: OSClasses(void)
- classes.c: add new hard classes, and a command per OS
Updated by Nicolas CHARLES over 11 years ago
Ok, there's a pull request on CFEngine to detect OpenVZ :
https://github.com/cfengine/core/pull/582
Updated by Nicolas CHARLES over 11 years ago
Chef & Puppet don't seem to have anything to detect whether a process runs in the container or on the host ( http://projects.puppetlabs.com/issues/2390 )
BTW, the preferred method to detect the OpenVZ env seems to be:
- if there is a /proc/bc/0, then it's the container
- if there is a /proc/vz, then it's the host
(ref https://github.com/opscode/ohai/pull/39/files )
Updated by Andrew Cranson over 11 years ago
Hi Nicolas,
BTW, the preferred method to detect the OpenVZ env seems to be:
- if there is a /proc/bc/0, then it's the container
- if there is a /proc/vz, then it's the host
You've got this the wrong way around.
+if File.exists?("/proc/bc/0")
+  virtualization[:system] = "openvz"
+  virtualization[:role] = "host"
+elsif File.exists?("/proc/vz")
+  virtualization[:system] = "openvz"
+  virtualization[:role] = "guest"
The /proc/bc/0 directory exists at node level (host) on OpenVZ/Virtuozzo/Parallels Cloud Server, so that's correct.
The elsif is also correct: the /proc/vz directory does exist at container level (guest) on OpenVZ/Virtuozzo/Parallels Cloud Server, and we've already established it's not the host.
Thanks
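Editor's note: the corrected detection order can be sketched as a small shell helper. This is illustrative only, not from any patch in this thread; the key point is that /proc/bc/0 exists only on the host while /proc/vz exists on both host and containers, so the host check must come first. The root-prefix argument is an assumption added only to make the function testable on a non-OpenVZ machine.

```shell
#!/bin/sh
# Sketch of the ohai-style OpenVZ detection in shell.
# Check /proc/bc/0 (host-only) before /proc/vz (host and guest).
openvz_role() {
    root="${1:-}"
    if [ -e "$root/proc/bc/0" ]; then
        echo "host"
    elif [ -e "$root/proc/vz" ]; then
        echo "guest"
    else
        echo "none"
    fi
}
```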
Updated by Nicolas CHARLES over 11 years ago
I'm thinking of a quick and dirty solution
If the file /proc/bc/0 exists, then we move /bin/ps to /bin/distrib_ps, and define /bin/ps to be:
#!/bin/bash
/bin/distrib_ps $* -p $(grep -l "^envID:[[:space:]]*0\$" /proc/[0-9]*/status | sed -e 's=/proc/\([0-9]*\)/.*=\1=')
What do you think of it ?
Updated by Jonathan CLARKE over 11 years ago
Nicolas CHARLES wrote:
I'm thinking of a quick and dirty solution
If the file /proc/bc/0 exists, then we move /bin/ps to /bin/distrib_ps, and define /bin/ps to be
[...] What do you think of it?
No no no no! We are certainly not going to modify/move or anything like that /bin/ps. That is the most un-good-citizen-like thing I can think of :)
Updated by Nicolas CHARLES over 11 years ago
Ha, I didn't search the internet properly; somebody implemented this before us:
https://groups.google.com/forum/#!topic/help-cfengine/h098EgAusoA
http://pastebin.com/ipTeh1Mk
We could improve on that for Virtuozzo and OpenVZ, using /bin/vzps only if that file exists and we are on the host.
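Editor's note: the fallback idea could look something like the sketch below. It is an assumption-laden illustration, not what the final patch does: the fallback wrapper name and paths are hypothetical, and the vzps path is an argument only so the selection can be tested.

```shell
#!/bin/sh
# Sketch: choose the process-listing command for an OpenVZ host.
# Prefer the fast native vzps when installed; otherwise fall back to
# a hypothetical "poor man's vzps" filtering wrapper.
pick_ps() {
    vzps="${1:-/bin/vzps}"
    if [ -x "$vzps" ]; then
        echo "$vzps -E 0"
    else
        # Fallback wrapper name is illustrative, not from the ticket.
        echo "/usr/local/bin/poor_mans_vzps"
    fi
}
```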
Updated by Nicolas CHARLES over 11 years ago
I've submitted a pull request for CFEngine.
Updated by Nicolas CHARLES over 11 years ago
- Status changed from 8 to In progress
Updated by Jonathan CLARKE over 11 years ago
Nicolas CHARLES wrote:
I've submitted a pull request for CFEngine.
Great stuff. Thanks Nico.
Let's give the CFEngine guys the weekend + 1 or 2 days to respond and guide us away from any obvious blunders or conflicts with upcoming changes on their side, then patch our build of rudder-agent to include this.
Updated by Nicolas CHARLES over 11 years ago
PR against master of CFEngine
https://github.com/cfengine/core/pull/956
Updated by François ARMAND over 11 years ago
Just a quick remark: in my tests (done by hand, so not scientific at all), the times for vzps and the "poor man's vzps (fast version)" from http://openvz.org/Processes_scope_and_visibility are almost identical (and both take about 4 times as long as vanilla ps).
So I'm wondering if we really should rely on vzps, which is a new package to install, seems to break every other week, and is not available on all systems.
What do you think?
Updated by Andrew Cranson over 11 years ago
I've tested this on a few production servers, and the results are similar to this every time:
Poor man's vzps (fast version):
real 0m0.458s
user 0m0.096s
sys 0m0.365s
vzps -E 0:
real 0m0.034s
user 0m0.008s
sys 0m0.026s
It's roughly 10-15x faster to use vzps every time from what I've seen (Parallels Cloud Server 64-bit).
Updated by François ARMAND over 11 years ago
Yes, Nicolas explained to me that there are several implementations of vzps, at least one in Perl (roughly as fast as the shell script) and one in C (10-15x faster than the shell script). I wasn't aware of the different versions.
It seems that Virtuozzo ships with the fast C version of vzps.
Updated by Nicolas CHARLES over 11 years ago
François, it's not a problem to point to vzps: Rudder could detect whether the file is there and, if not, put the "poor man's vzps" in place.
Unless you have another solution ?
Updated by Nicolas CHARLES over 11 years ago
- Status changed from In progress to Pending technical review
- Assignee changed from Nicolas CHARLES to Jonathan CLARKE
- Target version changed from 2.4.9 to 2.6.6
- Pull Request set to https://github.com/Normation/rudder-packages/pull/123
The pull request is there https://github.com/Normation/rudder-packages/pull/123
Please note that the PR hasn't been accepted by CFEngine yet, so the naming convention may change.
Updated by Nicolas CHARLES over 11 years ago
- Status changed from Pending technical review to In progress
- Assignee changed from Jonathan CLARKE to Nicolas CHARLES
Ah, there are some corrections to make, as the CFEngine team had some remarks.
Meanwhile, Andrew, since you have a "real" vzps, could you tell me the output of the following command?
/bin/vzps -E 0 -o user,pid,ppid,pgid,pcpu,pmem,vsz,ni,rss,nlwp,stime,time,args
I'd like to be sure of the compatibility level of this patch...
Updated by Andrew Cranson over 11 years ago
This works fine on Virtuozzo + Cloud Server. Sample output from a test server:
USER PID PPID PGID %CPU %MEM VSZ NI RSS NLWP STIME TIME COMMAND
root 1 0 1 0.0 0.0 19360 0 1152 1 May08 00:00:02 /sbin/init
root 2 1 0 0.0 0.0 0 0 0 1 May08 00:00:00 [kthreadd]
root 3 2 0 0.0 0.0 0 - 0 1 May08 00:00:04 [migration/0]
root 4 2 0 0.2 0.0 0 0 0 1 May08 08:17:45 [ksoftirqd/0]
root 5 2 0 0.0 0.0 0 - 0 1 May08 00:00:00 [migration/0]
root 6 2 0 0.0 0.0 0 - 0 1 May08 00:00:07 [watchdog/0]
root 7 2 0 0.0 0.0 0 - 0 1 May08 00:00:33 [migration/1]
root 8 2 0 0.0 0.0 0 - 0 1 May08 00:00:00 [migration/1]
root 9 2 0 0.0 0.0 0 0 0 1 May08 01:28:11 [ksoftirqd/1]
root 10 2 0 0.0 0.0 0 - 0 1 May08 00:00:04 [watchdog/1]
root 11 2 0 0.0 0.0 0 - 0 1 May08 00:00:05 [migration/2]
root 12 2 0 0.0 0.0 0 - 0 1 May08 00:00:00 [migration/2]
root 13 2 0 0.1 0.0 0 0 0 1 May08 05:02:34 [ksoftirqd/2]
root 14 2 0 0.0 0.0 0 - 0 1 May08 00:00:06 [watchdog/2]
root 15 2 0 0.0 0.0 0 - 0 1 May08 00:00:25 [migration/3]
root 16 2 0 0.0 0.0 0 - 0 1 May08 00:00:00 [migration/3]
root 17 2 0 0.0 0.0 0 0 0 1 May08 00:37:52 [ksoftirqd/3]
root 18 2 0 0.0 0.0 0 - 0 1 May08 00:00:05 [watchdog/3]
root 19 2 0 0.0 0.0 0 0 0 1 May08 00:04:31 [events/0]
root 20 2 0 0.0 0.0 0 0 0 1 May08 00:14:33 [events/1]
root 21 2 0 0.0 0.0 0 0 0 1 May08 00:04:18 [events/2]
root 22 2 0 0.0 0.0 0 0 0 1 May08 00:07:35 [events/3]
root 23 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [cgroup]
root 24 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [khelper]
root 25 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [netns]
root 26 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [async/mgr]
root 27 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [pm]
root 28 2 0 0.0 0.0 0 0 0 1 May08 00:00:27 [sync_supers]
root 29 2 0 0.0 0.0 0 0 0 1 May08 00:00:26 [bdi-default]
root 30 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [kintegrityd/0]
root 31 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [kintegrityd/1]
root 32 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [kintegrityd/2]
root 33 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [kintegrityd/3]
root 34 2 0 0.0 0.0 0 0 0 1 May08 00:01:18 [kblockd/0]
root 35 2 0 0.0 0.0 0 0 0 1 May08 00:00:03 [kblockd/1]
root 36 2 0 0.0 0.0 0 0 0 1 May08 00:00:05 [kblockd/2]
root 37 2 0 0.0 0.0 0 0 0 1 May08 00:01:19 [kblockd/3]
root 38 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [kacpid]
root 39 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [kacpi_notify]
root 40 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [kacpi_hotplug]
root 41 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [ata/0]
root 42 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [ata/1]
root 43 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [ata/2]
root 44 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [ata/3]
root 45 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [ata_aux]
root 46 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [ksuspend_usbd]
root 47 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [khubd]
root 48 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [kseriod]
root 49 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [md/0]
root 50 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [md/1]
root 51 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [md/2]
root 52 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [md/3]
root 53 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [md_misc/0]
root 54 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [md_misc/1]
root 55 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [md_misc/2]
root 56 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [md_misc/3]
root 57 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [ubstatd]
root 58 2 0 0.0 0.0 0 0 0 1 May08 00:00:04 [khungtaskd]
root 59 2 0 0.0 0.0 0 0 0 1 May08 00:04:59 [kswapd0]
root 60 2 0 0.0 0.0 0 5 0 1 May08 00:00:00 [ksmd]
root 61 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [aio/0]
root 62 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [aio/1]
root 63 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [aio/2]
root 64 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [aio/3]
root 65 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [crypto/0]
root 66 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [crypto/1]
root 67 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [crypto/2]
root 68 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [crypto/3]
root 73 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [kthrotld/0]
root 74 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [kthrotld/1]
root 75 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [kthrotld/2]
root 76 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [kthrotld/3]
root 77 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [pciehpd]
root 79 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [kpsmoused]
root 80 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [usbhid_resumer]
root 81 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [ubcleand]
root 111 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [kstriped]
root 173 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [ttm_swap]
root 180 2 0 0.0 0.0 0 -5 0 1 May08 00:00:05 [kslowd000]
root 181 2 0 0.0 0.0 0 -5 0 1 May08 00:00:05 [kslowd001]
root 286 2 0 0.0 0.0 0 0 0 1 May08 00:01:27 [mpt_poll_0]
root 287 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [mpt/0]
root 288 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [scsi_eh_0]
root 299 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [scsi_eh_1]
root 300 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [scsi_eh_2]
root 345 2 0 0.0 0.0 0 0 0 1 May08 00:01:03 [jbd2/sda1-8]
root 346 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [ext4-dio-unwrit]
root 347 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [ext4-dio-unwrit]
root 348 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [ext4-dio-unwrit]
root 349 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [ext4-dio-unwrit]
root 397 2 0 0.0 0.0 0 0 0 1 May08 00:01:48 [flush-8:0]
root 398 2 0 0.0 0.0 0 0 0 1 May08 00:00:14 [kauditd]
root 443 1 443 0.0 0.0 11196 -4 260 1 May08 00:00:00 /sbin/udevd -d
root 643 2 0 0.0 0.0 0 0 0 1 May08 00:02:31 [edac-poller]
root 10070 2 0 0.0 0.0 0 0 0 1 May08 00:03:04 [jbd2/sda3-8]
root 10071 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [ext4-dio-unwrit]
root 10072 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [ext4-dio-unwrit]
root 10073 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [ext4-dio-unwrit]
root 10074 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [ext4-dio-unwrit]
root 10961 1 10961 0.0 0.0 6160 0 496 1 May08 00:00:00 /sbin/portreserve
root 11032 1 11032 0.0 0.0 9148 0 508 1 May08 00:08:41 irqbalance
dbus 11117 1 11117 0.0 0.0 21404 0 720 1 May08 00:00:04 dbus-daemon --system
root 11142 1 11142 0.0 0.0 4080 0 512 1 May08 00:00:00 /usr/sbin/acpid
68 11151 1 11151 0.0 0.0 25184 0 1668 1 May08 00:00:42 hald
root 11152 11151 11151 0.0 0.0 18108 0 668 1 May08 00:00:00 hald-runner
root 11180 11152 11151 0.0 0.0 20224 0 620 1 May08 00:00:00 hald-addon-input: Listening on /dev/input/event0
68 11193 11152 11151 0.0 0.0 17808 0 728 1 May08 00:00:00 hald-addon-acpi: listening on acpid socket /var/run/acpid.socket
root 11208 1 11208 0.0 0.0 64076 0 560 1 May08 00:00:17 /usr/sbin/sshd
root 11295 1 11295 0.0 0.1 78680 0 2204 1 May08 00:00:31 /usr/libexec/postfix/master
postfix 11311 11295 11295 0.0 0.1 78932 0 2364 1 May08 00:00:05 qmgr -l -t fifo -u
root 11319 1 11319 0.0 0.0 110176 0 632 1 May08 00:00:00 /usr/sbin/abrtd
root 11327 1 11327 0.0 0.0 108076 0 644 1 May08 00:00:00 abrt-dump-oops -d /var/spool/abrt -rwx /var/log/messages
root 11335 1 11335 0.0 0.0 117244 0 704 1 May08 00:00:59 crond
root 11347 1 11340 0.0 0.8 150588 0 16372 1 May08 00:21:31 /bin/bash /usr/sbin/vzlmond
root 11369 1 11369 0.0 0.0 21456 0 320 1 May08 00:00:00 /usr/sbin/atd
root 11577 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [vzmond]
root 11578 2 0 0.0 0.0 0 0 0 1 May08 00:02:25 [vzstat]
root 11585 2 0 0.0 0.0 0 0 0 1 May08 00:00:01 [vzmond/vzlist]
root 11617 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [rpciod/0]
root 11618 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [rpciod/1]
root 11619 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [rpciod/2]
root 11620 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [rpciod/3]
root 11622 2 0 0.0 0.0 0 0 0 1 May08 00:00:00 [nfsiod]
root 11623 2 0 0.0 0.0 0 0 0 1 May08 00:00:01 [vzmond/vzfs]
root 11732 1 11732 0.0 26.4 524184 0 505820 1 May08 00:26:03 vzlicmonitor
root 11967 1 11967 0.0 0.0 32268 0 528 1 May08 00:02:13 /usr/sbin/r1soft/bin/cdp -s -c /usr/sbin/r1soft/conf/agent_config
root 11976 1 11976 0.0 0.0 4064 0 472 1 May08 00:00:00 /sbin/mingetty /dev/tty1
root 11978 1 11978 0.0 0.0 4064 0 472 1 May08 00:00:00 /sbin/mingetty /dev/tty2
root 11980 1 11980 0.0 0.0 4064 0 472 1 May08 00:00:00 /sbin/mingetty /dev/tty3
root 11982 1 11982 0.0 0.0 4064 0 472 1 May08 00:00:00 /sbin/mingetty /dev/tty4
root 11984 1 11984 0.0 0.0 4064 0 472 1 May08 00:00:00 /sbin/mingetty /dev/tty5
root 11988 1 11988 0.0 0.0 4064 0 472 1 May08 00:00:00 /sbin/mingetty /dev/tty6
root 11989 443 443 0.0 0.0 11192 -2 240 1 May08 00:00:00 /sbin/udevd -d
root 12013 1 12013 0.0 0.0 93200 -4 700 2 May08 00:00:50 auditd
root 29511 443 443 0.0 0.0 11192 -2 168 1 Jul17 00:00:00 /sbin/udevd -d
root 1853 1 1831 0.0 0.3 256132 0 6132 4 Aug29 00:00:12 /sbin/rsyslogd -i /var/run/syslogd.pid -c 5
root 59481 1 59480 0.0 0.0 74384 0 1084 1 Sep03 00:00:00 /usr/sbin/zabbix_agentd
root 59483 59481 59480 0.0 0.0 74384 0 1188 1 Sep03 00:03:22 /usr/sbin/zabbix_agentd
root 59484 59481 59480 0.0 0.0 74384 0 1276 1 Sep03 00:05:02 /usr/sbin/zabbix_agentd
root 59485 59481 59480 0.0 0.0 74384 0 1248 1 Sep03 00:05:07 /usr/sbin/zabbix_agentd
root 59486 59481 59480 0.0 0.0 74384 0 1276 1 Sep03 00:05:05 /usr/sbin/zabbix_agentd
root 59487 59481 59480 0.0 0.0 74396 0 1092 1 Sep03 00:00:30 /usr/sbin/zabbix_agentd
root 30763 1 30763 0.0 0.0 37912 0 1816 1 05:00 00:00:00 /var/rudder/cfengine-community/bin/cf-serverd
root 30769 1 30769 0.0 0.1 105452 0 2344 1 05:00 00:00:00 /var/rudder/cfengine-community/bin/cf-execd
postfix 44622 11295 11295 0.0 0.1 78760 0 3284 1 10:44 00:00:00 pickup -l -t fifo -u
root 46581 11347 11340 0.0 0.0 6128 0 516 1 11:33 00:00:00 vmstat 480 -n 2
root 46582 11347 11340 0.0 0.8 150588 0 15760 1 11:33 00:00:00 /bin/bash /usr/sbin/vzlmond
root 46583 46582 11340 0.0 0.0 105956 0 1028 1 11:33 00:00:00 awk -v columns=r,b,w?,swpd,free,buff?,cache?,si,so,bi,bo,in,cs,us,sy,id ??BEGIN {???split(columns, cols, ",")???for (var in
root 46801 11208 46801 0.1 0.2 97824 0 3908 1 11:39 00:00:00 sshd: root@pts/0
root 46803 46801 46803 0.0 0.0 108472 0 1852 1 11:39 00:00:00 -bash
root 46820 46803 46820 0.0 0.0 9064 0 948 1 11:40 00:00:00 /bin/vzps -E 0 -o user,pid,ppid,pgid,pcpu,pmem,vsz,ni,rss,nlwp,stime,time,args

real 0m0.022s
user 0m0.005s
sys 0m0.007s
Updated by Nicolas CHARLES over 11 years ago
- Status changed from In progress to Pending technical review
- Assignee changed from Nicolas CHARLES to Jonathan CLARKE
Awesome, thank you Andrew!
I've updated the patch to match the PR on CFEngine
Updated by Jonathan CLARKE over 11 years ago
As an exception to our standard bug fixing policy, despite the fact that this bug is also present in Rudder 2.4.* branch which is still maintained until January 2014, I am going to accept this fix only into the Rudder 2.6.* branch for now. The reason for this is that developing the fix on the 2.4.* branch would take considerable extra work, and it is urgent to provide a fixed version on the current stable branch, so I don't want to delay that any longer.
This does not mean this bug should not be fixed on the 2.4.* branch, just that it will be fixed first on 2.6.* branch. I will create a separate ticket to track the back-porting effort.
Updated by Nicolas CHARLES over 11 years ago
- Status changed from Pending technical review to Pending release
- % Done changed from 0 to 100
Applied in changeset commit:7752a33fecc1a78cb927bb73fe7022723b2aead1.
Updated by Jonathan CLARKE over 11 years ago
Applied in changeset commit:3a9858665380a46cb67968f42fb53a20ba9ee51a.
Updated by Jonathan CLARKE over 11 years ago
- % Done changed from 100 to 0
This bug has been fixed in the Rudder code repositories. Nightly builds with version numbers from 201309210000 onwards will include the fix. Please test with caution if using nightly builds. This fix requires upgrading rudder-agent on all OpenVZ (or similar) host nodes.
Updated by Nicolas PERRON about 11 years ago
- Status changed from Pending release to Released
Updated by Benoît PECCATTE almost 10 years ago
- Project changed from 34 to Rudder
- Category set to Packaging