Just to be sure that the bug is correctly understood:
- we have a file, /opt/rudder/etc/uuid.hive, with the node ID.
- when a node is accepted, it starts getting ITS promises. These promises contain the node ID in several places (the URL where the node fetches its future promises, the identification of reports, etc.).
So, if somebody changes the content of /opt/rudder/etc/uuid.hive on an accepted node, the two sets of files become unsynchronized. At the next inventory, Rudder receives the new node ID and does not know what to do with it, and the node can no longer fetch its promises because the authorisation check fails.
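To make the failure mode concrete, here is a rough illustration (the UUIDs are placeholders and the inputs directory is only an assumption about the agent layout; only /opt/rudder/etc/uuid.hive comes from the description above): after a manual edit, the ID the node reports no longer matches the ID baked into its promises:

```sh
# ID the node will report in its next inventory (after the manual edit):
cat /opt/rudder/etc/uuid.hive
# -> <new-uuid>

# ID its current promises were generated for (URLs, report identifiers, ...);
# the grep target and directory below are only illustrative:
grep -r "<old-uuid>" /var/rudder/cfengine-community/inputs/
```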
So, the real problem seems to be that changing a node ID is NOT a trivial action. It is an important decision with consequences: you are changing the identity of the node in Rudder, so it is not the same node anymore. Most likely it will get OTHER promises; it may even be managed by entirely different people, for a completely different role.
So we should fail immediately if we find that, somehow, the node ID in /opt/rudder/etc/uuid.hive is not coherent with the node ID for which the promises were produced.
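A minimal sketch of such a check, as it could run before each agent execution. The way the expected ID is stored next to the promises (the uuid.expected file below) is an assumption for illustration; the real promises may expose it differently:

```sh
#!/bin/sh
# Sketch only: refuse to run if the local node ID no longer matches
# the node ID the current promises were generated for.

UUID_FILE=/opt/rudder/etc/uuid.hive
EXPECTED_FILE=/var/rudder/cfengine-community/inputs/uuid.expected   # hypothetical location

# Ignore possible '#' comment lines in uuid.hive (see the proposal below).
local_uuid=$(grep -v '^#' "$UUID_FILE" | tr -d ' \n')
expected_uuid=$(cat "$EXPECTED_FILE" 2>/dev/null | tr -d ' \n')

if [ -n "$expected_uuid" ] && [ "$local_uuid" != "$expected_uuid" ]; then
    echo "ERROR: $UUID_FILE contains '$local_uuid' but the current promises" >&2
    echo "were generated for '$expected_uuid'."                              >&2
    echo "Do NOT edit $UUID_FILE by hand; to register this machine as a"     >&2
    echo "new node, use the reinit procedure instead."                       >&2
    exit 1
fi
```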
And to make the UUID update easier (for example, when cloning VMs), we should explain what to do to actually get a new node in Rudder, with new, dedicated promises once it is accepted:
- add, for earlier versions of Rudder, a script equivalent to "rudder agent reinit" (a rough sketch follows this list),
- add an error message explaining how to change the UUID when an inconsistency is found between the promises and the config file,
- in the /opt/rudder/etc/uuid.hive config file, add comments explaining how to update the UUID (NOT by editing the file directly).
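For the first point, here is a rough sketch of what such a reinit-like script could do; only /opt/rudder/etc/uuid.hive comes from the description above, the other paths are assumptions about the agent layout, and the comment header assumes the tools reading uuid.hive skip '#' lines (which is part of the proposal):

```sh
#!/bin/sh
# Rough sketch of a "reinit"-like script for agents that do not have
# 'rudder agent reinit'. Adapt the directories to the real agent layout.
set -e

UUID_FILE=/opt/rudder/etc/uuid.hive

# 1. Drop the promises (and keys) generated for the old node ID.
rm -rf /var/rudder/cfengine-community/inputs/* \
       /var/rudder/cfengine-community/ppkeys/*

# 2. Generate a fresh node ID, with a header explaining how (not) to change it.
new_uuid=$(uuidgen)
{
  echo "# Node ID used by Rudder. Do NOT edit this file by hand:"
  echo "# changing it turns this machine into a NEW node that must be"
  echo "# re-accepted on the server. Use the reinit script instead."
  echo "$new_uuid"
} > "$UUID_FILE"

echo "New node ID: $new_uuid"
echo "Send a new inventory and accept the node again on the Rudder server."
```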
What do you think?