Bug #20338
Broken webapp when receiving Windows inventory
Status: closed
Description
I installed an agent on a Windows node, and sending its first inventory broke my Rudder server.
Also note that I could not find the inventory in the server's /var/rudder/inventories folder.
[2021-11-30 14:40:49+0100] INFO application - registered property: user.name="root"
[2021-11-30 14:40:49+0100] INFO application - registered property: user.timezone="Europe/Paris"
[2021-11-30 14:40:49+0100] INFO application - registered property: waiting.inventory.queue.size="5"
[2021-11-30 14:40:49+0100] INFO application - Plugin's license directory: '/opt/rudder/etc/plugins/licenses/'
[2021-11-30 14:40:50+0100] INFO bootchecks - Check LDAP and PostgreSQL connection
[2021-11-30 14:40:50+0100] INFO bootchecks - LDAP and PostgreSQL connection are OK
[2021-11-30 14:40:50+0100] INFO bootchecks - Check mandatory DIT entries
[2021-11-30 14:40:50+0100] INFO bootchecks - All the required DIT entries are present in the LDAP directory
[2021-11-30 14:40:50+0100] INFO bootchecks - Check initialization of User Technique Library
[2021-11-30 14:40:50+0100] INFO bootchecks - The Active Technique library is not marked as being initialized: adding all policies from reference library...
Fiber failed.
An unchecked error was produced.
java.lang.AssertionError: assertion failed
    at scala.Predef$.assert(Predef.scala:267)
    at com.normation.inventory.domain.Certificate.<init>(DataTypes.scala:155)
    at com.normation.inventory.domain.AgentInfoSerialisation$.parseSecurityToken(AgentTypes.scala:207)
    at com.normation.inventory.domain.AgentInfoSerialisation$.$anonfun$parseJson$5(AgentTypes.scala:246)
    at zio.internal.FiberContext.evaluateNow(FiberContext.scala:916)
    at zio.internal.FiberContext.$anonfun$evaluateLater$1(FiberContext.scala:778)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)

Fiber:Id(1638279649299,44) was supposed to continue to:
  a future continuation at com.normation.inventory.domain.AgentInfoSerialisation$.parseCompatNonJson(AgentTypes.scala:269)
  a future continuation at zio.ZIO$.foreach(ZIO.scala:2808)
  a future continuation at zio.ZIO$.foreach(ZIO.scala:2809)
  a future continuation at com.normation.rudder.repository.ldap.LDAPEntityMapper.parseAgentInfo(LDAPEntityMapper.scala:280)
  a future continuation at com.normation.rudder.repository.ldap.LDAPEntityMapper.inventoryEntriesToNodeInfos(LDAPEntityMapper.scala:314)
  a future continuation at com.normation.rudder.repository.ldap.LDAPEntityMapper.convertEntriesToNodeInfos(LDAPEntityMapper.scala:240)
  a future continuation at com.normation.rudder.services.nodes.NodeInfoServiceCached.withUpToDateCache(NodeInfoService.scala:673)
  a future continuation at zio.ZIO$.foreach(ZIO.scala:2808)
  a future continuation at zio.ZIO$.foreach(ZIO.scala:2809)
  a future continuation at com.normation.rudder.services.nodes.NodeInfoServiceCached.withUpToDateCache(NodeInfoService.scala:676)
  a future continuation at com.normation.rudder.services.nodes.NodeInfoServiceCached.withUpToDateCache(NodeInfoService.scala:687)
  a future continuation at com.normation.rudder.services.nodes.NodeInfoServiceCached.withUpToDateCache(NodeInfoService.scala:687)
  a future continuation at com.normation.rudder.services.nodes.NodeInfoServiceCached.withUpToDateCache(NodeInfoService.scala:710)
  a future continuation at com.normation.rudder.services.nodes.NodeInfoServiceCached.withUpToDateCache(NodeInfoService.scala:717)
  a future continuation at com.normation.rudder.services.nodes.NodeInfoServiceCached.withUpToDateCache(NodeInfoService.scala:752)
  a future continuation at zio.ZIO.run(ZIO.scala:1640)
  a future continuation at zio.Semaphore.withPermits(Semaphore.scala:61)
  a future continuation at com.normation.rudder.services.nodes.NodeInfoServiceCached.withUpToDateCache(NodeInfoService.scala:700)
  a future continuation at com.normation.rudder.services.nodes.NodeInfoServiceCached.withUpToDateCache(NodeInfoService.scala:700)
  a future continuation at zio.ZIO$.zio$ZIO$$_succeedRight(ZIO.scala:4121)
  a future continuation at zio.ZIO.run(ZIO.scala:1640)
  a future continuation at zio.ZIO.bracket_(ZIO.scala:256)
  a future continuation at zio.ZIO.run(ZIO.scala:1640)
  a future continuation at zio.ZIO.bracket_(ZIO.scala:256)
  a future continuation at com.normation.zio$ZioRuntime$.runNow(ZioCommons.scala:382)

Fiber:Id(1638279649299,44) execution trace:
  at com.normation.inventory.domain.AgentInfoSerialisation$.parseJson(AgentTypes.scala:236)
  at com.normation.inventory.domain.AgentInfoSerialisation$.parseJson(AgentTypes.scala:236)
  at zio.ZIO$.fromEither(ZIO.scala:3098)
  at <unknown>.<unknown>(ZioCommons.scala:0)
  at com.normation.inventory.domain.AgentInfoSerialisation$.parseJson(AgentTypes.scala:236)
  at com.normation.inventory.domain.AgentInfoSerialisation$.parseJson(AgentTypes.scala:235)
  at com.normation.inventory.domain.AgentInfoSerialisation$.parseJson(AgentTypes.scala:235)
  at zio.ZIO.zipWith(ZIO.scala:2137)
  at zio.ZIO$.foreach(ZIO.scala:2807)
  at com.normation.rudder.repository.ldap.LDAPEntityMapper.inventoryEntriesToNodeInfos(LDAPEntityMapper.scala:312)
  at zio.ZIO$.foreach(ZIO.scala:2819)
  at zio.ZIO$.foreach(ZIO.scala:2809)
  at zio.ZIO$.foreach(ZIO.scala:2807)
  at com.normation.rudder.repository.ldap.LDAPEntityMapper.inventoryEntriesToNodeInfos(LDAPEntityMapper.scala:296)
  at com.normation.rudder.repository.ldap.LDAPEntityMapper.inventoryEntriesToNodeInfos(LDAPEntityMapper.scala:296)
  at zio.ZIO$.fromEither(ZIO.scala:3098)
  at <unknown>.<unknown>(ZioCommons.scala:0)
  at com.normation.rudder.repository.ldap.LDAPEntityMapper.inventoryEntriesToNodeInfos(LDAPEntityMapper.scala:296)
  at zio.syntax$ToZio.succeed(syntax.scala:9)
  at com.normation.rudder.repository.ldap.LDAPEntityMapper.convertEntriesToNodeInfos(LDAPEntityMapper.scala:238)
  at zio.ZIO$.effectSuspendTotal(ZIO.scala:2640)
  at com.normation.rudder.repository.ldap.LDAPEntityMapper.convertEntriesToNodeInfos(LDAPEntityMapper.scala:235)
  at zio.ZIO$.fromEither(ZIO.scala:3098)
  at <unknown>.<unknown>(ZioCommons.scala:0)
  at zio.ZIO.zipWith(ZIO.scala:2137)
  at zio.ZIO$.foreach(ZIO.scala:2807)
  at com.normation.rudder.services.nodes.NodeInfoServiceCached.withUpToDateCache(NodeInfoService.scala:661)
  at zio.ZIO$.effectSuspendTotal(ZIO.scala:2640)
  at com.normation.rudder.services.nodes.NodeInfoServiceCached.withUpToDateCache(NodeInfoService.scala:661)
  at zio.syntax$ToZio.succeed(syntax.scala:9)
  at com.normation.rudder.services.nodes.NodeInfoServiceCached.withUpToDateCache(NodeInfoService.scala:652)
  at zio.ZIO$.effectSuspendTotal(ZIO.scala:2640)
  at com.normation.rudder.services.nodes.NodeInfoServiceCached.withUpToDateCache(NodeInfoService.scala:652)
  at zio.syntax$ToZio.succeed(syntax.scala:9)
  at com.normation.rudder.services.nodes.NodeInfoServiceCached.withUpToDateCache(NodeInfoService.scala:644)
  at zio.ZIO$.effectSuspendTotal(ZIO.scala:2640)
  at com.normation.rudder.services.nodes.NodeInfoServiceCached.withUpToDateCache(NodeInfoService.scala:644)
  at com.normation.rudder.services.nodes.NodeInfoServiceCached$.constructNodesFromAllEntries(NodeInfoService.scala:465)
  at zio.ZRef$Atomic.get(ZRef.scala:201)
  at com.normation.rudder.services.nodes.NodeInfoServiceCached$.constructNodesFromAllEntries(NodeInfoService.scala:457)
  at zio.ZIO$WhenM$.apply(ZIO.scala:4059)
  at zio.ZRef$Atomic.get(ZRef.scala:201)
  at com.normation.rudder.services.nodes.NodeInfoServiceCached$.constructNodesFromAllEntries(NodeInfoService.scala:436)
  at zio.ZIO$.foreach(ZIO.scala:2809)
  at zio.ZIO$.foreach(ZIO.scala:2808)
  at zio.ZIO.unit(ZIO.scala:1994)
  at zio.ZRef$Atomic.set(ZRef.scala:219)
  at zio.ZIO$.effectSuspendTotal(ZIO.scala:2640)
  at com.normation.rudder.services.nodes.NodeInfoServiceCached$.constructNodesFromAllEntries(NodeInfoService.scala:452)
  at zio.ZRef$Atomic.update(ZRef.scala:337)
  at zio.ZIO.zipWith(ZIO.scala:2137)
  at zio.ZIO$.foreach(ZIO.scala:2807)
  at com.normation.rudder.services.nodes.NodeInfoServiceCached$.constructNodesFromAllEntries(NodeInfoService.scala:436)
  at zio.ZRef$.make(ZRef.scala:749)
  at com.normation.rudder.services.nodes.NodeInfoServiceCached$.constructNodesFromAllEntries(NodeInfoService.scala:434)
  at zio.ZRef$.make(ZRef.scala:749)
  at com.normation.rudder.services.nodes.NodeInfoServiceCached.withUpToDateCache(NodeInfoService.scala:687)
  at zio.internal.FiberContext$InterruptExit$.apply(FiberContext.scala:161)
  at zio.internal.FiberContext$InterruptExit$.apply(FiberContext.scala:154)
  at zio.ZIO$.effectSuspendTotal(ZIO.scala:2640)
  at zio.ZIO$.bracketExit(ZIO.scala:2273)
  at zio.internal.FiberContext.evaluateNow(FiberContext.scala:559)
  at zio.ZIO.bracket_(ZIO.scala:256)
  at zio.ZIO.run(ZIO.scala:1640)
  at zio.internal.FiberContext$InterruptExit$.apply(FiberContext.scala:154)
  at zio.internal.FiberContext$InterruptExit$.apply(FiberContext.scala:161)
  at zio.internal.FiberContext$InterruptExit$.apply(FiberContext.scala:154)
  at zio.ZIO$.effectSuspendTotal(ZIO.scala:2640)
  at zio.ZIO$.bracketExit(ZIO.scala:2273)
  at com.normation.ldap.sdk.PooledConnectionProvider.releaseInternalConnection(LDAPConnectionProvider.scala:254)
  at com.normation.ldap.sdk.LDAPConnectionProvider.withCon(LDAPConnectionProvider.scala:117)
  at zio.ZIO.run(ZIO.scala:1640)
  at zio.internal.FiberContext$InterruptExit$.apply(FiberContext.scala:154)
  at zio.internal.FiberContext$InterruptExit$.apply(FiberContext.scala:161)
  at zio.internal.FiberContext$InterruptExit$.apply(FiberContext.scala:154)
  at zio.ZIO$.effectSuspendTotal(ZIO.scala:2640)
  at zio.ZIO$.bracketExit(ZIO.scala:2273)
  at zio.internal.FiberContext.evaluateNow(FiberContext.scala:559)
  at zio.ZIO.bracket_(ZIO.scala:256)
  at zio.ZIO.run(ZIO.scala:1640)
  at zio.internal.FiberContext$InterruptExit$.apply(FiberContext.scala:154)
  at com.normation.rudder.services.nodes.NodeInfoServiceCached.withUpToDateCache(NodeInfoService.scala:623)
  at zio.internal.FiberContext$InterruptExit$.apply(FiberContext.scala:161)
  at zio.internal.FiberContext$InterruptExit$.apply(FiberContext.scala:154)
  at zio.ZIO$.effectSuspendTotal(ZIO.scala:2640)
  at zio.ZIO$.bracketExit(ZIO.scala:2273)
  at zio.internal.FiberContext.evaluateNow(FiberContext.scala:559)
  at zio.ZIO.bracket_(ZIO.scala:256)
  at zio.ZIO.run(ZIO.scala:1640)
  at zio.internal.FiberContext$InterruptExit$.apply(FiberContext.scala:154)
  at zio.clock.package$Clock$Service$$anon$1.currentTime(package.scala:41)
  at zio.clock.package$Clock$Service$$anon$1.currentTime(package.scala:41)
  at com.normation.zio$.currentTimeMillis(ZioCommons.scala:351)
  at zio.ZIO.bracket_(ZIO.scala:256)
  at zio.internal.FiberContext.evaluateNow(FiberContext.scala:555)
  at com.normation.rudder.services.nodes.NodeInfoServiceCached.withUpToDateCache(NodeInfoService.scala:623)
  at com.normation.ldap.sdk.RoLDAPConnection.search(LDAPConnection.scala:322)
  at zio.internal.FiberContext$InterruptExit$.apply(FiberContext.scala:161)
  at zio.internal.FiberContext$InterruptExit$.apply(FiberContext.scala:154)
  at zio.ZIO$.effectSuspendTotal(ZIO.scala:2640)

Fiber:Id(1638279649299,44) was spawned by: <empty trace>

[2021-11-30 14:40:50+0100] ERROR net.liftweb.actor.ActorLogger - Actor threw an exception
java.lang.AssertionError: assertion failed
    at scala.Predef$.assert(Predef.scala:267)
    at com.normation.inventory.domain.Certificate.<init>(DataTypes.scala:155)
    at com.normation.inventory.domain.AgentInfoSerialisation$.parseSecurityToken(AgentTypes.scala:207)
    at com.normation.inventory.domain.AgentInfoSerialisation$.$anonfun$parseJson$5(AgentTypes.scala:246)
    at zio.internal.FiberContext.evaluateNow(FiberContext.scala:916)
    at zio.internal.FiberContext.$anonfun$evaluateLater$1(FiberContext.scala:778)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
[2021-11-30T14:40:50.820+01:00] ERROR FATAL Rudder JVM caught an unhandled fatal exception. Rudder will now stop to prevent further inconsistant behavior. This is likely a bug, please contact Rudder developers. You can configure the list of fatal exception in /opt/rudder/etc/rudder-web.properties -> rudder.jvm.fatal.exceptions
[2021-11-30T14:40:50.820+01:00] ERROR FATAL exception in thread 'pool-2-thread-5' (in threadgroup 'main'): 'java.lang.AssertionError': 'assertion failed'
java.lang.AssertionError: assertion failed
    at scala.Predef$.assert(Predef.scala:267)
    at com.normation.inventory.domain.Certificate.<init>(DataTypes.scala:155)
    at com.normation.inventory.domain.AgentInfoSerialisation$.parseSecurityToken(AgentTypes.scala:207)
    at com.normation.inventory.domain.AgentInfoSerialisation$.$anonfun$parseJson$5(AgentTypes.scala:246)
    at zio.internal.FiberContext.evaluateNow(FiberContext.scala:916)
    at zio.internal.FiberContext.$anonfun$evaluateLater$1(FiberContext.scala:778)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
Updated by Félix DALLIDET about 3 years ago
Updated by François ARMAND about 3 years ago
So, the trace says that at some point the certificate value was empty, which made an assert fail. Yet the value is not empty in the provided inventory, and everything there is as it should be: parsing is OK, etc.
Moreover, the assert is triggered during a node info cache update, i.e. after the node inventory was parsed, accepted, etc., so an empty value should have been caught before that point. There is something fishy going on.
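For context, here is a minimal standalone sketch of the failing pattern (illustrative only, not the actual Rudder source; the real check sits around DataTypes.scala:155). A constructor-time assert on the certificate value throws java.lang.AssertionError when the value is empty, and since AssertionError is an Error rather than an Exception, it escapes the ZIO fiber that rebuilds the node info cache and is then treated as fatal by the webapp.

// Illustrative sketch only: the names Certificate and the empty-value check mirror the
// stack trace, but this is not the Rudder implementation.
object AssertCrashSketch {

  final case class Certificate(value: String) {
    // A constructor-time assert like this throws java.lang.AssertionError on empty input.
    assert(value.trim.nonEmpty, "certificate value must not be empty")
  }

  def main(args: Array[String]): Unit = {
    // A security token whose certificate field ends up empty reproduces the crash path:
    try {
      Certificate("")
    } catch {
      case e: AssertionError =>
        // In the webapp this Error is not handled per node; it bubbles up and is fatal.
        println(s"AssertionError: ${e.getMessage}")
    }
  }
}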
I propose to remove the assert (see #20341) so that a problem on a single node no longer crashes the whole Rudder webapp, and to try to get more information on said problem.
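A minimal sketch of the direction this suggests, assuming the approach in #20341 is to validate instead of asserting (names below are illustrative, not the actual Rudder API): a smart constructor that returns an error for an empty certificate lets the cache update report a per-node failure instead of killing the JVM.

// Illustrative sketch, not the actual Rudder code: replace the constructor assert with
// validation that returns an error value, so one malformed node cannot crash the webapp.
object ValidatedCertificateSketch {

  final case class Certificate private (value: String)

  object Certificate {
    // Smart constructor: Left on empty input instead of throwing AssertionError.
    def make(value: String): Either[String, Certificate] =
      if (value.trim.nonEmpty) Right(new Certificate(value))
      else Left("Security token has an empty certificate value")
  }

  def main(args: Array[String]): Unit = {
    println(Certificate.make(""))                   // Left(...): logged for that node only
    println(Certificate.make("-----BEGIN CERT..."))  // Right(Certificate(...))
  }
}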
Updated by Vincent MEMBRÉ about 3 years ago
- Target version changed from 7.0.0~rc2 to 7.0.0~rc3
Updated by Vincent MEMBRÉ almost 3 years ago
- Target version changed from 7.0.0~rc3 to 7.0.0
- Priority changed from 124 to 122
Updated by Vincent MEMBRÉ almost 3 years ago
- Target version changed from 7.0.0 to 7.0.1
Updated by Vincent MEMBRÉ almost 3 years ago
- Target version changed from 7.0.1 to 7.0.2
- Priority changed from 122 to 121
Updated by Vincent MEMBRÉ over 2 years ago
- Target version changed from 7.0.2 to 7.0.3
- Priority changed from 121 to 117
Updated by Vincent MEMBRÉ over 2 years ago
- Target version changed from 7.0.3 to 7.0.4
- Priority changed from 117 to 115
Updated by Vincent MEMBRÉ over 2 years ago
- Target version changed from 7.0.4 to 7.0.5
- Priority changed from 115 to 113
Updated by François ARMAND over 2 years ago
- Status changed from New to Resolved
- Priority changed from 113 to 111
The problem was "solved" by the change in the linked ticket. If it occurs again, we will open a new ticket.