Bug #10646
closed"SocketException(message='Socket closed'" error at the end of generation with 500 nodes
Status: Released
Priority: N/A
Assignee:
Category: Web - Config management
Target version:
Pull Request:
Severity: Critical - prevents main use of Rudder | no workaround | data loss | security
UX impact:
User visibility: Infrequent - complex configurations | third party integrations
Effort required:
Priority: 54
Name check:
Fix check:
Regression:
Description
On a fairly large installation (2000 nodes, with 500 nodes accepted at once), I got the following error at the end of generation:
⇨ Can not save (modify) 'cn=Nodes Configuration,ou=Rudder,cn=rudder-configuration': An error occurred while attempting to send the LDAP message to server localhost:389: SocketException(message='Socket closed', trace='socketWrite(SocketOutputStream.java:118) / write(SocketOutputStream.java:155) / write(BufferedOutputStream.java:122) / write(ByteStringBuffer.java:1664) / writeTo(ASN1Buffer.java:851) / sendMessage(LDAPConnectionInternals.java:525) / sendMessage(LDAPConnection.java:4249) / processAsync(ModifyRequest.java:745) / process(ModifyRequest.java:650) / modify(LDAPConnection.java:2745) / $anonfun$modifyAction$3(LDAPConnection.scala:541) / apply(Exception.scala:224) / $anonfun$modifyAction$1(LDAPConnection.scala:541) / applyMod(LDAPConnection.scala:489) / $anonfun$applyModify$2(LDAPConnection.scala:552) / save(LDAPConnection.scala:609) / $anonfun$save$8(NodeConfigurationCacheRepository.scala:451) / flatMap(Box.scala:808) / $anonfun$save$4(NodeConfigurationCacheRepository.scala:447) / flatMap(Box.scala:808) / $anonfun$save$3(NodeConfigurationCacheRepository.scala:446) / $anonfun$flatMap$1(LDAPConnectionProvider.scala:117) / withCon(LDAPConnectionProvider.scala:154) / withCon$(LDAPConnectionProvider.scala:143) / withCon(LDAPConnectionProvider.scala:385) / flatMap(LDAPConnectionProvider.scala:116) / flatMap$(LDAPConnectionProvider.scala:115) / flatMap(LDAPConnectionProvider.scala:385) / save(NodeConfigurationCacheRepository.scala:445) / cacheNodeConfiguration(NodeConfigurationServiceImpl.scala:72) / $anonfun$writeNodeConfigurations$4(DeploymentService.scala:1010) / flatMap(Box.scala:808) / writeNodeConfigurations(DeploymentService.scala:1004) / writeNodeConfigurations$(DeploymentService.scala:992) / writeNodeConfigurations(DeploymentService.scala:480) / $anonfun$deploy$71(DeploymentService.scala:232) / flatMap(Box.scala:808) / $anonfun$deploy$65(DeploymentService.scala:221) / flatMap(Box.scala:808) / $anonfun$deploy$61(DeploymentService.scala:215) / flatMap(Box.scala:808) / $anonfun$deploy$58(DeploymentService.scala:214) / flatMap(Box.scala:808) / $anonfun$deploy$54(DeploymentService.scala:209) / flatMap(Box.scala:808) / $anonfun$deploy$50(DeploymentService.scala:204) / flatMap(Box.scala:808) / $anonfun$deploy$47(DeploymentService.scala:197) / flatMap(Box.scala:808) / $anonfun$deploy$43(DeploymentService.scala:188) / flatMap(Box.scala:808) / $anonfun$deploy$38(DeploymentService.scala:174) / flatMap(Box.scala:808) / $anonfun$deploy$36(DeploymentService.scala:173) / flatMap(Box.scala:808) / $anonfun$deploy$34(DeploymentService.scala:172) / flatMap(Box.scala:808) / $anonfun$deploy$33(DeploymentService.scala:171) / flatMap(Box.scala:808) / $anonfun$deploy$31(DeploymentService.scala:170) / flatMap(Box.scala:808) / $anonfun$deploy$28(DeploymentService.scala:167) / flatMap(Box.scala:808) / $anonfun$deploy$24(DeploymentService.scala:164) / flatMap(Box.scala:808) / $anonfun$deploy$20(DeploymentService.scala:161) / flatMap(Box.scala:808) / $anonfun$deploy$16(DeploymentService.scala:158) / flatMap(Box.scala:808) / $anonfun$deploy$12(DeploymentService.scala:155) / flatMap(Box.scala:808) / $anonfun$deploy$8(DeploymentService.scala:152) / flatMap(Box.scala:808) / $anonfun$deploy$5(DeploymentService.scala:147) / flatMap(Box.scala:808) / $anonfun$deploy$2(DeploymentService.scala:142) / flatMap(Box.scala:808) / deploy(DeploymentService.scala:141) / deploy$(DeploymentService.scala:127) / deploy(DeploymentService.scala:480) / applyOrElse(AsyncDeploymentAgent.scala:350) / 
apply(AbstractPartialFunction.scala:34) / execTranslate(LiftActor.scala:504) / execTranslate$(LiftActor.scala:496) / execTranslate(AsyncDeploymentAgent.scala:342) / proc2(LiftActor.scala:305) / $anonfun$processMailbox$1(LiftActor.scala:227) / apply(JFunction0$mcV$sp.java:12) / around(LiftActor.scala:241) / around$(LiftActor.scala:240) / around(AsyncDeploymentAgent.scala:342) / processMailbox(LiftActor.scala:227) / $anonfun$$bang$3(LiftActor.scala:190) / run(LiftActor.scala:76) / runWorker(ThreadPoolExecutor.java:1142) / run(ThreadPoolExecutor.java:617) / run(Thread.java:745)', revision=24201)
There is no error on the LDAP side in the logs.
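One pattern consistent with this behaviour (a hypothesis, not confirmed in this ticket) is slapd silently closing the connection when an incoming LDAP message exceeds its maximum accepted size: the server drops the socket without logging anything, and the client only sees a SocketException on write. On the client side, a minimal sketch of a retry-once wrapper around the UnboundID SDK modify call (the call failing in the trace above) could look like the following; modifyWithRetry and the "description" attribute are hypothetical names for illustration, not part of Rudder's code:

import com.unboundid.ldap.sdk.{LDAPConnection, LDAPException, Modification, ModificationType, ModifyRequest, ResultCode}

// Hypothetical sketch (not Rudder's actual fix): reconnect and retry a
// modify once when the server drops the socket, which the UnboundID SDK
// typically surfaces as ResultCode.SERVER_DOWN.
def modifyWithRetry(con: LDAPConnection, req: ModifyRequest, retries: Int = 1): Unit =
  try con.modify(req)
  catch {
    case e: LDAPException if e.getResultCode == ResultCode.SERVER_DOWN && retries > 0 =>
      con.reconnect() // re-open the dropped connection before retrying
      modifyWithRetry(con, req, retries - 1)
  }

// Usage against the entry named in the error above ("description" is a
// placeholder attribute):
val con = new LDAPConnection("localhost", 389)
modifyWithRetry(con, new ModifyRequest(
  "cn=Nodes Configuration,ou=Rudder,cn=rudder-configuration",
  new Modification(ModificationType.REPLACE, "description", "updated")))

Note that a retry only helps if the failure is transient; if the request itself exceeds a size limit on the server, the retry will fail the same way and the limit has to be raised on the server side.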
Updated by Nicolas CHARLES almost 8 years ago
- Related to Bug #10645: Proxy error when accepting 500 inventories at the same time added
Updated by Jonathan CLARKE almost 8 years ago
- Subject changed from error at the end of generation, after accepting 500 nodes to Error at the end of generation after accepting 500 nodes
- Severity set to Critical - prevents main use of Rudder | no workaround | data loss | security
- User visibility changed from Infrequent - complex configurations | third party integrations to Operational - other Techniques | Technique editor | Rudder settings
- Priority changed from 0 to 60
Updated by Jonathan CLARKE almost 8 years ago
- User visibility changed from Operational - other Techniques | Technique editor | Rudder settings to Infrequent - complex configurations | third party integrations
- Priority changed from 60 to 49
Updated by Nicolas CHARLES almost 8 years ago
- Severity deleted (Critical - prevents main use of Rudder | no workaround | data loss | security)
- Priority changed from 49 to 0
Updated by Nicolas CHARLES almost 8 years ago
- Severity set to Critical - prevents main use of Rudder | no workaround | data loss | security
- Priority changed from 0 to 49
Updated by Alexis Mousset almost 8 years ago
- Target version changed from 4.1.5 to 4.1.6
- Priority changed from 49 to 48
Updated by Vincent MEMBRÉ over 7 years ago
- Target version changed from 4.1.6 to 4.1.7
- Priority changed from 64 to 62
Updated by Vincent MEMBRÉ over 7 years ago
- Target version changed from 4.1.8 to 4.1.9
- Priority changed from 62 to 60
Updated by Vincent MEMBRÉ over 7 years ago
- Target version changed from 4.1.9 to 4.1.10
- Priority changed from 59 to 58
Updated by Vincent MEMBRÉ about 7 years ago
- Target version changed from 4.1.10 to 4.1.11
- Priority changed from 58 to 56
Updated by Vincent MEMBRÉ about 7 years ago
- Target version changed from 4.1.11 to 4.1.12
- Priority changed from 56 to 54
Updated by François ARMAND over 6 years ago
- Status changed from In progress to Pending technical review
- Assignee changed from François ARMAND to Nicolas CHARLES
- Pull Request set to https://github.com/Normation/rudder-packages/pull/1626
Updated by Nicolas CHARLES over 6 years ago
- Status changed from Pending technical review to In progress
Updated by Nicolas CHARLES over 6 years ago
- Status changed from In progress to Pending technical review
- Assignee changed from Nicolas CHARLES to François ARMAND
- Pull Request changed from https://github.com/Normation/rudder-packages/pull/1626 to https://github.com/Normation/rudder-packages/pull/1627
Updated by Rudder Quality Assistant over 6 years ago
- Assignee changed from François ARMAND to Nicolas CHARLES
Updated by François ARMAND over 6 years ago
- Status changed from Pending technical review to Pending release
Updated by François ARMAND over 6 years ago
- Subject changed from Error at the end of generation after accepting 500 nodes to "SocketException(message='Socket closed'" error at the end of generation with 500 nodes
Updated by François ARMAND over 6 years ago
- Related to Bug #13256: LDAP IO error on generation with a lot of nodes added