kellogs . opened 1 decade ago
LE: this happens during a Tsung load test that targets both servers and fires a maximum of 360 req/s (40 auth gets + 40 auth sets + 40 presence available + 160 custom IQs + 40 messages + 40 presence unavailable per second).
I am not 100% sure, as I have too little information, but as far as I remember this problem was fixed in version 5.2.1 and later, and for sure in version 5.3.0. It could also be a misconfiguration of the cluster which causes incorrect behavior. To be certain we would need a sample of the data that contains so many subelements in a single XML element.
Here are two cases:
Always followed by:
This is strange. The above data should not trigger this error. Could you try to reproduce the problem on our latest dev version, 5.3.0? If it still happens we will investigate.
Built tigase-server 800c2460 from the master branch and it is almost fine when the elements limit is set to 2 million (had one DoS warning), but not so fine at 700k, where there were a few more:

2014-09-23 22:53:31.579 [pool-8-thread-8] XMPPIOService.processSocketData() INFO: null, type: connect, Socket: nullSocket[addr=server25.domain2.com/192.168.101.25,port=5277,localport=33896], jid: null, Incorrect XML data: sess-man@server25.domain2.com+39111364@domain1.comtsungc2s@server25.domain2.com/192.168.101.25_1443_192.168.101.34_62708e76b6205-d044-4479-82e9-845d2b8a71182004sess-man@server25.domain2.com, stopping connection: null, exception: tigase.xmpp.XMPPParserException: Too many elements for staza, possible DoS attack.Current service class tigase.xmpp.XMPPIOService limit of elements: 700000

2014-09-23 22:57:40.542 [pool-8-thread-17] XMPPIOService.processSocketData() INFO: null, type: accept, Socket: nullSocket[addr=/192.168.101.26,port=33822,localport=5277], jid: null, Incorrect XML data: sess-man@server26.domain2.comsess-man@server26.domain2.comsess-man@server26.domain2.comhttps://server26/blah/blahblah/+39140815@domain1.com//profile/IMG491-051956.jpg?temp_url_sig=c0cddc9109f93b6be7a034ee8694fb69b1054e22&temp_url_expires=2357564260https://server26/blah/blahblah/+39140815@domain1.com//profile/IMG491-051956.jpg?temp_url_sig=fc671e68d12670db5ffd5292ac2855ea57bc5899&temp_url_expires=2357564260, stopping connection: null, exception: tigase.xmpp.XMPPParserException: Too many elements for staza, possible DoS attack.Current service class tigase.xmpp.XMPPIOService limit of elements: 700000

Oh, and no rosters for these runs; the initial setup had some dynamic rosters in place.
To me, it looks like a problem with the installation, a configuration mistake, or some custom code causing a kind of loop which bounces packets back and forth between cluster nodes.
Hmm, I tried out the initial tigase-server.jar we were testing with, but this time without that custom component, just presence and message stanzas, and there was no more DoS. What the custom component causing the DoS did was receive some IQs, process them asynchronously, and then send back the results from the processing threads (non-Tigase threads) via a call to tigase.server.AbstractMessageReceiver.addOutPacket(Packet). Perhaps there is a different way of returning a result when the processing takes place asynchronously? Thank you!
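For reference, the pattern being described would look roughly like this. This is a minimal sketch, not the reporter's actual component: the class name, pool size and buildResult() helper are hypothetical, while processPacket() and addOutPacket() are the AbstractMessageReceiver methods named above.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import tigase.server.AbstractMessageReceiver;
import tigase.server.Packet;

// Sketch only: an AbstractMessageReceiver that processes IQs asynchronously.
public class AsyncIqComponent extends AbstractMessageReceiver {

    // Worker pool whose threads are not managed by Tigase.
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    @Override
    public void processPacket(Packet packet) {
        // Return immediately; the processing happens on a worker thread.
        workers.execute(() -> {
            Packet result = buildResult(packet); // hypothetical helper
            // The result is injected back into Tigase from a non-Tigase
            // thread via addOutPacket(), as described above.
            addOutPacket(result);
        });
    }

    // Placeholder for the asynchronous processing; a real component would
    // build a proper result stanza here.
    private Packet buildResult(Packet packet) {
        return packet.okResult((String) null, 0);
    }
}
```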
I think what you do is correct in principle. Most likely the problem is incorrect addressing in either the Packet object or the stanza element.
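For illustration, building on the sketch above, the addressing distinction mentioned here could look roughly like this. The buildResult() helper is hypothetical; getComponentId() comes from Tigase's BasicComponent, and okResult(), getPacketFrom(), setPacketFrom() and setPacketTo() are standard Packet methods.

```java
// Sketch only: a result builder that sets both address pairs a Packet carries.
private Packet buildResult(Packet original) {
    // Stanza addressing: okResult() creates a result IQ whose stanza
    // from/to are derived from the original request.
    Packet result = original.okResult((String) null, 0);
    // Packet addressing: these internal addresses drive routing between
    // components and cluster nodes; getting them wrong is what can bounce
    // a packet back and forth between nodes.
    result.setPacketTo(original.getPacketFrom());
    result.setPacketFrom(getComponentId());
    return result;
}
```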
Not a bug in our code. |
Type: Bug
Priority: Normal
Assignee:
RedmineID: 2295
Hi,
in a cluster setup with two quite powerful machines (16 CPUs each, with 48 GB and 72 GB of RAM respectively) I was surprised to see this warning. A 10 million elements limit seems not to be enough (I lowered it from 100 million, where the 48 GB server could not keep up and mayhem broke loose). Would it not be better not to glue all the stanzas together into those mega-stanzas that the cluster nodes exchange, but instead to place a limit on the maximum number of stanzas that can be glued together before sending them through the cluster socket (see the sketch after this note)? I think it would be a great RAM saver and a great plus for overall Tigase server health.
Thank you!
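For illustration, the cap suggested above could look roughly like this. This is not Tigase's actual clustering code; every name here apart from tigase.xml.Element is hypothetical.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

import tigase.xml.Element;

// Sketch only: drain queued stanzas in bounded batches instead of gluing
// everything into one mega-stanza per cluster packet.
class ClusterBatcher {

    // Hypothetical cap on how many stanzas are glued together per cluster packet.
    private static final int MAX_STANZAS_PER_CLUSTER_PACKET = 1000;

    void flush(Queue<Element> pending) {
        while (!pending.isEmpty()) {
            List<Element> batch = new ArrayList<>(MAX_STANZAS_PER_CLUSTER_PACKET);
            Element stanza;
            while (batch.size() < MAX_STANZAS_PER_CLUSTER_PACKET
                    && (stanza = pending.poll()) != null) {
                batch.add(stanza);
            }
            // Each batch becomes one cluster packet, keeping the element
            // count (and RAM use) per packet bounded.
            sendOverClusterSocket(batch);
        }
    }

    // Placeholder for whatever actually writes to the cluster connection.
    void sendOverClusterSocket(List<Element> batch) {
        // ...
    }
}
```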