Andrzej Wójcik (Tigase) opened 7 years ago
|
|
I've found this issue as a result of an email I received from Wojciech about recent changes made to fix an issue with a subscribe presence being stored in offline storage even though it was delivered to the user. I think that change of behavior (running post-processing only for packets that no processor handled) breaks offline storage of messages sent to a connection with a negative presence priority. I've prepared proper test cases to cover this issue, as well as a test case for the issue whose fix caused this one.
|
Andrzej Wójcik wrote:
Yes, my change to SessionManager was a result of one of the TTS test cases failing (basically …). So I decided to fix the …
OK, but right now we could fix the implementations of … %kobit - what do you think?
|
I'm not sure whether it was a bug or not, but it was working and now it is not. To be honest, I do not see any point in having a method which is called only if none of the processors handled the particular packet. What would be the point of such a method if, in that case, the default packet handler is called a few lines later anyway? Also, the general idea behind processors is that we need a processor only if it has to do something with a packet; otherwise we just forward the packet to its destination.
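For illustration only, a minimal sketch of the two behaviors being debated here; `PacketDispatcher`, `Processor` and `Postprocessor` are hypothetical names and do not reflect the actual SessionManager API:

```java
import java.util.List;

// Hypothetical types standing in for Tigase's plugins; not the real API.
interface Processor     { boolean process(String packet); }   // true = packet was handled
interface Postprocessor { void postProcess(String packet); }  // e.g. store a message offline

class PacketDispatcher {
    private final List<Processor> processors;
    private final List<Postprocessor> postProcessors;
    private final boolean postProcessOnlyUnhandled;            // the behavior under discussion

    PacketDispatcher(List<Processor> processors,
                     List<Postprocessor> postProcessors,
                     boolean postProcessOnlyUnhandled) {
        this.processors = processors;
        this.postProcessors = postProcessors;
        this.postProcessOnlyUnhandled = postProcessOnlyUnhandled;
    }

    void dispatch(String packet) {
        boolean handled = false;
        for (Processor p : processors) {
            handled |= p.process(packet);
        }
        // Documented design: post-processors run only for packets that no
        // processor handled (e.g. offline storage for unavailable users).
        // The pre-"fix" behavior ran them unconditionally, which is what kept
        // offline storage working for sessions with negative presence priority.
        if (!postProcessOnlyUnhandled || !handled) {
            for (Postprocessor pp : postProcessors) {
                pp.postProcess(packet);
            }
        }
        if (!handled) {
            // the default packet handler runs here, "a few lines later"
        }
    }
}
```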
|
I guess this is a question for Artur… Also - it was working just fine in 7.1.x, so I guess the logic that handles offline messages changed somewhere in the 7.2.x line.
|
Wojciech Kapcia wrote:
Ok, but... what is the question?
I am not sure if I correctly follow the discussion. Unfortunately it's been a long time since I looked into the source code, so I do not remember and do not understand all the logic and implementation details. The implementation should follow the spec (RFC or XEP). If the spec is not specific enough, then use common sense: look at it from the end-user perspective, from the service perspective, from the performance perspective, etc. I am even OK with modifying some code specifically for testing reasons, as long as that is the last criterion and does not affect the above.
|
Artur Hefczyc wrote:
Whether:
OR
I guess if it works now, and no one was complaining about it before, we can drop the requirement of "not being processed by a processor" in the case of PostProcessors (and revert my recent "fix").
|
It is kind of confusing for me. The way I remember designing and using it was to have post-processing for packets which are not handled by any processor, and the use case for this was storing messages in offline storage for users who are not online. From what you describe, it looks like it is now the other way around. So my answer is: we should think of use cases for either logic and implement it the way that helps with those use cases. I am not up to date enough with the current code and implementation to make a sensible decision. On the other hand, if post-processing is always executed, regardless of whether processing was done, what is the point of post-processing at all? It is just another kind of processing then.
|
Artur Hefczyc wrote:
That's what the API documentation says and what my fix does/brings back (I'm not sure why it wasn't working like that before).
I don't know, but the above kind of contradicts the below. Andrzej Wójcik wrote:
%andrzej.wojcik - a bit silly question, but if those should be the same (from your point of view), then why put the code handling offline messages in a post-processor? Or was it only like that because it was done this way in the past? Let me stress it again - I only adjusted the code to be in line with what was intended/described: if the packet was processed, then it should not go through post-processing. However, looking at this particular SM code, the part responsible for this two-step distinction (process/postProcess) has been like that for a long time... But I think we are due to rethink how plugins/processors work (AFAIR there was already an idea of a way to "chain" execution of the plugins, or better yet, to define dependencies between plugins/processors -- it may be more feasible with the Kernel in place?)
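Purely as a sketch of that idea - nothing like this exists in the current code, and all names below are made up - declaring dependencies between processors could look roughly like this:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
import java.util.Set;

// Hypothetical: each plugin declares which plugin IDs it must run after.
interface Plugin {
    String id();
    Set<String> runAfter();
}

class PluginChain {
    // Order plugins so that every plugin runs after its declared dependencies.
    static List<Plugin> order(Collection<Plugin> plugins) {
        List<Plugin> ordered = new ArrayList<>();
        Set<String> done = new HashSet<>();
        List<Plugin> pending = new ArrayList<>(plugins);
        while (!pending.isEmpty()) {
            boolean progress = false;
            for (Iterator<Plugin> it = pending.iterator(); it.hasNext(); ) {
                Plugin p = it.next();
                if (done.containsAll(p.runAfter())) {
                    ordered.add(p);
                    done.add(p.id());
                    it.remove();
                    progress = true;
                }
            }
            if (!progress) {
                // unsatisfied or cyclic dependency between plugins
                throw new IllegalStateException("cannot order plugins: " + pending);
            }
        }
        return ordered;
    }
}
```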
|
%wojtek To be honest, the offline processing code was always in … Some time ago it was possible to force usage of … If we move this code to … While doing this we should consider unification as … Also, it would be very good if we could create a separate processor for offline messages, so that storage of offline messages could be done on a separate thread pool. Placing it on the same processing pool as AMP and message delivery might not be a good idea. Either way, we should fix this ASAP, as we are releasing SNAPSHOT builds which may drop offline messages. Wojtek, +1 for changing processing to some kind of chaining or ordering of processors, but the question is performance.
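A rough sketch of that last idea, with made-up names (this is not the existing OfflineMessages plugin nor the real processing-pool API): a dedicated processor that hands offline storage to its own thread pool, separate from the pool used by AMP and message delivery.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch: a stand-alone offline-message processor with its own
// thread pool, so slow repository writes do not block message delivery.
class OfflineMessageProcessor {
    // separate pool, sized independently from the delivery/AMP pool
    private final ExecutorService storagePool = Executors.newFixedThreadPool(4);

    void process(String packet, boolean deliveredToAnyResource) {
        if (deliveredToAnyResource) {
            return;                                       // nothing to store
        }
        // hand the write off to the storage pool instead of the delivery thread
        storagePool.submit(() -> storeOffline(packet));
    }

    private void storeOffline(String packet) {
        // write the message to the offline message repository here
        System.out.println("stored offline: " + packet);
    }

    void shutdown() {
        storagePool.shutdown();
    }
}
```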
|
Andrzej Wójcik wrote:
I think that was the initial idea behind having a separate postProcessor - so, as the name suggests, … And now I think I grasp the idea better - both AMP and … Right now the offline message logic is a tiny bit more complicated and intertwined...
For now I've reverted my "fix". Your proposal from #5692#note-2 should fix the problem at hand, but I'm not sure if it's ideal - as you suggested, we probably should revisit how offline messages are processed and handled. At any rate, I've gone ahead and applied it.
|
OK, so reverting the change and applying Andrzej's fix to offline messages resolved both issues. However, there is still a problem at hand, commented on in #5692#note-13 (and later):
%kobit %andrzej.wojcik %bmalkow ?
|
%wojtek I would suggest leaving it as it is for 7.2.0, so we can get closer to the 7.2.0 release, and modifying the logic in 7.3.0 by creating a new processor for offline messages.
|
I wasn't suggesting that we should deal with it in 7.2.0 - especially if we want bigger changes in Plugins.
|
I'm closing this issue as the reported issue was fixed.
Type | Bug
Priority | Normal
Assignee |
RedmineID | 5692
Version | tigase-server-8.0.0
Spent time | 7h 30m
When messages are sent to a client with a negative presence priority, they should be stored in offline storage and delivered once the presence priority changes to a non-negative value.
This issue is a result of a change made in commit:86faf6606a95c6e1793f37696bcc3d20c9a41706
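As a minimal sketch of the rule described above (illustrative names, not the actual server code): a message goes to offline storage when the recipient has no connected resource with a non-negative presence priority.

```java
import java.util.List;

// Minimal illustration of the delivery rule; not the actual SessionManager code.
class DeliveryDecision {
    /** True when the message should be stored offline instead of delivered. */
    static boolean shouldStoreOffline(List<Integer> resourcePriorities) {
        // No connected resource, or every resource advertises a negative
        // presence priority: store the message and deliver it later, once a
        // resource switches to a non-negative priority.
        return resourcePriorities.stream().noneMatch(priority -> priority >= 0);
    }

    public static void main(String[] args) {
        System.out.println(shouldStoreOffline(List.of(-1)));     // true  -> store offline
        System.out.println(shouldStoreOffline(List.of(-1, 5)));  // false -> deliver to priority 5
    }
}
```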