Respect history preferences on MUC join (#157)
Unknown opened 5 years ago

Describe the bug

History preferences are not respected on MUC join.

To Reproduce

Steps to reproduce the behavior:

  1. Join tigase@muc.tigase.org:
<presence xmlns="jabber:client" to="tigase@muc.tigase.org/lovetox" id="5f6ce1f8-df8a-4b66-b340-082d6f4ff181" from="lovetox@temptatio.dev/gajim.GLEU3VID">
  <c xmlns="http://jabber.org/protocol/caps" hash="sha-1" node="http://gajim.org" ver="qxfoxERhMvHS+QzDA/Q5OlnOavU=" />
  <x xmlns="http://jabber.org/protocol/muc">
    <history maxchars="0" />
  </x>
</presence>
  2. The server still sends the full MUC history:
<!-- Incoming 05.10.2019 10:06:08 (lovetox@temptatio.dev) -->
<message xmlns="jabber:client" xml:lang="de" to="lovetox@temptatio.dev/gajim.GLEU3VID" from="tigase@muc.tigase.org/Holger" type="groupchat" id="350594463731">
  <delay xmlns="urn:xmpp:delay" from="tigase@muc.tigase.org" stamp="2019-08-27T16:51:58.556Z" />
  <body>patrik and me have seen issues with querying the OMEMO nodes of sure.im users.  But I didn't get to looking into details yet.  (Plus I was under the impression that the OMEMO code is still known not to be incomplete and wasn't sure whether bug reports are appreciated at this point.)</body>
</message>

<!-- Incoming 05.10.2019 10:06:08 (lovetox@temptatio.dev) -->
<message xmlns="jabber:client" xml:lang="en" to="lovetox@temptatio.dev/gajim.GLEU3VID" from="tigase@muc.tigase.org/hse" type="groupchat" id="0a945a75-c962-4a01-8075-409c1417f2b7">
  <origin-id xmlns="urn:xmpp:sid:0" id="0a945a75-c962-4a01-8075-409c1417f2b7" />
  <delay xmlns="urn:xmpp:delay" from="tigase@muc.tigase.org" stamp="2019-08-27T16:53:30.955Z" />
  <body>...dismail.de is running Prosody</body>
</message>

<!-- Incoming 05.10.2019 10:06:08 (lovetox@temptatio.dev) -->
<message xmlns="jabber:client" xml:lang="en" to="lovetox@temptatio.dev/gajim.GLEU3VID" from="tigase@muc.tigase.org/hse" type="groupchat" id="d84bc982-ff9d-4fa1-93bd-45f6b98a5be5">
  <origin-id xmlns="urn:xmpp:sid:0" id="d84bc982-ff9d-4fa1-93bd-45f6b98a5be5" />
  <delay xmlns="urn:xmpp:delay" from="tigase@muc.tigase.org" stamp="2019-08-27T16:55:12.298Z" />
  <body>I mean I read about Siskin to be OMEMO ready, so I startet some test. You know, IM is interessted for such Apple clients</body>
</message>
Unknown commented 5 years ago

#muc-124

Unknown commented 5 years ago

This issue is now fixed with support for requesting no history from the MUC room via the maxchars attribute set to 0.

As for the maxchars attribute, I personally feel that it is not needed in current implementations and should no longer be used. In a time when conversations in MUC rooms can be encrypted with OMEMO, limiting the amount of returned history by the character count of the XML stanzas makes little sense. I think clients should use the maxstanzas or since attributes instead, as they are better suited to the current state of XMPP and MUC.
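For illustration, a join presence using those attributes could look like this (room JID, nickname, and values are placeholders; the history element is defined in XEP-0045):

<presence to="room@muc.example.org/nick">
  <x xmlns="http://jabber.org/protocol/muc">
    <!-- at most 20 messages from the room history -->
    <history maxstanzas="20" />
    <!-- or: only messages newer than the given UTC timestamp -->
    <!-- <history since="2019-10-05T00:00:00Z" /> -->
  </x>
</presence>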

Unknown commented 5 years ago

Hi,

I don't see how encryption has anything to do with the way we request history.

That is the spec, and it explicitly asks for this behavior.

see: https://xmpp.org/extensions/xep-0045.html#enter-managehistory

If the client wishes to receive no history, it MUST set the 'maxchars' attribute to a value of "0" (zero).

Unknown commented 5 years ago

I know the spec and what is in it. The changes in Tigase were made to match the XEP. I was just pointing out that, in my opinion, using maxchars is not a great idea nowadays.

Looking at it from the client's perspective and from the XEP, what is the difference between maxchars set to 0 and maxstanzas set to 0? In both cases you get no messages from the room history.
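For illustration, either of these history elements in the join presence should result in no history being sent (per XEP-0045):

<x xmlns="http://jabber.org/protocol/muc">
  <history maxchars="0" />
</x>

<x xmlns="http://jabber.org/protocol/muc">
  <history maxstanzas="0" />
</x>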

Unknown commented 5 years ago

I think it's not worth spending any thought on that.

This is an almost two-decade-old spec, and it has much bigger problems than the name of the attribute that signals "no history".

Anyway, thanks for the fast fix :)

Unknown commented 1 year ago

@lovetox: Has this been solved?
