Review and optimize SQL schema and queries used during message synchronization.
Andrzej Wójcik (Tigase) commented 3 years ago
After applying the changes from BeagleIM and running a profiler on SiskinIM, I found that a message sync of 11000 messages (the last month) took 1m 30s against tigase.org.
After applying the index changes we gained only 2s, but CPU usage on SiskinIM was lower.
During both tests, SiskinIM reported periods of high CPU usage and periods of low or zero CPU usage. Initially, I assumed this was caused by the long round-trip time (ping of over 120ms to the server), but the low-CPU periods were rather long (over 500ms).
After reviewing the server configuration, I've decided to disable logging, as it was set to FINEST and generated a lot of writes. With this change, sync time was reduced to 1m 4s.
I've decided to increase the number of messages synced in a single batch (from 150 to 300), which resulted in a speedup: the sync took 49s.
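For reference, a minimal sketch of where the batch size enters the protocol: the XEP-0059 RSM <max/> element inside the XEP-0313 MAM query controls how many messages the server returns per page. The function below is a hypothetical illustration (the name and structure are not the actual SiskinIM/Martin code), only the stanza shape follows the standard MAM form:

```swift
// Hypothetical sketch, not the real client code: builds a MAM (XEP-0313) query
// whose RSM (XEP-0059) <max/> element is the per-batch message count
// discussed above (raised here from 150 to 300).
func mamQueryStanza(queryId: String, after lastId: String?, batchSize: Int = 300) -> String {
    // Continue paging from the RSM id returned by the previous batch, if any.
    let afterElement = lastId.map { "<after>\($0)</after>" } ?? ""
    return """
    <iq type='set' id='\(queryId)'>
      <query xmlns='urn:xmpp:mam:2' queryid='\(queryId)'>
        <set xmlns='http://jabber.org/protocol/rsm'>
          <max>\(batchSize)</max>\(afterElement)
        </set>
      </query>
    </iq>
    """
}
```

Each response carries an RSM <last/> id that is fed back as the `after` value for the next page, so a larger <max/> directly means fewer pages and fewer round trips.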
While reviewing the behavior of tigase.org during message sync, I've noticed that it "hangs" during the sync and becomes less responsive - and I was syncing data from a single account! When checking server statistics, I've noticed that the sync caused a lot of CPU usage (top reported over 100% for the whole synchronization time). Due to that, I've decided to review MAM performance in Tigase XMPP Server, see mam-75.
Andrzej Wójcik (Tigase) commented 3 years ago
After the changes from mam-75 were applied, I reran the tests, and 11000 messages synced (batch size 300) in 19s (down from 49s with the same batch size without those changes).
Assuming the round-trip time was still around 350ms, the MAM message sync itself (retrieval from the database, processing on the server, and processing on the client side) took only about 5s; the rest, 14s, was the time needed for 39 round trips to the server.
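As a rough sanity check of that split, using only the figures reported in this comment (assumed values, not re-measured):

```swift
// Back-of-the-envelope split of the 19s sync, using the numbers reported above.
let roundTrips = 39.0
let roundTripTime = 0.35        // seconds (~350 ms RTT to the server)
let totalSyncTime = 19.0        // seconds (measured full sync time)

let networkTime = roundTrips * roundTripTime       // ≈ 13.65 s, i.e. roughly 14 s spent waiting on the network
let processingTime = totalSyncTime - networkTime   // ≈ 5 s left for DB retrieval, server-side and client-side processing
```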
With those results, I think it might be good to increase the MAM batch size to 300 messages (from 150), as it would speed up SiskinIM startup time (sync time), even though such a large batch would rarely be used.