Add publishing executor with rate limiting (#102)
Closed
wojciech.kapcia@tigase.net opened 4 years ago

Currently the only thing that somewhat limits the publication rate is a naive OOM-prevention mechanism, which pauses publication for a period of time if a certain memory limit is reached. At best this only prevents OOM, but the system can still be overwhelmed by a large number of notifications generated in a short period of time. What's more, incorrect configuration of the current settings can lead to a completely dysfunctional installation (a low memory limit would effectively cause constant GC and keep the JVM in a constantly paused state).

The idea is to have a dedicated Publishing Executor, which would operate in dedicated threads and would have its own rate limit (derived from system/machine parameters); a rough sketch of such an API is shown after the list below. There are a couple of benefits to this solution:

  • even for nodes with lots of subscribers (let's say 1 million) we would still be able to generate all notifications in a sane manner
  • having it operate in a separate thread would not block current publication processing (this is especially important in the case of REST API/ad-hoc publication)
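A rough sketch of the kind of API this could be, purely to illustrate the idea (all names here are hypothetical, not existing tigase-pubsub code):

```java
// Hypothetical sketch of a publishing executor API - not actual tigase-pubsub code.
public interface PublishingExecutor {

    enum Priority { LOW, NORMAL, HIGH }

    // Enqueue a notification-sending task; the executor decides when to run it,
    // keeping the overall execution rate below the configured limit.
    void submit(Priority priority, Runnable task);

    // Maximum number of tasks executed per second, derived from system/machine parameters.
    void setRateLimit(int tasksPerSecond);
}
```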

@andrzej.wojcik a couple of things to consider:

  • we should think about cases where there is constant pressure on the executor and it is impossible to flush all notifications (i.e. there would be a constant buildup and in the end it would overwhelm the system) - maybe reject publications at that point?
  • maybe we could add a configuration option to the executor which would allow specifying the priority of generated packets? (for example, in some chat systems 1-1 chats would be more important than public announcements done via pubsub)
wojciech.kapcia@tigase.net commented 4 years ago

@andrzej.wojcik you reworked pubsub and publication handling somewhat, but this part hasn't been implemented from what I saw (and you removed the delay and forced GC altogether: https://github.com/tigase/tigase-pubsub/blob/stable/src/main/java/tigase/pubsub/modules/PublishItemModule.java#L559) - correct?

Andrzej Wójcik (Tigase) commented 4 years ago

@wojtek Yes, the delay and forced GC were having a very negative impact on some installations, leading to performance issues and bottlenecks caused by constant calls to GC, so they were already removed.

The new code (delayed publication of notifications) is not implemented at this point.

Andrzej Wójcik (Tigase) commented 4 years ago

@wojtek I've added an initial implementation of the rate limiter, but we will need to clarify a few things.

Things to discuss:

  1. The limiter works without any limits (the limit is set to Integer.MAX_VALUE) as I do not know how to calculate the "correct" value for a given installation.
  2. The limiter has support for "detecting" overload and blocking new publications. Currently the value is set to Integer.MAX_VALUE, as this is the capacity of a single queue used on the backend.
  3. The limiter works on a single thread, as this was simple to implement and the actions for sending publications are very simple (just Element.clone() and then addOutPacket(Packet.packetInstance())) and fast. I suppose we could leave it as it is (a rough sketch of this approach is at the end of this comment).
  4. The limiter supports "priorities" for notification publications ('low', 'normal', 'high'), but we do not have any place to store this value. For now I'm using normal for each notification, but I suppose this could be placed in the PubSub node configuration, as we already have it in memory so we could easily retrieve it, and it would give us the required flexibility.
  5. I've created a simple SameThreadExecutor which could be used instead (if someone would like to drop limiting altogether).
  6. The provided "limiters" are actually "task" limiters rather than "publication" limiters, which means we could reuse them for queuing any tasks that need to be done (i.e. database cleanup could be done this way, so if we had 10 databases we would clean one every 6 minutes instead of cleaning them all in bulk every hour, etc.). I suppose we could consider moving them to tigase-utils so we can reuse them, once we confirm that they are working OK here.
  7. I'm not checking heap usage anywhere. Should we add this check somewhere?

Please review the code and answer the above questions; then we will be able to move forward with this task.
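To make points 3 and 4 a bit more concrete, here is a minimal sketch of a single-threaded, priority-aware, rate-limited task executor, assuming a simple "N tasks per one-second window" model. All class and member names are hypothetical; this is not the actual tigase-pubsub implementation:

```java
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch - not the actual tigase-pubsub classes.
public class RateLimitedTaskExecutor {

    public enum Priority { HIGH, NORMAL, LOW }   // ordinal order == execution order

    private static final class Task implements Comparable<Task> {
        final Priority priority;
        final long seq;          // preserves FIFO order within the same priority
        final Runnable action;

        Task(Priority priority, long seq, Runnable action) {
            this.priority = priority;
            this.seq = seq;
            this.action = action;
        }

        @Override
        public int compareTo(Task other) {
            int byPriority = priority.compareTo(other.priority);
            return byPriority != 0 ? byPriority : Long.compare(seq, other.seq);
        }
    }

    private final PriorityBlockingQueue<Task> queue = new PriorityBlockingQueue<>();
    private final AtomicLong counter = new AtomicLong();
    private final int limitPerSecond;

    public RateLimitedTaskExecutor(int limitPerSecond) {
        this.limitPerSecond = limitPerSecond;
        Thread worker = new Thread(this::run, "publishing-executor");
        worker.setDaemon(true);
        worker.start();
    }

    public void submit(Priority priority, Runnable action) {
        queue.offer(new Task(priority, counter.incrementAndGet(), action));
    }

    public int queueSize() {
        return queue.size();
    }

    // Single worker thread: execute at most limitPerSecond tasks in each one-second window.
    private void run() {
        try {
            while (true) {
                long windowStart = System.currentTimeMillis();
                int executed = 0;
                while (executed < limitPerSecond
                        && System.currentTimeMillis() - windowStart < 1000) {
                    Task task = queue.poll(100, TimeUnit.MILLISECONDS);
                    if (task != null) {
                        task.action.run();
                        executed++;
                    }
                }
                long elapsed = System.currentTimeMillis() - windowStart;
                if (elapsed < 1000) {
                    Thread.sleep(1000 - elapsed);   // limit reached - wait out the rest of the window
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```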

Andrzej Wójcik (Tigase) commented 4 years ago

@wojtek Do we need to do anything more, or can we close this?

wojciech.kapcia@tigase.net commented 4 years ago

The limiter works without any limits (the limit is set to Integer.MAX_VALUE) as I do not know how to calculate the "correct" value for a given installation. The limiter has support for "detecting" overload and blocking new publications. Currently the value is set to Integer.MAX_VALUE, as this is the capacity of a single queue used on the backend.

Determining exact values would probably require running a load test. However, as a rule of thumb we have always said that Tigase can process 10k packets per second on a single core, so maybe we could calculate it based on that? Maybe we should also log an INFO entry stating that the publication rate was throttled? That way we would avoid hammering installations and leaving users in the dark about why it's not performing fast enough.

The limiter works on a single thread, as this was simple to implement and the actions for sending publications are very simple (just Element.clone() and then addOutPacket(Packet.packetInstance())) and fast. I suppose we could leave it as it is.

It should be enough for now, and it would still be an improvement over what we have. In the future we could consider utilising an ExecutorService with a fixed thread pool, as sketched below.
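For reference, something along these lines could be a starting point if we ever go that route - just the standard JDK thread-pool API, sketched as an assumption rather than a concrete plan:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Possible future variant: a fixed pool sized to the machine instead of a single worker thread.
class NotificationPool {

    private final ExecutorService pool =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    void publish(Runnable sendNotification) {
        // sendNotification would clone the notification element and hand it to addOutPacket(...)
        pool.submit(sendNotification);
    }
}
```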

The limiter supports "priorities" for notification publications ('low', 'normal', 'high'), but we do not have any place to store this value. For now I'm using normal for each notification, but I suppose this could be placed in the PubSub node configuration, as we already have it in memory so we could easily retrieve it, and it would give us the required flexibility.

Sounds reasonable.

Provided "limiters" are actually "task" limiters and not "publication" limiters, which means that we could reuse them for queuing any tasks which need to be done (ie. cleanup of a database could be done this way so if we have 10 database we would clean one each 6 minutes instead of cleaning them every hour but in bulk, etc.). I suppose we could consider moving that to tigase-utils so we could reuse them after we confirm that they are working OK in here.

Agreed, though... somewhat similar to Executors in this aspect?

I'm not checking heap usage anywhere. Should we add this check somewhere?

I'd say that we should at least check heap usage (to avoid overloading the installation), but we should definitely avoid calls to GC. Rate limiting here will be more than enough.

A side comment on documentation: the admin and development guides should cover how to use it and how to configure it (priority per pubsub node), and on the dev side how to create your own (though extending the javadoc here should be more than enough).

Btw, for bigger changes and code reviews please use dedicated Pull Requests - it will make checking them easier.

Andrzej Wójcik (Tigase) commented 4 years ago

Determining exact values would probably require running a load test. However, as a rule of thumb we have always said that Tigase can process 10k packets per second on a single core, so maybe we could calculate it based on that?

I've decided to set the default to half of this value per core, so 5k packets per second per core. We need to keep in mind that PubSub publications may also be stored in the database or sent via S2S, so using half of 10k should work better than the full 10k (rough calculation sketched below).
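Roughly, the default could be derived like this (a sketch under the assumption that the limit is simply scaled by the number of available cores; the class and method names are illustrative):

```java
class RateLimitDefaults {

    // Sketch: half of the 10k packets/s per core rule of thumb, scaled by the available cores.
    static int defaultPublicationRateLimit() {
        int packetsPerSecondPerCore = 5_000;
        return Runtime.getRuntime().availableProcessors() * packetsPerSecondPerCore;
    }
}
```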

Maybe we should also log an INFO entry stating that the publication rate was throttled? That way we would avoid hammering installations and leaving users in the dark about why it's not performing fast enough.

I've added an INFO entry when throttling starts (including the publication rate and current queue size) and another when throttling ends.

In the future we could consider utilising an ExecutorService with a fixed thread pool.

We could, but I think it would be pointless. As I've stated, calling addOutPacket() is very fast, and that is all the closures passed to this executor are doing. Moreover, getting an executor to behave in the manner we expect using the standard JDK executors may be impossible (or tricky at best - I've not found a way to do it).

Agreed, though... somewhat similar to Executors in this aspect?

Yes, in the sense that it executes tasks, but it has a priority queue, which none of the standard executors have. Moreover, it has throttling, which is not available in any of them.

I'd say that we should at least check heap usage (to avoid overloading the installation), but we should definitely avoid calls to GC. Rate limiting here will be more than enough.

Sure, but what should the correct value be? As I remember, we had a lot of issues with checking percentage usage when the admin decided not to set the maximum memory: the JVM thought it was using over 90% of memory (and started throttling) while it was still able to resize the heap.

A side comment on documentation: the admin and development guides should cover how to use it and how to configure it (priority per pubsub node), and on the dev side how to create your own (though extending the javadoc here should be more than enough).

We do not currently have a way to configure the priority of execution per node - that was just an idea for further development. But the executor already has support for it, to make that possible later (as I think it could be useful).

As for documentation, I've added a description of how to configure the publishing rate, and the Javadoc is in place - see tigase.pubsub.utils.executors.Executor.

wojciech.kapcia@tigase.net commented 4 years ago

I'd say that we should at least check heap usage (to avoid overloading the installation), but we should definitely avoid calls to GC. Rate limiting here will be more than enough.

Sure, but what should the correct value be? As I remember, we had a lot of issues with checking percentage usage when the admin decided not to set the maximum memory: the JVM thought it was using over 90% of memory (and started throttling) while it was still able to resize the heap.

AFAIR the issue was somewhat different - the mechanism with forced GC could hammer the system by almost constantly calling GC (which quite often couldn't free up enough memory, so the subsequent check would trigger yet another GC call). With the throttled publishing mechanism, instead of forcing GC we would simply stop publishing more packets (hence avoiding running into OOM under a packet flood).

I think that with the above addition this can be closed.

I created issue #117 as a follow-up, targeted at the next minor version.

Andrzej Wójcik (Tigase) commented 4 years ago

I've implemented memory throttling in a separate branch for issue #102. This code needs to be reviewed before it is merged, to make sure it works as expected.

I've added 2 memory usage marks:

  • high - throttling should increase (default 90.0)
  • critical - publications should stop (default 99.9)

With those thresholds I've applied the following rules (sketched in code after the list):

  • below high - full throughput
  • above high but below the midpoint between high and critical - 2/3 of throughput
  • above that midpoint but below critical - 1/3 of throughput
  • at or above critical - full stop
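A sketch of how those rules could translate into code (threshold handling and names are illustrative only; the actual implementation is in the linked branch/PR):

```java
class MemoryThrottling {

    // Illustrative only - maps current heap usage (in %) to a fraction of the configured rate limit.
    static double throughputFactor(double usedHeapPercent, double high, double critical) {
        double midpoint = (high + critical) / 2;
        if (usedHeapPercent < high) {
            return 1.0;          // below high - full throughput
        } else if (usedHeapPercent < midpoint) {
            return 2.0 / 3.0;    // above high, below the midpoint of high and critical
        } else if (usedHeapPercent < critical) {
            return 1.0 / 3.0;    // above the midpoint, below critical
        } else {
            return 0.0;          // at or above critical - full stop
        }
    }

    // Heap usage percentage could be read from Runtime, for example:
    static double usedHeapPercent() {
        Runtime rt = Runtime.getRuntime();
        return 100.0 * (rt.totalMemory() - rt.freeMemory()) / rt.maxMemory();
    }
}
```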

Andrzej Wójcik (Tigase) commented 4 years ago

The PR awaits review and merge: https://github.com/tigase/tigase-pubsub/pull/3

wojciech.kapcia@tigase.net commented 4 years ago

Thank you for the change, it looks OK. I added documentation of the limits and merged it.

Type: New Feature
Priority: Normal
Assignee:
Version: tigase-server-8.2.0
Spent time: 13h
Reference: tigase/_server/tigase-pubsub#102