It seems the memory is not GC'd (#1446)
Unknown opened 4 years ago

I have a problem. I ran a load test (ab test) against Tigase, sending 12,800 messages to the server at the same time. Before the test, with 12,800 users already logged in, memory usage was 3.1 GB; after they sent the messages it was 3.7 GB, and it does not seem to be garbage collected. Is this a configuration error on my side?

Details (please complete the following information):

  • Tigase version: [e.g. 8.1.2]
  • JVM flavour and version [e.g. JDK11]
  • Operating system/distribution/version [e.g. Linux CentOS 7]
Unknown commented 4 years ago
  • Which GC configuration do you use?
  • How do you measure memory usage?
  • Have you tried forcing GC (for example in VisualVM) and checking whether the memory goes down?
Unknown commented 4 years ago

@woj-tek Thanks. I optimized the configuration and the memory now grows more slowly, but it still does not go down: 12,800 messages increased memory usage by about 0.06 GB. I want to know how to reduce the memory footprint.

Which GC configuration do you use? GC="-XX:+UseBiasedLocking -XX:NewRatio=2 -XX:-ReduceInitialCardMarks -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly"

JAVA_XSS=" -Xss228k -Xms4096m -Xmx4096m -XX:+UseParallelGC"

How do you measure memory usage? Using the CentOS commands 'htop' or 'top', and calculating the difference in memory usage between before and after sending messages.

Have you tried forcing GC (for example in VisualVM) and checking if the memory goes down? No. Is that a setting in the configuration file or in the system JVM? How can I set it?

Unknown commented 4 years ago

Update: I now use the GC settings in tigase.conf from the documentation (https://docs.tigase.net/tigase-server/master-snapshot/Administration_Guide/html/#_vm_machine_8gb_of_ram_4_core_cpu_equivalent), and I removed -XX:+UseParNewGC and -XX:+CMSIncrementalMode because they cannot be used on JDK 11.

However, the memory still does not go down after sending 12,800 messages.

Unknown commented 4 years ago

Using the CentOS commands 'htop' or 'top', and calculating the difference in memory usage between before and after sending messages.

Please be aware that the memory allocated by Java (including its heap) as seen by the operating system (top/htop) doesn't reflect actual memory usage (the actually used JVM heap). Please use VisualVM to see the actual usage of the heap (and thus the real memory usage).

With -Xms4096m -Xmx4096m, the memory usage of the JVM will remain almost constant throughout.
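If you prefer the command line, a minimal sketch for checking the actual heap occupancy (assuming <tigase-pid> stands for the Tigase JVM process id; both tools ship with the JDK):

jcmd <tigase-pid> GC.heap_info    # one-shot view of current heap capacity and usage
jstat -gc <tigase-pid> 5000       # heap/GC statistics printed every 5 seconds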

Unknown commented 4 years ago

@woj-tek Hello, I use -Xms4096m -Xmx4096m and observe with VisualVM. Here is the result after 12,800 users log in, before sending messages (top shows 49.1% memory; heap about 1 GB): [screenshot]

After sending messages (top shows 52.1% memory; heap about 1.6 GB): [screenshot]

After 1.5 hours, while the clients do not seem to do anything (top shows 63.3% memory; heap about 1.8 GB): [screenshot]

However, the memory isn't going down.

Unknown commented 4 years ago

As you can see, the heap doesn't change its size (orange line) and objects on the heap are correctly collected. There is also non-heap memory in use, which is why top may show increased usage:

  1. Please also install and enable the VisualVM-BufferMonitor plugin (in VisualVM)
  2. Please enable Native Memory Tracking during startup (https://www.baeldung.com/native-memory-tracking-in-jvm) and share the output of jcmd <pid> VM.native_memory at the beginning and again after 1.5 hours, for example as sketched below
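A minimal sketch of both steps (the <tigase-pid> placeholder is an assumption; summary tracking is cheaper than detail, and the baseline/diff pair is just one convenient way to compare the two points in time):

# tigase.conf - make sure this variable ends up in JAVA_OPTIONS, then restart
JVM_MEMORY=" -XX:NativeMemoryTracking=summary "
# right after startup:
jcmd <tigase-pid> VM.native_memory baseline
# after the load test / 1.5 hours:
jcmd <tigase-pid> VM.native_memory summary.diff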

the clients do not seem to do anything

Are you sure they don't do anything? Not even sending ping messages? It seems there are distinct events happening that cause surges in heap usage at roughly 10-minute intervals.

Unknown commented 4 years ago

@woj-tek Hello, I put all the result screenshots into one document because there is too much content: tigase report 20210514.docx

and my tigase.conf is:

#osgiEnabled=(true|false)
#osgiEnabled=false
OSGI=${osgiEnabled}
ENC="-Dfile.encoding=UTF-8 -Dsun.jnu.encoding=UTF-8"
DRV="-Djdbc.drivers=com.mysql.jdbc.Driver:org.postgresql.Driver:org.apache.derby.jdbc.EmbeddedDriver"
#GC="-XX:+UseBiasedLocking -XX:NewRatio=2 -XX:-ReduceInitialCardMarks -XX:CMSInitiatingOccupancyFraction=70  -XX:+UseCMSInitiatingOccupancyOnly"
#EX="-XX:+OptimizeStringConcat -XX:+DoEscapeAnalysis -XX:+UseNUMA -XX:+UseCompressedOops "

#GC="-XX:+UseBiasedLocking -XX:+UseParallelGC  -XX:NewRatio=3 -XX:+PrintGCDetails  -XX:+UseAdaptiveSizePolicy -XX:ParallelGCThreads=10  -XX:+UseParallelOldGC"
GC="-XX:+UseBiasedLocking -XX:+UseParallelGC  -XX:NewRatio=3 -XX:+PrintGCDetails  -XX:+UseAdaptiveSizePolicy -XX:ParallelGCThreads=10 -XX:PermSize=4G -XX:MaxPermSize=4G -XX:+UseParallelOldGC -XX:NativeMemoryTracking=detail"

#GC="-XX:+UseBiasedLocking -XX:+UseConcMarkSweepGC -XX:NewRatio=2 -XX:-ReduceInitialCardMarks -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly"
EX="-XX:+OptimizeStringConcat -XX:+DoEscapeAnalysis -XX:+UseNUMA"
#PRODUCTION_HEAP_SETTINGS=" -Xms3G -Xmx3G -Xmn2g -Xss140k"
#
PRODUCTION_HEAP_SETTINGS=" -Xms3G -Xmx3G "
#REMOTE_DEBUG=" -agentlib:jdwp=transport=dt_socket,server=y,address=8000,suspend=n "
#GC_DEBUG=" -XX:+PrintTenuringDistribution -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -Xloggc:logs/jvm.log -verbose:gc "
#JVM_DEBUG=" -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/tigase/heapdump.bin "

#TLS_DEBUG=" -Djavax.net.debug=ssl:handshake:session:defaultctx "

## Note:Enabling NMT causes a 5% -10% performance overhead!
#JVM_MEMORY=" -XX:+UnlockDiagnosticVMOptions -XX:NativeMemoryTracking=summary -XX:+PrintNMTStatistics "

JMX_REMOTE_IP="-Djava.rmi.server.hostname=10.64.4.99 -Dcom.sun.management.jmxremote.port=18999 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"

## AWS hostname resolver
#export INTERNAL_IP="$(curl -s http://169.254.169.254/latest/meta-data/local-hostname)"
#export EXTERNAL_IP="$(curl -s http://169.254.169.254/latest/meta-data/public-hostname)"

JAVA_HOME="/usr/local/jdk-11.0.9"
CLASSPATH=""

#DISABLE_LOGGER_COLOR=" -Ddisable_logger_color=true "

## Possible memory allocation improvements on some CentOS/RHEL systems
## https://www.ibm.com/developerworks/community/blogs/kevgrig/entry/linux_glibc_2_10_rhel_6_malloc_may_show_excessive_virtual_memory_usage?lang=en
# export MALLOC_ARENA_MAX=4

## You should explicitly set Young Generation size only if you know what
## you are doing and only after running Load Tests that confirms the assumption!
#JAVA_YOUNG_GEN_EXPLICIT_SIZE=" -Xmn<young size>[g|m|k] "

## Heap memory settings should be adjusted on per deployment basis to utilize all resources!
## When configuring consider total of: Xmx + (Xss * number of threads)
#PRODUCTION_HEAP_SETTINGS=" -Xms5g -Xmx5g ${JAVA_YOUNG_GEN_EXPLICIT_SIZE} "

## Per-thread stack size on top of HEAP!
JAVA_XSS=" -Xss228k  -Xms4096m -Xmx4096m -XX:+UseParallelGC"
JAVA_DIRECT_MEMORY=" -XX:MaxDirectMemorySize=128m "
JAVA_METASPACE=" -XX:MaxMetaspaceSize=128m "

#JAVA_OPTIONS="${GC} ${GC_DEBUG} ${JVM_DEBUG} ${TLS_DEBUG} ${JVM_MEMORY} ${REMOTE_DEBUG} ${EX} ${ENC} ${DRV} ${JMX_REMOTE_IP} ${DISABLE_LOGGER_COLOR} -server ${PRODUCTION_HEAP_SETTINGS} ${JAVA_XSS} ${JAVA_DIRECT_MEMORY} ${JAVA_METASPACE} "
JAVA_OPTIONS="${GC} ${GC_DEBUG} ${EX} ${ENC} ${DRV} ${JMX_REMOTE_IP} -server ${PRODUCTION_HEAP_SETTINGS} ${DNS_RESOLVER} ${INTERNAL_IP} ${EXTERNAL_IP}  -XX:MaxDirectMemorySize=128m "
TIGASE_OPTIONS=" "
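## A hedged, JDK 11-oriented sketch of the active GC line above (illustrative only, not an
## official recommendation): -XX:PermSize/-XX:MaxPermSize no longer exist on JDK 11, and
## -XX:+PrintGCDetails was superseded by unified logging (-Xlog:gc*).
#GC="-XX:+UseParallelGC -XX:NewRatio=3 -XX:ParallelGCThreads=10 -Xlog:gc*:file=logs/gc.log -XX:NativeMemoryTracking=summary"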
Unknown commented 4 years ago

As you can see, the actual heap size doesn't change, and the slight change in JVM memory usage is caused by JVM internals (that's visible in the jcmd output).

Regarding the connections and fluctuation in actual HEAP usage:

  1. do your clients stay connected after the test?
  2. could you share statistics from after the test (when the heap usage fluctuates)? You can write the statistics to files using the Tigase configuration (https://docs.tigase.net/tigase-server/master-snapshot/Administration_Guide/html/#statLoggerConfig). You can also monitor the server using https://github.com/tigase/tigase-monitor

Please also share your current config.tdsl file.

PS. Please don't use the docx format - it's better to attach those files individually.

Unknown commented 4 years ago

@woj-tek Thanks! I will add the files individually. 1. Yes, real users will stay connected too. However, the memory does not go down even when the Jaxmpp clients disconnect (I call jaxmpp.disconnect() to disconnect from the server). 2. OK, I will share the statistics after my next load test.

config.tdsl:

admins = [
    psi@abc.im.tigase.com
]
'config-type' = 'default'
debug = [ 'server' ]
'default-virtual-host' = 'abc.im.tigase.com'
dataSource () {
    default () {
        uri = 'jdbc:mysql://10.64.2.25:3307/tigasedb?user=im&password=123456&useSSL=false&useLegacyDatetimeCode=false&allowPublicKeyRetrieval=true&useUnicode=true&characterEncoding=UTF-8&noAccessToProcedureBodies=true'
    }
}
http () {
    setup () {
        'admin-password' = 'zzz@123456'
        'admin-user' = 'zzz'
    }
}

'cluster-mode' = true

cl-comp {
    connections {
        4250 {}
    }
}

pubsub () {
    trusted = [ 'http@{clusterNode}' ]
}

'muc' () {
	'defaultRoomConfig' {
	   	'tigase#presence_delivery_logic' = 'PREFERE_LAST'
	    'muc#roomconfig_persistentroom' = 'true'
	}
	'muc-logger' () {
	}
	'room-log-directory' = '/data1/logs/muc/'
	'muc-lock-new-room' = false
}

test(class: com.xxx.xxx.xxx.TestComponent) {}

httpServer (class: tigase.http.jetty.JettyStandaloneHttpServer) {
}

upload() {

    store {
        path = '/data1/shared/upload'
    }
}

'registration-throttling' () {
    limit = 100
}


'sess-man' {
    amp () {}
    message (active: true) {}
    msgoffline (active: true) {}
    'http://jabber.org/protocol/jingle' (class: tigase.xmpp.impl.Jingle,active: true) {
    	threadsNo = 1
    }
    'presence-offline' (class: tigase.xmpp.impl.PresenceOffline,
        active: false) {
        threadsNo = 1
    }
    'presence-state' (class: tigase.xmpp.impl.PresenceState) {
        threadsNo = 1
    }
    'presence-subscription' (class: tigase.xmpp.impl.PresenceSubscription) {
        threadsNo = 1
    }
    'jabber:iq:roster' {
        'auto-authorize' = 'true'
    }
    'jabber:iq:register' {
        captchaRequired = 'false'
	emailRequired = 'false'
    }
    'presence-subscription' () {
        'auto-authorize' = 'true'
    }
}

And the test component is just:

@Bean(name = "test", parent = Kernel.class, active = true)
public class TestComponent extends AbstractKernelBasedComponent {

    private static final Logger log = Logger.getLogger(TestComponent.class.getName());

    @Override
    public String getComponentVersion() {
        String version = this.getClass().getPackage().getImplementationVersion();
        return version == null ? "0.0.0" : version;
    }

    @Override
    public boolean isDiscoNonAdmin() {
        return false;
    }

    @Override
    protected void registerModules(Kernel kernel) {
        // here we need to register modules responsible for processing packets
        kernel.registerBean(DiscoveryModule.class).exec();
    }

    @Bean(name = "test", parent = TestComponent.class, active = true)
    public static class TestModule extends AbstractModule {

        private static final Logger log = Logger.getLogger(TestModule.class.getCanonicalName());

        private static final Criteria CRITERIA = ElementCriteria.name("message");

        @Override
        public Criteria getModuleCriteria() {
            return CRITERIA;
        }

        @Override
        public void process(Packet packet) throws ComponentException, TigaseStringprepException {
            System.out.println("Mypacket: " + packet.toString());
            
        }
    }
}


Unknown commented 4 years ago

@woj-tek This is the result from tigase-monitor. After 12,800 users log in: Live View [screenshot], Live Memory View [screenshot], top and htop [screenshot]

After the 12,800 users send messages and all log out: Live View [screenshot], Live Memory View [screenshot], top and htop [screenshot]

It seems the fluctuating memory usage is mostly Tenured (old gen) usage.

Unknown commented 4 years ago

Thank you for the details.

Can you run one more test: when you finish the load test, please try forcing GC (for example in VisualVM, in the "Monitor" tab, click the "Perform GC" button in the top-right corner, or use the command-line sketch below) - does this decrease the Heap/Tenured (old gen) usage?

Please keep in mind that GC in the JVM is automatic, and a GC of the tenured space may not kick in until a certain occupancy percentage is reached (this depends on the actual GC used, but it is quite often above 60-70%).
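If it's easier, a roughly equivalent command-line sketch with jcmd (again assuming <tigase-pid> is the Tigase JVM process id):

jcmd <tigase-pid> GC.run          # requests a full GC, similar to VisualVM's "Perform GC" button
jcmd <tigase-pid> GC.heap_info    # check heap / old gen occupancy afterwards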

Unknown commented 4 years ago

@woj-tek After I click the "Perform GC" button, the old gen goes down, but the real (process) memory still goes up. The test procedure was the same as before. Live View: [screenshot] Live Memory View: [screenshot] top and htop: [screenshot]

PS: Before "Perform GC", the memory was 53.5% in top and 4.81 GB in htop.

Unknown commented 4 years ago

It seems that the memory does not increase beyond about 80%. When memory usage reaches 80%, it goes down a little. [screenshots]

Unknown commented 4 years ago

This is just how the JVM works. The process consists of the HEAP (which you configure) plus class metadata, thread stacks, and direct memory. The actual JVM process memory (i.e. "real memory") will never equal the configured HEAP size (in your case: 4G). Please see for example: https://medium.com/platform-engineer/understanding-java-memory-model-1d0863f6d973
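A rough, hedged back-of-the-envelope using the settings from this thread (the thread count is an assumption; the real overhead depends on the GC, code cache, and workload):

  heap (-Xmx)                                        4096 MB
  metaspace (-XX:MaxMetaspaceSize)                   up to 128 MB
  direct buffers (-XX:MaxDirectMemorySize)           up to 128 MB
  thread stacks (-Xss228k x ~500 threads, assumed)   ~110 MB
  GC data structures, code cache, JVM internals      typically a few hundred MB more

so a resident process size well above the 4 GB heap is expected even without any leak.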

What you can do is change the initial HEAP size (-Xms: either don't configure it or make it smaller) - that way, when a GC happens, the JVM can de-allocate the memory reserved for the heap.
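For example, a minimal tigase.conf sketch of that idea (the 1g initial size is purely illustrative):

PRODUCTION_HEAP_SETTINGS=" -Xms1g -Xmx4g "   # smaller initial heap; the JVM may shrink the heap and return memory to the OS after GC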

The VM.native_memory command gives you a detailed overview of which parts of the JVM contribute to the actual JVM process size - the heap size there is constant (both the allocated and reserved sizes are the same), but there are other categories (https://www.baeldung.com/native-memory-tracking-in-jvm explains them).

The most important fact is that the memory used by tigase-server (within the HEAP) is correctly collected during Garbage Collection, which means there is no memory leak.

Unknown commented 4 years ago

@woj-tek Thank you very much. I just hope the process can keep running even with high physical memory usage. If the process doesn't crash, high memory usage is OK for me. I will continue testing.

Unknown commented 4 years ago

It shouldn't go (significantly) above the baseline established after the load test.

Looking at the total Reserved size in the VM.native_memory output should give you an indication of the final total size that the JVM will try to claim.
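For example (the <tigase-pid> placeholder is an assumption):

jcmd <tigase-pid> VM.native_memory summary | grep -i "Total"   # the "Total: reserved=..., committed=..." line is the upper bound to watch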

Unknown commented 4 years ago

@woj-tek OK, thanks
