Out of memory errors (scalability of xwiki).

kgardas

Folks,

I'm testing the scalability of XWiki with a simple benchmark that
creates N pages in a loop (one page at a time, not in parallel) and
then, once that loop finishes, fetches all the pages back from the
server in another loop (again serially, one page at a time). For page
creation we use the REST API; for fetching the pages we use the common
browsable URL (/xwiki/bin/view/...).
The problem is that if I attempt to create 100k pages, I hit Java
out-of-memory errors and the server is unresponsive from that point on.
I've tested this on:

- xwiki-jetty-hsql-6.0.0
- xwiki-jetty-hsql-6.0.1
- xwiki-tomcat7-pgsql -- Debian XWiki packages running on top of Debian 7.5

Of course I know how to increase Java's heap space. The problem is
that this will not help here: if I do so and then create 100 million
pages in one run, I will still hit the same issue; it will just take a
lot longer.

I've googled a bit for Java memory-leak issues and found an
interesting recommendation to use the parallel GC, so I've changed
start_xwiki.sh to include -XX:+UseParallelGC in XWIKI_OPTS.
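
For reference, the change boils down to something like this in
start_xwiki.sh (a sketch; the exact default contents of XWIKI_OPTS
differ between versions):

XWIKI_OPTS="-Xmx512m -XX:+UseParallelGC"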

Anyway, the situation still looks suspicious. I've connected JConsole
to the XWiki Java process; the overall view is here:

https://app.box.com/s/udndu96pl2fvuz3igvor

This is an overview of the whole run, but it is perhaps even clearer in
the last-two-hours view, which is here:

https://app.box.com/s/deuix33fzejra4uur941

Side note: this is all from debugging the xwiki-jetty-hsql-6.0.1
distribution.

Now, what worries me a lot is that the bottom of the heap graph keeps
rising. You can see that clearly in Heap Memory Usage from 15:15 on. In
the CPU usage graph you can also see that around the same time CPU
consumption went up from ~15% to ~45%.

When I switch to the Memory tab in JConsole and click the "Perform GC"
button several times, that bottom level is still there and I cannot get
memory usage any lower. With this going on, I also see the server fail
after some time with an OOM error.

Any help with this is highly appreciated.

Thanks!
Karel

Re: Out of memory errors (scalability of xwiki).

Thomas Mortagne
You are mixing several things here. Hitting out-of-memory errors does
not necessarily mean you have a memory leak; it can simply mean, for
example, that the document cache is too big for the memory you
allocated. You can modify the document cache size in xwiki.cfg.

How much memory did you allocate? XWiki is not a small beast and it
requires a minimum amount to work. See
http://platform.xwiki.org/xwiki/bin/view/AdminGuide/Performances#HMemory.
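
For example, these are the relevant cache properties in xwiki.cfg (the
property names below are the ones shipped in the file; the values are
only illustrative):

xwiki.store.cache=1
xwiki.store.cache.capacity=100
xwiki.render.cache.capacity=100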

--
Thomas Mortagne

Re: Out of memory errors (scalability of xwiki).

kgardas

Thomas,

thanks for your fast response. My comments are below.

On 05/28/14 05:27 PM, Thomas Mortagne wrote:
> You are mixing several things here. Hitting out-of-memory errors does
> not necessarily mean you have a memory leak; it can simply mean, for
> example, that the document cache is too big for the memory you
> allocated. You can modify the document cache size in xwiki.cfg.

I see two caches in that file:

xwiki.store.cache
xwiki.render.cache

both seem to be commented out.

> How much memory did you allocate? XWiki is not a small beast and it
> requires a minimum amount to work. See
> http://platform.xwiki.org/xwiki/bin/view/AdminGuide/Performances#HMemory.

To prevent misunderstanding: I've not set any memory limit myself. I
simply use xwiki-enterprise-jetty-hsqldb-6.0.1.zip as distributed on
xwiki.org. This distribution has a 512 MB RAM cap, which, according to
your link above, should be good for medium installs.

The question is: if the cache values above are commented out in
xwiki.cfg, what are the actual default values used in the
xwiki-enterprise-jetty-hsqldb-6.0.1.zip distribution? Just so I know
from which value I should go lower...

Thanks!
Karel



Re: Out of memory errors (scalability of xwiki).

Sergiu Dumitriu
The standalone .zip package is not designed to hold many pages. It uses
an in-memory database that requires as much heap space as the amount of
data that you have (plus all the other memory that XWiki normally
requires). I thought there was a bigger warning on the download page
that clarified that the standalone package is only supposed to be used
for small tests...

The pgsql package should behave better, though, since it separates the
database from the live objects, but you need to make sure Tomcat's
default memory limit is increased.
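
One common way to do that is a bin/setenv.sh next to catalina.sh,
along these lines (values illustrative, adjust to your data size; with
the Debian packages the equivalent knob is JAVA_OPTS in
/etc/default/tomcat7):

CATALINA_OPTS="-Xms1024m -Xmx1024m"
export CATALINA_OPTS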

--
Sergiu Dumitriu
http://purl.org/net/sergiu

Re: Out of memory errors (scalability of xwiki).

Thomas Mortagne
In reply to this post by kgardas
On Wed, May 28, 2014 at 6:02 PM, Karel Gardas <[hidden email]> wrote:

> To prevent misunderstanding: I've not set any memory limit myself. I
> simply use xwiki-enterprise-jetty-hsqldb-6.0.1.zip as distributed on
> xwiki.org. This distribution has a 512 MB RAM cap, which, according to
> your link above, should be good for medium installs.

OK, that explains it. This distribution is more for test purposes, and
the entire database is held in memory, so indeed the more pages you
add, the more memory you consume.

--
Thomas Mortagne

Re: Out of memory errors (scalability of xwiki).

Paul Libbrecht
In reply to this post by Thomas Mortagne
>> Of course I know how to increase Java's heap space. The problem is
>> that this will not help here: if I do so and then create 100 million
>> pages in one run, I will still hit the same issue; it will just take
>> a lot longer.

You won't use 100 times the memory after downloading 100 million pages
that you used after downloading a million pages. The caches have
limits, and you should measure how much memory your setup needs to
reach those limits. Tests such as yours get slower once they hit the
cache maximum, because of the DB access that follows cache eviction;
from that point of view they are not very realistic.

The best way to follow this is to use JMX and graph the cache sizes;
you should see the maximum being reached after a while.
At Curriki, we have adjusted the caches to be a bit bigger, and we
reach the maximum (100,000, if I remember correctly, for the document
cache) about an hour after a restart. Then things stay pretty stable
within our maximum of 8 GB of memory.
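
If you want to script this instead of clicking through jconsole, a
minimal JMX client along these lines can list the cache MBeans (a
sketch: the JMX port and the "cache" name filter are assumptions, and
the exact object names are whatever jconsole's MBeans tab shows for
your instance):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ListCacheMBeans {
    public static void main(String[] args) throws Exception {
        // Assumes the server JVM was started with
        // -Dcom.sun.management.jmxremote.port=9010 (auth/SSL disabled).
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://localhost:9010/jmxrmi");
        try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
            // Print every registered MBean whose name mentions "cache";
            // from there you can poll its size attribute periodically.
            for (ObjectName name : mbsc.queryNames(null, null)) {
                if (name.toString().toLowerCase().contains("cache")) {
                    System.out.println(name);
                }
            }
        }
    }
}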

We are considering raising all of that, since the hardware has about
32 GB, but that will be for later.
Our app servers (Sun Java App Server or Tomcat) can stay up for several
months.

Reporting on this simulation to the list is certainly interesting.
We've started some monitoring in production using Zabbix, but we have
not yet had the time to find the "keys" that access the JMX cache sizes
from Zabbix, which has its own language for describing JMX bean
properties. Certainly something worth sharing.
In your simulation case, JConsole is probably enough.
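
For reference, Zabbix JMX item keys have the form
jmx["<MBean object name>","<attribute>"], so once the right object
names are known the item should look something like this (object name
invented for illustration):

jmx["org.xwiki:type=cache,name=documents","Size"]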

Paul

Re: Out of memory errors (scalability of xwiki).

kgardas
In reply to this post by Sergiu Dumitriu
On 05/28/14 06:26 PM, Sergiu Dumitriu wrote:
> The standalone .zip package is not designed to hold many pages. It uses
> an in-memory database that requires as much heap space as the amount of
> data that you have (plus all the other memory that XWiki normally
> requires). I thought there was a bigger warning on the download page
> that clarified that the standalone package is only supposed to be used
> for small tests...

Unfortunately I've not been able to judge what is meant for small tests
and what is meant for bigger deployments based on the description
provided on the download page here:
http://enterprise.xwiki.org/xwiki/bin/view/Main/Download -- it just
groups the various installers by how experienced the user is with
XWiki, and says nothing about scalability at all.

> The pgsql package should behave better, though, since it separates the
> database from the live objects, but you need to make sure Tomcat's
> default memory limit is increased.

Indeed. I've installed the Tomcat/PostgreSQL/XWiki 6.0.1 combination
and used exactly xwiki.org's CATALINA_OPTS. With this I've been able to
create and fetch the 100k nearly empty pages I'm testing here, but I
still get OOM errors. I've tested twice, and both times it hits the RMI
connection to JConsole, so JConsole basically disconnects with this
message thrown to a separate window:

May 29, 2014 5:02:20 AM ClientCommunicatorAdmin Checker-run
WARNING: Failed to check the connection:
java.net.SocketTimeoutException: Read timed out
May 29, 2014 5:03:17 AM ClientNotifForwarder NotifFetcher-run
SEVERE: Failed to fetch notification, stopping thread. Error is:
java.rmi.UnmarshalException: error unmarshalling return; nested
exception is:
        java.io.EOFException
java.rmi.UnmarshalException: error unmarshalling return; nested
exception is:
        java.io.EOFException
        at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:191)
        at com.sun.jmx.remote.internal.PRef.invoke(Unknown Source)
        at
javax.management.remote.rmi.RMIConnectionImpl_Stub.fetchNotifications(Unknown
Source)
        at
javax.management.remote.rmi.RMIConnector$RMINotifClient.fetchNotifs(RMIConnector.java:1337)
        at
com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.fetchNotifs(ClientNotifForwarder.java:587)
        at
com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.doRun(ClientNotifForwarder.java:470)
        at
com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.run(ClientNotifForwarder.java:451)
        at
com.sun.jmx.remote.internal.ClientNotifForwarder$LinearExecutor$1.run(ClientNotifForwarder.java:107)
Caused by: java.io.EOFException
        at
java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2571)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1315)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:369)
        at sun.rmi.server.UnicastRef.unmarshalValue(UnicastRef.java:324)
        at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:173)
        ... 7 more

May 29, 2014 6:04:09 PM ClientCommunicatorAdmin Checker-run
WARNING: Failed to check the connection:
java.net.SocketTimeoutException: Read timed out



and on the Tomcat console I see messages like:
Exception in thread "RMI TCP Connection(idle)"
java.lang.OutOfMemoryError: Java heap space
Exception in thread "RMI TCP Connection(idle)"
java.lang.OutOfMemoryError: Java heap space
Exception in thread "RMI TCP Connection(idle)"
java.lang.OutOfMemoryError: Java heap space
Exception in thread "RMI TCP Connection(idle)"
java.lang.OutOfMemoryError: Java heap space
Exception in thread "RMI TCP Connection(idle)"
java.lang.OutOfMemoryError: Java heap space
Exception in thread "RMI TCP Connection(idle)"
java.lang.OutOfMemoryError: Java heap space
Exception in thread "RMI TCP Connection(idle)"
java.lang.OutOfMemoryError: Java heap space
Exception in thread "RMI TCP Connection(idle)"
java.lang.OutOfMemoryError: Java heap space
Exception in thread "RMI TCP Connection(idle)"
java.lang.OutOfMemoryError: Java heap space


My test still runs, but honestly speaking the state of the server
itself is not the most trustworthy. I even uncommented the *cache*
lines in xwiki.cfg and left the default values there, but that does not
help either.

As I said, this is when running with:

CATALINA_OPTS="-server -Xms800m -Xmx800m -XX:MaxPermSize=196m -Dfile.encoding=utf-8 -Djava.awt.headless=true -XX:+UseParallelGC -XX:MaxGCPauseMillis=100"


Thanks!
Karel



Re: Out of memory errors (scalability of xwiki).

kgardas
In reply to this post by Thomas Mortagne
On 05/28/14 06:30 PM, Thomas Mortagne wrote:
> OK, that explains it. This distribution is more for test purposes, and
> the entire database is held in memory, so indeed the more pages you
> add, the more memory you consume.

This indeed clears that up. Unfortunately, even with the
Tomcat/PostgreSQL combo plus xwiki.org's CATALINA_OPTS, I still get OOM
errors, this time in the RMI connection for JConsole...

Thanks,
Karel


Re: Out of memory errors (scalability of xwiki).

kgardas
In reply to this post by Paul Libbrecht

Paul,

On 05/28/14 08:11 PM, Paul Libbrecht wrote:

> You won't use 100 times the memory after downloading 100 million
> pages that you used after downloading a million pages. The caches have
> limits, and you should measure how much memory your setup needs to
> reach those limits. Tests such as yours get slower once they hit the
> cache maximum, because of the DB access that follows cache eviction;
> from that point of view they are not very realistic.

Yes, I'm sure the test is not that realistic so far, since the majority
of XWiki use cases probably involve downloading/rendering pages rather
than uploading them, which is what I'm testing now. The reason for my
test is simple: my task is to attempt importing the whole Wikipedia
content, without history, into XWiki. For this I need the server to
stay stable during the import, which is where I have a problem now.

> The best way to follow this is to use JMX and graph the cache sizes;
> you should see the maximum being reached after a while. At Curriki, we
> have adjusted the caches to be a bit bigger, and we reach the maximum
> (100,000, if I remember correctly, for the document cache) about an
> hour after a restart. Then things stay pretty stable within our
> maximum of 8 GB of memory.

I'm glad your setup is stable. I have the following cache setup here:

$ grep cache `find . -name 'xwiki.cfg'`
#-# Put a cache in front of the document store. This greatly improves
performance at the cost of memory consumption.
xwiki.store.cache=1
#-# Maximum number of documents to keep in the cache.
xwiki.store.cache.capacity=100
#-# Maximum number of documents to keep in the rendered cache
xwiki.render.cache.capacity=100
# xwiki.authentication.ldap.groupcache_expiration=21600
xwiki.plugin.image.cache.capacity=30

This is XWiki 6.0.1 on top of Tomcat 7.0.53 using PostgreSQL 9.3
(64-bit). The JVM is 1.7.0_07, everything running on Solaris 11.1. I
give 2 GB of RAM to Catalina with:

$ cat bin/setenv.sh
CATALINA_OPTS="-server -Xms2048m -Xmx2048m -XX:MaxPermSize=196m -Dfile.encoding=utf-8 -Djava.awt.headless=true -XX:+UseParallelGC -XX:MaxGCPauseMillis=100"
export CATALINA_OPTS


I then run my benchmark, which simply uses REST to import the following
simple page:

"<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?>"
      ++ "<page xmlns=\"http://www.xwiki.org\">"
      ++ "<title>Semantic XWiki Benchmark page " ++  title ++ "</title>"
      ++ "<syntax>xwiki/2.0</syntax>"
      ++ "<content>This is a benchark page."
      ++ "The list of properties defined for this page is:"
      ++ "</content>"
      ++ "</page>"

where title is BenchPage_<number>
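
For reference, each import is roughly equivalent to a curl call like
this (the space name and credentials are invented for illustration; the
endpoint is XWiki's standard REST page resource):

curl -u Admin:admin -X PUT \
     -H "Content-Type: application/xml" \
     --data-binary @page.xml \
     http://localhost:8080/xwiki/rest/wikis/xwiki/spaces/Bench/pages/BenchPage_1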

Now, when I push that, after around 400k pages imported I hit an OOM
error. JConsole is thrown out by a closed connection, and even the
benchmark tool complains about a few HTTP 500 error codes received. The
exception output on Tomcat's console looks like:

Exception in thread "DefaultQuartzScheduler_QuartzSchedulerThread"
java.lang.OutOfMemoryError: Java heap space
Exception in thread "RMI TCP Connection(idle)" Exception in thread "RMI
TCP Connection(idle)" java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space
Exception in thread
"TxCleanupService,platform.security.authorization.cache,local"
java.lang.OutOfMemoryError: Java heap space
Exception in thread
"TxCleanupService,localization.bundle.document,local" Exception in
thread "TxCleanupService,wiki.descriptor.cache.wikiId,local"
java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space
Exception in thread "TxCleanupService,xwiki.store.pageexistcache,local"
java.lang.OutOfMemoryError: Java heap space
Exception in thread "TxCleanupService,xwiki.store.pagecache,local"
java.lang.OutOfMemoryError: Java heap space
May 30, 2014 4:12:18 PM org.restlet.engine.application.StatusFilter doHandle
WARNING: Exception or error caught in status service
java.lang.OutOfMemoryError: Java heap space

May 30, 2014 4:12:18 PM org.restlet.engine.application.StatusFilter doHandle
WARNING: Exception or error caught in status service
java.lang.OutOfMemoryError: Java heap space
         at java.lang.Class.getDeclaredMethods0(Native Method)
         at java.lang.Class.privateGetDeclaredMethods(Class.java:2442)
         at java.lang.Class.getMethod0(Class.java:2685)
         at java.lang.Class.getMethod(Class.java:1620)
         at
org.xwiki.legacy.internal.oldcore.notification.LegacyNotificationDispatcher.getNotificationManager(LegacyNotificationDispatcher.java:94)
         at
org.xwiki.legacy.internal.oldcore.notification.LegacyNotificationDispatcher.onEvent(LegacyNotificationDispatcher.java:109)
         at
org.xwiki.observation.internal.DefaultObservationManager.notify(DefaultObservationManager.java:299)
         at
org.xwiki.observation.internal.DefaultObservationManager.notify(DefaultObservationManager.java:264)
         at com.xpn.xwiki.XWiki.saveDocument(XWiki.java:1323)
         at com.xpn.xwiki.api.Document.saveDocument(Document.java:2299)
         at com.xpn.xwiki.api.Document.save(Document.java:2202)
         at com.xpn.xwiki.api.Document.save(Document.java:2196)
         at
org.xwiki.rest.internal.resources.pages.ModifiablePageResource.putPage(ModifiablePageResource.java:67)
         at
org.xwiki.rest.internal.resources.pages.PageResourceImpl.putPage(PageResourceImpl.java:62)
         at sun.reflect.GeneratedMethodAccessor296.invoke(Unknown Source)
         at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
         at java.lang.reflect.Method.invoke(Method.java:601)
         at
org.restlet.ext.jaxrs.internal.wrappers.AbstractMethodWrapper.internalInvoke(AbstractMethodWrapper.java:171)
         at
org.restlet.ext.jaxrs.internal.wrappers.ResourceMethod.invoke(ResourceMethod.java:291)
         at
org.restlet.ext.jaxrs.JaxRsRestlet.invokeMethod(JaxRsRestlet.java:1043)
         at org.restlet.ext.jaxrs.JaxRsRestlet.handle(JaxRsRestlet.java:792)
         at org.restlet.routing.Filter.doHandle(Filter.java:159)
         at org.restlet.routing.Filter.handle(Filter.java:206)
         at org.restlet.routing.Router.doHandle(Router.java:500)
         at org.restlet.routing.Router.handle(Router.java:740)
         at org.restlet.routing.Filter.doHandle(Filter.java:159)
         at org.restlet.routing.Filter.handle(Filter.java:206)
         at org.restlet.routing.Filter.doHandle(Filter.java:159)
         at org.restlet.routing.Filter.handle(Filter.java:206)
         at org.restlet.routing.Filter.doHandle(Filter.java:159)
         at org.restlet.routing.Filter.handle(Filter.java:206)
         at org.restlet.routing.Filter.doHandle(Filter.java:159)

May 30, 2014 4:12:18 PM org.restlet.engine.application.StatusFilter doHandle
WARNING: Exception or error caught in status service
java.lang.OutOfMemoryError: Java heap space
         at java.lang.Class.getDeclaredMethods0(Native Method)
         at java.lang.Class.privateGetDeclaredMethods(Class.java:2442)
         at java.lang.Class.getMethod0(Class.java:2685)
         at java.lang.Class.getMethod(Class.java:1620)
         at
org.xwiki.legacy.internal.oldcore.notification.LegacyNotificationDispatcher.getNotificationManager(LegacyNotificationDispatcher.java:94)
         at
org.xwiki.legacy.internal.oldcore.notification.LegacyNotificationDispatcher.onEvent(LegacyNotificationDispatcher.java:109)
         at
org.xwiki.observation.internal.DefaultObservationManager.notify(DefaultObservationManager.java:299)
         at
org.xwiki.observation.internal.DefaultObservationManager.notify(DefaultObservationManager.java:264)
         at com.xpn.xwiki.XWiki.saveDocument(XWiki.java:1323)
         at com.xpn.xwiki.api.Document.saveDocument(Document.java:2299)
         at com.xpn.xwiki.api.Document.save(Document.java:2202)
         at com.xpn.xwiki.api.Document.save(Document.java:2196)
         at
org.xwiki.rest.internal.resources.pages.ModifiablePageResource.putPage(ModifiablePageResource.java:67)
         at
org.xwiki.rest.internal.resources.pages.PageResourceImpl.putPage(PageResourceImpl.java:62)
         at sun.reflect.GeneratedMethodAccessor296.invoke(Unknown Source)
         at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
         at java.lang.reflect.Method.invoke(Method.java:601)
         at
org.restlet.ext.jaxrs.internal.wrappers.AbstractMethodWrapper.internalInvoke(AbstractMethodWrapper.java:171)
         at
org.restlet.ext.jaxrs.internal.wrappers.ResourceMethod.invoke(ResourceMethod.java:291)
         at
org.restlet.ext.jaxrs.JaxRsRestlet.invokeMethod(JaxRsRestlet.java:1043)
         at org.restlet.ext.jaxrs.JaxRsRestlet.handle(JaxRsRestlet.java:792)
         at org.restlet.routing.Filter.doHandle(Filter.java:159)
         at org.restlet.routing.Filter.handle(Filter.java:206)
         at org.restlet.routing.Router.doHandle(Router.java:500)
         at org.restlet.routing.Router.handle(Router.java:740)
         at org.restlet.routing.Filter.doHandle(Filter.java:159)
         at org.restlet.routing.Filter.handle(Filter.java:206)
         at org.restlet.routing.Filter.doHandle(Filter.java:159)
         at org.restlet.routing.Filter.handle(Filter.java:206)
         at org.restlet.routing.Filter.doHandle(Filter.java:159)
         at org.restlet.routing.Filter.handle(Filter.java:206)
         at org.restlet.routing.Filter.doHandle(Filter.java:159)

May 30, 2014 4:12:18 PM org.restlet.engine.application.StatusFilter doHandle
WARNING: Exception or error caught in status service
java.lang.OutOfMemoryError: Java heap space

2014-05-30 16:12:18,851 [DefaultQuartzScheduler_Worker-7] ERROR
o.q.c.JobRunShell              - Job
DEFAULT.xwiki:Scheduler.WatchListHourlyNotifier_0 threw an unhandled
Exception:
java.lang.OutOfMemoryError: Java heap space
2014-05-30 16:12:18,856 [XWiki Solr index thread] WARN
o.h.u.JDBCExceptionReporter    - SQL Error: 0, SQLState: 08001
2014-05-30 16:12:18,856 [XWiki Solr index thread] ERROR
o.h.u.JDBCExceptionReporter    - The connection attempt failed.
2014-05-30 16:12:18,858 [DefaultQuartzScheduler_Worker-7] ERROR
o.q.c.ErrorLogger              - Job
(DEFAULT.xwiki:Scheduler.WatchListHourlyNotifier_0 threw an exception.
org.quartz.SchedulerException: Job threw an unhandled exception.
         at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
~[quartz-1.6.5.jar:1.6.5]
         at
org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:525)
[quartz-1.6.5.jar:1.6.5]
Caused by: java.lang.OutOfMemoryError: Java heap space
2014-05-30 16:12:18,859 [DefaultQuartzScheduler_Worker-7] ERROR
c.x.x.p.s.StatusListener       - Job
(DEFAULT.xwiki:Scheduler.WatchListHourlyNotifier_0 threw an exception.
org.quartz.SchedulerException: Job threw an unhandled exception.
         at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
~[quartz-1.6.5.jar:1.6.5]
         at
org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:525)
[quartz-1.6.5.jar:1.6.5]
Caused by: java.lang.OutOfMemoryError: Java heap space


The funny thing is that before the error JConsole did not show any sign
that the server was allocating a lot of memory. It was probably a quick
spike in memory consumption; otherwise I don't understand at all why
this happened.


Now, the question is this: I have, or I think I have, a very limited
cache setup here. I've increased the RAM to 2 GB, which is over the
size recommended by xwiki.org itself -- they warn about heaps bigger
than 1 GB because GC becomes slow. Anyway: 2 GB, limited cache sizes,
and yet I hit OOM. Do you think I should increase the RAM limit even
further?

Thanks!
Karel

Re: Out of memory errors (scalability of xwiki).

Thomas Mortagne
The references to the watchlist in your log make me think of something:
go to your XWiki user profile, disable "Automatic document watching" in
the Watchlist section, and try again.

By default, every time a user creates a page it is added to their
watchlist, and I'm wondering if your OOM here is the watchlist trying
to generate a mail about what happened in the past hour, which would
certainly be a huge mail for 400k documents.

--
Thomas Mortagne

Re: Out of memory errors (scalability of xwiki).

kgardas

Thomas,

very good idea, but unfortunately my user is Admin, and fortunately it
does have automatic document watching (found in Watchlist) disabled,
since otherwise it would probably attempt to send email to
[hidden email]...

Anyway, is there any other user created by default in XWiki with
automatic document watching enabled and an email set? At least I don't
see any in http://localhost:8080/xwiki/bin/view/Main/UserDirectory
(except Admin).

Thanks!
Karel

On 05/30/14 08:31 PM, Thomas Mortagne wrote:

> The references to the watclist in your log make me think about
> something: go to you XWiki user profile, disable "Automatic document
> watching" in Watchlist section and try again.
>
> By default evey time a user create a page it's added to his watchlist
> and I'm wondering if your OOM here is watchlist trying to generate a
> mail about what happen in the past hour which would certainly generate
> a huge mail for 400k documents.
>
> On Fri, May 30, 2014 at 8:12 PM, Karel Gardas<[hidden email]>  wrote:
>>
>> Paul,
>>
>>
>> On 05/28/14 08:11 PM, Paul Libbrecht wrote:
>>>>>
>>>>> Of course I know the way how to increase Java's memory space/heap
>>>>> space. The problem is that this will not help here. Simply if I
>>>>> do so and then create 100 millions of pages on one run I will
>>>>> still get to the same issue just it'll take a lot longer.
>>>
>>>
>>> You won't reach 100 time the memory taken by downloading a 100
>>> million pages after you download a million pages. The caches have
>>> limits and you should measure how much memory in your setting is
>>> needed to reach that limit. Tests such as yours, after they reach the
>>> max, will get slower because of the DB access following the cache
>>> eviction; they are not very realistic under that point of view.
>>
>>
>> Yes, I'm sure the test is not that realistic, since the majority of XWiki
>> use cases is probably downloading/rendering pages rather than uploading
>> pages, which is what I'm testing now. The reason for my test is simple:
>> my task is to import the whole Wikipedia content (without history) into
>> XWiki. For this I need the import to be stable, which is where I have a
>> problem now.
>>
>>
>>> The best way to follow this is to use JMX and get a graph of the
>>> cache values; you should see the maximum being reached after a while.
>>> At curriki, we have adjusted the caches to be a bit bigger and we
>>> reach the maximum (100'000 if I remember well, for the document
>>> cache) about an hour after a restart. Then things get pretty
>>> stable with our max 8G of memory.
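>>
>> For reference, getting such a graph by hand is possible with the
>> standard JMX remote API; a minimal sketch that just lists the
>> cache-related MBeans (again assuming remote JMX enabled on an example
>> port 9999, not necessarily my setup):
>>
>> import java.util.Set;
>> import javax.management.MBeanServerConnection;
>> import javax.management.ObjectName;
>> import javax.management.remote.JMXConnector;
>> import javax.management.remote.JMXConnectorFactory;
>> import javax.management.remote.JMXServiceURL;
>>
>> public class CacheList {
>>     public static void main(String[] args) throws Exception {
>>         JMXServiceURL url = new JMXServiceURL(
>>             "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
>>         JMXConnector jmxc = JMXConnectorFactory.connect(url);
>>         try {
>>             MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
>>             // print every registered MBean whose name mentions a cache
>>             Set<ObjectName> names = mbsc.queryNames(null, null);
>>             for (ObjectName name : names) {
>>                 if (name.toString().toLowerCase().contains("cache")) {
>>                     System.out.println(name);
>>                 }
>>             }
>>         } finally {
>>             jmxc.close();
>>         }
>>     }
>> }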
>>
>>
>> I'm glad that your setup is stable. I have the following cache setup here:
>>
>> $ grep cache `find . -name 'xwiki.cfg'`
>> #-# Put a cache in front of the document store. This greatly improves
>> performance at the cost of memory consumption.
>> xwiki.store.cache=1
>> #-# Maximum number of documents to keep in the cache.
>> xwiki.store.cache.capacity=100
>> #-# Maximum number of documents to keep in the rendered cache
>> xwiki.render.cache.capacity=100
>> # xwiki.authentication.ldap.groupcache_expiration=21600
>> xwiki.plugin.image.cache.capacity=30
>>
>> This is XWiki 6.0.1 on top of Tomcat 7.0.53 using PostgreSQL 9.3/64bit.
>> The JVM is 1.7.0_07, everything running on Solaris 11.1. I give 2GB of
>> RAM to Catalina via:
>>
>> $ cat bin/setenv.sh
>> CATALINA_OPTS="-server -Xms2048m -Xmx2048m -XX:MaxPermSize=196m
>> -Dfile.encoding=utf-8 -Djava.awt.headless=true -XX:+UseParallelGC
>> -XX:MaxGCPauseMillis=100"
>> export CATALINA_OPTS
>>
>>
>> and then run my benchmark, which just uses REST to import the following
>> simple page:
>>
>> "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?>"
>>       ++ "<page xmlns=\"http://www.xwiki.org\">"
>>       ++ "<title>Semantic XWiki Benchmark page " ++  title ++ "</title>"
>>       ++ "<syntax>xwiki/2.0</syntax>"
>>       ++ "<content>This is a benchark page."
>>       ++ "The list of properties defined for this page is:"
>>       ++ "</content>"
>>       ++ "</page>"
>>
>> where title is BenchPage_<number>
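>>
>> In Java, one import step would look roughly like the sketch below (the
>> /xwiki/rest/wikis/xwiki/spaces/Main/pages/... path is the standard REST
>> endpoint; the Main space and the Admin:admin credentials are example
>> values, not necessarily my setup):
>>
>> import java.io.OutputStream;
>> import java.net.HttpURLConnection;
>> import java.net.URL;
>> import javax.xml.bind.DatatypeConverter;
>>
>> public class BenchImport {
>>     public static void main(String[] args) throws Exception {
>>         String n = "1"; // example page number
>>         String xml = "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?>"
>>             + "<page xmlns=\"http://www.xwiki.org\">"
>>             + "<title>Semantic XWiki Benchmark page " + n + "</title>"
>>             + "<syntax>xwiki/2.0</syntax>"
>>             + "<content>This is a benchmark page.</content>"
>>             + "</page>";
>>         URL url = new URL("http://localhost:8080/xwiki/rest/wikis/xwiki"
>>             + "/spaces/Main/pages/BenchPage_" + n);
>>         HttpURLConnection con = (HttpURLConnection) url.openConnection();
>>         con.setRequestMethod("PUT");
>>         con.setDoOutput(true);
>>         con.setRequestProperty("Content-Type", "application/xml");
>>         con.setRequestProperty("Authorization", "Basic "
>>             + DatatypeConverter.printBase64Binary("Admin:admin".getBytes("UTF-8")));
>>         OutputStream out = con.getOutputStream();
>>         out.write(xml.getBytes("UTF-8"));
>>         out.close();
>>         System.out.println("HTTP " + con.getResponseCode()); // expect 201 or 202
>>     }
>> }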
>>
>> Now, when I push that, after around 400k pages imported I hit the OOM
>> error. JConsole is thrown out by a closed connection, and even the
>> benchmark tool complains about a few 500 HTTP error codes received. The
>> exception output on Tomcat's console looks like:
>>
>> Exception in thread "DefaultQuartzScheduler_QuartzSchedulerThread"
>> java.lang.OutOfMemoryError: Java heap space
>> Exception in thread "RMI TCP Connection(idle)" Exception in thread "RMI TCP
>> Connection(idle)" java.lang.OutOfMemoryError: Java heap space
>> java.lang.OutOfMemoryError: Java heap space
>> Exception in thread
>> "TxCleanupService,platform.security.authorization.cache,local"
>> java.lang.OutOfMemoryError: Java heap space
>> Exception in thread "TxCleanupService,localization.bundle.document,local"
>> Exception in thread "TxCleanupService,wiki.descriptor.cache.wikiId,local"
>> java.lang.OutOfMemoryError: Java heap space
>> java.lang.OutOfMemoryError: Java heap space
>> Exception in thread "TxCleanupService,xwiki.store.pageexistcache,local"
>> java.lang.OutOfMemoryError: Java heap space
>> Exception in thread "TxCleanupService,xwiki.store.pagecache,local"
>> java.lang.OutOfMemoryError: Java heap space
>> May 30, 2014 4:12:18 PM org.restlet.engine.application.StatusFilter doHandle
>> WARNING: Exception or error caught in status service
>> java.lang.OutOfMemoryError: Java heap space
>>
>> May 30, 2014 4:12:18 PM org.restlet.engine.application.StatusFilter doHandle
>> WARNING: Exception or error caught in status service
>> java.lang.OutOfMemoryError: Java heap space
>>          at java.lang.Class.getDeclaredMethods0(Native Method)
>>          at java.lang.Class.privateGetDeclaredMethods(Class.java:2442)
>>          at java.lang.Class.getMethod0(Class.java:2685)
>>          at java.lang.Class.getMethod(Class.java:1620)
>>          at
>> org.xwiki.legacy.internal.oldcore.notification.LegacyNotificationDispatcher.getNotificationManager(LegacyNotificationDispatcher.java:94)
>>          at
>> org.xwiki.legacy.internal.oldcore.notification.LegacyNotificationDispatcher.onEvent(LegacyNotificationDispatcher.java:109)
>>          at
>> org.xwiki.observation.internal.DefaultObservationManager.notify(DefaultObservationManager.java:299)
>>          at
>> org.xwiki.observation.internal.DefaultObservationManager.notify(DefaultObservationManager.java:264)
>>          at com.xpn.xwiki.XWiki.saveDocument(XWiki.java:1323)
>>          at com.xpn.xwiki.api.Document.saveDocument(Document.java:2299)
>>          at com.xpn.xwiki.api.Document.save(Document.java:2202)
>>          at com.xpn.xwiki.api.Document.save(Document.java:2196)
>>          at
>> org.xwiki.rest.internal.resources.pages.ModifiablePageResource.putPage(ModifiablePageResource.java:67)
>>          at
>> org.xwiki.rest.internal.resources.pages.PageResourceImpl.putPage(PageResourceImpl.java:62)
>>          at sun.reflect.GeneratedMethodAccessor296.invoke(Unknown Source)
>>          at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>          at java.lang.reflect.Method.invoke(Method.java:601)
>>          at
>> org.restlet.ext.jaxrs.internal.wrappers.AbstractMethodWrapper.internalInvoke(AbstractMethodWrapper.java:171)
>>          at
>> org.restlet.ext.jaxrs.internal.wrappers.ResourceMethod.invoke(ResourceMethod.java:291)
>>          at
>> org.restlet.ext.jaxrs.JaxRsRestlet.invokeMethod(JaxRsRestlet.java:1043)
>>          at org.restlet.ext.jaxrs.JaxRsRestlet.handle(JaxRsRestlet.java:792)
>>          at org.restlet.routing.Filter.doHandle(Filter.java:159)
>>          at org.restlet.routing.Filter.handle(Filter.java:206)
>>          at org.restlet.routing.Router.doHandle(Router.java:500)
>>          at org.restlet.routing.Router.handle(Router.java:740)
>>          at org.restlet.routing.Filter.doHandle(Filter.java:159)
>>          at org.restlet.routing.Filter.handle(Filter.java:206)
>>          at org.restlet.routing.Filter.doHandle(Filter.java:159)
>>          at org.restlet.routing.Filter.handle(Filter.java:206)
>>          at org.restlet.routing.Filter.doHandle(Filter.java:159)
>>          at org.restlet.routing.Filter.handle(Filter.java:206)
>>          at org.restlet.routing.Filter.doHandle(Filter.java:159)
>>
>> May 30, 2014 4:12:18 PM org.restlet.engine.application.StatusFilter doHandle
>> WARNING: Exception or error caught in status service
>> java.lang.OutOfMemoryError: Java heap space
>> [same stack trace as above]
>>
>> May 30, 2014 4:12:18 PM org.restlet.engine.application.StatusFilter doHandle
>> WARNING: Exception or error caught in status service
>> java.lang.OutOfMemoryError: Java heap space
>>
>> 2014-05-30 16:12:18,851 [DefaultQuartzScheduler_Worker-7] ERROR
>> o.q.c.JobRunShell              - Job
>> DEFAULT.xwiki:Scheduler.WatchListHourlyNotifier_0 threw an unhandled
>> Exception:
>> java.lang.OutOfMemoryError: Java heap space
>> 2014-05-30 16:12:18,856 [XWiki Solr index thread] WARN
>> o.h.u.JDBCExceptionReporter    - SQL Error: 0, SQLState: 08001
>> 2014-05-30 16:12:18,856 [XWiki Solr index thread] ERROR
>> o.h.u.JDBCExceptionReporter    - The connection attempt failed.
>> 2014-05-30 16:12:18,858 [DefaultQuartzScheduler_Worker-7] ERROR
>> o.q.c.ErrorLogger              - Job
>> (DEFAULT.xwiki:Scheduler.WatchListHourlyNotifier_0 threw an exception.
>> org.quartz.SchedulerException: Job threw an unhandled exception.
>>          at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
>> ~[quartz-1.6.5.jar:1.6.5]
>>          at
>> org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:525)
>> [quartz-1.6.5.jar:1.6.5]
>> Caused by: java.lang.OutOfMemoryError: Java heap space
>> 2014-05-30 16:12:18,859 [DefaultQuartzScheduler_Worker-7] ERROR
>> c.x.x.p.s.StatusListener       - Job
>> (DEFAULT.xwiki:Scheduler.WatchListHourlyNotifier_0 threw an exception.
>> org.quartz.SchedulerException: Job threw an unhandled exception.
>>          at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
>> ~[quartz-1.6.5.jar:1.6.5]
>>          at
>> org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:525)
>> [quartz-1.6.5.jar:1.6.5]
>> Caused by: java.lang.OutOfMemoryError: Java heap space
>>
>>
>> The funny thing is that before the error JConsole did not show any signs
>> of the server allocating a lot of memory. It was probably a quick spike
>> in memory consumption; otherwise I don't understand at all why this
>> happens.
>>
>>
>> Now, the question is: I have (or I think I have) a very limited cache
>> setup here. I've increased the heap size to 2GB, which is above the size
>> recommended by xwiki.org itself -- they warn about sizes bigger than 1GB
>> due to GC slowness. Anyway: 2GB, a limited cache size, and yet I hit OOM.
>> Do you think I should increase the memory limit even further?
>>
>> Thanks!
>> Karel


Re: Out of memory errors (scalability of xwiki).

Thomas Mortagne
Administrator
On Fri, May 30, 2014 at 10:02 PM, Karel Gardas <[hidden email]> wrote:

>
> Thomas,
>
> very good idea, but my user is Admin, and fortunately it already has
> automatic document watching (found in the Watchlist section) disabled --
> otherwise it would probably attempt to send email to [hidden email]...
>
> Anyway, is there any other user created by default in XWiki with
> automatic document watching enabled and an email address set? At least I
> don't see any in http://localhost:8080/xwiki/bin/view/Main/UserDirectory
> (except Admin).

That would not matter since you are using the Admin user.




--
Thomas Mortagne

Re: Out of memory errors (scalability of xwiki).

kgardas
In reply to this post by Thomas Mortagne

I've also tested uploading 1 million simple pages, and 8GB of RAM was not
enough for this. Since generating such an amount of pages is probably not
the main application domain of XWiki, I'm not sure whether the xwiki devs
would consider this a bug or just an unoptimized case. I'm asking because
I find it rather bad to generate a *big* document describing the XWiki
changes of the past hour even when nobody is interested in that document,
and in this way consume all the RAM and drive XWiki to an OOM error.

Question: shall I report it as a bug, or as a feature request asking for a
kind of lazy changelog generation (done only when it is actually needed),
pointing to this thread?

Thanks!
Karel

On 05/30/14 08:31 PM, Thomas Mortagne wrote:

> The references to the watchlist in your log make me think about
> something: go to your XWiki user profile, disable "Automatic document
> watching" in the Watchlist section, and try again.
>
> By default, every time a user creates a page it's added to his watchlist,
> and I'm wondering if your OOM here is the watchlist trying to generate a
> mail about what happened in the past hour, which would certainly be a
> huge mail for 400k documents.


Re: Out of memory errors (scalability of xwiki).

Thomas Mortagne
Administrator
On Thu, Jun 12, 2014 at 5:28 PM, Karel Gardas <[hidden email]> wrote:
>
> I've also tested uploading 1 million simple pages, and 8GB of RAM was not
> enough for this. Since generating such an amount of pages is probably not
> the main application domain of XWiki, I'm not sure whether the xwiki devs
> would consider this a bug or just an unoptimized case.

> I'm asking because I find it rather bad to generate a *big* document
> describing the XWiki changes of the past hour even when nobody is
> interested in that document, and in this way consume all the RAM and
> drive XWiki to an OOM error.

It does not work that way; mails are created based on users' bookmarked pages.

>
> Question: shall I report it as a bug, or as a feature request asking for a
> kind of lazy changelog generation (done only when it is actually needed),
> pointing to this thread?
>
>
> Thanks!
> Karel
>



--
Thomas Mortagne

Re: Out of memory errors (scalability of xwiki).

kgardas
On 06/12/14 05:35 PM, Thomas Mortagne wrote:

> On Thu, Jun 12, 2014 at 5:28 PM, Karel Gardas <[hidden email]> wrote:
>>
>> I've also tested uploading 1 million simple pages, and 8GB of RAM was not
>> enough for this. Since generating such an amount of pages is probably not
>> the main application domain of XWiki, I'm not sure whether the xwiki devs
>> would consider this a bug or just an unoptimized case.
>
>> I'm asking because I find it rather bad to generate a *big* document
>> describing the XWiki changes of the past hour even when nobody is
>> interested in that document, and in this way consume all the RAM and
>> drive XWiki to an OOM error.
>
> It does not work that way; mails are created based on users' bookmarked
> pages.

If so, then why do I get an OOM error with the Notifier apparently being the cause?

Thanks!
Karel

Re: Out of memory errors (scalability of xwiki).

Thomas Mortagne
Administrator
On Thu, Jun 12, 2014 at 5:51 PM, Karel Gardas <[hidden email]> wrote:

> On 06/12/14 05:35 PM, Thomas Mortagne wrote:
>> [...]
>> It does not work that way; mails are created based on users' bookmarked
>> pages.
>
> If so, then why do I get an OOM error with the Notifier apparently being
> the cause?
>
> Thanks!
> Karel

Did you disable auto watch in your user profile as I suggested in a
previous mail?

--
Thomas Mortagne

Re: Out of memory errors (scalability of xwiki).

kgardas
On 06/12/14 06:14 PM, Thomas Mortagne wrote:

>>> It does not work that way; mails are created based on users' bookmarked
>>> pages.
>>
>> If so, then why do I get an OOM error with the Notifier apparently being
>> the cause?
>
> Did you disable auto watch in your user profile as I suggested in a
> previous mail?

IIRC I replied [1] that I'm using the Admin user only, and it has auto
watch disabled by default -- and you already replied to that too [2].

Thanks!
Karel

[1]: http://lists.xwiki.org/pipermail/devs/2014-May/056832.html
[2]: http://lists.xwiki.org/pipermail/devs/2014-May/056833.html

Re: Out of memory errors (scalability of xwiki).

Thomas Mortagne
Administrator
If you have an easy way to reproduce the issue that you can explain in a
JIRA issue, then you should always create one. At worst we will mark it
as won't-fix with an explanation why, but I don't have the feeling this
is a won't-fix.

On Thu, Jun 12, 2014 at 7:42 PM, Karel Gardas <[hidden email]> wrote:

> On 06/12/14 06:14 PM, Thomas Mortagne wrote:
>> [...]
>> Did you disable auto watch in your user profile as I suggested in a
>> previous mail?
>
> IIRC I replied [1] that I'm using the Admin user only, and it has auto
> watch disabled by default -- and you already replied to that too [2].
>
> Thanks!
> Karel
>
> [1]: http://lists.xwiki.org/pipermail/devs/2014-May/056832.html
> [2]: http://lists.xwiki.org/pipermail/devs/2014-May/056833.html



--
Thomas Mortagne