Page Complexity limiter


Page Complexity limiter

ktc
Devs,

In MediaWiki there are preprocessor limits (https://en.wikipedia.org/wiki/Wikipedia:Template_limits) that track how complex a page is, so that rendering can gracefully bail out if any of those limits is exceeded and keep the server stable.  MediaWiki includes this information in a comment in the rendered page, like:
<!--
NewPP limit report
Parsed by mw1270
Cached time: 20170223033729
Cache expiry: 2592000
Dynamic content: false
CPU time usage: 0.124 seconds
Real time usage: 0.170 seconds
Preprocessor visited node count: 468/1000000
Preprocessor generated node count: 0/1500000
Post‐expand include size: 50512/2097152 bytes
Template argument size: 37/2097152 bytes
Highest expansion depth: 7/40
Expensive parser function count: 0/500
Lua time usage: 0.039/10.000 seconds
Lua memory usage: 1.66 MB/50 MB
-->  

We have cases where our users will want to include pages as templates in other pages.  These inclusions can get deeply nested and can contain complex content that is expensive and/or slow to render.  We want to make sure that a user cannot bring our servers down, accidentally or intentionally, by using too many expensive or long-running inclusions on their pages.  Testing this on our instance of XWiki shows that XWiki will run until either a) the page finally renders, which can take minutes in some cases, or b) the server runs out of memory and falls over.

I have been looking but have been unable to find whether XWiki has this sort of feature to turn on and configure.  I was also unable to find it in the XWiki Jira or in this forum, so I wanted to ask the Devs directly.  Does this sort of limiting exist in XWiki?  And if so, how can I turn it on?

Thanks!

Re: Page Complexity limiter

vmassol
Administrator
Hi,

> On 28 Feb 2017, at 22:18, ktc <[hidden email]> wrote:
>
> Devs,
>
> In MediaWiki there are  preprocessor limits
> <https://en.wikipedia.org/wiki/Wikipedia:Template_limits>   that track how
> complex a page is so that rendering can gracefully bail out if any of these
> limits are broken to keep the server stable.  MediaWiki will include this
> information in a comment in the rendered page like:
> <!--
> NewPP limit report
> Parsed by mw1270
> Cached time: 20170223033729
> Cache expiry: 2592000
> Dynamic content: false
> CPU time usage: 0.124 seconds
> Real time usage: 0.170 seconds
> Preprocessor visited node count: 468/1000000
> Preprocessor generated node count: 0/1500000
> Post‐expand include size: 50512/2097152 bytes
> Template argument size: 37/2097152 bytes
> Highest expansion depth: 7/40
> Expensive parser function count: 0/500
> Lua time usage: 0.039/10.000 seconds
> Lua memory usage: 1.66 MB/50 MB
> -->
>
> We have cases where our users will want to include pages as templates into
> other pages and these can get pretty deeply nested as well as contain some
> complex content that is expensive and/or takes a long time to render.  We
> want to make sure that a user cannot bring our servers down accidentally or
> intentionally due to using too many expensive/long running inclusions on
> their pages.  Testing this on our instance of XWiki is showing that XWiki
> will run until a) the page is finally able to render which could be minutes
> in some cases or b) the server runs out of memory and falls over.
>
> I have been looking but have been unable to find if XWiki has this sort of
> feature to turn on and configure.  I was also unable to find it in the XWiki
> Jira or in this forum so I wanted to ask the Devs directly.  Does this sort
> of limiting exist on XWiki?  And if so, how can I turn it on?

No, there isn't. The closest I can think of is http://extensions.xwiki.org/xwiki/bin/view/Extension/GroovyModuleCommons#HTimedInterruptCustomizer

Do you know of other Java programs that do this? How would you implement this in Java? AFAIK it's not possible. IMO you'd need a custom JVM to guarantee this, since aborting a thread is not something that is guaranteed in Java.
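To illustrate the problem (this is only a sketch with made-up names, not XWiki code): the usual Java approach is to run the rendering on a worker thread and cancel it after a timeout, but cancelling a Future merely interrupts the thread; if the code being run never checks the interrupt flag, it keeps running and consuming memory:

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class RenderTimeoutSketch
{
    /**
     * Runs the rendering task with a wall-clock timeout.
     * On timeout the worker thread is interrupted, but Java cannot force it
     * to stop: if the task never checks its interrupt flag, it keeps running
     * (and keeps consuming CPU and memory) in the background.
     */
    public static String renderWithTimeout(Callable<String> rendering, long timeoutSeconds) throws Exception
    {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<String> result = executor.submit(rendering);
        try {
            return result.get(timeoutSeconds, TimeUnit.SECONDS);
        } catch (TimeoutException e) {
            // Best effort only: sets the interrupt flag, does not kill the thread.
            result.cancel(true);
            throw new Exception("Rendering exceeded " + timeoutSeconds + " seconds");
        } finally {
            executor.shutdownNow();
        }
    }
}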

Thanks
-Vincent

> Thanks!


Re: Page Complexity limiter

ktc
That extension is on the right track to what we would need; unfortunately, it only works for Groovy.

I wasn't thinking that it would have to be at the JVM level, but maybe at the rendering context level, or the context level in general?  The context could then keep track of how deeply nested the inclusions are, as well as how long the request has been running.  These limits could be checked at critical points while rendering is occurring, with an Exception thrown to abort the process should a limit be exceeded.  It wouldn't necessarily have to guarantee 100% accuracy in the first iteration of the feature, but a best effort could provide some protection.
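As a rough sketch of what I mean (hypothetical names only, not an existing XWiki API), the context could hold a small limiter object like this, with the include/display logic calling it around each nested page and macros calling checkTime() at convenient points:

public class PageComplexityLimiter
{
    private final int maxInclusionDepth;
    private final long maxRenderMillis;
    private final long startMillis = System.currentTimeMillis();
    private int inclusionDepth;

    public PageComplexityLimiter(int maxInclusionDepth, long maxRenderMillis)
    {
        this.maxInclusionDepth = maxInclusionDepth;
        this.maxRenderMillis = maxRenderMillis;
    }

    /** Called before rendering a nested (included) page. */
    public void enterInclusion() throws Exception
    {
        if (++this.inclusionDepth > this.maxInclusionDepth) {
            throw new Exception("Inclusion depth limit exceeded: " + this.maxInclusionDepth);
        }
        checkTime();
    }

    /** Called when the nested page has finished rendering. */
    public void exitInclusion()
    {
        this.inclusionDepth--;
    }

    /** Called at critical points during rendering (e.g. before each macro). */
    public void checkTime() throws Exception
    {
        if (System.currentTimeMillis() - this.startMillis > this.maxRenderMillis) {
            throw new Exception("Rendering time limit exceeded: " + this.maxRenderMillis + " ms");
        }
    }
}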

Re: Page Complexity limiter

vmassol
Administrator
Hi,

> On 28 Feb 2017, at 23:25, ktc <[hidden email]> wrote:
>
> That extension is on the right track to what we would need, unfortunately it
> only works for groovy.
>
> I wasn't thinking that it would have to be at the JVM level but maybe at the
> rendering context level or context level in general?  The context could then
> can keep track of how deeply nested the inclusions are as well as keep track
> of how long the request has been running.  These limits can then be checked
> at critical times while rendering is occurring and then throw an Exception
> to abort the process should a limit be broken.  It wouldn't necessarily have
> to guarantee 100% accuracy in the first iteration of the feature, but a best
> effort could be good for some protection.

If you check the groovy pas I linked to in my first response, you'll see that the limiter will fail as soon as a Java API is called, so indeed there's no way to guarantee a response time.

Now, back to what you mentioned above: the only loop where this would make sense is the MacroTransformation component, since macros are what can take time (especially the script macros). So yes, it would be possible to stop the rendering there (i.e. stop evaluating transformations when they take too much time). Actually, we already have a protection there to avoid an infinite cycle (a macro generating itself).
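Purely as an illustration of that idea (hypothetical names, not the actual MacroTransformation code), it would amount to giving the whole transformation loop a wall-clock budget and stopping once it is spent:

import java.util.List;

public class TimeBudgetedLoop
{
    /** One unit of work, e.g. expanding one macro. */
    public interface Step
    {
        void execute();
    }

    /**
     * Executes steps until the list is exhausted or the budget is spent.
     * @return true if everything ran, false if the loop was aborted.
     */
    public static boolean run(List<Step> steps, long budgetMillis)
    {
        long start = System.currentTimeMillis();
        for (Step step : steps) {
            if (System.currentTimeMillis() - start > budgetMillis) {
                // Abort gracefully: leave the remaining macros unexpanded
                // rather than rendering for minutes or exhausting memory.
                return false;
            }
            step.execute();
        }
        return true;
    }
}

The remaining macros would simply be left unexpanded, which is also why it's only best effort: a single macro that blocks for minutes still blocks.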

That's certainly doable and not too hard, but you need to understand that it wouldn't guarantee anything. Could you please raise a JIRA issue for this idea so that it's recorded, and so that anyone interested in implementing it can find it?

Thanks
-Vincent



Re: Page Complexity limiter

vmassol
Administrator

> On 1 Mar 2017, at 09:04, Vincent Massol <[hidden email]> wrote:
>
> If you check the groovy pas

^^^^
I meant “page” not “pas” ;)

Thanks
-Vincent

