Re: [OPEN-ILS-DEV] PATCH: osrf_json_object.c (miscellaneous)

Scott McKellar mck9 at swbell.net
Mon Dec 3 23:46:59 EST 2007


--- Mike Rylander <mrylander at gmail.com> wrote:

> On Dec 3, 2007 9:53 PM, Mike Rylander <mrylander at gmail.com> wrote:
> > On Dec 2, 2007 11:55 PM, Scott McKellar <mck9 at swbell.net> wrote:
> > [snip]
> > > 4. I introduced a linked list of jsonObjects that have been
> allocated
> > > from the heap but are not currently in use.
> >
> > I like this idea a lot, but I'd like to have a mitigation strategy
> in
> > mind if it turns out there is an issue with bloat.  There are some
> > patterns in cstore that I could imagine causing a problem, but I
> > haven't had time to test that.

<snip>

The problem is that a fancy scheme to fine-tune the allocation
strategy at run time will be largely self-defeating.  The
instrumentation and analysis necessary to optimize things will
incur overhead of its own, leaving us roughly back where we started.

Probably the most reasonable way to avoid memory bloat is the 
simplest.  In jsonObjectFree(), apply the following logic:

    IF length_of_free_list >= free_max
        return the jsonObject to the heap with free()
    ELSE
        stick it on the free list
        add 1 to length_of_free_list
    END IF

...and likewise decrement length_of_free_list whenever we allocate
from the free list.

This logic imposes a limit on the amount of unused memory to be
cached, with a minimum of additional overhead.
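
For concreteness, here is a rough C sketch of that logic.  It is only
an illustration of the capped free list, not the actual code in
osrf_json_object.c; the names free_list, free_list_size, free_max,
next_free, and jsonObjectAlloc are invented for the example.

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical stand-in for the real jsonObject; only the link
     * pointer matters for this sketch. */
    typedef struct jsonObject_ {
        struct jsonObject_* next_free;   /* link used only while cached */
        /* ... the real structure's members would go here ... */
    } jsonObject;

    static jsonObject* free_list = NULL;   /* head of the cache */
    static unsigned free_list_size = 0;    /* current cache length */
    static unsigned free_max = 50;         /* arbitrary cap, as above */

    void jsonObjectFree( jsonObject* o ) {
        if( !o ) return;
        /* ... release any children and owned strings first ... */
        if( free_list_size >= free_max ) {
            free( o );                     /* cache is full: back to the heap */
        } else {
            o->next_free = free_list;      /* stick it on the free list */
            free_list = o;
            free_list_size++;
        }
    }

    jsonObject* jsonObjectAlloc( void ) {
        jsonObject* o;
        if( free_list ) {
            o = free_list;                 /* reuse a cached object */
            free_list = o->next_free;
            free_list_size--;
        } else {
            o = malloc( sizeof( jsonObject ) );
            if( !o ) return NULL;
        }
        memset( o, 0, sizeof( jsonObject ) );
        return o;
    }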

The next question is: what value do we use for free_max?

Within reasonable limits it probably doesn't much matter what the
maximum is, as long as it's big enough to cover us most of the time.
We could arbitrarily set a maximum of, say, 50.  You should have a
better idea than I do of what's reasonable.

It might be worthwhile to create a special instrumented version that
logs allocations and deallocations, or does whatever other analysis
might be useful.  Link the instrumented version to an application
of interest, run some typical real-world type loads through it, and
see what happens.
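
As a starting point, the instrumentation could be little more than a
few counters plus a report at shutdown.  This is only a sketch; the
counter names and report_stats() are invented for the example, and a
real version might route the output through the logging facility
rather than stderr.

    #include <stdio.h>

    static unsigned long alloc_count = 0;   /* objects handed out */
    static unsigned long free_count  = 0;   /* objects returned */
    static unsigned long peak_cached = 0;   /* high-water mark of the free list */

    static void record_alloc( void ) {
        alloc_count++;
    }

    static void record_free( unsigned cached_now ) {
        free_count++;
        if( cached_now > peak_cached )
            peak_cached = cached_now;
    }

    /* Dump the statistics, e.g. at process exit or on a signal. */
    static void report_stats( void ) {
        fprintf( stderr,
            "jsonObject stats: %lu allocated, %lu freed, peak cache %lu\n",
            alloc_count, free_count, peak_cached );
    }

Calling record_alloc() and record_free() from the allocate and free
paths, and report_stats() at exit, would tell us how long the free
list actually gets under a realistic load.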

We might discover that the free list never gets terribly long under
realistic conditions, and we can stop worrying about it.

It may turn out that different applications have dramatically
different allocation profiles.  We could easily provide a function
to set the value of free_max at run time, so that we can apply
different limits to different applications, or even to different
parts of the same application.  I'm not sure whether such a function 
should prune any excess off of the free list, or just let it 
disappear by attrition.
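
Building on the earlier sketch, such a setter might look like the
following.  The name jsonObjectSetFreeMax is hypothetical; this
version prunes the excess immediately, but dropping the loop would
give the let-it-drain-by-attrition behavior instead.

    void jsonObjectSetFreeMax( unsigned n ) {
        free_max = n;
        /* Prune any surplus right away; omit this loop to let the
         * excess disappear by attrition instead. */
        while( free_list_size > free_max && free_list ) {
            jsonObject* o = free_list;
            free_list = o->next_free;
            free_list_size--;
            free( o );
        }
    }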

We can dream up still more elaborations, but anything beyond the
simplest measures would probably be wasted effort.

-------------

The idea of reusing cached chunks of memory is a very general one.
For example, we could apply it to growing_buffers, or any other
structure that sees a lot of turnover.  You know better than I
what the likeliest candidates are.  If we're going to tweak the
code to limit memory bloat, we should apply the same tweaks
wherever we cache memory, for the sake of consistency.

Scott McKellar
http://home.swbell.net/mck9/ct/
 

