[OPEN-ILS-DEV] Postgres speed issues

Todd Holbrook todd at dreamtheorystudios.com
Mon Jun 30 12:03:07 EDT 2008


Dan,

I haven't started working on Evergreen yet, so I can't speak directly  
to that side, but I've done lots of PostgreSQL work.  In addition to  
Reuben's suggestions, which I would try first, also have a look at  
PostgreSQL's configuration for work_mem and consider bumping that up.   
If you're doing queries which sort large numbers of records (including  
DISTINCT and GROUP BY clauses), raising this high enough to keep the  
operations in memory rather than writing them to temporary files can  
make a big difference.  It's a good idea to try these changes one at a
time so you can see which one makes the most difference, in case you
want to raise it further in the future when you have more records.
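
For example, in postgresql.conf (the default is quite low - 1MB - and
the right value depends on how much RAM you can spare and how many
connections run at once, so treat this as a starting point rather than
a recommendation):

    work_mem = 16MB

You can also test a larger value for just your current session before
changing the config file:

    SET work_mem = '64MB';

Keep in mind that work_mem applies per sort or hash operation per
backend, so a generous setting can multiply quickly under concurrent
load.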

If you haven't played with them yet, the "EXPLAIN" and "EXPLAIN
ANALYZE" commands are really useful for dissecting queries, too -
you'll be able to see which indexes are being used and where your
slowest operations are.
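
For example, prefix one of the slow keyword searches with EXPLAIN
ANALYZE (the table and query below are just a hypothetical stand-in,
not the actual Evergreen search SQL):

    EXPLAIN ANALYZE
    SELECT id, title
      FROM my_records              -- placeholder table for illustration
     WHERE title ILIKE '%history%'
     ORDER BY title;

EXPLAIN ANALYZE actually runs the query and reports per-step actual
times, so look for sequential scans on large tables where you expected
an index scan, and for sort steps that dominate the total runtime; the
plain EXPLAIN form only shows the planner's estimates without running
anything.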

Todd



On 30-Jun-08, at 8:28 AM, Reuben Pasquini wrote:

> Try vacuuming the database.
>     $ vacuumdb -U postgres --all --full --analyze
>
> http://www.postgresql.org/docs/8.2/static/maintenance.html
>
> You may also want to increase the default number
> of shared-memory buffers postgres can use to
> stash data in memory.
>    http://www.postgresql.org/docs/8.2/static/runtime-config.html
>
> Good luck!
> Reuben
>
>
>
>>>> "Dan Wells" <dbw2 at calvin.edu> 6/30/2008 9:40 AM >>>
> Hello all,
>
> I have been spending the last few weeks trying out different methods
> for loading a batch of around 100,000 records.  I have had varying
> degrees of success with getting the load to "work," including copy
> level information, but the last few loads seem to have somehow
> developed a
> search speed problem.  I say "developed" because I am pretty sure that
> it wasn't horribly slow the first few times I did the load, but it has
> gotten progressively worse to the point that any keyword search which
> returns several thousand results no longer completes in the OPAC, and
> takes from 45 seconds to several minutes to process in postgres.  I am
> not certain that it has gotten worse, but I am sure that it is
> currently quite bad.
>
> In between each load I have been running build-db.sh, and even dropped
> and recreated my entire evergreen database before a recent load.  Is
> there something else I need to do in order to get an entirely clean
> slate?  This project has been my first experience with postgres, so I
> may be missing something rather obvious.
>
> My server specs are somewhat modest but by no means slow; I am
> currently running a RedHat ES 5 virtual machine on an Opteron 2.4GHz
> with 1GB of dedicated RAM.
>
> Any suggestions?  Naturally one might suspect an index problem.  Is
> there any way to verify that all relevant indexes are working,
> and/or to simply rebuild them all?
>
> Thanks,
> DW
>
>
>

--
Todd Holbrook
Dream Theory Studios Inc.
todd at dreamtheorystudios.com






