[pmwiki-users] Server open file count growing

Patrick R. Michaud pmichaud at pobox.com
Thu Jun 8 13:34:22 CDT 2006


On Thu, Jun 08, 2006 at 11:06:50AM -0500, Doug Carter wrote:
> Context:
>  pmwiki 2.1.6 using Farms, "neutral" skin, no cookbook items
>  PHP 4.4.1
>  OpenBSD 3.9 with default chrooted Apache 1.3
> 
> On the surface pmwiki seems to be working perfectly (and is much
> appreciated by the former Sharepoint user base :)

Excellent!

> I have a problem that manifests itself as a growing list of files
> kept open by Apache (user www on my system) that seems to be
> driven by pmwiki editing.
> 
> I can restart Apache and then count the number of open files that
> it owns (fstat -u www|wc) and I see about 85 open files.  This
> seems quite normal as each of the ten initial httpd daemons opens
> various files when they initialize.
> 
> After operating for about a week (with very moderate usage) this
> count grew to about 1,300 open files and I started getting errors
> from other applications complaining about lack of file
> descriptors.
> 
> I restarted Apache and over the last week I have watched the open
> file count slowly grow.  After a bunch of editing of pages this
> morning the count grew by about 100 files; it is now at 437 open
> files.

Hmmm.  I just double-checked all of the fopen() calls in PmWiki
to make sure they're being properly closed, and as far as I can
tell they are.  The only questionable one is the handle opened on
the .flock lockfile (and it's entirely possible that this is the
culprit).  And of course, there are other PHP functions which
might be allocating file descriptors and not releasing them --
I haven't checked those yet.
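
Just to illustrate the pattern I was checking for -- this is only a
sketch with made-up function names, not PmWiki's actual code -- the
lock handling ought to look roughly like:

<?php
## Sketch only: hypothetical names, not PmWiki's real functions.
function GrabLock($lockfile) {
  $fp = fopen($lockfile, 'w');    # allocates a file descriptor
  if ($fp) flock($fp, LOCK_EX);   # take an exclusive lock for the edit
  return $fp;
}

function ReleaseLock($fp) {
  if (!$fp) return;
  flock($fp, LOCK_UN);            # release the lock ...
  fclose($fp);                    # ... and free the descriptor
}

If the fclose() ever gets skipped (say, a request bails out on an
error path after taking the lock), the descriptor stays with that
Apache child until the child itself exits, which would match the
slow, edit-driven growth you're describing.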

I should note in passing that I've had trouble in the past with
Apache and PHP not properly closing and unlocking files.

Is there any way that you could find out *which files* the httpd
daemons are holding open?  Perhaps something in the /proc filesystem
could tell us?  On my linux boxes, the following bash command will 
tell me the number of file descriptors currently held by each process:

for i in /proc/*/fd; do WC=$(echo $i/* | wc -w); echo $WC $i; done | sort -n

An 'ls -l' command in one of the /proc/*/fd directories might even
tell us which files are being held open.
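
For example, picking one child's PID (12345 is just a placeholder):

ls -l /proc/12345/fd

Since the chrooted OpenBSD setup may not have /proc mounted at all, I
believe 'fstat -p <pid>' against a single httpd child would give the
equivalent picture (descriptors listed by mount point and inode).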

What's the setting for MaxRequestsPerChild in Apache?
If it's zero, then one workaround might be to set 
MaxRequestsPerChild so that Apache child processes eventually 
exit (thus freeing up any leaked file descriptors).  But
I'd still prefer to see if we can find out what is actually holding
these files open.
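
For reference, the workaround is just a single directive in
httpd.conf (the value here is only an illustration):

MaxRequestsPerChild 1000

Each child then exits after serving that many requests and takes any
leaked descriptors with it.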

Pm



