[pmwiki-users] Blog proposal

Martin Fick fick at fgm.com
Fri Dec 16 14:09:33 CST 2005


> Anyway, we could make caching an option.  It's also possible to
> derive a PageStore class that caches the ls() results and use
> that instead.

 Yes
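
  Just to make the derived-class idea concrete, here is a rough
sketch of what it might look like (assuming the stock PageStore's
ls($pats) signature, and that calling ls() with no pattern returns
the full listing -- the names here are only illustrative):

  class CachedPageStore extends PageStore {
    var $lscache = NULL;
    function ls($pats = NULL) {
      # read the directory only once per request; later calls
      # just filter the cached listing
      if (is_null($this->lscache))
        $this->lscache = parent::ls();
      if (is_null($pats)) return $this->lscache;
      $out = $this->lscache;
      # simple filter: treat each entry of $pats as a preg to
      # match; the real ls() may interpret its patterns differently
      foreach ((array)$pats as $p) $out = preg_grep($p, $out);
      return $out;
    }
  }
  # e.g. in config.php (if the default store is set up this way):
  #   $WikiDir = new CachedPageStore('wiki.d/{$FullName}');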

> >   For both of these it helps to split this into two separate
> > non-nested main loops, one to read the files and one to
> > check for globbing (pattern matching).  
> 
> I think I'll probably hybridize this, so that I use preg_grep
> on all of the entries in a single directory (for those who have
> wiki.d/ segregated into per-group directories).  Or perhaps 
> use preg_grep on every hundred entries or so, instead of 
> preg_match on every entry.  That would get a significant
> speedup while also preventing us from eating up a huge chunk
> of memory with page names just to throw them away again.  :-)


  Ah, that makes sense.  I wouldn't make it too complicated,
though; this probably wouldn't benefit very many people...
Although, you never know, the new pagelist code could spur some
interesting uses where people would appreciate the optimizations,
both for memory and speed!
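
  Something like this is what I imagine for the "every hundred
entries" variant -- purely a toy sketch, with $dir and $pat standing
in for whatever the real ls() already has in hand:

  $out = array(); $chunk = array();
  $dfp = opendir($dir);
  while (($f = readdir($dfp)) !== false) {
    if ($f == '.' || $f == '..') continue;
    $chunk[] = $f;
    # filter in batches so we never hold more than ~100 raw
    # names that are just going to be thrown away again
    if (count($chunk) >= 100) {
      $out = array_merge($out, preg_grep($pat, $chunk));
      $chunk = array();
    }
  }
  closedir($dfp);
  $out = array_merge($out, preg_grep($pat, $chunk));  # leftover partial chunk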



> Lastly, I'm thinking that the match against a valid page name
> pattern ought to go at the end of the pattern list instead of 
> the beginning, since it'll be the least restrictive of any patterns 
> passed in to the ls() function.

  I think that is a very good idea!
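
  Just to spell out why (a toy illustration only; $names stands for
whatever candidate list ls() is working from, and the patterns are
made up): when each name is tested against the patterns in order
and rejected at the first failure, a broad pattern placed first
costs a preg_match on nearly every name, while placed last it only
runs on the survivors.

  $pats = array(
    '/^Blog\\./',                        # restrictive: limit to one group
    '/\\.2005-12-\\d\\d$/',              # restrictive: December 2005 entries
    '/^[A-Z]\\w*\\.[A-Z0-9][-\\w]*$/',   # broad: "valid page name", last
  );
  $out = array();
  foreach ($names as $n) {
    foreach ($pats as $p)
      if (!preg_match($p, $n)) continue 2;  # bail at the first failure
    $out[] = $n;
  }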

  I was also toying with the idea of somehow caching the pattern
sets and their results, since these would probably be smaller than
the full directory reads and would save time twice over.  I just
couldn't figure out a good way to do it (I guess you could
concatenate the patterns together as a key into an array?).  And,
of course, this would benefit even fewer ls() calls -- only the
ones that use identical pattern sets.
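
  Something along these lines is about as far as I got
(cached_ls() is just a made-up helper name, and it only helps
within a single request):

  function cached_ls(&$pagestore, $pats = NULL) {
    static $cache = array();
    # the pattern set itself becomes the cache key
    $key = implode("\n", (array)$pats);
    if (!isset($cache[$key]))
      $cache[$key] = $pagestore->ls($pats);  # only hit the store on a miss
    return $cache[$key];
  }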

  -Martin



