[pmwiki-users] Active prevention of crawling by web robots
Daniel Scheibler
scheibi at gmail.com
Mon Aug 8 05:44:37 CDT 2005
2005/8/8, Simon <s-i-m-o-n at paradise.net.nz>:
> Daniel Scheibler wrote:
>
> >At
> >
> >http://www.pmwiki.org/wiki/Cookbook/BlockCrawler
> >
> >I describe a recipe to actively prevent web crawlers.
> >
> >Greets,
> >
> >scheiby.
> Was there a reason why robots.txt was not used, or the robots meta tag?
>
> S
>
> http://www.searchengineworld.com/robots/robots_tutorial.htm
> http://www.robotstxt.org/wc/meta-user.html
I normally use robots.txt on my pages.
But I also have some private pages/groups/fields that I use to
bookmark information or to coordinate meetings, where handing out a
read password is not practical.
These pages I want to protect additionally by rejecting particular
user agents (robots etc.).
Another reason is that I cannot control whether each web crawler
actually honours robots.txt, whereas pages a crawler cannot retrieve
it also cannot index.
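The idea of rejecting requests by user agent can be sketched as follows. This is only a minimal illustration of the approach, not the actual BlockCrawler recipe (which is a PmWiki/PHP script); the agent list and function names here are assumptions for the example.

```python
# Hypothetical sketch: refuse requests whose User-Agent header matches
# a known robot, instead of relying on the crawler to obey robots.txt.

# Illustrative list of user-agent substrings to block (assumption).
BLOCKED_AGENTS = ("googlebot", "slurp", "msnbot", "bingbot")

def is_blocked(user_agent: str) -> bool:
    """Return True if the User-Agent string matches a blocked robot."""
    ua = user_agent.lower()
    return any(bot in ua for bot in BLOCKED_AGENTS)

def handle_request(user_agent: str) -> int:
    """Return an HTTP status code: 403 for blocked robots, 200 otherwise."""
    return 403 if is_blocked(user_agent) else 200
```

Unlike a robots.txt entry, which is purely advisory, this check runs on every request, so even a crawler that ignores robots.txt never receives the page content.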
I did think about using read passwords for these pages, but I use my
bookmark page in particular as the start page in my web browser, and
having to enter a password there isn't nice.
Greets,
scheiby.
--
Daniel Scheibler ========:} student at
eMail: scheibi at gmail.com BTU Cottbus/Germany
WWW: http://www.scheiby.de