<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html;charset=ISO-8859-1" http-equiv="Content-Type">
</head>
<body bgcolor="#ffffff" text="#000000">
Patrick R. Michaud wrote:
<blockquote cite="mid20061108230315.GA26128@host.pmichaud.com"
type="cite">
<pre wrap="">On Wed, Nov 08, 2006 at 04:36:56PM -0600, Jon Haupt wrote:
</pre>
<blockquote type="cite">
<pre wrap="">On 11/8/06, Simon <a class="moz-txt-link-rfc2396E" href="mailto:s-i-m-o-n@paradise.net.nz"><s-i-m-o-n@paradise.net.nz></a> wrote:
</pre>
<blockquote type="cite">
<pre wrap="">I wonder if it would be worth considering an unblock whitelist, or a
more granular unblock list.
It is possibly a bit strong to obviate the entire rule.
</pre>
</blockquote>
<pre wrap="">I would agree, this is one reason I'm actually still using
cmsb-blocklist. The other is the Blocklog. The ability to unblock
bits of a block rule and the ability to log blocks (so that you can
see what's been blocked) are very useful tools.
</pre>
</blockquote>
<pre wrap=""><!---->
How exactly does one unblock "bits of a block rule"? I mean,
how can that work?
Pm
</pre>
</blockquote>
Well, yes, that is a good question. An initial thought is that each rule
extracts and saves the text that matches the rule <br>
(in normal circumstances this doesn't happen).<br>
These text extracts are then compared against a "whitelist", e.g.<br>
<br>
white: musica<br>
<br>
It would seem that for this to work the white text and the extracted text
would have to match exactly.<br>
If all the extracts on a page are "overridden" by white text, the page could
be posted.<br>
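<br>
A rough sketch of how that exact-match override might work (hypothetical
Python, not actual PmWiki or cmsb-blocklist code; the block rules and white
entries here are made up for illustration):<br>
<pre wrap="">
# Sketch only: collect the text matched by each block rule, then allow the
# post only if every match is exactly covered by a "white:" entry.
import re

block_rules = [r"musica", r"casino"]      # hypothetical block patterns
whitelist   = {"musica"}                  # entries taken from "white:" lines

def can_post(page_text):
    # Save the text each rule matches (normally this isn't kept).
    extracts = []
    for rule in block_rules:
        extracts.extend(re.findall(rule, page_text))
    # Every extract must match a white entry exactly for the page to post.
    return all(extract in whitelist for extract in extracts)

print(can_post("a page about musica"))      # True: the only hit is whitelisted
print(can_post("musica and casino spam"))   # False: "casino" is not whitelisted
</pre>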
<br>
Another idea, more related to the problem I had (which was that a
genuine URL I wanted to put in a page was being rejected by a block),<br>
is that URLs should be extracted and treated separately from the text of a
page, perhaps checked against the blocks only after being checked
against a whitelist of URLs.<br>
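<br>
Again a rough sketch (hypothetical Python, with a made-up URL whitelist) of
pulling the URLs out first so they can be checked on their own:<br>
<pre wrap="">
# Sketch only: extract URLs from the page, drop the whitelisted ones, and
# only pass what remains on to the block rules.
import re

url_whitelist = {"http://example.org/genuine-page"}   # hypothetical allowed URLs

def urls_to_check(page_text):
    urls = re.findall(r"https?://\S+", page_text)
    # Whitelisted URLs never reach the block rules; the rest still get checked.
    return [u for u in urls if u not in url_whitelist]

print(urls_to_check("see http://example.org/genuine-page and http://spam.example/x"))
# ['http://spam.example/x']   only this one goes on to the block rules
</pre>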
<br>
cheers, and thanks again<br>
<br>
Simon<br>
<br>
</body>
</html>