HCE Project Python language Distributed Tasks Manager Application, Distributed Crawler Application and client API bindings.  2.0.0-chaika
Hierarchical Cluster Engine Python language binding
dc_crawler.OwnRobots Namespace Reference

Classes

class  _Ruleset
 
class  RobotExclusionRulesParser
 
class  RobotFileParserLookalike
 

Functions

def _raise_error (error, message)
 
def _unquote_path (path)
 
def _scrub_data (s)
 
def _parse_content_type_header (header)
 

Variables

 PY_MAJOR_VERSION = sys.version_info[0]
 
int MK1996 = 1
 
int GYM2008 = 2
 
 _end_of_line_regex = re.compile(r"(?:\r\n)|\r|\n")
 
 _directive_regex = re.compile("(allow|disallow|user[-]?agent|sitemap|crawl-delay):[ \t]*(.*)", re.IGNORECASE)
 
int SEVEN_DAYS = 60 * 60 * 24 * 7
 
int MAX_FILESIZE = 100 * 1024
 
 _control_characters_regex = re.compile()
 
 _charset_extraction_regex = re.compile()
 

Detailed Description

A robot exclusion rules parser for Python by Philip Semanchuk

Full documentation, examples and a comparison to Python's robotparser module 
reside here:
http://NikitaTheSpider.com/python/rerp/

Comments, bug reports, etc. are most welcome via email to:
   philip@semanchuk.com

Simple usage examples:

    import robotexclusionrulesparser
    
    rerp = robotexclusionrulesparser.RobotExclusionRulesParser()

    try:
        rerp.fetch('http://www.example.com/robots.txt')
    except:
        # See the documentation for expected errors
        pass

    if rerp.is_allowed('CrunchyFrogBot', '/foo.html'):
        print("It is OK to fetch /foo.html")

OR supply the contents of robots.txt yourself:

    rerp = RobotExclusionRulesParser()
    s = open("robots.txt").read()
    rerp.parse(s)
    
    if rerp.is_allowed('CrunchyFrogBot', '/foo.html'):
        print("It is OK to fetch /foo.html")

The function is_expired() tells you if you need to fetch a fresh copy of 
this robots.txt.
    
    if rerp.is_expired():
        # Get a new copy
        pass


RobotExclusionRulesParser supports __unicode__() and __str__() so you can print
an instance to see its rules in robots.txt format.
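For example, a minimal sketch (assuming a parser instance rerp that has already
parsed some rules, as in the examples above):

    # Emits the parsed rules back out in robots.txt format.
    print(rerp)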

The comments refer to MK1994, MK1996 and GYM2008. These are:
MK1994 = the 1994 robots.txt draft spec (http://www.robotstxt.org/orig.html)
MK1996 = the 1996 robots.txt draft spec (http://www.robotstxt.org/norobots-rfc.txt)
GYM2008 = the Google-Yahoo-Microsoft extensions announced in 2008
(http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=40360)


This code is released under the following BSD license --

Copyright (c) 2010, Philip Semanchuk
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
    * Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
    * Neither the name of robotexclusionrulesparser nor the
names of its contributors may be used to endorse or promote products
derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY ITS CONTRIBUTORS ''AS IS'' AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL Philip Semanchuk BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Function Documentation

◆ _parse_content_type_header()

def dc_crawler.OwnRobots._parse_content_type_header (   header)
private

Definition at line 162 of file OwnRobots.py.

162 def _parse_content_type_header(header):
163     media_type = ""
164     encoding = ""
165 
166     # A typical content-type looks like this:
167     # text/plain; charset=UTF-8
168     # The portion after "text/plain" is optional and often not present.
169     # ref: http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.7
170 
171     if header:
172         header = header.strip().lower()
173     else:
174         header = ""
175 
176     chunks = [s.strip() for s in header.split(";")]
177     media_type = chunks[0]
178     if len(chunks) > 1:
179         for parameter in chunks[1:]:
180             m = _charset_extraction_regex.search(parameter)
181             if m and m.group("encoding"):
182                 encoding = m.group("encoding")
183 
184     return media_type.strip(), encoding.strip()
185 
186 
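A rough usage sketch (the header value below is an invented example; it assumes
_charset_extraction_regex captures the charset parameter value in a group named
"encoding", as the code above implies):

    # The header is lowercased, split on ";", and the charset parameter is extracted.
    media_type, encoding = _parse_content_type_header("text/html; charset=UTF-8")
    # media_type == "text/html", encoding == "utf-8"

    # A missing or empty header yields two empty strings.
    _parse_content_type_header(None)   # -> ("", "")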

◆ _raise_error()

def dc_crawler.OwnRobots._raise_error (error, message)
private

Definition at line 133 of file OwnRobots.py.

133 def _raise_error(error, message):
134     # I have to exec() this code because the Python 2 syntax is invalid
135     # under Python 3 and vice-versa.
136     s = "raise "
137     s += "error, message" if (PY_MAJOR_VERSION == 2) else "error(message)"
138 
139     exec(s)
140 
141 
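A brief sketch of how this helper is typically invoked (the error type and
message are invented for illustration):

    # Under Python 3 this executes "raise error(message)";
    # under Python 2 it executes the "raise error, message" form instead.
    _raise_error(ValueError, "robots.txt could not be parsed")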

◆ _scrub_data()

def dc_crawler.OwnRobots._scrub_data (   s)
private

Definition at line 151 of file OwnRobots.py.

151 def _scrub_data(s):
152     # Data is either a path or user agent name; i.e. the data portion of a
153     # robots.txt line. Scrubbing it consists of (a) removing extraneous
154     # whitespace, (b) turning tabs into spaces (path and UA names should not
155     # contain tabs), and (c) stripping control characters which, like tabs,
156     # shouldn't be present. (See MK1996 section 3.3 "Formal Syntax".)
157     s = _control_characters_regex.sub("", s)
158     s = s.replace("\t", " ")
159     return s.strip()
160 
161 
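An illustrative sketch of the intended effect (assuming _control_characters_regex,
whose pattern is not shown on this page, matches ASCII control characters):

    # Control characters are dropped, tabs become spaces, and the
    # result is stripped of surrounding whitespace.
    _scrub_data("  /some\tpath\x07  ")   # -> "/some path"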

◆ _unquote_path()

def dc_crawler.OwnRobots._unquote_path (   path)
private

Definition at line 142 of file OwnRobots.py.

142 def _unquote_path(path):
143     # MK1996 says, 'If a %xx encoded octet is encountered it is unencoded
144     # prior to comparison, unless it is the "/" character, which has
145     # special meaning in a path.'
146     path = re.sub("%2[fF]", "\n", path)
147     path = urllib_unquote(path)
148     return path.replace("\n", "%2F")
149 
150 
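A short illustration with an invented path: percent-escapes are decoded, but an
encoded slash (%2F) is preserved because it has special meaning in a path:

    # "%20" is decoded to a space, while "%2F" survives as a literal "%2F".
    _unquote_path("/a%2Fb%20c")   # -> "/a%2Fb c"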

Variable Documentation

◆ _charset_extraction_regex

dc_crawler.OwnRobots._charset_extraction_regex = re.compile()
private

Definition at line 130 of file OwnRobots.py.

◆ _control_characters_regex

dc_crawler.OwnRobots._control_characters_regex = re.compile()
private

Definition at line 126 of file OwnRobots.py.

◆ _directive_regex

dc_crawler.OwnRobots._directive_regex = re.compile("(allow|disallow|user[-]?agent|sitemap|crawl-delay):[ \t]*(.*)", re.IGNORECASE)
private

Definition at line 113 of file OwnRobots.py.
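For illustration (the robots.txt line below is an invented example), the first
capture group holds the directive name and the second its value:

    m = _directive_regex.match("Disallow: /private/")
    m.group(1)   # -> "Disallow"
    m.group(2)   # -> "/private/"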

◆ _end_of_line_regex

dc_crawler.OwnRobots._end_of_line_regex = re.compile(r"(?:\r\n)|\r|\n")
private

Definition at line 106 of file OwnRobots.py.

◆ GYM2008

int dc_crawler.OwnRobots.GYM2008 = 2

Definition at line 104 of file OwnRobots.py.

◆ MAX_FILESIZE

int dc_crawler.OwnRobots.MAX_FILESIZE = 100 * 1024

Definition at line 123 of file OwnRobots.py.

◆ MK1996

int dc_crawler.OwnRobots.MK1996 = 1

Definition at line 103 of file OwnRobots.py.

◆ PY_MAJOR_VERSION

dc_crawler.OwnRobots.PY_MAJOR_VERSION = sys.version_info[0]

Definition at line 82 of file OwnRobots.py.

◆ SEVEN_DAYS

int dc_crawler.OwnRobots.SEVEN_DAYS = 60 * 60 * 24 * 7

Definition at line 117 of file OwnRobots.py.