SerpScrap out of the box. The image is based on ecoron/python36-sklearn, with all dependencies already installed.



A Python scraper to extract and analyze data from search engine result pages and URLs. It might be useful
for SEO and research tasks. Some text processing tools are also available.

  • Extract position, URL, title, description, related keywords and other details of search results for the given keywords.
  • Get screenshots of each result page.
  • Use a list of proxies for scraping.
  • Also scrape the origin URL of each search result; the cleaned raw text content of that URL is extracted.
  • Save results as CSV for future analytics.


In version 0.8.0 the text processing tools were removed; they will become part of a new project. This change helps to
reduce the requirements and makes SerpScrap easier to set up and run.

See the documentation for details.

The source code is available on GitHub.


The easy way to install:

.. code-block:: bash

   pip uninstall SerpScrap -y
   pip install SerpScrap --upgrade

More details in the install_ section of the documentation.


SerpScrap in your applications

.. code-block:: python

   # -*- coding: utf-8 -*-
   import pprint
   import serpscrap

   keywords = ['example']

   config = serpscrap.Config()
   config.set('scrape_urls', False)

   scrap = serpscrap.SerpScrap()
   scrap.init(config=config.get(), keywords=keywords)
   results = scrap.run()

   for result in results:
       pprint.pprint(result)
More details in the examples_ section of the documentation.
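
Each item in ``results`` is a dict of fields for one scraped entry. A hedged sketch of post-processing such dicts; the keys used here are illustrative only (see the examples_ section for the actual field names):

.. code-block:: python

   # Hypothetical result dicts; the keys are illustrative only,
   # not SerpScrap's exact field names.
   results = [
       {"query": "example", "serp_rank": 2, "serp_url": "https://example.org"},
       {"query": "example", "serp_rank": 1, "serp_url": "https://example.com"},
   ]

   # Group URLs by query keyword, ordered by rank.
   by_query = {}
   for result in sorted(results, key=lambda r: r["serp_rank"]):
       by_query.setdefault(result["query"], []).append(result["serp_url"])

   print(by_query)  # {'example': ['https://example.com', 'https://example.org']}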

To avoid encode/decode issues, run this command before you start using SerpScrap in your CLI.

.. code-block:: bash

   chcp 65001
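
``chcp 65001`` switches the Windows console to the UTF-8 code page. Inside Python, encoding and decoding explicitly as UTF-8 sidesteps the same class of issues; a minimal sketch:

.. code-block:: python

   # Round-trip a keyword containing non-ASCII characters
   # through an explicit UTF-8 encode/decode.
   keyword = "münchen café"
   encoded = keyword.encode("utf-8")
   decoded = encoded.decode("utf-8")
   print(decoded == keyword)  # True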



SerpScrap uses PhantomJS, a scriptable headless WebKit browser, which is installed automatically on the first run (Linux, Windows).
The scrapcore is based on GoogleScraper, with several improvements.

.. target-notes::

.. _install:
.. _PhantomJs:
