Public domain code that collects data about the bills, amendments, roll call votes, and other core activities of the U.S. Congress.
Includes:
- A data importing script for the official bulk bill status data from Congress, the official source of information on the life and times of legislation.
- Scrapers for House and Senate roll call votes.
- A scraper for GPO FDsys, the official repository for most legislative documents.
- A defunct THOMAS scraper for presidential nominations in Congress.
Read about the contents and schema in the documentation on the project's GitHub wiki.
For background on how this repository came to be, see Eric's blog post.
This project is tested using Python 2.7.
On Ubuntu, you'll need pip and some support packages:
sudo apt-get install git python-dev libxml2-dev libxslt1-dev libz-dev python-pip
On OS X, you'll need the developer tools (Xcode) installed, plus Homebrew to install wget:
brew install wget
It's recommended you use a virtualenv (virtual environment) for development. The easiest way is to install virtualenv and virtualenvwrapper with pip, using sudo if necessary:

sudo pip install virtualenv
sudo pip install virtualenvwrapper
Create a virtualenv for this project:
And activate it before any development session using:
Finally, with your virtual environment activated, install Python packages:
pip install -r requirements.txt
Collecting the data
The general form to start the scraping process is:
./run <data-type> [--force] [other options]
where data-type is one of:
bills (see Bills and Amendments)
committee_meetings (see Committee Meetings)
fdsys (see Bill Text)
deepbills (see Bill Text)
statutes (see Bills and Bill Text)
To collect bills, resolutions, and amendments, first download the bulk bill status data from GPO FDsys, then process it:

./run fdsys --collections=BILLSTATUS
./run bills
The bills script will output bulk data into a top-level data directory, organized by Congress number, bill type, and bill number. Two data output files will be generated for each bill: a JSON version (data.json) and an XML version (data.xml).
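The directory layout described above can be sketched as a small path helper; the congress number, bill type, and bill number used below are illustrative:

```python
import os

def bill_data_path(data_dir, congress, bill_type, number):
    # Layout: <data dir>/<congress>/bills/<bill type>/<bill type + number>/data.json
    bill_id = bill_type + str(number)
    return os.path.join(data_dir, str(congress), "bills",
                        bill_type, bill_id, "data.json")

# e.g. H.R. 1 from the 113th Congress:
print(bill_data_path("data", 113, "hr", 1))
```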
Debugging messages are hidden by default. To include them, run with --log=info or --debug. To hide even warnings, run with --log=error.
To get emailed with errors, copy config.yml.example to config.yml and fill in the SMTP options. The script will automatically use the details when a parsing or execution error occurs.
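A sketch of what the SMTP section of config.yml might look like; the actual keys are defined in config.yml.example in the repository, and every value below is a placeholder:

```yaml
# Illustrative only -- consult config.yml.example for the actual field names.
email:
  from: bulk-data@example.com
  to: you@example.com
  hostname: smtp.example.com
  port: 587
  user_name: smtp-user
  password: smtp-password
```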
The --force flag applies to all data types and suppresses use of a cache for network-retrieved resources.
The script will cache downloaded pages in a top-level cache directory and output bulk data in a top-level data directory. Two bulk data output files will be generated for each object: a JSON version (data.json) and an XML version (data.xml). The XML version attempts to maintain backwards compatibility with the XML bulk data that GovTrack.us has provided for years. Add the --govtrack flag to get fully backwards-compatible output using GovTrack IDs (otherwise the source's IDs for legislators are used).
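As a quick illustration of consuming the JSON output, the fragment below parses a few top-level fields. The fields shown are a small, illustrative subset; real data.json files contain many more (titles, actions, sponsors, and so on):

```python
import json

# A tiny, illustrative fragment of a bill's data.json.
sample = """
{
  "bill_id": "hr1-113",
  "bill_type": "hr",
  "congress": "113",
  "number": "1"
}
"""

bill = json.loads(sample)
print(bill["bill_id"])
print(bill["congress"])
```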
See the project wiki for documentation on the output format.
Pull requests with patches are awesome. Unit tests are strongly encouraged (example tests).
The best way to file a bug is to open a ticket.
To run this project's unit tests:
Who's Using This Data
Both Sunlight and GovTrack operate APIs where you can get much of this data delivered over HTTP.
The project is in the public domain within the United States, and copyright and related rights in the work worldwide are waived through the CC0 1.0 Universal public domain dedication.
All contributions to this project will be released under the CC0 dedication. By submitting a pull request, you are agreeing to comply with this waiver of copyright interest.