short-jokes-dataset

by amoudgl

Python scripts for building 'Short Jokes' dataset, featured on Kaggle

209 Stars · 37 Forks · 8 Commits · GNU General Public License v2.0


This repository contains all the Python scripts used to build the Short Jokes dataset, which contains 231,657 short jokes scraped from various websites.

All the web scraper scripts live in the `/scripts/scrapers/` folder. Each script targets a specific website (the website link is given in the header of each file) and generates a CSV file of jokes in the `/data/` folder with the fixed format: `ID, Joke`.
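As a minimal sketch of the output step the scrapers share (the function name and demo data are hypothetical, not part of the repository), writing jokes in the fixed `ID, Joke` format with Python's `csv` module looks like this:

```python
import csv

def write_jokes_csv(jokes, path):
    """Write a list of joke strings to a CSV in the dataset's
    fixed format: a header row "ID, Joke", then one numbered
    row per joke. The csv module quotes jokes that contain
    commas, so each row stays two fields wide."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["ID", "Joke"])
        for i, joke in enumerate(jokes, start=1):
            writer.writerow([i, joke])

# Hypothetical demo: one scraped joke written to a local file.
write_jokes_csv(
    ["Why did the chicken cross the road? To get to the other side."],
    "jokes.csv",
)
```

Quoting is handled by `csv.writer`, which is why jokes containing commas or newlines survive the round trip in a two-column file.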

Scrapers were written only for websites that allow scraping, with no CAPTCHAs or JavaScript that blocks information-gathering bots.

Jokes from subreddits

/r/jokes and /r/cleanjokes are extracted with `scripts/scrapers/subredditarchive.py`. The script uses PRAW, a Reddit API wrapper, whose timestamp search finds posts between given timestamps on a particular subreddit. Following Reddit's API terms of use, a data request is made every 2 seconds. For each subreddit, posts are downloaded from the day the subreddit was created up to 31 January 2017. The script writes a JSON file for each post into a separate folder. JSON dumps for both subreddits can be accessed from here (2.3 GB uncompressed). Jokes from all the JSON files are extracted and written to a CSV file by `scripts/json_to_csv.py`.
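The JSON-to-CSV step can be sketched as below. This is an assumption-laden illustration, not the repository's `json_to_csv.py`: the field names `title` and `selftext` are guesses at the per-post dump format (on /r/jokes the setup is usually the post title and the punchline the body).

```python
import csv
import glob
import json
import os

def json_to_csv(json_dir, out_path):
    """Collect jokes from one-JSON-file-per-post dumps into a single
    CSV in the fixed "ID, Joke" format. Assumes each dump has
    'title' (setup) and optionally 'selftext' (punchline) fields."""
    jokes = []
    for path in sorted(glob.glob(os.path.join(json_dir, "*.json"))):
        with open(path, encoding="utf-8") as f:
            post = json.load(f)
        # Join setup and punchline, skipping whichever part is empty.
        parts = (post.get("title", ""), post.get("selftext", ""))
        joke = " ".join(p.strip() for p in parts if p.strip())
        if joke:
            jokes.append(joke)
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["ID", "Joke"])
        for i, joke in enumerate(jokes, start=1):
            writer.writerow([i, joke])
```

Sorting the glob results makes the output IDs deterministic across runs, which matters when the CSVs are later merged and deduplicated.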

`scripts/merge_csvs.py` removes duplicates across all the CSV files and merges the jokes into a single CSV, `shortjokes.csv`, to produce the final dataset.
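The merge-and-deduplicate step could look roughly like the following. This is a sketch, not the actual `merge_csvs.py`; it assumes every input CSV uses the fixed `ID, Joke` format and that "duplicate" means an exact string match:

```python
import csv

def merge_csvs(csv_paths, out_path):
    """Merge several "ID, Joke" CSVs into one, dropping
    exact-duplicate jokes and renumbering IDs from 1."""
    seen = set()
    jokes = []
    for path in csv_paths:
        with open(path, newline="", encoding="utf-8") as f:
            reader = csv.reader(f)
            next(reader, None)  # skip the "ID, Joke" header
            for _, joke in reader:  # each row is exactly two fields
                if joke not in seen:
                    seen.add(joke)
                    jokes.append(joke)
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["ID", "Joke"])
        for i, joke in enumerate(jokes, start=1):
            writer.writerow([i, joke])
```

A set membership test keeps deduplication O(1) per joke, and iterating the inputs in order preserves each joke's first occurrence.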

Contributions

  • If you know of any resource (preferably large) of good clean jokes, feel free to suggest it or send a pull request with the scraper script and a CSV file in the format above.
  • Any other suggestions for improving the dataset are welcome.
