short-jokes-dataset

This repository contains all the Python scripts used to build the Short Jokes dataset, featured on Kaggle. The dataset contains 231,657 short jokes scraped from various websites.

All the web scraper scripts live in the /scripts/scrapers/ folder. Each script is written for a specific website (the link is given in the header of each file) and generates a CSV file of jokes in the /data/ folder with the fixed format ID, Joke.
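For illustration, a scraper in this format might look like the sketch below. The URL and CSS selector are placeholders rather than one of the actual sources, and requests/BeautifulSoup are assumed here for fetching and parsing, not necessarily what every scraper in the repository uses.

```python
# Hypothetical scraper sketch; the real scrapers in /scripts/scrapers/
# each hard-code the details of one specific website.
import csv

import requests
from bs4 import BeautifulSoup

URL = "https://example.com/jokes"  # placeholder, not an actual source


def scrape_jokes(url):
    """Fetch a page and return the joke texts found on it."""
    response = requests.get(url)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    # The selector is site-specific; "p.joke" is a stand-in.
    return [p.get_text(strip=True) for p in soup.select("p.joke")]


def write_csv(jokes, path):
    """Write jokes to a CSV in the fixed ID, Joke format."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["ID", "Joke"])
        for i, joke in enumerate(jokes, start=1):
            writer.writerow([i, joke])


if __name__ == "__main__":
    write_csv(scrape_jokes(URL), "data/example.csv")
```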

Scrapers were written only for websites that permit scraping, i.e., those with no CAPTCHA or other JavaScript that blocks information-gathering bots.

Jokes from the subreddits /r/jokes and /r/cleanjokes are extracted using scripts/scrapers/subredditarchive.py. The script uses PRAW, a Reddit API wrapper, whose timestamp search finds posts made between two given timestamps on a particular subreddit. Following Reddit's API terms of use, a request for data is made every 2 seconds. For each subreddit, posts are downloaded from the day the subreddit was created up to 31 January 2017. The script writes each post to its own JSON file in a separate folder. JSON dumps for both subreddits can be accessed from here (2.3 GB uncompressed). Jokes from all the JSON files are extracted and written to a CSV file by scripts/json_to_csv.py.
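The two Reddit stages can be pictured with the following sketch. It assumes PRAW 4/5, where Subreddit.submissions(start, end) wrapped Reddit's since-removed timestamp search; this may not match the exact calls in subredditarchive.py. The credentials, timestamps, and paths are placeholders, and the title-plus-body joke format is an assumption.

```python
# Sketch of the subreddit archiving pipeline; the API usage and file
# layout are assumptions, not a copy of the repository's scripts.
import csv
import json
import os
import time

import praw

reddit = praw.Reddit(client_id="CLIENT_ID",        # placeholder credentials
                     client_secret="CLIENT_SECRET",
                     user_agent="short-jokes-dataset archiver")


def dump_subreddit(name, start_ts, end_ts, out_dir):
    """Write one JSON file per post made between the two UNIX timestamps."""
    os.makedirs(out_dir, exist_ok=True)
    # PRAW 4/5 only: submissions() wrapped Reddit's timestamp search and
    # was removed in later PRAW releases.
    for post in reddit.subreddit(name).submissions(start_ts, end_ts):
        record = {"id": post.id, "title": post.title,
                  "selftext": post.selftext, "created_utc": post.created_utc}
        with open(os.path.join(out_dir, post.id + ".json"), "w") as f:
            json.dump(record, f)
        time.sleep(2)  # stay at one request every 2 seconds, per the API terms


def jsons_to_csv(json_dir, csv_path):
    """Roughly the job of scripts/json_to_csv.py: flatten dumps to ID, Joke."""
    jokes = []
    for fname in sorted(os.listdir(json_dir)):
        with open(os.path.join(json_dir, fname)) as f:
            post = json.load(f)
        # Assumption: the setup is the post title and the punchline the body.
        jokes.append((post["title"] + " " + post["selftext"]).strip())
    with open(csv_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["ID", "Joke"])
        for i, joke in enumerate(jokes, start=1):
            writer.writerow([i, joke])


if __name__ == "__main__":
    # 1485907199 is 31 January 2017, 23:59:59 UTC; start=0 stands in for
    # the subreddit's creation date.
    dump_subreddit("cleanjokes", 0, 1485907199, "data/cleanjokes_json")
    jsons_to_csv("data/cleanjokes_json", "data/cleanjokes.csv")
```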

scripts/merge_csvs.py removes duplicates across all the CSV files and merges the jokes into a single CSV, shortjokes.csv, to produce the final dataset.
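A minimal sketch of that merge step, assuming every per-site CSV sits in /data/ with the ID, Joke header described above:

```python
# Combine every CSV in /data/, drop duplicate jokes, and renumber the
# survivors into shortjokes.csv. Paths assume the layout described above.
import csv
import glob

seen = set()
merged = []
for path in sorted(glob.glob("data/*.csv")):
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        next(reader)  # skip the ID, Joke header
        for _, joke in reader:  # each row is (ID, Joke)
            if joke not in seen:
                seen.add(joke)
                merged.append(joke)

with open("shortjokes.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["ID", "Joke"])
    for i, joke in enumerate(merged, start=1):
        writer.writerow([i, joke])
```

Keeping the jokes in first-seen order and renumbering them gives the merged file the same ID, Joke format as the per-site CSVs.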

Contributions

  • If you know of any resource (preferably a large one) of good clean jokes, feel free to suggest it or send a pull request with a scraper script and a CSV file in the format above.
  • Any other constructive suggestions for the dataset are welcome.
