This repository provides everything you need to get started with Python for (social science) research.
Author: Ties de Kok (Personal Page)
The goal of this GitHub page is to provide you with everything you need to get started with Python for actual research projects.
The topics and techniques demonstrated in this repository are primarily oriented towards empirical research projects in fields such as Accounting, Finance, Marketing, Political Science, and other Social Sciences.
However, many of the basics are also perfectly applicable if you are looking to use Python for any other type of Data Science!
This repository is written to facilitate learning by doing.
If you are starting from scratch I recommend the following:

1. Read the "Getting your Python setup ready" and "Using Python" sections below.
2. Check the "Code along!" section to make sure that you can interactively use the Jupyter Notebooks.
3. Work through the `0_python_basics.ipynb` notebook and try to get a basic grasp of the Python syntax.
4. The `2_handling_data.ipynb` notebook is very comprehensive; feel free to skip the more advanced parts at first.
If you are interested in web-scraping:
If you are interested in Natural Language Processing with Python:
If you are already familiar with the Python basics: use the notebooks provided in this repository selectively, depending on the types of problems that you are trying to solve with Python.
Everything in the notebooks is purposely sectioned by task description. If you are, for example, looking to merge two Pandas dataframes together, you can use the "Combining dataframes" section of the `2_handling_data.ipynb` notebook as a starting point.
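For instance, merging two dataframes looks like this (a minimal sketch; the dataframes and column names here are made up for illustration):

```python
import pandas as pd

# Two small example dataframes sharing a "firm_id" key column
firms = pd.DataFrame({"firm_id": [1, 2, 3],
                      "name": ["Alpha", "Beta", "Gamma"]})
revenue = pd.DataFrame({"firm_id": [1, 2, 4],
                        "revenue": [100, 200, 400]})

# An inner merge keeps only firm_ids that appear in both dataframes
merged = pd.merge(firms, revenue, on="firm_id", how="inner")
print(merged)
```

Changing `how` to `"left"`, `"right"`, or `"outer"` controls which rows survive the merge; the notebook walks through these options in more detail.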
There are multiple ways to get your Python environment set up. To keep things simple I will only provide you with what I believe to be the best and easiest way to get started: the Anaconda distribution + a conda environment.
The Anaconda Distribution bundles Python with a large collection of Python packages from the (data) science Python eco-system.
By installing the Anaconda Distribution you essentially obtain everything you need to get started with Python for Research!
If you run `python` in your console / terminal, it should say Anaconda at the top.
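You can also verify this from within Python itself (a quick sketch; on an Anaconda install the version string typically mentions Anaconda):

```python
import sys

# Print the interpreter's version string; an Anaconda install
# usually includes "Anaconda" in it
print(sys.version)

# Path of the Python interpreter currently in use, which shows
# whether you are running the Anaconda-installed Python
print(sys.executable)
```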
Note: Anaconda also comes with the Anaconda Navigator, a graphical interface for managing environments and launching tools. I haven't personally used it yet but it might be convenient.
1. `cd` (i.e. change directory) to the folder where you extracted the ZIP file
2. Run `conda env create -f environment.yml`
3. Run `conda activate LearnPythonforResearch`

A full list of all the packages used is provided in the `environment.yml` file.
Python 3.x is the newer and superior version over Python 2.7, so I strongly recommend using Python 3.x whenever possible. There is no reason to use Python 2.7 unless you are forced to work with old Python 2.7 code.
The native way to run Python code is by saving the code to a file with the ".py" extension and executing it from the console / terminal:
Alternatively, you can run some quick code by starting an interactive console: type either `python` or `ipython` in your console / terminal.
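For example, you could save the following minimal (hypothetical) script as `hello.py` and run it with `python hello.py`:

```python
# hello.py -- a minimal example script
def greet(name):
    """Return a short greeting for `name`."""
    return f"Hello, {name}!"

# This block only runs when the file is executed directly,
# e.g. via `python hello.py`, not when it is imported
if __name__ == "__main__":
    print(greet("research"))
```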
The above is, however, not very convenient for research purposes as we desire easy interactivity and good documentation options.
Fortunately, the awesome Jupyter Notebooks provide a great alternative way of using Python for research purposes.
Jupyter comes pre-installed with the Anaconda distribution so you should have everything already installed and ready to go.
Note on Jupyter Lab
> JupyterLab 1.0: Jupyter's Next-Generation Notebook Interface
>
> JupyterLab is a web-based interactive development environment for Jupyter notebooks, code, and data. JupyterLab is flexible: configure and arrange the user interface to support a wide range of workflows in data science, scientific computing, and machine learning. JupyterLab is extensible and modular: write plugins that add new components and integrate with existing ones.
Jupyter Lab is an additional interface layer that extends the functionality of Jupyter Notebooks, which are the primary way you interact with Python code.
What is the Jupyter Notebook?
From the Jupyter website:
> The Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text.
In other words, the Jupyter Notebook allows you to program Python code straight from your browser!
How does the Jupyter Notebook/Lab work in the background?
The diagram below sums up the basic components of Jupyter:
At the heart there is the Jupyter Server, which handles everything; the Jupyter Notebook, which you access and use through your browser; and the kernel, which executes the code. We will be focusing on the natively included Python kernel, but Jupyter is language agnostic, so you can also use it with other languages/software such as 'R'.
It is worth noting that in most cases you will be running the Jupyter Server on your own computer and will connect to it locally in your browser (i.e. you don't need to be connected to the internet). However, it is also possible to run the Jupyter Server on a different computer, for example a high-performance computation server in the cloud, and connect to it over the internet.
How to start a Jupyter Notebook/Lab?
The primary method that I would recommend to start a Jupyter Notebook/Lab is to use the command line (terminal) directly:
1. Run `conda activate LearnPythonforResearch`
2. `cd` (i.e. change directory) to the desired starting directory
3. Run `jupyter notebook` or `jupyter lab`

This should automatically open the corresponding Jupyter Notebook / Lab in your default browser. You can also reach it manually by going to `localhost:8888` in your browser. (You might be asked for a token or password, which you can find in the terminal window where the Jupyter server is running.)
How to close a Jupyter Server?
If you want to close down the Jupyter Server: open the command prompt window that runs the server and press `CTRL + C` twice.
How to use the Jupyter Notebook?
Some shortcuts are worth mentioning for reference purposes:
- *command mode* --> enable by pressing `Esc`
- *edit mode* --> enable by pressing `Enter`

| command mode | edit mode | both modes |
| --- | --- | --- |
| `Y`: cell to code | `Tab`: code completion or indent | `Shift-Enter`: run cell, select below |
| `M`: cell to markdown | `Shift-Tab`: tooltip | `Ctrl-Enter`: run cell |
| `A`: insert cell above | `Ctrl-A`: select all | |
| `B`: insert cell below | `Ctrl-Z`: undo | |
| `X`: cut selected cell | | |
The Python eco-system consists of many packages and modules that people have programmed and made available for everyone to use.
These packages/modules are one of the things that makes Python so useful.
Some packages are natively included with Python and Anaconda, but anything not included needs to be installed before you can import it.
I will discuss the three primary methods of installing packages:
Method 1: use `pip`

Many packages are available on the "Python Package Index" (i.e. "PyPI"): https://pypi.python.org/pypi

You can install packages that are on PyPI by using the `pip` command:

Example: to install the `requests` package, run `pip install requests` in your command line / terminal (not in the Jupyter Notebook!).

To uninstall, use `pip uninstall`; to upgrade an existing package, add the `-U` flag (`pip install -U requests`).
Method 2: use `conda`

Sometimes when you try something with `pip` you get a compile error (especially on Windows). You can try to fix this by configuring the right compiler, but most of the time it is easier to install the package directly via Anaconda, as conda packages come pre-compiled. For example: `conda install scipy`
Full documentation is here: Conda documentation
Method 3: install directly using the `setup.py` file

Sometimes a package is not available on PyPI or conda (you often find such packages on GitHub). Follow these steps to install it:

- Download the folder with all the files (if archived, make sure to unpack the folder)
- Open your command prompt (terminal) and `cd` to the folder you just downloaded
- Type: `python setup.py install`
This repository covers the following topics:
0_python_basics.ipynb: Basics of the Python syntax
1_opening_files.ipynb: Examples on how to open TXT, CSV, Excel, Stata, SAS, JSON, and HDF files.
2_handling_data.ipynb: A comprehensive overview on how to use the `Pandas` library for data wrangling.
3_visualizing_data.ipynb: Examples on how to generate visualizations with Python.
4_web_scraping.ipynb: A comprehensive overview on how to use `Selenium` for APIs and web scraping.
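As a small taste of what the web-scraping notebook covers, the snippet below extracts the link targets from a piece of HTML using only the standard library (a minimal sketch; real scraping code would first download the page, for example with the `requests` package, and the HTML fragment here is made up for illustration):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href attribute of every <a> tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) tuples for the tag
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# A small HTML fragment standing in for a downloaded page
html = '<p><a href="/papers">Papers</a> <a href="/data">Data</a></p>'
parser = LinkExtractor()
parser.feed(html)
print(parser.links)  # -> ['/papers', '/data']
```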
Note: To avoid the "oh, that looks easy!" trap I have not uploaded the exercises notebook with example answers.
Feel free to email me for the answer keys once you are done!
You can code along in two ways:
If you want to experiment with the code in a live environment you can also use Binder. Binder creates a live environment from a GitHub repository in which you can execute code just as if you were on your own computer; it is very awesome!
Click on the button below to launch binder:
Note: you could use Binder to complete the exercises, but your work will not be saved!
You can essentially "download" the contents of this repository by cloning the repository.
You can do this by clicking "Clone or download" button and then "Download ZIP":
After you have downloaded and extracted the ZIP file into a folder, you can follow the steps above to set up your environment.
If you have questions or experience problems, please use the "issues" tab of this repository.
MIT - Ties de Kok - 2020
Thanks to https://github.com/teles/array-mixer for having an awesome readme that I used as a template.