FinBERT: Pre-Trained on SEC Filings for Financial NLP Tasks

Vinicio DeSola, Kevin Hanna, Pri Nonis (UC Berkeley MIDS w266 Final Project)

MODEL WEIGHTS

  • https://drive.google.com/drive/folders/1rcRXZhb3JLY3A_kIO8gMk8jacRyR-Ik6?usp=sharing
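
The weights are distributed through the Google Drive folder above, so they must be downloaded manually. A minimal loading sketch, assuming the checkpoint is in (or has been converted to) Hugging Face transformers format; the local directory name is hypothetical:

```python
from transformers import BertTokenizer, BertForMaskedLM

# Hypothetical local path to the downloaded and, if needed, converted checkpoint.
model_dir = "./FinBERT-Prime_128MSL-500K"

tokenizer = BertTokenizer.from_pretrained(model_dir)
model = BertForMaskedLM.from_pretrained(model_dir)
model.eval()  # inference mode for masked-LM probing
```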

PUBLICATION

  • https://www.researchgate.net/publication/334974348_FinBERT_pre-trained_model_on_SEC_filings_for_financial_natural_language_tasks

MOTIVATIONS

Goal 1

FinBERT-Prime_128MSL-500K+512MSL-10K vs BERT
- Compare masked LM prediction accuracy on technical financial sentences (sketch below)
- Compare analogy performance on financial relationships
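
A minimal sketch of the masked-LM comparison in Goal 1, assuming both checkpoints are available locally in Hugging Face transformers format; the FinBERT path and the example sentence are hypothetical:

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

def top_k_mask_predictions(model_dir, sentence, k=5):
    """Return the top-k tokens a BERT-style model predicts for the [MASK] slot."""
    tokenizer = BertTokenizer.from_pretrained(model_dir)
    model = BertForMaskedLM.from_pretrained(model_dir)
    model.eval()
    inputs = tokenizer(sentence, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits
    top_ids = logits[0, mask_pos[0]].topk(k).indices.tolist()
    return tokenizer.convert_ids_to_tokens(top_ids)

# Illustrative sentence; a finance-aware model should rank "impairment" highly.
sentence = "The company recorded a goodwill [MASK] charge in the fourth quarter."
print("BERT    :", top_k_mask_predictions("bert-base-uncased", sentence))
print("FinBERT :", top_k_mask_predictions("./FinBERT-Prime_128MSL-500K+512MSL-10K", sentence))  # hypothetical path
```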

Goal 2

FinBERT-Prime_128MSL-500K vs FinBERT-Pre2K_128MSL-500K
- Compare masked LM prediction accuracy on financial news from 2019
- Compare analogy performance on financial relationships and measure the shift in understanding: risk vs climate in 1999 vs 2019 (see the embedding sketch below)
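
One simple proxy for the risk-vs-climate shift is the cosine similarity between the two words' input-embedding vectors in each model; the report may use a different analogy metric, the paths below are hypothetical, and both words are assumed to be single tokens in each vocabulary:

```python
import torch
from transformers import BertTokenizer, BertModel

def word_similarity(model_dir, word_a, word_b):
    """Cosine similarity between two words' input-embedding vectors."""
    tokenizer = BertTokenizer.from_pretrained(model_dir)
    model = BertModel.from_pretrained(model_dir)
    emb = model.get_input_embeddings().weight  # (vocab_size, hidden_size)
    id_a = tokenizer.convert_tokens_to_ids(word_a)
    id_b = tokenizer.convert_tokens_to_ids(word_b)
    return torch.cosine_similarity(emb[id_a], emb[id_b], dim=0).item()

# Hypothetical local paths; both checkpoints assumed to be in transformers format.
print("1998-99 filings:", word_similarity("./FinBERT-Pre2K_128MSL-500K", "risk", "climate"))
print("2017-19 filings:", word_similarity("./FinBERT-Prime_128MSL-500K", "risk", "climate"))
```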

Goal 3

FinBERT-Prime_128MSL-500K vs FinBERT-Prime_128MSL-500K+512MSL-10K
- Compare masked LM prediction accuracy on long financial sentences (see the tokenization note below)
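
The 128-MSL models truncate anything beyond 128 wordpieces, which is why the additional 512-MSL pre-training matters for long 10-K sentences. A small illustration with a standard BERT tokenizer; the sentence is synthetic:

```python
from transformers import BertTokenizer

# Synthetic stand-in for a long disclosure sentence; real 10-K sentences often exceed 128 wordpieces.
long_sentence = " ".join(["The registrant's exposure to interest rate fluctuations"] * 40)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
short = tokenizer(long_sentence, max_length=128, truncation=True)
full = tokenizer(long_sentence, max_length=512, truncation=True)
print(len(short["input_ids"]), len(full["input_ids"]))  # 128 vs the full (up to 512) length
```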

Goal 4

FinBERT-Combo_128MSL-250K vs FinBERT-Prime_128MSL-500K+512MSL-10K
- Compare masked LM prediction accuracy on financial sentences: can we reach the same accuracy with less training by building on the original BERT weights? (see the continued pre-training sketch below)
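
A sketch of what the Combo recipe amounts to: continued masked-LM pre-training from the published BERT weights rather than random initialization. This uses the Hugging Face Trainer API for brevity and is not necessarily the authors' actual pipeline; the corpus file name is hypothetical, and the step count mirrors the -250K suffix in the model name:

```python
from transformers import (BertTokenizer, BertForMaskedLM, DataCollatorForLanguageModeling,
                          LineByLineTextDataset, Trainer, TrainingArguments)

# Start from the published BERT weights instead of training from scratch ("Combo").
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# Hypothetical file with one pre-processed SEC 10-K sentence per line, 128-token blocks.
dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path="sec_10k_sentences.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

args = TrainingArguments(output_dir="finbert-combo", max_steps=250_000,
                         per_device_train_batch_size=32, save_steps=50_000)
Trainer(model=model, args=args, data_collator=collator, train_dataset=dataset).train()
```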

TERMINOLOGY

  • Prime

    Pre-trained from scratch on the 2017, 2018, and 2019 SEC 10-K dataset
  • Pre2K

    Pre-trained from scratch on the 1998 and 1999 SEC 10-K dataset
  • Combo

    Pre-training continued from the original BERT weights on the 2017, 2018, and 2019 SEC 10-K dataset

ANALYSIS
