RT Journal Article
SR Electronic
T1 Context, Language Modeling, and Multimodal Data in Finance
JF The Journal of Financial Data Science
FD Institutional Investor Journals
SP jfds.2021.1.063
DO 10.3905/jfds.2021.1.063
A1 Sanjiv Das
A1 Connor Goggins
A1 John He
A1 George Karypis
A1 Sandeep Krishnamurthy
A1 Mitali Mahajan
A1 Nagpurnanand Prabhala
A1 Dylan Slack
A1 Rob van Dusen
A1 Shenghua Yue
A1 Sheng Zha
A1 Shuai Zheng
YR 2021
UL https://pm-research.com/content/early/2021/06/01/jfds.2021.1.063.abstract
AB The authors enhance pretrained language models with Securities and Exchange Commission (SEC) filings data to create better language representations for features used in a predictive model. Specifically, they train RoBERTa-class models with additional financial regulatory text, which they denote as a class of RoBERTa-Fin models. Using different datasets, the authors assess whether there is material improvement over models that use only text-based numerical features (e.g., sentiment, readability, polarity), which is the traditional approach adopted in academia and practice. The RoBERTa-Fin models also outperform generic bidirectional encoder representations from transformers (BERT) class models that are not trained with financial text. The improvement in classification accuracy is material, suggesting that full text and context are important in classifying financial documents and that the use of mixed data (i.e., enhancing numerical tabular data with text) is feasible and fruitful in machine learning models in finance.

TOPICS: Quantitative methods, big data/machine learning, legal/regulatory/public policy, information providers/credit ratings

Key Findings
▪ Machine learning based on multimodal data provides meaningful improvement over models based on numerical data alone.
▪ Context-rich models perform better than context-free models.
▪ Pretrained language models that mix common text and financial text do better than those pretrained on financial text alone.
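
A minimal sketch of the two-stage idea described in the abstract: continued masked-language-model pretraining of RoBERTa on SEC filing text, followed by a classifier that mixes the resulting text embeddings with numerical tabular features. It assumes the Hugging Face transformers and datasets libraries; the base checkpoint ("roberta-base"), the input file "sec_filings.txt", and all hyperparameters are illustrative placeholders, not the authors' actual RoBERTa-Fin configuration.

import torch
from torch import nn
from datasets import load_dataset
from transformers import (
    DataCollatorForLanguageModeling,
    RobertaForMaskedLM,
    RobertaModel,
    RobertaTokenizerFast,
    Trainer,
    TrainingArguments,
)

# --- Stage 1: continue masked-LM pretraining on financial regulatory text ---
# "sec_filings.txt" is a hypothetical plain-text dump of SEC filings.
tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
mlm_model = RobertaForMaskedLM.from_pretrained("roberta-base")

raw = load_dataset("text", data_files={"train": "sec_filings.txt"})
tokenized = raw["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=mlm_model,
    args=TrainingArguments(
        output_dir="roberta-fin",
        num_train_epochs=1,
        per_device_train_batch_size=8,
    ),
    train_dataset=tokenized,
    # The collator masks 15% of tokens and builds MLM labels on the fly.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
trainer.train()
trainer.save_model("roberta-fin")

# --- Stage 2: mixed-data classifier over text plus tabular features ---
class MixedDataClassifier(nn.Module):
    """Concatenates the pooled text embedding with numerical features
    (e.g., sentiment, readability scores) before a linear output head."""

    def __init__(self, text_model_path: str, n_numeric: int, n_classes: int):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained(text_model_path)
        hidden = self.encoder.config.hidden_size
        self.head = nn.Linear(hidden + n_numeric, n_classes)

    def forward(self, input_ids, attention_mask, numeric_features):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        text_vec = out.last_hidden_state[:, 0]  # embedding of the <s> token
        return self.head(torch.cat([text_vec, numeric_features], dim=-1))

Concatenating a pooled document embedding with the tabular features is one simple fusion choice for mixed data; the paper's exact fusion architecture may differ from this sketch.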