February 16, 2022
Authored by: Dr. Morten Middelfart, Sam Martin, and Ben Martin
Abstract
In this paper, we demonstrate that, compared to deep learning, random contrast learning (RCL) produces unsupervised language models with faster training, faster inference, and reduced size, all by orders of magnitude, while achieving higher recall. Thus far, we have applied RCL to several small datasets. Our findings indicate a promising path toward broader applications in language and exhibit the power of RCL as a new paradigm in machine learning.