February 16, 2022

Authored by: Dr. Morten Middelfart, Sam Martin, and Ben Martin

Abstract

In this paper, we demonstrate that, compared to deep learning, random contrast learning (RCL) produces unsupervised language models with faster training, faster inference, and smaller size, all by orders of magnitude, while achieving better recall. Thus far, we have applied RCL to several small datasets. Our findings indicate a promising path toward broader applications in language and exhibit the power of RCL as a new paradigm in machine learning.
