Technical Report: Auxiliary Tuning and its Application to Conditional Text Generation
Jan 25, 2025
year : 2020
authors : Yoel Zeldes, Dan Padnos, Or Sharir, Barak Peleg
repository : arXiv
doi : 10.48550/arXiv.2006.16823
Abstract : We introduce a simple and efficient method, called Auxiliary Tuning, for adapting a pre-trained Language Model to a novel task; we demonstrate this approach on the task of conditional text generation. Our approach supplements the original pre-trained model with an auxiliary model that shifts the output distribution according to the target task. The auxiliary model is trained by adding its logits to the pre-trained model logits and maximizing the likelihood of the target task output. Our method imposes no constraints on the auxiliary architecture. In particular, the auxiliary model can ingest additional input relevant to the target task, independently from the pre-trained model's input. Furthermore, mixing the models at the logits level provides a natural probabilistic interpretation of the method. Our method achieved similar results to training from scratch for several different tasks, while using significantly fewer resources for training; we share a specific example of text generation conditioned on keywords.
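The logit-level mixing described in the abstract can be illustrated with a minimal NumPy sketch. The logit values below are hypothetical toy numbers, not from the paper; the point is that adding the auxiliary model's logits to the frozen pre-trained model's logits before the softmax is equivalent to multiplying the two distributions and renormalizing, which is the probabilistic interpretation the authors mention.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a logit vector.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical logits from the frozen pre-trained LM over a tiny vocabulary.
lm_logits = np.array([2.0, 0.5, -1.0, 0.1])
# Hypothetical logits from the trainable auxiliary model, which may ingest
# task-specific input (e.g. keywords) the pre-trained model never sees.
aux_logits = np.array([-0.5, 1.5, 0.0, 0.2])

# Auxiliary Tuning combines the two models at the logits level:
combined = softmax(lm_logits + aux_logits)

# Equivalent view: summing logits multiplies the two distributions,
# followed by renormalization.
product = softmax(lm_logits) * softmax(aux_logits)
product /= product.sum()

assert np.allclose(combined, product)
print(combined)
```

During training, only the auxiliary model's parameters would be updated, maximizing the likelihood of the target-task output under the combined distribution; the pre-trained model stays fixed.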