- A Pre-trained Sequential Recommendation Framework: Popularity Dynamics for Zero-shot Transfer
Authors: Junting Wang, Praneet Rathi, Hari Sundaram
Summary: Sequential recommenders are essential to the success of online applications, e.g., e-commerce, video streaming, and social media. While model architectures continue to improve, every new application domain still requires training a new model from scratch to obtain high-quality recommendations. In contrast, pre-trained language and vision models have shown great success in zero-shot or few-shot adaptation to new application domains. Inspired by the success of pre-trained models in peer AI fields, we propose a novel pre-trained sequential recommendation framework: PrepRec. We learn universal item representations by modeling item popularity dynamics. Through extensive experiments on five real-world datasets, we show that PrepRec, without any auxiliary information, can not only zero-shot transfer to a new domain, but also achieve competitive performance compared to state-of-the-art sequential recommender models with only a fraction of the model size. In addition, with a simple post-hoc interpolation, PrepRec can improve the performance of existing sequential recommenders on average by 13.8% in Recall@10 and 29.5% in NDCG@10. We provide an anonymized implementation of PrepRec at https://anonymous.4open.science/r/PrepRec--2F60/
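  To make the "simple post-hoc interpolation" concrete, below is a minimal sketch of one plausible reading: blending the ranking scores of an existing sequential recommender with PrepRec's scores via a convex combination. The function name `interpolate_scores`, the mixing weight `alpha`, and the example score arrays are illustrative assumptions, not the paper's exact scheme.

  ```python
  import numpy as np

  def interpolate_scores(base_scores: np.ndarray,
                         preprec_scores: np.ndarray,
                         alpha: float = 0.5) -> np.ndarray:
      """Convex combination of an existing recommender's item scores
      and PrepRec's item scores for the same user. `alpha` is a
      hypothetical mixing weight; the paper's interpolation may differ."""
      return alpha * preprec_scores + (1.0 - alpha) * base_scores

  # Usage: re-rank a user's candidate items with the blended scores.
  base = np.array([0.2, 0.9, 0.4])   # e.g., scores from a trained SASRec-style model
  prep = np.array([0.7, 0.1, 0.5])   # scores from PrepRec for the same items
  ranking = np.argsort(-interpolate_scores(base, prep, alpha=0.5))
  ```

  Because the blend only touches final scores, such an interpolation needs no retraining of the base recommender, which is consistent with the "post-hoc" framing in the summary.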