ReChorus: A PyTorch Framework for Top-K Recommendation with Implicit Feedback

User-contributed post · 1029 · 2022-10-23


ReChorus

ReChorus is a general PyTorch framework for Top-K recommendation with implicit feedback, especially for research purposes. It aims to provide a fair benchmark for comparing different state-of-the-art algorithms. We hope this can partly alleviate the problem that different papers adopt different experimental settings, so as to form a "Chorus" of recommendation algorithms.

This framework is especially suitable for researchers who want to compare algorithms under the same experimental setting, and for newcomers who want to get familiar with classical methods. The characteristics of our framework can be summarized as follows:

- Agile: concentrate on your model design in a single file and implement new models quickly
- Easy: the framework is implemented in less than a thousand lines of code, which is easy to use, with clean code and adequate comments
- Efficient: multi-thread batch preparation, special implementations for evaluation, and around 90% GPU utilization during training for deep models
- Flexible: implement new readers or runners for different datasets and experimental settings, and each model can be assigned specific helpers

Generally, ReChorus decomposes the whole process into three modules:

- Reader: read the dataset into a DataFrame and append necessary information to each instance
- Runner: control the training process and model evaluation
- Model: define how to generate ranking scores and prepare batches
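The three-module decomposition can be sketched as follows. This is an illustrative outline only; the class and method names here are hypothetical and do not match ReChorus's actual API:

```python
# Illustrative sketch of the Reader / Runner / Model split.
# Names are hypothetical; see the ReChorus source for the real classes.

class Reader:
    """Reads the dataset and appends per-instance information."""
    def __init__(self, interactions):
        # interactions: (user, item, timestamp) tuples, sorted by time
        self.data = sorted(interactions, key=lambda x: x[2])

class Model:
    """Defines how to score items and how to form batches."""
    def score(self, user, item):
        # placeholder ranking score; a real model would use embeddings
        return hash((user, item)) % 100 / 100.0

    def batches(self, data, batch_size):
        # yield fixed-size chunks of instances
        for i in range(0, len(data), batch_size):
            yield data[i:i + batch_size]

class Runner:
    """Controls training and evaluation using a Reader and a Model."""
    def __init__(self, reader, model):
        self.reader, self.model = reader, model

    def evaluate(self):
        return [self.model.score(u, i) for u, i, _ in self.reader.data]

reader = Reader([(1, 10, 3), (1, 11, 1), (2, 10, 2)])
runner = Runner(reader, Model())
scores = runner.evaluate()
```

Separating these concerns is what lets each model be paired with a specific reader or runner without touching the training loop.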

Getting Started

1. Install Anaconda with Python >= 3.5
2. Clone the repository and install requirements

git clone https://github.com/THUwangcy/ReChorus.git
cd ReChorus
pip install -r requirements.txt

3. Run a model with a built-in dataset

python main.py --model_name BPR --emb_size 64 --lr 1e-3 --l2 1e-6 --dataset Grocery_and_Gourmet_Food

4. (optional) Run the jupyter notebook in the data folder to download and build new Amazon datasets, or prepare your own datasets according to the README in data
5. (optional) Implement your own models according to the README in src

Models

We have implemented the following methods (still updating):

- BPR (UAI'09): Bayesian Personalized Ranking from Implicit Feedback
- NCF (WWW'17): Neural Collaborative Filtering
- Tensor (RecSys'10): N-dimensional Tensor Factorization for Context-aware Collaborative Filtering
- GRU4Rec (ICLR'16): Session-based Recommendations with Recurrent Neural Networks
- NARM (CIKM'17): Neural Attentive Session-based Recommendation
- SASRec (IEEE'18): Self-attentive Sequential Recommendation
- TiSASRec (WSDM'20): Time Interval Aware Self-Attention for Sequential Recommendation
- CFKG (MDPI'18): Learning Heterogeneous Knowledge Base Embeddings for Explainable Recommendation
- SLRC (WWW'19): Modeling Item-specific Temporal Dynamics of Repeat Consumption for Recommender Systems
- Chorus (SIGIR'20): Make It a Chorus: Knowledge- and Time-aware Item Modeling for Sequential Recommendation
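To make the first entry concrete: BPR optimizes a pairwise objective, pushing an observed item i to score higher than an unobserved item j for the same user, with loss -log σ(s_ui - s_uj) on dot-product scores. A minimal NumPy sketch of one such update (toy sizes and learning rate; not ReChorus's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, d = 5, 10, 4
P = rng.normal(scale=0.1, size=(n_users, d))  # user embeddings
Q = rng.normal(scale=0.1, size=(n_items, d))  # item embeddings

def bpr_loss(u, i, j):
    """-log sigmoid(s_ui - s_uj) for one (user, pos item, neg item) triple."""
    diff = P[u] @ (Q[i] - Q[j])
    return -np.log(1.0 / (1.0 + np.exp(-diff)))

def sgd_step(u, i, j, lr=0.1):
    """One SGD update on the BPR loss for a single triple."""
    diff = P[u] @ (Q[i] - Q[j])
    g = -1.0 / (1.0 + np.exp(diff))   # d(loss) / d(diff)
    grad_p = g * (Q[i] - Q[j])
    grad_qi = g * P[u]
    grad_qj = -g * P[u]
    P[u] -= lr * grad_p
    Q[i] -= lr * grad_qi
    Q[j] -= lr * grad_qj

before = bpr_loss(0, 1, 2)
for _ in range(50):
    sgd_step(0, 1, 2)
after = bpr_loss(0, 1, 2)
```

Repeated updates on the triple drive the positive item's score above the negative item's, so the loss shrinks toward zero.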

The table below lists the results of these models on the Grocery_and_Gourmet_Food dataset (145.8k entries). Leave-one-out splitting is applied: the most recent interaction of each user is used for testing, the second most recent for validation, and the remaining interactions for training. We randomly sample 99 negative items for each test case to rank together with the ground-truth item. These settings are all common in Top-K sequential recommendation.
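The split and negative sampling described above can be sketched as follows (toy data; function names are illustrative, not ReChorus's actual helpers):

```python
import random

def leave_one_out(user_history):
    """Split a time-ordered item list into train / validation / test:
    last item -> test, second-to-last -> validation, rest -> train."""
    assert len(user_history) >= 3
    return user_history[:-2], user_history[-2], user_history[-1]

def sample_negatives(all_items, positives, k=99, seed=0):
    """Sample k items the user never interacted with, to rank
    alongside the ground-truth item."""
    rng = random.Random(seed)
    candidates = [i for i in all_items if i not in positives]
    return rng.sample(candidates, k)

history = list(range(10))            # item ids in chronological order
train, val, test = leave_one_out(history)
negs = sample_negatives(range(200), set(history), k=99)
```

Each test case thus ranks 100 candidates: the held-out ground-truth item plus the 99 sampled negatives.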

| Model    | HR@5   | NDCG@5 | Time/iter | Sequential | Knowledge | Time-aware |
|----------|--------|--------|-----------|------------|-----------|------------|
| BPR      | 0.3554 | 0.2457 | 2.5s      |            |           |            |
| NCF      | 0.3232 | 0.2234 | 3.4s      |            |           |            |
| Tensor   | 0.3548 | 0.2671 | 2.8s      |            |           | √          |
| GRU4Rec  | 0.3646 | 0.2598 | 4.9s      | √          |           |            |
| NARM     | 0.3621 | 0.2595 | 8.2s      | √          |           |            |
| SASRec   | 0.4247 | 0.3056 | 7.2s      | √          |           |            |
| TiSASRec | 0.4276 | 0.3074 | 39s       | √          |           | √          |
| CFKG     | 0.4239 | 0.3018 | 8.7s      |            | √         |            |
| SLRC'    | 0.4519 | 0.3335 | 4.3s      |            |           | √          |
| Chorus   | 0.4738 | 0.3448 | 4.9s      | √          | √         | √          |

For fair comparison, the batch size is fixed to 256 and the embedding size is set to 64. We strive to tune all the other hyper-parameters to obtain the best performance for each model (they may not be optimal yet, and will be updated if better scores are achieved). Current commands are listed in run.sh. We repeat each experiment 5 times with different random seeds and report the average score (see exp.py). All experiments are conducted with a single GTX-1080Ti GPU.
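The HR@5 and NDCG@5 scores in the table follow the standard definitions for a single relevant item: if the ground-truth item lands at (1-based) rank r among the 100 candidates, HR@K is 1 when r ≤ K and 0 otherwise, and NDCG@K is 1/log2(r+1) when r ≤ K and 0 otherwise, averaged over test cases. A minimal sketch (the ranks below are made-up example values):

```python
import math

def hr_at_k(rank, k):
    """Hit ratio for one test case: 1 if the ground truth is in the top-k."""
    return 1.0 if rank <= k else 0.0

def ndcg_at_k(rank, k):
    """NDCG with a single relevant item: 1/log2(rank+1) inside the top-k."""
    return 1.0 / math.log2(rank + 1) if rank <= k else 0.0

# ranks of the ground-truth item across four example test cases
ranks = [1, 3, 6, 2]
hr5 = sum(hr_at_k(r, 5) for r in ranks) / len(ranks)
ndcg5 = sum(ndcg_at_k(r, 5) for r in ranks) / len(ranks)
```

NDCG additionally rewards placing the ground-truth item near the top of the list, which is why it is consistently lower than HR at the same cutoff.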

Citation

This is also our public implementation for the paper:

Chenyang Wang, Min Zhang, Weizhi Ma, Yiqun Liu, and Shaoping Ma. Make It a Chorus: Knowledge- and Time-aware Item Modeling for Sequential Recommendation. In SIGIR'20.

Check out the SIGIR20 branch to reproduce the results.

git clone -b SIGIR20 https://github.com/THUwangcy/ReChorus.git

Please cite this paper if you use our code. Thanks!

Author: Chenyang Wang (THUwangcy@gmail.com)
