
Two Papers from Our Team Recently Accepted by the Top International Journals IEEE TIP and IJIS
Source: Huang Zhenhua (黃震華) / South China Normal University
2021-12-31
Two papers from the team were recently accepted by top international journals:
 
1. "Feature Map Distillation of Thin Nets for Low-resolution Object Recognition" was accepted by IEEE Transactions on Image Processing (CCF A, CAS Tier 1, IF: 10.856).
Abstract—Intelligent video surveillance is an important computer vision application in natural environments. Since detected objects under surveillance are usually low-resolution and noisy, their accurate recognition represents a huge challenge. Knowledge distillation is an effective method to deal with it, but existing related work usually focuses on reducing the channel count of a student network, not feature map size. As a result, they cannot transfer "privilege information" hidden in feature maps of a wide and deep teacher network into a thin and shallow student one, leading to the latter's poor performance. To address this issue, we propose a Feature Map Distillation (FMD) framework under which the feature map size of teacher and student networks is different. FMD consists of two main components: Feature Decoder Distillation (FDD) and Feature Map Consistency-enforcement (FMC). FDD reconstructs the shallow texture features of a thin student network to approximate the corresponding samples in a teacher network, which allows the high-resolution features to directly guide the learning of the shallow features of the student network. FMC makes the size and direction of each deep feature map consistent between student and teacher networks, which constrains each pair of feature maps to produce the same feature distribution. FDD and FMC allow a thin student network to learn rich "privilege information" in feature maps of a wide teacher network. The overall performance of FMD is verified in multiple recognition tasks by comparing it with state-of-the-art knowledge distillation methods on low-resolution and noisy objects.
Keywords—Knowledge distillation, Low-resolution, Intelligent video surveillance, Internet of Things, Efficiency
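The two FMD components described in the abstract can be illustrated as a pair of losses: FDD regresses the student's upsampled shallow features onto the teacher's, and FMC matches both the size (norm) and direction (cosine) of deep feature maps. The following is a minimal NumPy sketch under assumed shapes and loss formulations; the exact architecture and loss terms in the paper may differ.

```python
import numpy as np

def fdd_loss(student_feat, teacher_feat):
    """Feature Decoder Distillation (sketch): upsample the student's
    low-resolution shallow feature map (nearest-neighbor repeat here) to the
    teacher's resolution and regress it onto the teacher's map with L2."""
    scale = teacher_feat.shape[-1] // student_feat.shape[-1]
    upsampled = student_feat.repeat(scale, axis=-2).repeat(scale, axis=-1)
    return np.mean((upsampled - teacher_feat) ** 2)

def fmc_loss(student_feat, teacher_feat, eps=1e-8):
    """Feature Map Consistency (sketch): constrain both the size (L2 norm)
    and the direction (cosine similarity) of flattened deep feature maps
    to agree between student and teacher."""
    s, t = student_feat.ravel(), teacher_feat.ravel()
    size_term = (np.linalg.norm(s) - np.linalg.norm(t)) ** 2
    cos = np.dot(s, t) / (np.linalg.norm(s) * np.linalg.norm(t) + eps)
    return size_term + (1.0 - cos)

rng = np.random.default_rng(0)
student = rng.standard_normal((16, 8, 8))    # thin student: 16 channels, 8x8 maps
teacher = rng.standard_normal((16, 16, 16))  # wide teacher: 16 channels, 16x16 maps
print(fdd_loss(student, teacher))            # non-negative reconstruction loss
print(fmc_loss(teacher[:, :8, :8], teacher[:, :8, :8]))  # near zero for identical maps
```

Both terms vanish only when the (upsampled) student features reproduce the teacher's, which is the sense in which the teacher's "privilege information" is transferred.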
 
2. "A Two-phase Knowledge Distillation Model for Graph Convolutional Network-based Recommendation" was accepted by International Journal of Intelligent Systems (CAA B, CAS Tier 1, IF: 8.709).
Abstract—Graph convolutional network (GCN)-based recommendation has recently attracted significant attention in the recommender system community. Although current studies propose various GCNs to improve recommendation performance, existing methods suffer from two main limitations. First, user-item interaction data is generally sparse in practice, which makes these methods ineffective in learning user and item feature representations. Second, they usually perform a dot-product operation to model and calculate user preferences on items, leading to inaccurate user preference learning. To address these limitations, this study adopts a design idea that sharply differs from existing works. Specifically, we introduce the knowledge distillation concept into GCN-based recommendation and propose a two-phase knowledge distillation model (TKDM) to improve recommendation performance. In Phase I, a self-distillation method on a graph auto-encoder learns the user and item feature representations. This auto-encoder employs a simple two-layer GCN as an encoder and a fully-connected layer as a decoder. On this basis, in Phase II, a mutual-distillation method on a fully-connected layer is introduced to learn user preferences on items with triple-based Bayesian personalized ranking. Extensive experiments on three real-world datasets demonstrate that TKDM outperforms classic and state-of-the-art methods related to GCN-based recommendation problems.
Keywords—Graph convolutional network, Knowledge distillation, Recommender system, Neural network, Deep learning
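Phase II of TKDM learns preferences with triple-based Bayesian personalized ranking (BPR). The sketch below shows the standard BPR objective over (user, positive item, negative item) triples; the embedding dimensions and sampling are illustrative placeholders, not the paper's exact setup.

```python
import numpy as np

def bpr_loss(user_emb, pos_item_emb, neg_item_emb):
    """Triple-based Bayesian personalized ranking (sketch): for each triple
    (user, positive item, negative item), maximize the log-sigmoid of the
    score gap so observed items outrank unobserved ones."""
    pos_scores = np.sum(user_emb * pos_item_emb, axis=1)
    neg_scores = np.sum(user_emb * neg_item_emb, axis=1)
    # -log sigmoid(pos - neg), averaged over the sampled triples
    return -np.mean(np.log(1.0 / (1.0 + np.exp(-(pos_scores - neg_scores)))))

rng = np.random.default_rng(1)
users = rng.standard_normal((32, 64))                     # 32 triples, 64-dim embeddings
pos_items = users + 0.1 * rng.standard_normal((32, 64))   # positives aligned with users
neg_items = rng.standard_normal((32, 64))                 # negatives drawn at random
loss = bpr_loss(users, pos_items, neg_items)
print(loss)  # small positive value, since positives already outrank negatives
```

Because the loss depends only on score differences, it sidesteps the absolute-scale issues of raw dot-product preference scores noted in the abstract.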
 


SCHOLAT.com (Scholar Network)