
Two papers have been accepted for presentation at CVPR 2024

Two papers have been accepted for presentation at CVPR 2024 and inclusion in the proceedings.

 

(1) Title: Adaptive Bidirectional Displacement for Semi-Supervised Medical Image Segmentation

Authors: Hanyang Chi, Jian Pang, Bingfeng Zhang, Weifeng Liu

Abstract: Consistency learning is a central strategy for exploiting unlabeled data in semi-supervised medical image segmentation (SSMIS): it enforces the model to produce consistent predictions under perturbations. However, most current approaches focus solely on a single specific perturbation, which can cope with only limited cases, while employing multiple perturbations simultaneously makes it hard to guarantee the quality of consistency learning. In this paper, we propose an Adaptive Bidirectional Displacement (ABD) approach to address this challenge. Specifically, we first design a bidirectional patch displacement based on reliable prediction confidence for unlabeled data to generate new samples, which effectively suppresses uncontrollable regions while still retaining the influence of input perturbations. Meanwhile, to push the model to learn the potentially uncontrollable content, a bidirectional displacement operation with inverse confidence is proposed for the labeled images, which generates samples with more unreliable information to facilitate model learning. Extensive experiments show that ABD achieves new state-of-the-art performance for SSMIS, significantly improving over different baselines.
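
For readers curious how the core displacement step might look in code, below is a minimal PyTorch sketch reconstructed only from the abstract above. The patch size, the per-patch confidence measure (mean max-class probability), and the swap rule (each view's least-confident patch is replaced by the other view's most-confident patch) are illustrative assumptions, not the authors' exact design.

# Minimal sketch (PyTorch) of confidence-guided bidirectional patch
# displacement, reconstructed from the abstract; patch size, confidence
# measure, and swap rule are illustrative assumptions.
import torch
import torch.nn.functional as F

def patch_confidence(prob_map, patch):
    # prob_map: (B, C, H, W) softmax outputs; returns per-patch mean
    # max-class probability on a (B, H//patch, W//patch) grid.
    conf = prob_map.max(dim=1).values
    return F.avg_pool2d(conf.unsqueeze(1), patch).squeeze(1)

def _patch_slice(idx, w, patch):
    # Convert a flat grid index into (row, col) pixel slices.
    r, c = divmod(idx, w)
    return slice(r * patch, (r + 1) * patch), slice(c * patch, (c + 1) * patch)

def bidirectional_displacement(img_a, img_b, prob_a, prob_b, patch=16):
    # Replace each view's least-confident patch with the other view's
    # most-confident patch, producing two new training samples.
    conf_a = patch_confidence(prob_a, patch)
    conf_b = patch_confidence(prob_b, patch)
    _, _, w = conf_a.shape
    out_a, out_b = img_a.clone(), img_b.clone()
    for i in range(img_a.shape[0]):
        rs, cs = _patch_slice(conf_a[i].flatten().argmin().item(), w, patch)
        rt, ct = _patch_slice(conf_b[i].flatten().argmax().item(), w, patch)
        out_a[i, :, rs, cs] = img_b[i, :, rt, ct]
        rs, cs = _patch_slice(conf_b[i].flatten().argmin().item(), w, patch)
        rt, ct = _patch_slice(conf_a[i].flatten().argmax().item(), w, patch)
        out_b[i, :, rs, cs] = img_a[i, :, rt, ct]
    return out_a, out_b

# Toy usage: two perturbed views of an unlabeled batch and their predictions.
a, b = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
pa = torch.softmax(torch.rand(2, 4, 64, 64), dim=1)
pb = torch.softmax(torch.rand(2, 4, 64, 64), dim=1)
new_a, new_b = bidirectional_displacement(a, b, pa, pb, patch=16)

The displaced pair (new_a, new_b) would then feed the usual consistency objective; in the paper the displacement is adaptive and confidence-driven rather than the fixed one-patch swap shown here.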

(2) Title: Rethinking Prior Information Generation with CLIP for Few-Shot Segmentation 

Authors: Jin Wang, Bingfeng Zhang, Jian Pang, Honglong Chen, Weifeng Liu

Abstract: Few-shot segmentation remains challenging due to the limited labeling information available for unseen classes. Most previous approaches rely on extracting high-level feature maps from a frozen visual encoder and computing pixel-wise similarity as key prior guidance for the decoder. However, such a prior representation suffers from coarse granularity and poor generalization to new classes, since these high-level feature maps have an obvious category bias. In this work, we propose to replace the visual prior representation with the visual-text alignment capacity to capture more reliable guidance and enhance model generalization. Specifically, we design two kinds of training-free prior information generation strategies that exploit the semantic alignment capability of the Contrastive Language-Image Pre-training (CLIP) model to locate the target class. Besides, to acquire more accurate prior guidance, we build a high-order relationship between attention maps and use it to refine the initial prior information. Experiments on both the PASCAL-5^i and COCO-20^i datasets show that our method obtains a substantial improvement and reaches new state-of-the-art performance.
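
To make the idea of a training-free CLIP prior concrete, here is a minimal PyTorch sketch based only on the abstract. The inputs patch_feats (CLIP visual patch tokens) and text_feat (class-name text embedding) are hypothetical stand-ins for however those features are extracted, and the refinement step is a generic affinity propagation, a simple proxy for, not a reproduction of, the paper's high-order relationship of attention maps.

# Minimal sketch (PyTorch) of a training-free CLIP prior map, based only
# on the abstract. patch_feats and text_feat are hypothetical inputs; the
# refinement is generic affinity propagation, not the paper's exact method.
import torch
import torch.nn.functional as F

def clip_prior_map(patch_feats, text_feat):
    # patch_feats: (N, D) L2-normalised patch tokens on an h*w grid;
    # text_feat: (D,) normalised class embedding. Returns an (N,) prior
    # in [0, 1]: per-patch cosine similarity to the class text.
    sim = patch_feats @ text_feat
    return (sim - sim.min()) / (sim.max() - sim.min() + 1e-6)

def refine_prior(prior, patch_feats, hops=2, temperature=0.07):
    # Propagate the prior through a patch-affinity matrix; extra hops
    # emulate longer-range, higher-order relations between patches.
    affinity = F.softmax(patch_feats @ patch_feats.t() / temperature, dim=-1)
    refined = prior
    for _ in range(hops):
        refined = affinity @ refined
    return (refined - refined.min()) / (refined.max() - refined.min() + 1e-6)

# Toy usage with random stand-in features; in practice these would come
# from CLIP's visual encoder and a prompt such as "a photo of a <class>".
feats = F.normalize(torch.rand(196, 512), dim=-1)   # 14 x 14 token grid
text = F.normalize(torch.rand(512), dim=-1)
prior = refine_prior(clip_prior_map(feats, text), feats)

The refined prior map would then be handed to the few-shot decoder as guidance, in place of the similarity computed from category-biased visual features.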

 

