


Pattern Recognition Letters, Volume 165, January 2023
- Seonghye Jeong, Seongmoon Jeong, Simon S. Woo, Jong Hwan Ko: An overhead-free region-based JPEG framework for task-driven image compression. 1-8
- Thanh Tuan Nguyen, Thanh Phuong Nguyen, Nadège Thirion-Moreau: Locating robust patterns based on invariant of LTP-based features. 9-16
- Priyadarshini Dwivedi, Gyanajyoti Routray, Rajesh M. Hegde: Learning based method for near field acoustic range estimation in spherical harmonics domain using intensity vectors. 17-24
- Hua Jiang, Bing Xiao, Yintao Luo, Junliang Ma: A self-attentive model for tracing knowledge and engagement in parallel. 25-32
- Jiapei Feng, Xinggang Wang, Te Li, Shanshan Ji, Wenyu Liu: Weakly-supervised semantic segmentation via online pseudo-mask correcting. 33-38
- Jianing Xue, Zhe Sun, Feng Duan, Cesar F. Caiafa, Jordi Solé-Casals: Underwater sEMG-based recognition of hand gestures using tensor decomposition. 39-46
- Tingting Zhao, Guixi Li, Yajing Song, Yuan Wang, Yarui Chen, Jucheng Yang: A multi-scenario text generation method based on meta reinforcement learning. 47-54
- Elissavet Batziou, Konstantinos Ioannidis, Ioannis Patras, Stefanos Vrochidis, Ioannis Kompatsiaris: Artistic neural style transfer using CycleGAN and FABEMD by adaptive information selection. 55-62
- Bo Yang, Jianming Wu, Kazushi Ikeda, Gen Hattori, Masaru Sugano, Yusuke Iwasawa, Yutaka Matsuo: Deep Learning Pipeline for Spotting Macro- and Micro-expressions in Long Video Sequences Based on Action Units and Optical Flow. 63-74
- Yujun Xu, Enguang Yao, Chaoyue Liu, Qidong Liu, Mingliang Xu: A novel ensemble model with two-stage learning for joint dialog act recognition and sentiment classification. 77-83
- Laura-Bianca Bilius, Stefan Gheorghe Pentiuc, Radu-Daniel Vatavu: TIGER: A Tucker-based instrument for gesture recognition with inertial sensors. 84-90
- Ruiyang Xia, Guoquan Li, Zhengwen Huang, Hongying Meng, Yu Pang: Bi-path Combination YOLO for Real-time Few-shot Object Detection. 91-97
- Akrem Sellami, Mohamed Farah, Mauro Dalla Mura: SHCNet: A semi-supervised hypergraph convolutional networks based on relevant feature selection for hyperspectral image classification. 98-106
- Maomei Liu, Lei Tang, Sheng Zhong, Hangzai Luo, Jinye Peng: Learning to recover lost details from the dark. 107-113
- Ying Wei, Jianwei Zhang, Longqi Zhong: RIS-GAN: Self-Supervised GANs via Recovering Initial State of Subimages. 114-121
- Hanzhou Wu, Chen Li, Gen Liu, Xinpeng Zhang: Hiding data hiding. 122-127
- Zhizhou Li, Shichong Zhou, Cai Chang, Yuanyuan Wang, Yi Guo: A weakly supervised deep active contour model for nodule segmentation in thyroid ultrasound images. 128-137
- Francisco Fernandes, Ivo Roupa, Sérgio Barroso Gonçalves, Gonçalo Moita, Miguel Tavares da Silva, João Pereira, Joaquim Jorge, Richard R. Neptune, Daniel Simões Lopes: Sticks and STONES may build my bones: Deep learning reconstruction of limb rotations in stick figures. 138-145
- Hongyi Wang, Yang Xue, Jiaxin Zhang, Lianwen Jin: Scene table structure recognition with segmentation collaboration and alignment. 146-153
- Yirui Wu, Hao Li, Xi Feng, Andrea Casanova, Andrea F. Abate, Shaohua Wan: GDRL: An interpretable framework for thoracic pathologic prediction. 154-160
- Can Zhang, Richard Yi Da Xu, Xu Zhang, Wanming Huang: Capture and control content discrepancies via normalised flow transfer. 161-167
- Shibao Li, Zekun Jia, Yixuan Liu, Xuerong Cui, Jianhang Liu, Tingpei Huang, Jiuyun Xu: CLS-DETR: A DETR-series object detection network using classification information to accelerate convergence. 168-175
- Fuxian Huang, Naye Ji, Huajian Ni, Shijian Li, Xi Li: Adaptive cooperative exploration for reinforcement learning from imperfect demonstrations. 176-182
