


Pattern Recognition, Volume 68, August 2017
- Ceyhun Celik, Hasan Sakir Bilge: Content based image retrieval with sparse representations and local feature descriptors: A comparative study. 1-13
- Guangfeng Lin, Kaiyang Liao, Bangyong Sun, Yajun Chen, Fan Zhao: Dynamic graph fusion label propagation for semi-supervised multi-modality classification. 14-23
- Ziyu Wang, Jing-Hao Xue: The matched subspace detector with interaction effects. 24-37
- Raghvendra Kannao, Prithwijit Guha: Success based locally weighted Multiple Kernel combination. 38-51
- Yongshan Zhang, Jia Wu, Chuan Zhou, Zhihua Cai: Instance cloned extreme learning machine. 52-65
- Sujuan Hou, Ling Chen, Dacheng Tao, Shangbo Zhou, Wenjie Liu, Yuanjie Zheng: Multi-layer multi-view topic model for classifying advertising video. 66-81
- Songlin Chen, Renbo Xia, Jibin Zhao, Yueling Chen, Maobang Hu: A hybrid method for ellipse detection in industrial images. 82-98
- Tiecheng Song, Jianfei Cai, Tianqi Zhang, Chenqiang Gao, Fanman Meng, Qingbo Wu: Semi-supervised manifold-embedded hashing with joint feature representation and classifier learning. 99-110
- Thomas Kautz, Björn M. Eskofier, Cristian F. Pasluosta: Generic performance measure for multiclass-classifiers. 111-125
- Weilin Huang, Hujun Yin: Robust face recognition with structural binary gradient patterns. 126-140
- Siu-Kai Choy, Shu Yan Lam, Kwok Wai Yu, Wing Yan Lee, King Tai Leung: Fuzzy model-based clustering and its application in image segmentation. 141-157
- Palaiahnakote Shivakumara, Liang Wu, Tong Lu, Chew Lim Tan, Michael Blumenstein, Basavaraj S. Anami: Fractals based multi-oriented text detection system for recognition in mobile video images. 158-174
- Guile Wu, Wenxiong Kang: Exploiting superpixel and hybrid hash for kernel-based visual tracking. 175-190
- Lingfeng Wang, Zehao Huang, Yongchao Gong, Chunhong Pan: Ensemble based deep networks for image super-resolution. 191-198
- Xin Shen, Lingfeng Niu, Zhiquan Qi, Yingjie Tian: Support vector machine classifier with truncated pinball loss. 199-210
- Ya Su, Xinbo Gao, Xu-Cheng Yin: Fast alignment for sparse representation based face recognition. 211-221
- Byunghwan Jeon, Yoonmi Hong, Dongjin Han, Yeonggul Jang, Sunghee Jung, Youngtaek Hong, Seongmin Ha, Hackjoon Shim, Hyuk-Jae Chang: Maximum a posteriori estimation method for aorta localization and coronary seed identification. 222-232
- Cristina Carmona-Duarte, Miguel A. Ferrer, Antonio Parziale, Angelo Marcelli: Temporal evolution in synthetic handwriting. 233-244
- Shibai Yin, Yiming Qian, Minglun Gong: Unsupervised hierarchical image segmentation through fuzzy entropy maximization. 245-259
- Weihong Deng, Jiani Hu, Zhongjun Wu, Jun Guo: Lighting-aware face frontalization for unconstrained face recognition. 260-271
- Jianfang Hu, Wei-Shi Zheng, Xiaohua Xie, Jianhuang Lai: Sparse transfer for facial shape-from-shading. 272-285
- Qianqian Wang, Quanxue Gao, Xinbo Gao, Feiping Nie: Optimal mean two-dimensional principal component analysis with F-norm minimization. 286-294
- Li Liu, Shu Wang, Guoxin Su, Zi-Gang Huang, Ming Liu: Towards complex activity recognition using a Bayesian network-based probabilistic generative framework. 295-309
- Angelos P. Giotis, Giorgos Sfikas, Basilis Gatos, Christophoros Nikou: A survey of document image word spotting techniques. 310-332
- Tao Mei, Jason J. Corso, Jiebo Luo: Editorial for special section of video analytics with deep learning. 333
- Shugao Ma, Sarah Adel Bargal, Jianming Zhang, Leonid Sigal, Stan Sclaroff: Do less and achieve more: Training CNNs for action recognition utilizing action images from the Web. 334-345
- Mengyuan Liu, Hong Liu, Chen Chen: Enhanced skeleton visualization for view invariant human action recognition. 346-362
