Fengbin Tu
2020 – today
- 2024
- [j22] Weiwei Wu, Fengbin Tu, Xiangyu Li, Shaojun Wei, Shouyi Yin: SWG: an architecture for sparse weight gradient computation. Sci. China Inf. Sci. 67(2) (2024)
- [j21] Fengbin Tu, Zihan Wu, Yiqi Wang, Weiwei Wu, Leibo Liu, Yang Hu, Shaojun Wei, Shouyi Yin: MulTCIM: Digital Computing-in-Memory-Based Multimodal Transformer Accelerator With Attention-Token-Bit Hybrid Sparsity. IEEE J. Solid State Circuits 59(1): 90-101 (2024)
- [j20] Jiajun Zhou, Jiajun Wu, Yizhao Gao, Yuhao Ding, Chaofan Tao, Boyu Li, Fengbin Tu, Kwang-Ting Cheng, Hayden Kwok-Hay So, Ngai Wong: DyBit: Dynamic Bit-Precision Numbers for Efficient Quantized Neural Network Inference. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 43(5): 1613-1617 (2024)
- [j19] Xin Zhao, Liang Chang, Dongqi Fan, Zhicheng Hu, Ting Yue, Fengbin Tu, Jun Zhou: HDSuper: High-Quality and High Computational Utilization Edge Super-Resolution Accelerator With Hardware-Algorithm Co-Design Techniques. IEEE Trans. Circuits Syst. I Regul. Pap. 71(4): 1679-1692 (2024)
- [c32] Jingyu He, Fengbin Tu, Kwang-Ting Cheng, Chi-Ying Tsui: AdaP-CIM: Compute-in-Memory Based Neural Network Accelerator Using Adaptive Posit. DATE 2024: 1-2
- [c31] Zhiheng Yue, Huizheng Wang, Jiahao Fang, Jinyi Deng, Guangyang Lu, Fengbin Tu, Ruiqi Guo, Yuxuan Li, Yubin Qin, Yang Wang, Chao Li, Huiming Han, Shaojun Wei, Yang Hu, Shouyi Yin: Exploiting Similarity Opportunities of Emerging Vision AI Models on Hybrid Bonding Architecture. ISCA 2024: 396-409
- [c30] Zhiheng Yue, Xujiang Xiang, Fengbin Tu, Yang Wang, Yiming Wang, Shaojun Wei, Yang Hu, Shouyi Yin: 15.1 A 0.795fJ/bit Physically-Unclonable Function-Protected TCAM for a Software-Defined Networking Switch. ISSCC 2024: 276-278
- [c29] Ruiqi Guo, Lei Wang, Xiaofeng Chen, Hao Sun, Zhiheng Yue, Yubin Qin, Huiming Han, Yang Wang, Fengbin Tu, Shaojun Wei, Yang Hu, Shouyi Yin: 20.2 A 28nm 74.34TFLOPS/W BF16 Heterogenous CIM-Based Accelerator Exploiting Denoising-Similarity for Diffusion Models. ISSCC 2024: 362-364
- [c28] Yiqi Wang, Zhen He, Chenggang Zhao, Zihan Wu, Mingyu Gao, Huiming Han, Shaojun Wei, Yang Hu, Fengbin Tu, Shouyi Yin: ETCIM: An Error-Tolerant Digital-CIM Processor with Redundancy-Free Repair and Run-Time MAC and Cell Error Correction. VLSI Technology and Circuits 2024: 1-2
- [c27] Ruiqi Guo, Xiaofeng Chen, Lei Wang, Fengbin Tu, Shaojun Wei, Yang Hu, Shouyi Yin: A 28nm 4170-TFLOPS/W/b and 195-TFLOPS/mm2/b Multiply-Free Fully-Digital Floating-Point Compute-In-Memory Macro with Mitchell's Approximation. VLSI Technology and Circuits 2024: 1-2
- 2023
- [j18] Fengbin Tu, Yiqi Wang, Zihan Wu, Ling Liang, Yufei Ding, Bongjin Kim, Leibo Liu, Shaojun Wei, Yuan Xie, Shouyi Yin: ReDCIM: Reconfigurable Digital Computing-In-Memory Processor With Unified FP/INT Pipeline for Cloud AI Acceleration. IEEE J. Solid State Circuits 58(1): 243-255 (2023)
- [j17] Fengbin Tu, Zihan Wu, Yiqi Wang, Ling Liang, Liu Liu, Yufei Ding, Leibo Liu, Shaojun Wei, Yuan Xie, Shouyi Yin: TranCIM: Full-Digital Bitline-Transpose CIM-based Sparse Transformer Accelerator With Pipeline/Parallel Reconfigurable Modes. IEEE J. Solid State Circuits 58(6): 1798-1809 (2023)
- [j16] Ling Liang, Jilan Lin, Zheng Qu, Ishtiyaque Ahmad, Fengbin Tu, Trinabh Gupta, Yufei Ding, Yuan Xie: SPG: Structure-Private Graph Database via SqueezePIR. Proc. VLDB Endow. 16(7): 1615-1628 (2023)
- [j15] Fengbin Tu, Yiqi Wang, Ling Liang, Yufei Ding, Leibo Liu, Shaojun Wei, Shouyi Yin, Yuan Xie: SDP: Co-Designing Algorithm, Dataflow, and Architecture for In-SRAM Sparse NN Acceleration. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 42(1): 109-121 (2023)
- [j14] Yiqi Wang, Fengbin Tu, Leibo Liu, Shaojun Wei, Yuan Xie, Shouyi Yin: SPCIM: Sparsity-Balanced Practical CIM Accelerator With Optimized Spatial-Temporal Multi-Macro Utilization. IEEE Trans. Circuits Syst. I Regul. Pap. 70(1): 214-227 (2023)
- [j13] Shaojun Wei, Xinhan Lin, Fengbin Tu, Yang Wang, Leibo Liu, Shouyi Yin: Reconfigurability, Why It Matters in AI Tasks Processing: A Survey of Reconfigurable AI Chips. IEEE Trans. Circuits Syst. I Regul. Pap. 70(3): 1228-1241 (2023)
- [j12] Weiwei Wu, Fengbin Tu, Mengqi Niu, Zhiheng Yue, Leibo Liu, Shaojun Wei, Xiangyu Li, Yang Hu, Shouyi Yin: STAR: An STGCN ARchitecture for Skeleton-Based Human Action Recognition. IEEE Trans. Circuits Syst. I Regul. Pap. 70(6): 2370-2383 (2023)
- [c26] Jia Chen, Fengbin Tu, Kunming Shao, Fengshi Tian, Xiao Huo, Chi-Ying Tsui, Kwang-Ting Cheng: AutoDCIM: An Automated Digital CIM Compiler. DAC 2023: 1-6
- [c25] Yu Zhu, Zhenhua Zhu, Guohao Dai, Fengbin Tu, Hanbo Sun, Kwang-Ting Cheng, Huazhong Yang, Yu Wang: PIM-HLS: An Automatic Hardware Generation Tool for Heterogeneous Processing-In-Memory-based Neural Network Accelerators. DAC 2023: 1-6
- [c24] Fengshi Tian, Xiaomeng Wang, Jinbo Chen, Jiakun Zheng, Hui Wu, Xuejiao Liu, Fengbin Tu, Jie Yang, Mohamad Sawan, Chi-Ying Tsui, Kwang-Ting (Tim) Cheng: BIOS: A 40nm Bionic Sensor-defined 0.47pJ/SOP, 268.7TSOPs/W Configurable Spiking Neuron-in-Memory Processor for Wearable Healthcare. ESSCIRC 2023: 225-228
- [c23] Siqi Li, Fengbin Tu, Liu Liu, Jilan Lin, Zheng Wang, Yangwook Kang, Yufei Ding, Yuan Xie: ECSSD: Hardware/Data Layout Co-Designed In-Storage-Computing Architecture for Extreme Classification. ISCA 2023: 58:1-58:14
- [c22] Fengbin Tu, Zihan Wu, Yiqi Wang, Weiwei Wu, Leibo Liu, Yang Hu, Shaojun Wei, Shouyi Yin: MulTCIM: A 28nm 2.24µJ/Token Attention-Token-Bit Hybrid Sparse Digital CIM-Based Accelerator for Multimodal Transformers. ISSCC 2023: 248-249
- [c21] Fengbin Tu, Yiqi Wang, Zihan Wu, Weiwei Wu, Leibo Liu, Yang Hu, Shaojun Wei, Shouyi Yin: TensorCIM: A 28nm 3.7nJ/Gather and 8.3TFLOPS/W FP32 Digital-CIM Tensor Processor for MCM-CIM-Based Beyond-NN Acceleration. ISSCC 2023: 254-255
- [c20] Jinyi Deng, Xinru Tang, Jiahao Zhang, Yuxuan Li, Linyun Zhang, Boxiao Han, Hongjun He, Fengbin Tu, Leibo Liu, Shaojun Wei, Yang Hu, Shouyi Yin: Towards Efficient Control Flow Handling in Spatial Architecture via Architecting the Control Flow Plane. MICRO 2023: 1395-1408
- [i5] Jiajun Zhou, Jiajun Wu, Yizhao Gao, Yuhao Ding, Chaofan Tao, Boyu Li, Fengbin Tu, Kwang-Ting Cheng, Hayden Kwok-Hay So, Ngai Wong: DyBit: Dynamic Bit-Precision Numbers for Efficient Quantized Neural Network Inference. CoRR abs/2302.12510 (2023)
- [i4] Jinyi Deng, Xinru Tang, Jiahao Zhang, Yuxuan Li, Linyun Zhang, Fengbin Tu, Leibo Liu, Shaojun Wei, Yang Hu, Shouyi Yin: Towards Efficient Control Flow Handling in Spatial Architecture via Architecting the Control Flow Plane. CoRR abs/2307.02847 (2023)
- [i3] Xiaomeng Wang, Fengshi Tian, Xizi Chen, Jiakun Zheng, Xuejiao Liu, Fengbin Tu, Jie Yang, Mohamad Sawan, Kwang-Ting Cheng, Chi-Ying Tsui: A 137.5 TOPS/W SRAM Compute-in-Memory Macro with 9-b Memory Cell-Embedded ADCs and Signal Margin Enhancement Techniques for AI Edge Applications. CoRR abs/2307.05944 (2023)
- 2022
- [j11] Liu Liu, Zheng Qu, Zhaodong Chen, Fengbin Tu, Yufei Ding, Yuan Xie: Dynamic Sparse Attention for Scalable Transformer Acceleration. IEEE Trans. Computers 71(12): 3165-3178 (2022)
- [j10] Ling Liang, Zheng Qu, Zhaodong Chen, Fengbin Tu, Yujie Wu, Lei Deng, Guoqi Li, Peng Li, Yuan Xie: H2Learn: High-Efficiency Learning Accelerator for High-Accuracy Spiking Neural Networks. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 41(11): 4782-4796 (2022)
- [j9] Jianxun Yang, Fengbin Tu, Yixuan Li, Yiqi Wang, Leibo Liu, Shaojun Wei, Shouyi Yin: GQNA: Generic Quantized DNN Accelerator With Weight-Repetition-Aware Activation Aggregating. IEEE Trans. Circuits Syst. I Regul. Pap. 69(10): 4069-4082 (2022)
- [c19] Zheng Qu, Liu Liu, Fengbin Tu, Zhaodong Chen, Yufei Ding, Yuan Xie: DOTA: detect and omit weak attentions for scalable transformer acceleration. ASPLOS 2022: 14-26
- [c18] Haiyang Lin, Mingyu Yan, Duo Wang, Mo Zou, Fengbin Tu, Xiaochun Ye, Dongrui Fan, Yuan Xie: Alleviating datapath conflicts and design centralization in graph analytics acceleration. DAC 2022: 901-906
- [c17] Ling Liang, Zhaodong Chen, Lei Deng, Fengbin Tu, Guoqi Li, Yuan Xie: Accelerating Spatiotemporal Supervised Training of Large-Scale Spiking Neural Networks on GPU. DATE 2022: 658-663
- [c16] Jilan Lin, Ling Liang, Zheng Qu, Ishtiyaque Ahmad, Liu Liu, Fengbin Tu, Trinabh Gupta, Yufei Ding, Yuan Xie: INSPIRE: in-storage private information retrieval via protocol and architecture co-design. ISCA 2022: 102-115
- [c15] Fengbin Tu, Yiqi Wang, Zihan Wu, Ling Liang, Yufei Ding, Bongjin Kim, Leibo Liu, Shaojun Wei, Yuan Xie, Shouyi Yin: A 28nm 29.2TFLOPS/W BF16 and 36.5TOPS/W INT8 Reconfigurable Digital CIM Processor with Unified FP/INT Pipeline and Bitwise In-Memory Booth Multiplication for Cloud Deep Learning Acceleration. ISSCC 2022: 1-3
- [c14] Fengbin Tu, Zihan Wu, Yiqi Wang, Ling Liang, Liu Liu, Yufei Ding, Leibo Liu, Shaojun Wei, Yuan Xie, Shouyi Yin: A 28nm 15.59µJ/Token Full-Digital Bitline-Transpose CIM-Based Sparse Transformer Accelerator with Pipeline/Parallel Reconfigurable Modes. ISSCC 2022: 466-468
- [i2] Haiyang Lin, Mingyu Yan, Duo Wang, Mo Zou, Fengbin Tu, Xiaochun Ye, Dongrui Fan, Yuan Xie: Alleviating Datapath Conflicts and Design Centralization in Graph Analytics Acceleration. CoRR abs/2202.11343 (2022)
- 2021
- [j8] Fengbin Tu, Weiwei Wu, Yang Wang, Hongjiang Chen, Feng Xiong, Man Shi, Ning Li, Jinyi Deng, Tianbao Chen, Leibo Liu, Shaojun Wei, Yuan Xie, Shouyi Yin: Evolver: A Deep Learning Processor With On-Device Quantization-Voltage-Frequency Tuning. IEEE J. Solid State Circuits 56(2): 658-673 (2021)
- [j7] Fengbin Tu, Weiwei Wu, Yang Wang, Hongjiang Chen, Feng Xiong, Man Shi, Ning Li, Jinyi Deng, Tianbao Chen, Leibo Liu, Shaojun Wei, Yuan Xie, Shouyi Yin: Erratum to "Evolver: a Deep Learning Processor With On-Device Quantization-Voltage-Frequency Tuning". IEEE J. Solid State Circuits 56(9): 2895 (2021)
- [c13] Xinhan Lin, Liang Sun, Fengbin Tu, Leibo Liu, Xiangyu Li, Shaojun Wei, Shouyi Yin: ADROIT: An Adaptive Dynamic Refresh Optimization Framework for DRAM Energy Saving In DNN Training. DAC 2021: 751-756
- [c12] Hussam Amrouch, Jian-Jia Chen, Kaushik Roy, Yuan Xie, Indranil Chakraborty, Wenqin Huangfu, Ling Liang, Fengbin Tu, Cheng Wang, Mikail Yayla: Brain-Inspired Computing: Adventure from Beyond CMOS Technologies to Beyond von Neumann Architectures ICCAD Special Session Paper. ICCAD 2021: 1-9
- [i1] Ling Liang, Zheng Qu, Zhaodong Chen, Fengbin Tu, Yujie Wu, Lei Deng, Guoqi Li, Peng Li, Yuan Xie: H2Learn: High-Efficiency Learning Accelerator for High-Accuracy Spiking Neural Networks. CoRR abs/2107.11746 (2021)
- 2020
- [c11] Feng Xiong, Fengbin Tu, Man Shi, Yang Wang, Leibo Liu, Shaojun Wei, Shouyi Yin: STC: Significance-aware Transform-based Codec Framework for External Memory Access Reduction. DAC 2020: 1-6
- [c10] Liu Liu, Zheng Qu, Lei Deng, Fengbin Tu, Shuangchen Li, Xing Hu, Zhenyu Gu, Yufei Ding, Yuan Xie: DUET: Boosting Deep Neural Network Efficiency on Dual-Module Architecture. MICRO 2020: 738-750
2010 – 2019
- 2019
- [j6] Shouyi Yin, Shibin Tang, Xinhan Lin, Peng Ouyang, Fengbin Tu, Leibo Liu, Shaojun Wei: A High Throughput Acceleration for Hybrid Neural Networks With Efficient Resource Management on FPGA. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 38(4): 678-691 (2019)
- [j5] Fengbin Tu, Shouyi Yin, Peng Ouyang, Leibo Liu, Shaojun Wei: Reconfigurable Architecture for Neural Approximation in Multimedia Computing. IEEE Trans. Circuits Syst. Video Technol. 29(3): 892-906 (2019)
- [j4] Shouyi Yin, Shibin Tang, Xinhan Lin, Peng Ouyang, Fengbin Tu, Leibo Liu, Jishen Zhao, Cong Xu, Shuangchen Li, Yuan Xie, Shaojun Wei: Parana: A Parallel Neural Architecture Considering Thermal Problem of 3D Stacked Memory. IEEE Trans. Parallel Distributed Syst. 30(1): 146-160 (2019)
- [c9] Feng Xiong, Fengbin Tu, Shouyi Yin, Shaojun Wei: Towards Efficient Compact Network Training on Edge-Devices. ISVLSI 2019: 61-67
- [c8] Weiwei Wu, Shouyi Yin, Fengbin Tu, Leibo Liu, Shaojun Wei: MoNA: Mobile Neural Architecture with Reconfigurable Parallel Dimensions. NEWCAS 2019: 1-4
- 2018
- [j3] Shouyi Yin, Peng Ouyang, Shibin Tang, Fengbin Tu, Xiudong Li, Shixuan Zheng, Tianyi Lu, Jiangyuan Gu, Leibo Liu, Shaojun Wei: A High Energy Efficient Reconfigurable Hybrid Neural Network Processor for Deep Learning Applications. IEEE J. Solid State Circuits 53(4): 968-982 (2018)
- [j2] Jiale Yan, Shouyi Yin, Fengbin Tu, Leibo Liu, Shaojun Wei: GNA: Reconfigurable and Efficient Architecture for Generative Network Acceleration. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 37(11): 2519-2529 (2018)
- [c7] Xinhan Lin, Shouyi Yin, Fengbin Tu, Leibo Liu, Xiangyu Li, Shaojun Wei: LCP: a layer clusters paralleling mapping method for accelerating inception and residual networks on FPGA. DAC 2018: 16:1-16:6
- [c6] Fengbin Tu, Weiwei Wu, Shouyi Yin, Leibo Liu, Shaojun Wei: RANA: Towards Efficient Neural Acceleration with Refresh-Optimized Embedded DRAM. ISCA 2018: 340-352
- [c5] Jianxin Guo, Shouyi Yin, Peng Ouyang, Fengbin Tu, Shibin Tang, Leibo Liu, Shaojun Wei: Bit-width Adaptive Accelerator Design for Convolution Neural Network. ISCAS 2018: 1-5
- [c4] Zhihui Wang, Shouyi Yin, Fengbin Tu, Leibo Liu, Shaojun Wei: An Energy Efficient JPEG Encoder with Neural Network Based Approximation and Near-Threshold Computing. ISCAS 2018: 1-5
- 2017
- [j1] Fengbin Tu, Shouyi Yin, Peng Ouyang, Shibin Tang, Leibo Liu, Shaojun Wei: Deep Convolutional Neural Network Architecture With Reconfigurable Computation Patterns. IEEE Trans. Very Large Scale Integr. Syst. 25(8): 2220-2233 (2017)
- [c3] Shibin Tang, Shouyi Yin, Shixuan Zheng, Peng Ouyang, Fengbin Tu, Leiyue Yao, JinZhou Wu, Wenming Cheng, Leibo Liu, Shaojun Wei: AEPE: An area and power efficient RRAM crossbar-based accelerator for deep CNNs. NVMSA 2017: 1-6
- 2015
- [c2] Fengbin Tu, Shouyi Yin, Peng Ouyang, Leibo Liu, Shaojun Wei: RNA: a reconfigurable architecture for hardware neural acceleration. DATE 2015: 695-700
- [c1] Fengbin Tu, Shouyi Yin, Peng Ouyang, Leibo Liu, Shaojun Wei: Neural approximating architecture targeting multiple application domains. ISCAS 2015: 2509-2512