Yida Wang 0003
Person information
- affiliation: Amazon Web Services, Inc., East Palo Alto, CA, USA
- affiliation: Intel Corporation, Parallel Computing Lab, Santa Clara, CA, USA
- affiliation: Princeton University, Department of Computer Science, NJ, USA
Other persons with the same name
- Yida Wang — disambiguation page
- Yida Wang 0001 — Technical University of Munich, CAMP, Germany (and 1 more)
- Yida Wang 0002 — Nanyang Technological University, School of Computer Engineering, Centre for Advanced Information Systems, Singapore
- Yida Wang 0004 — Army Engineering University of PLA, College of Communications Engineering, Nanjing, China
- Yida Wang 0005 — Beihang University, School of Computer Science and Engineering, Beijing, China
- Yida Wang 0006 — Jilin University, Key Laboratory of Geophysical Exploration Equipment, Changchun, China
- Yida Wang 0007 — Shanghai Jiao Tong University, Shanghai, China
- Yida Wang 0008 — Chinese Academy of Sciences, CSSAR, China (and 1 more)
- Yida Wang 0009 — Tsinghua University, Tsinghua-Berkeley Shenzhen Institute, CoAI group, DCST, Institute for Artificial Intelligence, China
- Yida Wang 0010 — East China Normal University, Shanghai Key Laboratory of Magnetic Resonance, China
2020 – today
- 2024
- [j4] Xueying Wang, Guangli Li, Zhen Jia, Xiaobing Feng, Yida Wang: Fast Convolution Meets Low Precision: Exploring Efficient Quantized Winograd Convolution on Modern CPUs. ACM Trans. Archit. Code Optim. 21(1): 5:1-5:26 (2024)
- [c26] Hongzheng Chen, Cody Hao Yu, Shuai Zheng, Zhen Zhang, Zhiru Zhang, Yida Wang: Slapo: A Schedule Language for Progressive Optimization of Large Deep Learning Model Training. ASPLOS (2) 2024: 1095-1111
- [c25] Xinwei Fu, Zhen Zhang, Haozheng Fan, Guangtai Huang, Mohammad El-Shabani, Randy Huang, Rahul Solanki, Fei Wu, Ron Diamant, Yida Wang: Distributed Training of Large Language Models on AWS Trainium. SoCC 2024: 961-976
- [c24] Chenyu Jiang, Zhen Jia, Shuai Zheng, Yida Wang, Chuan Wu: DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines. EuroSys 2024: 542-559
- [c23] Youngsuk Park, Kailash Budhathoki, Liangfu Chen, Jonas M. Kübler, Jiaji Huang, Matthäus Kleindessner, Jun Huan, Volkan Cevher, Yida Wang, George Karypis: Inference Optimization of Foundation Models on AI Accelerators. KDD 2024: 6605-6615
- [c22] Chenyu Jiang, Ye Tian, Zhen Jia, Shuai Zheng, Chuan Wu, Yida Wang: Lancet: Accelerating Mixture-of-Experts Training via Whole Graph Computation-Communication Overlapping. MLSys 2024
- [c21] Ye Tian, Zhen Jia, Ziyue Luo, Yida Wang, Chuan Wu: DiffusionPipe: Training Large Diffusion Models with Efficient Pipelines. MLSys 2024
- [c20] Jun Huang, Zhen Zhang, Shuai Zheng, Feng Qin, Yida Wang: DISTMM: Accelerating Distributed Multimodal Model Training. NSDI 2024
- [i21] Haozheng Fan, Hao Zhou, Guangtai Huang, Parameswaran Raman, Xinwei Fu, Gaurav Gupta, Dhananjay Ram, Yida Wang, Jun Huan: HLAT: High-quality Large Language Model Pre-trained on AWS Trainium. CoRR abs/2404.10630 (2024)
- [i20] Chenyu Jiang, Ye Tian, Zhen Jia, Shuai Zheng, Chuan Wu, Yida Wang: Lancet: Accelerating Mixture-of-Experts Training via Whole Graph Computation-Communication Overlapping. CoRR abs/2404.19429 (2024)
- [i19] Ye Tian, Zhen Jia, Ziyue Luo, Yida Wang, Chuan Wu: DiffusionPipe: Training Large Diffusion Models with Efficient Pipelines. CoRR abs/2405.01248 (2024)
- [i18] Youngsuk Park, Kailash Budhathoki, Liangfu Chen, Jonas M. Kübler, Jiaji Huang, Matthäus Kleindessner, Jun Huan, Volkan Cevher, Yida Wang, George Karypis: Inference Optimization of Foundation Models on AI Accelerators. CoRR abs/2407.09111 (2024)
- 2023
- [j3] Y. Peeta Li, Yida Wang, Nicholas B. Turk-Browne, Brice A. Kuhl, J. Benjamin Hutchinson: Perception and memory retrieval states are reflected in distributed patterns of background functional connectivity. NeuroImage 276: 120221 (2023)
- [c19] Yaoyao Ding, Cody Hao Yu, Bojian Zheng, Yizhi Liu, Yida Wang, Gennady Pekhimenko: Hidet: Task-Mapping Programming Paradigm for Deep Learning Tensor Programs. ASPLOS (2) 2023: 370-384
- [c18] Aashiq Muhamed, Christian Bock, Rahul Solanki, Youngsuk Park, Yida Wang, Jun Huan: Training Large-scale Foundation Models on Emerging AI Chips. KDD 2023: 5821-5822
- [c17] Bojian Zheng, Cody Hao Yu, Jie Wang, Yaoyao Ding, Yizhi Liu, Yida Wang, Gennady Pekhimenko: Grape: Practical and Efficient Graphed Execution for Dynamic Deep Neural Networks on GPUs. MICRO 2023: 1364-1380
- [c16] Zhuang Wang, Zhen Jia, Shuai Zheng, Zhen Zhang, Xinwei Fu, T. S. Eugene Ng, Yida Wang: GEMINI: Fast Failure Recovery in Distributed Training with In-Memory Checkpoints. SOSP 2023: 364-381
- [i17] Hongzheng Chen, Cody Hao Yu, Shuai Zheng, Zhen Zhang, Zhiru Zhang, Yida Wang: Decoupled Model Schedule for Deep Learning Training. CoRR abs/2302.08005 (2023)
- [i16] Cody Hao Yu, Haozheng Fan, Guangtai Huang, Zhen Jia, Yizhi Liu, Jie Wang, Zach Zheng, Yuan Zhou, Haichen Shen, Junru Shao, Mu Li, Yida Wang: RAF: Holistic Compilation for Deep Learning Model Training. CoRR abs/2303.04759 (2023)
- [i15] Chenyu Jiang, Zhen Jia, Shuai Zheng, Yida Wang, Chuan Wu: DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines. CoRR abs/2311.10418 (2023)
- 2022
- [j2] Zhen Zhang, Shuai Zheng, Yida Wang, Justin Chiu, George Karypis, Trishul Chilimbi, Mu Li, Xin Jin: MiCS: Near-linear Scaling for Training Gigantic Model on Public Cloud. Proc. VLDB Endow. 16(1): 37-50 (2022)
- [c15] Bojian Zheng, Ziheng Jiang, Cody Hao Yu, Haichen Shen, Joshua Fromm, Yizhi Liu, Yida Wang, Luis Ceze, Tianqi Chen, Gennady Pekhimenko: DietCode: Automatic Optimization for Dynamic Tensor Programs. MLSys 2022
- [c14] Lianmin Zheng, Zhuohan Li, Hao Zhang, Yonghao Zhuang, Zhifeng Chen, Yanping Huang, Yida Wang, Yuanzhong Xu, Danyang Zhuo, Eric P. Xing, Joseph E. Gonzalez, Ion Stoica: Alpa: Automating Inter- and Intra-Operator Parallelism for Distributed Deep Learning. OSDI 2022: 559-578
- [i14] Lianmin Zheng, Zhuohan Li, Hao Zhang, Yonghao Zhuang, Zhifeng Chen, Yanping Huang, Yida Wang, Yuanzhong Xu, Danyang Zhuo, Joseph E. Gonzalez, Ion Stoica: Alpa: Automating Inter- and Intra-Operator Parallelism for Distributed Deep Learning. CoRR abs/2201.12023 (2022)
- [i13] Zhen Zhang, Shuai Zheng, Yida Wang, Justin Chiu, George Karypis, Trishul Chilimbi, Mu Li, Xin Jin: MiCS: Near-linear Scaling for Training Gigantic Model on Public Cloud. CoRR abs/2205.00119 (2022)
- [i12] Yaoyao Ding, Cody Hao Yu, Bojian Zheng, Yizhi Liu, Yida Wang, Gennady Pekhimenko: Hidet: Task Mapping Programming Paradigm for Deep Learning Tensor Programs. CoRR abs/2210.09603 (2022)
- 2021
- [c13] Jian Weng, Animesh Jain, Jie Wang, Leyuan Wang, Yida Wang, Tony Nowatzki: UNIT: Unifying Tensorized Instruction Compilation. CGO 2021: 77-89
- [c12] Cody Hao Yu, Xingjian Shi, Haichen Shen, Zhi Chen, Mu Li, Yida Wang: Lorien: Efficient Deep Learning Workloads Delivery. SoCC 2021: 18-32
- [c11] Guangli Li, Zhen Jia, Xiaobing Feng, Yida Wang: LoWino: Towards Efficient Low-Precision Winograd Convolutions on Modern CPUs. ICPP 2021: 81:1-81:11
- [c10] Haichen Shen, Jared Roesch, Zhi Chen, Wei Chen, Yong Wu, Mu Li, Vin Sharma, Zachary Tatlock, Yida Wang: Nimble: Efficiently Compiling Dynamic Neural Networks for Model Inference. MLSys 2021
- [i11] Jian Weng, Animesh Jain, Jie Wang, Leyuan Wang, Yida Wang, Tony Nowatzki: UNIT: Unifying Tensorized Instruction Compilation. CoRR abs/2101.08458 (2021)
- [i10] Zhi Chen, Cody Hao Yu, Trevor Morris, Jorn Tuyls, Yi-Hsiang Lai, Jared Roesch, Elliott Delaye, Vin Sharma, Yida Wang: Bring Your Own Codegen to Deep Learning Compiler. CoRR abs/2105.03215 (2021)
- 2020
- [c9] Lianmin Zheng, Chengfan Jia, Minmin Sun, Zhao Wu, Cody Hao Yu, Ameer Haj-Ali, Yida Wang, Jun Yang, Danyang Zhuo, Koushik Sen, Joseph E. Gonzalez, Ion Stoica: Ansor: Generating High-Performance Tensor Programs for Deep Learning. OSDI 2020: 863-879
- [c8] Yuwei Hu, Zihao Ye, Minjie Wang, Jiali Yu, Da Zheng, Mu Li, Zheng Zhang, Zhiru Zhang, Yida Wang: FeatGraph: a flexible and efficient backend for graph neural network systems. SC 2020: 71
- [c7] Zhen Zhang, Chaokun Chang, Haibin Lin, Yida Wang, Raman Arora, Xin Jin: Is Network the Bottleneck of Distributed Training? NetAI@SIGCOMM 2020: 8-13
- [i9] Haichen Shen, Jared Roesch, Zhi Chen, Wei Chen, Yong Wu, Mu Li, Vin Sharma, Zachary Tatlock, Yida Wang: Nimble: Efficiently Compiling Dynamic Neural Networks for Model Inference. CoRR abs/2006.03031 (2020)
- [i8] Lianmin Zheng, Chengfan Jia, Minmin Sun, Zhao Wu, Cody Hao Yu, Ameer Haj-Ali, Yida Wang, Jun Yang, Danyang Zhuo, Koushik Sen, Joseph E. Gonzalez, Ion Stoica: Ansor: Generating High-Performance Tensor Programs for Deep Learning. CoRR abs/2006.06762 (2020)
- [i7] Zhen Zhang, Chaokun Chang, Haibin Lin, Yida Wang, Raman Arora, Xin Jin: Is Network the Bottleneck of Distributed Training? CoRR abs/2006.10103 (2020)
- [i6] Animesh Jain, Shoubhik Bhattacharya, Masahiro Masuda, Vin Sharma, Yida Wang: Efficient Execution of Quantized Deep Learning Models: A Compiler Approach. CoRR abs/2006.10226 (2020)
- [i5] Yuwei Hu, Zihao Ye, Minjie Wang, Jiali Yu, Da Zheng, Mu Li, Zheng Zhang, Zhiru Zhang, Yida Wang: FeatGraph: A Flexible and Efficient Backend for Graph Neural Network Systems. CoRR abs/2008.11359 (2020)
2010 – 2019
- 2019
- [c6] Leyuan Wang, Zhi Chen, Yizhi Liu, Yao Wang, Lianmin Zheng, Mu Li, Yida Wang: A Unified Optimization Approach for CNN Model Inference on Integrated GPUs. ICPP 2019: 99:1-99:10
- [c5] Yizhi Liu, Yao Wang, Ruofei Yu, Mu Li, Vin Sharma, Yida Wang: Optimizing CNN Model Inference on CPUs. USENIX ATC 2019: 1025-1040
- [i4] Leyuan Wang, Zhi Chen, Yizhi Liu, Yao Wang, Lianmin Zheng, Mu Li, Yida Wang: A Unified Optimization Approach for CNN Model Inference on Integrated GPUs. CoRR abs/1907.02154 (2019)
- 2018
- [i3] Linpeng Tang, Yida Wang, Theodore L. Willke, Kai Li: Scheduling Computation Graphs of Deep Learning Models on Manycore CPUs. CoRR abs/1807.09667 (2018)
- [i2] Yizhi Liu, Yao Wang, Ruofei Yu, Mu Li, Vin Sharma, Yida Wang: Optimizing CNN Model Inference on CPUs. CoRR abs/1809.02697 (2018)
- 2017
- [j1] Krzysztof J. Gorgolewski, Fidel Alfaro-Almagro, Tibor Auer, Pierre Bellec, Mihai Capota, M. Mallar Chakravarty, Nathan William Churchill, Alexander Li Cohen, R. Cameron Craddock, Gabriel A. Devenyi, Anders Eklund, Oscar Esteban, Guillaume Flandin, Satrajit S. Ghosh, J. Swaroop Guntupalli, Mark Jenkinson, Anisha Keshavan, Gregory Kiar, Franziskus Liem, Pradeep Reddy Raamana, David Raffelt, Christopher John Steele, Pierre-Olivier Quirion, Robert E. Smith, Stephen C. Strother, Gaël Varoquaux, Yida Wang, Tal Yarkoni, Russell A. Poldrack: BIDS apps: Improving ease of use, accessibility, and reproducibility of neuroimaging data analysis methods. PLoS Comput. Biol. 13(3) (2017)
- [c4] Dipanjan Sengupta, Yida Wang, Narayanan Sundaram, Theodore L. Willke: High-Performance Incremental SVM Learning on Intel® Xeon Phi™ Processors. ISC 2017: 120-138
- 2016
- [b1] Yida Wang: Large-scale analyses of functional interactions in the human brain. Princeton University, USA, 2016
- [c3] Michael J. Anderson, Mihai Capota, Javier S. Turek, Xia Zhu, Theodore L. Willke, Yida Wang, Po-Hsuan Chen, Jeremy R. Manning, Peter J. Ramadge, Kenneth A. Norman: Enabling factor analysis on thousand-subject neuroimaging datasets. IEEE BigData 2016: 1151-1160
- [c2] Yida Wang, Bryn Keller, Mihai Capota, Michael J. Anderson, Narayanan Sundaram, Jonathan D. Cohen, Kai Li, Nicholas B. Turk-Browne, Theodore L. Willke: Real-time full correlation matrix analysis of fMRI data. IEEE BigData 2016: 1242-1251
- [i1] Michael J. Anderson, Mihai Capota, Javier S. Turek, Xia Zhu, Theodore L. Willke, Yida Wang, Po-Hsuan Chen, Jeremy R. Manning, Peter J. Ramadge, Kenneth A. Norman: Enabling Factor Analysis on Thousand-Subject Neuroimaging Datasets. CoRR abs/1608.04647 (2016)
- 2015
- [c1] Yida Wang, Michael J. Anderson, Jonathan D. Cohen, Alexander Heinecke, Kai Li, Nadathur Satish, Narayanan Sundaram, Nicholas B. Turk-Browne, Theodore L. Willke: Full correlation matrix analysis of fMRI data on Intel® Xeon Phi™ coprocessors. SC 2015: 23:1-23:12