1. COMHPC@SC 2016: Salt Lake City, UT, USA
- First International Workshop on Communication Optimizations in HPC, COMHPC@SC 2016, Salt Lake City, UT, USA, November 18, 2016. IEEE 2016, ISBN 978-1-5090-3829-9
- Richard L. Graham, Devendar Bureddy, Pak Lui, Hal Rosenstock, Gilad Shainer, Gil Bloch, Dror Goldenberg, Mike Dubman, Sasha Kotchubievsky, Vladimir Koushnir, Lion Levi, Alex Margolin, Tamir Ronen, Alexander Shpiner, Oded Wertheim, Eitan Zahavi: Scalable Hierarchical Aggregation Protocol (SHArP): A Hardware Architecture for Efficient Data Reduction. 1-10
- D. Brian Larkins, James Dinan: Extending a Message Passing Runtime to Support Partitioned, Global Logical Address Spaces. 11-16
- Cy P. Chan, John D. Bachan, Joseph P. Kenny, Jeremiah J. Wilke, Vincent E. Beckner, Ann S. Almgren, John B. Bell: Topology-Aware Performance Optimization and Modeling of Adaptive Mesh Refinement Codes for Exascale. 17-28
- Ching-Hsiang Chu, Khaled Hamidouche, Hari Subramoni, Akshay Venkatesh, Bracy Elton, Dhabaleswar K. Panda: Efficient Reliability Support for Hardware Multicast-Based Broadcast in GPU-enabled Streaming Applications. 29-38
- Grey Ballard, James Demmel, Andrew Gearhart, Benjamin Lipshitz, Yishai Oltchik, Oded Schwartz, Sivan Toledo: Network Topologies and Inevitable Contention. 39-52
- Huansong Fu, Swaroop Pophale, Manjunath Gorentla Venkata, Weikuan Yu: DISP: Optimizations towards Scalable MPI Startup. 53-62
- Emmanuel Jeannot, Guillaume Mercier, Francois Tessier: Topology and Affinity Aware Hierarchical and Distributed Load-Balancing in Charm++. 63-72
- Francois Tessier, Preeti Malakar, Venkatram Vishwanath, Emmanuel Jeannot, Florin Isaila: Topology-Aware Data Aggregation for Intensive I/O on Large-Scale Supercomputers. 73-81