Yusuke Nagasaka

Profile

I am a Ph.D. student in the Satoshi Matsuoka Laboratory, Dept. of Mathematical and Computing Science, Tokyo Institute of Technology.

Research Interests

Acceleration of Sparse Matrix Computation on Many-core Processors

Sparse Matrix Vector Multiplication (SpMV)

Sparse General Matrix-Matrix Multiplication (SpGEMM)

Publication

Lists

Google Scholar

T2R2 (Tokyo Tech Research Repository)

Papers

Yusuke Nagasaka, Akira Nukada and Satoshi Matsuoka, "Cache-aware sparse matrix formats for Kepler GPU", 20th IEEE International Conference on Parallel and Distributed Systems (ICPADS), Hsinchu, Taiwan, December 2014. [DOI] [Slides]

Yusuke Nagasaka, Akira Nukada and Satoshi Matsuoka, "Adaptive Multi-level Blocking Optimization for Sparse Matrix Vector Multiplication on GPU", International Conference on Computational Science 2016 (ICCS 2016), San Diego, California, USA, June 2016. [DOI] [Slides]

Yusuke Nagasaka, Akira Nukada and Satoshi Matsuoka, "High-performance and Memory-saving Sparse General Matrix-Matrix Multiplication for NVIDIA Pascal GPU", International Conference on Parallel Processing 2017 (ICPP 2017), Bristol, UK, August 2017. [DOI] [Slides]

Yusuke Nagasaka, Akira Nukada and Satoshi Matsuoka, "Performance Evaluation of a Cache-aware Sparse Matrix-Vector Multiplication Method for GPUs" (in Japanese), IPSJ SIG Technical Report HPC-144, Kanagawa, Japan, May 2014. [DOI]

Yusuke Nagasaka, Akira Nukada and Satoshi Matsuoka, "A Memory Access Reduction Method for Sparse Matrix-Vector Multiplication on GPUs" (in Japanese), IPSJ SIG Technical Report HPC-151, Okinawa, Japan, September 2015. [DOI]

Yusuke Nagasaka, Akira Nukada and Satoshi Matsuoka, "GPU Acceleration of Sparse Matrix-Matrix Multiplication with Reduced Memory Usage" (in Japanese), IPSJ SIG Technical Report HPC-156, Hokkaido, Japan, September 2016. [DOI]

Posters

Yusuke Nagasaka, Akira Nukada and Satoshi Matsuoka, "Cache-aware Sparse Matrix Format for GPU", International Supercomputing Conference (ISC'14) HPC in Asia Posters, Leipzig, Germany, June 2014.

Yusuke Nagasaka, Akira Nukada and Satoshi Matsuoka, "Multi-Level Blocking Optimization for Fast Sparse Matrix Vector Multiplication on GPUs", The International Conference for High Performance Computing, Networking, Storage and Analysis (SC15) Technical Program Posters, Austin, Texas, USA, November 2015. [Link]

Yusuke Nagasaka, "Fast Sparse Matrix Vector Multiplication with Highly-Compressed Sparse Format", GPU Technology Conference (GTC2016), San Jose, CA, USA, April 2016. [Link]

Yusuke Nagasaka, Akira Nukada and Satoshi Matsuoka, "Fast Sparse General Matrix-Matrix Multiplication on GPU with Low Memory Usage", The International Conference for High Performance Computing, Networking, Storage and Analysis (SC16) Technical Program Posters, Salt Lake City, Utah, USA, November 2016. [Link]

Yusuke Nagasaka, "Fast and Memory-saving SpGEMM Algorithm for New Pascal Generation GPU", GPU Technology Conference (GTC2017), San Jose, CA, USA, May 2017. [Link]

Yusuke Nagasaka, "Performance Evaluation of a Column-partitioned Sparse Matrix Format Considering Cache Reuse on GPUs" (in Japanese), GPU Technology Conference Japan (GTC Japan 2014), Tokyo, Japan, July 2014.

Yusuke Nagasaka, "Performance Evaluation of a Sparse Matrix-Vector Multiplication Method for GPUs Using Multi-level Blocking to Reduce Memory Access" (in Japanese), GPU Technology Conference Japan (GTC Japan 2015), Tokyo, Japan, September 2015.

Talk

Yusuke Nagasaka, "Exploiting GPU Caches in Sparse Matrix Vector Multiplication", GPU Technology Conference (GTC2015), San Jose, CA, USA, March 2015. [Link]

Award

FY2015 IPSJ Computer Science Research Award for Young Scientists. [Link]

Software

nsparse

A fast sparse matrix library for GPUs, supporting SpMV with the AMB format and hash-table-based SpGEMM.

[GitHub Link]

Experiences

Teaching Assistant

2016

Teaching assistant for courses on computer systems and high-performance computing at Tokyo Institute of Technology.

Internship

May - July 2017

Summer internship at Lawrence Berkeley National Laboratory (LBNL).

Education

Bachelor of Science

Tokyo Institute of Technology, Dept. of Information Science

April 2010 - March 2014

Bachelor Thesis: "Cache-aware GPU Acceleration of SpMV" (in Japanese)
Supervisor: Satoshi Matsuoka

Master of Science

Tokyo Institute of Technology, Dept. of Mathematical and Computing Sciences

April 2014 - March 2016

Master Thesis: "Accelerating Sparse Matrix-Vector Multiplication for Many-core Processors by Reducing Memory Access" (in Japanese)
Supervisor: Satoshi Matsuoka

Ph.D. Course

Tokyo Institute of Technology, Dept. of Mathematical and Computing Science

April 2016 -

Contact

E-mail: nagasaka.y.aa [at] m.titech.ac.jp