Distinguished ACM Speaker:
Based in China
Yu Wang received his B.S. degree in 2002 and his Ph.D. degree (with honors) in 2007, both from Tsinghua University, Beijing. He is currently an Associate Professor in the Department of Electronic Engineering, Tsinghua University.
His research interests include brain-inspired computing, application-specific hardware computing, parallel circuit analysis, and power/reliability-aware system design methodology. Dr. Wang has authored or coauthored over 130 papers in refereed journals and conferences. He received the Best Paper Award at ISVLSI 2012 and the Best Poster Award at HEART 2012, along with six Best Paper Nominations (ASPDAC 2014, ASPDAC 2012, two at ASPDAC 2010, ISLPED 2009, CODES 2009). He is a recipient of the IBM X10 Faculty Award in 2010 (one of 30 worldwide). He served as TPC Chair for ICFPT 2011 and Finance Chair of ISLPED 2012-2016, and has served as a program committee member for leading conferences in these areas, including top EDA conferences such as DAC, DATE, ICCAD, and ASP-DAC, and top FPGA conferences such as FPGA and FPT. He currently serves as Associate Editor for IEEE Transactions on CAD and the Journal of Circuits, Systems, and Computers. He also serves as guest editor for Integration, the VLSI Journal and IEEE Transactions on Multi-Scale Computing Systems. He has given 23 invited talks in industry and academia. He is an ACM member and an IEEE Senior Member.
- Deep Learning on Hardware: Compression and Acceleration:
Deep learning, and especially convolutional neural networks (CNNs), have been among the most successful and powerful techniques in computer vision. Applications of CNNs range from visual recognition to image classification,
- Hardware Acceleration for Big Data Era:
Integrating more processing elements and memory is an important way to use additional transistors, so that a single IC can provide more functions. However, how to map different applications to multi/many-core
systems or dire...
- Neural Network on Emerging Memory: From Applications to Circuits on ReRAM:
The world is experiencing a data revolution to discover knowledge in big data. Large-scale neural networks are one of the mainstream tools of big data analytics. However, these methods consume much more energy for