ACM Distinguished Speakers Program: talks by and with technology leaders and innovators

Computing in Memory for Data-Intensive Applications

Speaker: Swarup Bhunia
Topic(s): Computer Systems, Embedded Computer Systems, Emerging Technologies

 


Abstract
Energy efficiency has emerged as a major barrier to performance scalability for applications that handle large volumes of data, including analytics and informatics workloads. For these applications, energy dissipation is dominated by the transport of data from off-chip memory to on-chip computing elements – a limitation referred to as the von Neumann bottleneck. In this scenario, traditional approaches such as parallel computing or hardware acceleration inside a processor bring only minor improvements in total energy and throughput. Hence, there is a critical need to develop efficient hardware accelerators for the ever-growing set of data-intensive applications. This talk focuses on a novel, scalable, memory-centric reconfigurable accelerator architecture, referred to as MAHA, for data-intensive applications, together with an application-mapping software framework tailored to the features of the architectural fabric. MAHA is a spatio-temporal, mixed-granularity reconfigurable hardware framework that uses memory for both storage and computation (hence, malleable). It exploits the high density and low access time/energy of nanoscale memory and implements a distinct instruction-set architecture optimized for data-intensive applications, including support for lookup and complex fused operations.
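
The informal intuition behind lookup-based computation is that a dense memory array can itself act as a compute element: an operation is precomputed into a table, so evaluating it becomes a single memory read rather than a sequence of arithmetic steps. The short Python sketch below illustrates that idea in software only; it is an analogy for intuition, not the actual MAHA hardware, instruction set, or mapping framework, and the 8-bit operand width and the example "fused" operation (square, then add a constant) are assumptions made for this sketch.

    # Illustrative software analogy of lookup-based computation.
    # Assumptions: an 8-bit operand and a made-up fused operation
    # (square, then add a constant); neither comes from the talk.

    def build_lut(func, input_bits=8):
        """Precompute func over the full range of an n-bit operand."""
        return [func(x) for x in range(1 << input_bits)]

    # The fused operation is stored as a 256-entry table held in memory.
    FUSED_LUT = build_lut(lambda x: (x * x + 7) & 0xFF)

    def fused_op(x):
        # One memory access replaces a multiply and an add.
        return FUSED_LUT[x & 0xFF]

    print([fused_op(v) for v in (0, 1, 5, 200)])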

The talk will cover the application of MAHA to text analytics applications such as Lucene, as well as to several common analytics kernels (such as the naive Bayes classifier and k-means clustering), and will discuss the effect of in-accelerator compression/decompression on further improving energy efficiency in MAHA. Finally, we will present the development of a novel Multifunctional Memory (MFM) unit, in which a high-density two-dimensional memory array can be configured, at design time or at run time, to realize different operating modes, including neuromorphic computing. Such a memory-centric computing fabric provides high flexibility and energy efficiency for data-intensive applications by customizing itself to application requirements. Applications of the computing-in-memory paradigm to emerging non-volatile memory technologies, including resistive and spintronic memories, will also be discussed.
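
To make the data-movement argument concrete, the sketch below shows the assignment/update loop of k-means clustering, one of the analytics kernels mentioned above, in plain Python. Each iteration streams every point from memory, which is exactly the traffic that dominates energy on a conventional processor and that a memory-centric fabric such as MAHA aims to reduce. This is an illustrative kernel written for this page, not code from the talk; the function and variable names are hypothetical.

    import random

    def kmeans(points, k, iters=10):
        """k-means on 2-D points; returns the final centroids."""
        centroids = random.sample(points, k)
        for _ in range(iters):
            clusters = [[] for _ in range(k)]
            for p in points:  # data-intensive pass over the whole data set
                d = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids]
                clusters[d.index(min(d))].append(p)
            for i, cl in enumerate(clusters):  # recompute each centroid
                if cl:
                    centroids[i] = (sum(x for x, _ in cl) / len(cl),
                                    sum(y for _, y in cl) / len(cl))
        return centroids

    pts = ([(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(200)]
           + [(random.gauss(5, 1), random.gauss(5, 1)) for _ in range(200)])
    print(kmeans(pts, 2))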

 


About this Lecture

Number of Slides: 50
Duration: 75 minutes
Languages Available: English
Last Updated: 09-12-2017
Request this Lecture

To request this particular lecture, please complete this online form.
Request a Tour

To request a tour with this speaker, please complete this online form.


All requests will be sent to ACM headquarters for review.