Krishna Tapped for Google Faculty Research Award

Tushar Krishna has been named a recipient of a Google Faculty Research Award (FRA). Krishna is an assistant professor in the Georgia Tech School of Electrical and Computer Engineering (ECE).

The Google FRA program funds world-class technical research in Computer Science, Engineering, and related fields. Of the 910 proposals submitted this year from over 320 universities in 40 countries, 158 projects were selected for funding. The goal of the Google FRA is to identify and strengthen long-term collaborative relationships with faculty working on problems that will impact how future generations use technology. The award is highly competitive, with only 15 percent of applicants receiving funding, and each proposal goes through a rigorous Google-wide review process.

The title of Krishna’s award-winning proposal was "Using ML to Design ML Accelerators." With the end of Moore’s Law, the performance of conventional CPUs has stagnated. The growing computing demands of applications such as Machine Learning (ML) have led to an explosion of custom hardware accelerators, across several companies and startups, for running ML algorithms at real-time latency and high energy efficiency.

A key challenge, however, is that the design space of these accelerators is vast: it spans an expanding and rapidly evolving suite of Artificial Intelligence/ML models, different requirements for training versus inference, a plethora of dataflow approaches for minimizing data movement, and varying area-power budgets depending on the target device. Krishna’s proposal seeks to enable rapid design and deployment of ML accelerators by leveraging ML algorithms to efficiently represent and search through this hardware design space.
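The flavor of such a search can be seen in a toy version of the problem. The sketch below is purely illustrative and is not Krishna's actual method: the parameters (processing-element counts, buffer sizes, dataflows) and the cost model are all invented for the example. It randomly samples design points and keeps the cheapest one; real design spaces are far too large to enumerate, which is what motivates learned search strategies.

```python
import random

random.seed(0)

# Hypothetical accelerator design space (all values invented for illustration).
PES = [64, 128, 256, 512]            # number of processing elements
BUFFERS_KB = [128, 256, 512, 1024]   # on-chip buffer size in KB
DATAFLOWS = ["weight-stationary", "output-stationary", "row-stationary"]

def cost(design):
    """Stand-in for a real cost model (e.g., latency/energy from a simulator)."""
    pes, buf_kb, flow = design
    dataflow_penalty = {"weight-stationary": 100,
                        "output-stationary": 80,
                        "row-stationary": 60}[flow]
    # Invented trade-off: more PEs cut compute time; bigger buffers cost area.
    return 1e6 / pes + 0.5 * buf_kb + dataflow_penalty

# Random search: this toy space has only 48 points, but real accelerator
# design spaces are astronomically larger, so sampling (or a learned search
# policy) replaces exhaustive enumeration.
samples = [(random.choice(PES), random.choice(BUFFERS_KB), random.choice(DATAFLOWS))
           for _ in range(20)]
best = min(samples, key=cost)
```

Swapping the `cost` stub for a real simulator, and the random sampler for an ML-guided one, is the essence of using ML to design ML accelerators.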

An ECE faculty member since 2015, Krishna leads the Synergy Lab at Georgia Tech, where he and his team focus on architecting next-generation intelligent computer systems and interconnection networks for emerging application areas such as machine learning. Krishna also received an NSF CISE Research Initiation Initiative Award in 2018. One of his recent papers was selected as an IEEE Micro Top Pick from the computer architecture conferences, and a second received an Honorable Mention; both will be recognized in the May/June 2019 issue of IEEE Micro.

Krishna Wins Facebook Research Faculty Award for Second Straight Year

Tushar Krishna has been chosen as a recipient of the Facebook Research Faculty Award for AI System Hardware/Software Co-Design. Krishna was among nine winners selected from 132 submissions worldwide. This is the second year in a row that Krishna has won this award.

The title of Krishna’s award-winning project is “HW/SW co-design of next-generation training platforms for DLRMs.” DLRM stands for Deep Learning Recommendation Model; DLRMs power online recommendation systems such as the ranking of search queries on Google, friend suggestions on Facebook, and job advertisements on LinkedIn. DLRMs differ markedly from the deep learning models used for computer vision and natural language processing because they involve both continuous (or dense) features and categorical (or sparse) features. For example, the date and time of a user’s clicks on a webpage can serve as dense features, while a representation of the user based on all the webpages he or she visited in the past 48 hours can serve as sparse features for training recommendation models. The dense features are processed with multilayer perceptrons (MLPs), while the sparse features are processed using a technique called embeddings.
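The dense/sparse split can be sketched in a few lines of code. This is a minimal illustration, not Facebook's DLRM implementation: all sizes, the tiny MLP, and the sum-pooled embedding lookup are assumptions chosen for brevity (production DLRMs use learned weights, many embedding tables, and a pairwise-interaction stage).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, chosen only for illustration.
NUM_DENSE = 4       # continuous features, e.g., click timestamps
VOCAB_SIZE = 1000   # distinct categorical IDs, e.g., webpages a user visited
EMBED_DIM = 8       # width of each embedding vector

# Sparse (categorical) features: each ID indexes a learned row of a table.
embedding_table = rng.normal(size=(VOCAB_SIZE, EMBED_DIM))

def mlp(x, weights):
    """A minimal multilayer perceptron: linear layers with ReLU activations."""
    for w in weights:
        x = np.maximum(x @ w, 0.0)
    return x

# Dense (continuous) features go through a "bottom" MLP.
bottom_weights = [rng.normal(size=(NUM_DENSE, 16)),
                  rng.normal(size=(16, EMBED_DIM))]

def dlrm_forward(dense_features, sparse_ids):
    dense_out = mlp(dense_features, bottom_weights)       # shape (EMBED_DIM,)
    sparse_out = embedding_table[sparse_ids].sum(axis=0)  # pooled lookup
    # Combine the two branches; real DLRMs compute pairwise feature
    # interactions here before a final "top" MLP produces a click probability.
    return np.concatenate([dense_out, sparse_out])

features = dlrm_forward(rng.normal(size=NUM_DENSE), np.array([3, 17, 42]))
```

The key observation for what follows is that the model's parameters live almost entirely in `embedding_table`, which grows with the number of categorical IDs rather than with the depth of the network.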

Training DLRMs constitutes more than 50 percent of the training demand at companies like Facebook. This is because storing the embeddings requires significant memory capacity, on the order of hundreds of gigabytes to a few terabytes, which is more than the memory available on a single accelerator (GPU or TPU) node. DLRMs therefore require clever partitioning and distribution of the model across multiple accelerator nodes, which in turn makes it crucial to optimize the communication between these nodes to reduce overall training time.
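A back-of-the-envelope calculation shows why a single node cannot hold the embeddings. Every number below is an illustrative assumption, not a figure from the article or from Facebook's production systems.

```python
# Assumed embedding-table dimensions (illustrative only).
rows = 1_000_000_000    # one billion categorical IDs
dim = 128               # embedding width
bytes_per_value = 4     # fp32

# Total table size: 1e9 rows * 128 values * 4 bytes = 512 GB.
table_gb = rows * dim * bytes_per_value / 1e9

# Assumed single-accelerator memory capacity.
node_memory_gb = 80

# Minimum number of nodes needed just to hold the table (ceiling division).
nodes_needed = -(-table_gb // node_memory_gb)
```

Even this single assumed table needs seven nodes' worth of memory, before counting activations, optimizer state, or the other tables a production model carries, which is why partitioning and inter-node communication dominate DLRM training cost.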

As part of the award, Krishna will explore mechanisms for efficient distributed training of recommendation models. The research will develop co-design techniques spanning software and hardware to enable scalability across hundreds to thousands of accelerator nodes. The effort will leverage ASTRA-sim, a distributed DL training simulator developed by Krishna and his Ph.D. student Saeed Rashidi in collaboration with Facebook and Intel.

Krishna is an assistant professor in the School of Electrical and Computer Engineering at Georgia Tech, where he holds the ON Semiconductor Junior Professorship. He received a Ph.D. in Electrical Engineering and Computer Science from MIT (2014), an M.S.E. in Electrical Engineering from Princeton University (2009), and a B.Tech. in Electrical Engineering from the Indian Institute of Technology, Delhi (2007). Krishna’s research spans computer architecture, interconnection networks, networks-on-chip (NoC), and deep learning accelerators, with a focus on optimizing data movement in modern computing systems. Three of his papers have been selected as IEEE Micro Top Picks from Computer Architecture, one more received an Honorable Mention, and three have won best paper awards. He received the National Science Foundation CRII award in 2018 and both a Google Faculty Award and a Facebook Faculty Award in 2019.
