Yu to Receive Inaugural SRC Young Faculty Award

Shimeng Yu has been named the recipient of the inaugural Semiconductor Research Corporation (SRC) Young Faculty Award. 

An associate professor in the Georgia Tech School of Electrical and Computer Engineering (ECE), Yu will be presented with the award at the annual SRC TECHCON meeting, to be held September 9-10, 2019, in Austin, Texas. He is also a member of the Institute for Electronics and Nanotechnology. The new award recognizes an untenured full-time faculty member who, as a principal investigator (PI) or co-principal investigator, is working on research that greatly enriches the SRC research agenda.

Yu joined Tech’s ECE faculty in August 2018 and leads the Laboratory for Emerging Devices and Circuits. He is involved in several SRC projects: 

  • Yu is a member of the Applications and Systems-Driven Center for Energy-Efficient Integrated NanoTechnologies (ASCENT), which is part of the SRC/DARPA Joint University Microelectronics Program. ASCENT's mission is to provide breakthrough advances in integrated nanoelectronics to sustain the promise of Moore’s Law. Led by the University of Notre Dame with 13 partner universities and 29 principal investigators, the center is funded at $49 million over five years. Within ASCENT, Yu develops emerging nanoelectronic devices that emulate synapses and neurons to build hardware platforms for machine learning and neuromorphic computing. 
  • He is a member of the SRC nanoelectronic COmputing REsearch (nCORE) program, in particular the Energy-Efficient Computing: From Devices to Architectures (E2CDA) program. In this effort, jointly funded with the National Science Foundation, Yu is developing a software simulation framework to benchmark the impact of emerging device technologies on artificial intelligence across the stack, from algorithms, computer architecture, and circuit and chip design down to devices and materials. 
  • Yu is a PI in the SRC Global Research Collaboration (GRC) program on a hardware security project. He and his colleagues are using emerging nanoelectronic devices to design microchip fingerprints for authentication and encryption. 

The SRC is a global industrial technology research consortium. With its highly regarded university research programs, SRC plays an indispensable part in the R&D strategies of some of industry's most influential companies. SRC members include Intel, IBM, Micron, Samsung, ARM, and Taiwan Semiconductor Manufacturing Company, Ltd.

Krishna Named to ON Semiconductor Junior Professorship

Tushar Krishna has been appointed to the ON Semiconductor Junior Professorship, effective September 1, 2019. A professorship for untenured faculty members in the Georgia Tech School of Electrical and Computer Engineering (ECE), this position was previously held by ECE Professor Arijit Raychowdhury. 

Krishna joined the ECE faculty in August 2015, after working as a postdoctoral researcher at MIT and a research engineer at Intel in Hudson, Massachusetts. He is a member of the Computer Systems and Software technical interest group and holds an adjunct faculty appointment with Tech’s School of Computer Science. 

Krishna’s research spans the areas of computer architecture, interconnection networks, networks-on-chip (NoC), and deep learning accelerators. Across these areas, he and his team of seven graduate students in the Synergy Lab focus on optimizing data movement in modern computing systems. Krishna has graduated four master’s degree students and has also had several undergraduate researchers working in his lab. An excellent classroom instructor, he teaches Advanced Computer Architecture; Architecture, Concurrency, and Energy in Computation; Interconnection Networks; and Hardware Accelerators for Machine Learning.

Krishna received the B.Tech. degree in Electrical Engineering with honors from the Indian Institute of Technology Delhi in 2007, the M.S.E. degree in Electrical Engineering from Princeton University in 2009, and the Ph.D. degree in Electrical Engineering and Computer Science from MIT in 2014. 

Krishna has published more than 50 refereed journal and conference papers. In 2018, he won the NSF CISE Research Initiation Initiative Award, and in 2019, he won both the Google Faculty Research Award and Facebook Research’s Faculty Award for AI System Hardware/Software Co-Design. Earlier this year, Krishna had one of his papers selected as an IEEE Micro Top Pick and a second paper was chosen as an Honorable Mention in the May/June 2019 issue of the journal. 

Krishna’s Research to be Featured in IEEE Micro Top Picks Issue

Tushar Krishna will have one of his recent research papers featured in IEEE Micro’s “Top Picks from Computer Architecture Conferences” issue, to be published in May/June 2020. 

Krishna is an assistant professor in the Georgia Tech School of Electrical and Computer Engineering, where he leads the Synergy Lab. This is the second year in a row that one of Krishna’s papers has been chosen as an IEEE Micro Top Pick.

Every year, IEEE Micro publishes this special issue, which recognizes the year’s top papers in computer architecture that have potential for long-term impact. To be considered for a top pick, a paper must first have been accepted at one of the year's major computer architecture conferences, which typically have acceptance rates of roughly 18-22 percent. Out of 96 submissions this year, twelve were selected as "Top Picks." 

Krishna's paper is titled "Understanding Reuse, Performance, and Hardware Cost of DNN Dataflows: A Data-Centric Approach.” The co-authors are his Ph.D. student Hyoukjun Kwon; Vivek Sarkar, a professor in the School of Computer Science; Sarkar's Ph.D. student Prasanth Chatarasi; and two NVIDIA collaborators, Michael Pellauer and Angshuman Parashar. 

Deep learning is being deployed at increasing scale, across both cloud and IoT platforms, to solve complex regression and classification problems in image recognition, speech recognition, language translation, and many other fields, with accuracy approaching and in some cases surpassing that of humans. Tight latency, throughput, and energy constraints when running Deep Neural Networks (DNNs) have led to a rapid rise in specialized hardware, known as accelerators, built to run them.

Running DNNs efficiently is challenging for two reasons. First, today's DNNs are massive and require billions of computations. Second, they have millions of inputs and weights that must be moved from memory to the accelerator chip, and this data movement consumes orders of magnitude more energy than the computation itself. DNN accelerators address these challenges by mapping computations in parallel across hundreds of processing elements to improve performance and by reusing inputs and weights on-chip across multiple outputs to improve energy efficiency. Unfortunately, there can be trillions of ways of slicing and dicing a DNN to map it over the finite compute and memory resources within an accelerator; each such mapping is known as a "dataflow."
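
To make the idea of a dataflow concrete, here is a toy sketch (not taken from the paper; the matrix-multiply stand-in for a DNN layer, the tile size, and the single-panel buffer model are illustrative assumptions) that counts how often one operand must be re-fetched from off-chip memory under two different loop orders for the same computation.

```python
# Toy model: C[M,N] = A[M,K] @ B[K,N] stands in for a DNN layer. The on-chip
# buffer holds one K x TILE panel of B at a time; we count how many B elements
# must be fetched from off-chip memory under two different loop orders.
M, K, N = 64, 64, 64      # layer dimensions (illustrative)
TILE = 8                  # width of the B panel kept on-chip

def b_fetches(b_stationary: bool) -> int:
    """Count off-chip fetches of B for one choice of loop order ("dataflow")."""
    m_tiles = range(0, M, TILE)
    n_tiles = range(0, N, TILE)
    if b_stationary:   # keep one B panel on-chip and reuse it across all rows of A
        schedule = [(m, n) for n in n_tiles for m in m_tiles]
    else:              # visit output tiles row by row, so the B panel changes every step
        schedule = [(m, n) for m in m_tiles for n in n_tiles]
    fetches, cached_panel = 0, None
    for _, n in schedule:
        if n != cached_panel:        # panel miss: load K x TILE elements of B
            fetches += K * TILE
            cached_panel = n
    return fetches

print("B fetches, B-stationary order :", b_fetches(True))    # 4096
print("B fetches, output-row order   :", b_fetches(False))   # 32768
```

Even in this tiny example the two orderings differ by 8x in off-chip traffic for a single operand; a real accelerator must also choose tile sizes, parallelization across processing elements, and the loop order for every operand, which is what produces the enormous mapping space described above.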

Krishna’s paper demonstrates a principled approach and framework called MAESTRO to estimate data reuse, performance, power, and area of DNN dataflows. MAESTRO enables rapid design-space exploration of DNN accelerator architectures and mapping strategies, depending on the target DNNs or domain (cloud or IoT). MAESTRO is available as an open-source tool at http://synergy.ece.gatech.edu/tools/maestro, and it has already seen adoption within NVIDIA, Facebook, and Sandia National Labs.

Contact

Jackie Nemeth
School of Electrical and Computer Engineering
jackie.nemeth@ece.gatech.edu
404-894-2906
Atlanta, GA

Krishna Wins Facebook Research Faculty Award for Second Straight Year

Tushar Krishna has been chosen as one of the recipients of the Facebook Research Faculty Award for AI System Hardware/Software Co-Design. Krishna was among the nine winners who were selected from 132 worldwide submissions. This is the second year in a row that Krishna has won this award.

The title of Krishna’s award-winning project is “HW/SW co-design of next-generation training platforms for DLRMs.” DLRMs, or Deep Learning Recommendation Models, power online recommendation systems such as the ranking of search queries in Google, friend suggestions on Facebook, and job advertisements on LinkedIn. DLRMs differ substantially from the deep learning models used for computer vision and natural language processing because they involve both continuous (or dense) features and categorical (or sparse) features. For example, the date and time of a user's clicks on a webpage can serve as dense features, while a representation of the user based on all the webpages they visited in the past 48 hours can serve as sparse features for training recommendation models. The dense features are processed with multilayer perceptrons (MLPs), while the sparse features are processed using a technique called embeddings.
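
As a rough illustration of that structure, below is a minimal sketch of a DLRM-style model in PyTorch; the layer sizes, feature counts, and the simple concatenation used to combine dense and sparse representations are assumptions for illustration, not Facebook's actual architecture.

```python
import torch
import torch.nn as nn

class TinyDLRM(nn.Module):
    """Toy recommendation model: dense features -> MLP, sparse features -> embeddings."""
    def __init__(self, num_dense=4, vocab_sizes=(1000, 500), dim=16):
        super().__init__()
        # Continuous (dense) features pass through a small "bottom" MLP.
        self.bottom_mlp = nn.Sequential(nn.Linear(num_dense, dim), nn.ReLU())
        # Each categorical (sparse) feature gets its own embedding table.
        self.tables = nn.ModuleList(nn.Embedding(v, dim) for v in vocab_sizes)
        # The combined representation feeds a "top" MLP that predicts a click probability.
        self.top_mlp = nn.Sequential(
            nn.Linear(dim * (1 + len(vocab_sizes)), 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, dense, sparse_ids):
        parts = [self.bottom_mlp(dense)]                  # (batch, dim)
        for table, ids in zip(self.tables, sparse_ids):
            parts.append(table(ids))                      # embedding lookup, (batch, dim)
        return torch.sigmoid(self.top_mlp(torch.cat(parts, dim=1)))

model = TinyDLRM()
dense = torch.rand(8, 4)                                   # 8 samples, 4 dense features
sparse = [torch.randint(1000, (8,)), torch.randint(500, (8,))]
print(model(dense, sparse).shape)                          # torch.Size([8, 1])
```

The embedding lookups are where the memory pressure comes from: in production models each table can hold many millions of rows, which is what drives the distributed training setup described next.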

Training DLRMs constitutes more than 50 percent of the training demand at companies like Facebook. Storing the embeddings requires significant memory capacity, on the order of hundreds of gigabytes to a few terabytes, which is more than the memory available on a single accelerator (GPU or TPU) node. DLRMs therefore require clever partitioning and distribution of the model across multiple accelerator nodes, which in turn makes it crucial to optimize the communication between those nodes to reduce overall training time.
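
A minimal sketch of that partitioning is below; the table sizes, node count, and simple greedy placement are made-up assumptions, not Facebook's production scheme, but they show both the capacity motivation and the per-batch communication that sharding creates.

```python
# Hypothetical embedding tables: (name, rows, embedding dim); bytes = rows * dim * 4.
TABLES = [("user", 800_000_000, 64), ("item", 400_000_000, 64),
          ("page", 200_000_000, 64), ("query", 100_000_000, 64)]
NUM_NODES = 4             # accelerator (GPU/TPU) nodes available for training

total = sum(r * d * 4 for _, r, d in TABLES)
print(f"total embedding memory: {total / 1e9:.0f} GB (far beyond one accelerator node)")

# Greedy sharding: place each table, largest first, on the least-loaded node.
# (Real systems may also split a single oversized table across several nodes.)
node_bytes = [0] * NUM_NODES
placement = {}
for name, rows, dim in sorted(TABLES, key=lambda t: -t[1] * t[2]):
    node = min(range(NUM_NODES), key=lambda n: node_bytes[n])
    placement[name] = node
    node_bytes[node] += rows * dim * 4

for n, b in enumerate(node_bytes):
    print(f"node {n}: {b / 1e9:.1f} GB of embeddings")

# Every node's batch shard needs lookups from every table, so indices and
# embedding vectors for tables stored elsewhere are exchanged between nodes each step.
LOOKUPS_PER_TABLE = 4096   # lookups per table in each node's local batch shard
remote = sum(LOOKUPS_PER_TABLE
             for node in range(NUM_NODES)
             for name in placement
             if placement[name] != node)
print(f"remote embedding lookups exchanged per batch: {remote}")
```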

As part of the award, Krishna will explore mechanisms for efficient distributed training of recommendation models. The research will develop hardware/software co-design techniques to enable scalability across hundreds to thousands of accelerator nodes. The effort will leverage ASTRA-sim, a distributed deep learning training simulator developed by Krishna and his Ph.D. student Saeed Rashidi in collaboration with Facebook and Intel.

Krishna is an assistant professor in the School of Electrical and Computer Engineering at Georgia Tech, where he also holds the ON Semiconductor Junior Professorship. He received the Ph.D. degree in Electrical Engineering and Computer Science from MIT (2014), the M.S.E. degree in Electrical Engineering from Princeton University (2009), and the B.Tech. degree in Electrical Engineering from the Indian Institute of Technology Delhi (2007). Krishna’s research spans computer architecture, interconnection networks, networks-on-chip (NoC), and deep learning accelerators, with a focus on optimizing data movement in modern computing systems. Three of his papers have been selected for IEEE Micro’s Top Picks from Computer Architecture, one more received an honorable mention, and three have won best paper awards. He received the National Science Foundation CRII award in 2018 and both a Google Faculty Award and a Facebook Faculty Award in 2019.
