GTC 2019: Chief Scientist Bill Dally Provides Glimpse into Nvidia Research Engine
Amid the frenzy of GTC this week – Nvidia’s annual conference showcasing all things GPU (and now AI) – William Dally, chief scientist and SVP of research, provided a brief but insightful portrait of Nvidia’s research organization. It’s perhaps not gigantic by large company standards, roughly 175 full-time researchers worldwide, but still sizable and quite impactful. At GTC, the exhibit hall was packed as usual with sparkly new products in various stages of readiness. It’s good to remember that many of these products were ushered into existence or somehow enabled by earlier work from Dally’s organization.
“We have had many successes, only a small number of them are listed here (in his presentation), and in my view what we really do is invent the future of Nvidia,” said Dally during his press briefing.
Nvidia must agree and politely declined to share Dally’s slides afterward. Perhaps a little corporate wariness is warranted. No matter – a few phone pics will do. In his 20-minute presentation, Dally hardly gushed secrets but did a nice job of laying out Nvidia’s research philosophy, broad organization, and even discussed a few of its current priorities. It’s probably not a surprise that optical interconnect is one pressing challenge being tackled and that work is in progress on “something that can go to 2 terabits per second per millimeter off the chip edge at 2 picojoules per bit.” More on that project later.
Presented here are most of Dally’s comments (lightly edited). They comprise an overview of Nvidia’s approach to thinking about and setting up the research function in a technology-driven company. Some of the material will be familiar; some may surprise you. Before jumping in it’s worth noting that Dally is well-qualified for the job. He was recruited from Stanford University in 2009, where he was chairman of the computer science department. He is a member of the National Academy of Engineering, a Fellow of the American Academy of Arts & Sciences, a Fellow of the IEEE and the ACM, and received the 2010 Eckert-Mauchly Award. There’s a short bio at the end of the article.
- Philosophy – What is Research’s Role at Nvidia?
To give you an idea of what we do, I’ll give you our philosophy. Our goal is to stay ahead of most of Nvidia and try to do things that can’t be done in the product groups but will make a difference for the product groups. When I was talked into leaving the academic world in 2008 by Jensen and starting Nvidia research in its current incarnation, I spent time surveying a lot of other industrial research labs and found that most of them either did great research or had a huge impact on the company, but almost none of them did both. The ones that did great research tended to publish lots of papers but were completely disconnected from their company’s product groups. Others wound up being consultants for the product groups, doing development work but not really doing much research. So I set as a goal for Nvidia research to walk this very narrow line between these two chasms, either of which will swallow you, to try to do great research and make a difference for the company.
Our philosophy on how to do research is to maximize what we learn for the minimum amount of effort put in. So in jumping into a project we basically look at the risk/reward ratio: what would be the payoff if we succeed, and how much effort is it going to take? We try to do experiments that are cheap and require very little effort but, if successful, will have a huge impact.
Another thing we do is involve the product groups at the beginning of research projects rather than at the end, and this has two really valuable consequences. One is that by involving them at the beginning they get a sense of ownership. When we get done we’re not just popping up with something they have never seen before; it is something they have been sort of a godparent to from day one, incubating the technology along. Probably more important, though, is that by getting them involved at the beginning we wind up solving real problems, not some artificial, academically imagined problem that we pose for ourselves. The technology winds up being much easier for them to adopt.
- Organization – Nvidia Tackles Supply and Demand
Roughly, we organize Nvidia research into two sides. There is a supply side that supplies technology to make GPUs better: a group that does circuits, a group that does design methodologies for VLSI, plus architecture for GPUs, networks, and programming systems. Programming systems really spans both sides. [The other side is] the demand side of Nvidia research, the part of the research lab that drives demand for GPUs. All of our work in AI – in perception and learning, applied deep learning, and the algorithms groups in our Toronto and Tel Aviv AI labs – drives demand by creating new algorithms for deep learning.
We also have three different graphics groups that drive demand in graphics. One of our goals is to continue to raise the bar for what good graphics is, because if it ever stayed stationary we would get eaten from below by commodity graphics. We recently opened a robotics lab. Our goal is basically to develop the techniques that will make the robots of the future work closely with humans and be our partners, and [to ensure that] Nvidia will be powering the brains for these robots. We have a lab that is looking into that.
The question mark down here (see slide) is our moonshot projects. Often we will pull people out of these different groups and kick off a moonshot. We had [one] a number of years ago to do a research GPU called Einstein; Einstein morphed into Volta, and that wound up being a great success. Then we had one a few years after that where we wanted to make real-time ray tracing a reality. We pulled people out of the graphics groups and architecture groups and kicked off a project that we internally called the TTU, for tree traversal unit, and it became the RT cores in Turing. So we have been able to have a number of very successful integration projects across these different groups.
We are geographically diverse, with many locations in the U.S. [and] Europe. We just opened a lab in Israel. I’d very much like to start a lab in Asia, and it really requires finding the right leader to start the lab. We tend to build labs around leaders rather than picking a geography and then trying to start something there.
- Engaging the Community – Publishing Ensures Quality; Open Sourcing Mobilizes Community Development
We publish pretty openly. Our goal is to engage the outside community, and publishing serves a number of functions, [such as] quality control. One of the things I have observed is that research labs that don’t publish quickly wind up doing mediocre research, because the scrutiny of peer review, while harsh at times, really is a great quality control measure. If your work is not good enough to get into a top-tier conference like NeurIPS or ICLR in AI, ISCA in architecture, or SIGGRAPH in graphics, then it’s not good. And you have to be honest with yourself about that.
In addition to publishing, we file a number of patents, building intellectual property for the company. We release a lot of open source software, and this is in many ways a higher-impact thing than a publication because people immediately grab that software. Take much of the GAN (generative adversarial network) work we’ve done with progressive GANs: by open sourcing it, people immediately grab it, build on it, and start doing really interesting work. [It’s] a way of having that community feed itself and a way of making progress very rapidly. A small listing of some of the more recent papers we’ve published in leading venues is here (see slide).
- Successes – So How’s It Working?
We’ve had a lot of really big technology transfer successes to the company, and I have just listed a few of them here. Almost all of the work that Nvidia does in ray tracing started in Nvidia research. This includes our OptiX ray tracing product, which is sort of the core of our professional graphics. That started as a project when Steve Parker, who is now our general manager of professional graphics, was a research director reporting to me. After it became a successful research project we basically moved Steve and his whole team into our content organization and turned it into product.
Then, as I said, we had a moonshot that developed what became the RT core in Turing, and actually a lot of the algorithmic things that underlie it. Our very fast algorithms for building BVH (bounding volume hierarchy) trees, which are important in sampling to decide the right directions to cast rays, all started out as projects in Nvidia research. DGX-2 is based on a switch we developed, NVSwitch. That started as a project in Nvidia research. We had a vision of building what are essentially large virtual GPUs – a bunch of GPUs all sharing their memories with very fast bandwidth between them – and we were building a prototype of this in research based on FPGAs. The product group got wind of it and basically grabbed it out of our hands before we were ready to let it go. But that’s an example of successful development and transfer.
CuDNN [was developed] back when Bryan Catanzaro (vice president, applied deep learning) was in our programming systems research group. I started a joint project with Andrew Ng (Baidu) and recruited Bryan to be the Nvidia representative. The software that came out of that really launched Nvidia into deep learning. Then came a bunch of applications in deep learning: image inpainting, noise-to-noise denoising, and progressive GAN, which really was the first GAN to crack the problem of producing good high-resolution images by not trying to learn everything at once but training the GAN progressively, starting with low-resolution 4×4 images and slowly working up to 1K or 4K images. Here’s a more complete list (see slide). I won’t go into it because I want to leave time for the interesting stuff later.
- FutureScan – What’s on the Slab in the Lab
I picked three projects from other parts of Nvidia research to give you a flavor of the breadth of what we do. One project we just kicked off is a collaboration with Columbia University and SUNY Poly to build a photonic version of NVLink and NVSwitch. Very often we gauge what research to do by trying to find gaps, projecting our [current] technologies forward and looking at where we are going to come up short. One place we are going to come up short is off-chip bandwidth, which is constrained both by bits per second per millimeter of chip edge – how many bits you can get on and off the chip given a certain perimeter – and by energy, picojoules per bit, getting those bits off the chip. Electrical signaling is pretty much on its last legs. We are going to be revving future versions of NVLink to 50 gigabits per second, then 100 gigabits per second, per pin, but then we’re kind of out of gas. What can we do beyond that?
What we do is go to optics. Our plan is to produce something that can go to 2 terabits per second per millimeter off the chip edge at 2 picojoules per bit, which is about an order of magnitude better than what we do today on both of those dimensions. The way we are able to do this is by producing a comb laser source – basically a bunch of different frequency tones, each of which we can modulate independently – so we are able to put about 400 gigabits per second on a single fiber by breaking it up into 32 different wavelengths that we can individually turn on and off. We can then connect up a DGX-like box with switches that have fibers coming in and out of them and GPUs that have fibers coming in and out of them, and build very large DGX systems with very high bandwidth and very low power.
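A quick back-of-the-envelope check ties these numbers together. The aggregate rates and energy figure are from Dally’s talk; the per-wavelength breakdown and power density are simple arithmetic on top of them:

```python
# Sanity-check the quoted photonic NVLink numbers.

fiber_rate_gbps = 400                # aggregate rate per fiber, as quoted
wavelengths = 32                     # comb-laser tones per fiber
per_lambda_gbps = fiber_rate_gbps / wavelengths
print(per_lambda_gbps)               # 12.5 Gb/s modulated onto each tone

edge_rate_tbps_per_mm = 2            # quoted edge bandwidth density
energy_pj_per_bit = 2                # quoted energy per bit
watts_per_mm = (edge_rate_tbps_per_mm * 1e12) * (energy_pj_per_bit * 1e-12)
print(watts_per_mm)                  # 4.0 W of link power per mm of chip edge
```

In other words, each of the 32 tones only needs to carry a modest 12.5 Gb/s, and a fully loaded millimeter of chip edge would dissipate about 4 watts in the links.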
We are also experimenting with scalable ways of doing deep learning. This also demonstrates a lot of the work we do in building research prototypes. We recently taped out and evaluated what we call RC18 – research chip 2018 – a deep learning accelerator that can be scaled from a very small size, from a single one of these PEs (processing elements) up to 16 PEs on a small die. We have integrated 36 die on a single organic substrate, and the advantage here is that it demonstrated a lot of technologies: one is very efficient signaling from die to die on an organic substrate, another is a scalable deep learning architecture. This has an energy per op – let’s see if I can get the numbers right – I believe of about 100 femtojoules per op doing deep learning inference, so it’s actually quite a bit better than most other deep learning inference engines currently available, and it is efficient at a batch size of one, so we can have very low-latency inference as well. The technology for signaling between the dies is something called ground-referenced signaling, and that’s probably about the best you can do electrically before we have to jump to the optical thing I showed you previously.
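To put the quoted ~100 femtojoules per op in more familiar terms, it converts directly into a throughput-per-watt figure (the energy number is from the talk; the conversion is mine):

```python
# Convert the quoted energy per operation into tera-ops per second per watt.
energy_j_per_op = 100e-15             # ~100 femtojoules per op, as quoted
ops_per_joule = 1.0 / energy_j_per_op # ops achievable per joule of energy
tops_per_watt = ops_per_joule / 1e12  # tera-ops per second per watt
print(tops_per_watt)                  # ≈ 10 TOPS/W
```

That works out to roughly 10 TOPS/W for inference, which is the basis for the claim that it beats most contemporary inference engines.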
One thing I am very excited about is not so much a project as a new lab we started. We opened a lab in Seattle – a picture of the building is down there on the left, with the beautiful view out the window and the café right above it – to basically invent the future of robotics (see slide). Robots today are basically very accurate positioning machines. They tend to operate open loop. They move a spot welder or spray gun to a preprogrammed position, so that the part you are operating on is where it is supposed to be. You’ll get very repeatable welds or painting or whatever it is you’ve programmed the robots to do, but that’s not the future. That’s the past.
The future of robotics is robots that can interact with their environment, and the thing that makes this future possible is deep learning. By building perception systems based on deep learning we can build robots that can react to things not being where they are supposed to be. They estimate their pose and plan their paths accordingly. We can have them work with humans while avoiding injuring the humans in the process. So our view is that the future of robotics is robots interacting with their environment, and we are going to invent a lot of that future.
Using the kitchen [as an example] – it’s hard to see most of it here – this is a little Segway base with a robot arm on it, operating in a kitchen and working alongside a human to carry out tasks such as preparing a meal. If you think about it, that’s a hard thing for a robot. You are dealing with awkward shapes. You are dealing with people who move things around in odd ways. So it really stresses the capabilities of what robots can do. A lot of people have done interesting demos trying to use reinforcement learning to train robots end-to-end. That doesn’t work in such a complex environment.
We are actually having to go back and look at a lot of classic robotics tasks like path finding, and we recently came up with a way of doing path finding using Riemannian Motion Policies (RMPs). It’s able to better deal with an unknown environment and maneuver the robot arm to avoid striking things. For pose estimation we use neural networks – say there’s a box of noodles on the counter that you want to cook – to estimate the pose of that box. We do that by estimating the pose, rendering an image of what the box would look like in that pose, comparing it to the real image, and iterating to refine the pose estimate. It’s really very accurate, and by putting those pose estimates together we’ve been able to have these robots carry out very interesting tasks in our kitchen.
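The render-and-compare loop Dally describes can be sketched in miniature. Everything below is a hypothetical stand-in: a toy 1-D “renderer” and simple coordinate descent take the place of the real 3-D renderer and the neural network that seeds the initial pose estimate, but the estimate/render/compare/refine structure is the same:

```python
# Toy sketch of iterative render-and-compare pose refinement.

def render(pose):
    # Stand-in renderer: a soft 1-D intensity profile of an object at `pose`.
    return [max(0.0, 1.0 - abs(i - pose) / 8.0) for i in range(32)]

def image_distance(a, b):
    # Sum of squared per-pixel differences between two images.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def refine_pose(observed, pose, step=1.0, iters=60):
    # Perturb the pose estimate, keep whichever candidate renders closer
    # to the observed image, and gradually shrink the step size.
    best = image_distance(render(pose), observed)
    for _ in range(iters):
        for delta in (step, -step):
            candidate = pose + delta
            d = image_distance(render(candidate), observed)
            if d < best:
                pose, best = candidate, d
        step *= 0.7
    return pose

observed = render(12.3)                      # "camera" image of the true pose
estimate = refine_pose(observed, pose=10.0)  # start from a coarse estimate
print(estimate)
```

Starting from a deliberately wrong pose of 10.0, the loop converges to within a fraction of a pixel of the true pose of 12.3; in the real system the coarse starting estimate comes from the neural network rather than being given by hand.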
- Wrap-up – Pick Good Projects; Don’t Waste Resources; Make a Productive Impact
To summarize, Nvidia research is kind of like a big sandbox to play in. We get to play with neat technology that, if successful, will have a positive impact on the company. We span from circuits at the low end – better signaling, both on that organic substrate and in photonics, which I also consider circuits – all the way up to applications in graphics, perception and learning, and robotics. Our goal is to learn as much as possible with the least amount of effort and to optimize our impact on the company, and as part of that second thing we involve the product people from day one in a new research project so that they both influence the project and gather ownership in it. We have had many successes, only a small number of them are listed here, and in my view what we really do is invent the future of Nvidia.
Dally Bio from Nvidia web site:
Bill Dally joined NVIDIA in January 2009 as chief scientist, after spending 12 years at Stanford University, where he was chairman of the computer science department. Dally and his Stanford team developed the system architecture, network architecture, signaling, routing and synchronization technology that is found in most large parallel computers today. Dally was previously at the Massachusetts Institute of Technology from 1986 to 1997, where he and his team built the J-Machine and the M-Machine, experimental parallel computer systems that pioneered the separation of mechanism from programming models and demonstrated very low overhead synchronization and communication mechanisms. From 1983 to 1986, he was at California Institute of Technology (CalTech), where he designed the MOSSIM Simulation Engine and the Torus Routing chip, which pioneered “wormhole” routing and virtual-channel flow control. He is a member of the National Academy of Engineering, a Fellow of the American Academy of Arts & Sciences, a Fellow of the IEEE and the ACM, and has received the ACM Eckert-Mauchly Award, the IEEE Seymour Cray Award, and the ACM Maurice Wilkes award. He has published over 250 papers, holds over 120 issued patents, and is an author of four textbooks. Dally received a bachelor’s degree in Electrical Engineering from Virginia Tech, a master’s in Electrical Engineering from Stanford University and a Ph.D. in Computer Science from CalTech. He was a cofounder of Velio Communications and Stream Processors.
The post GTC 2019: Chief Scientist Bill Dally Provides Glimpse into Nvidia Research Engine appeared first on HPCwire.