NVIDIA Corporation (NASDAQ:NVDA) Cowen 43rd Annual Healthcare Conference March 7, 2023 2:10 PM ET
Company Participants
Kimberly Powell – Vice President, Healthcare
Conference Call Participants
Matt Ramsay – Cowen
Matt Ramsay
Anyway, thank you for coming and welcome to what I think should be a really exciting and maybe different type of session than you guys are used to. My name is Matt Ramsay. I lead the semiconductor research practice at Cowen. And I guess we’ll start by addressing the elephant in the room: why the hell are you listening to a semiconductor analyst at a healthcare conference?
But I’m really excited to talk about the topic of AI and healthcare with the leading AI company in the industry, NVIDIA. I started working with the NVIDIA folks about 10 years ago, when they were a gaming company with about a $15 billion market cap, and in the last 10 years they have basically invented the science of artificial intelligence and accelerated computing and are now the sixth or seventh largest company in the stock market. And one of their biggest areas of focus, not just in big data, not just in automotive, is healthcare. I’m really excited to have Kimberly Powell come and spend some time with us. She leads the Healthcare AI business at NVIDIA across all of these segments. I’m on with guys that you probably know, Steve [ph] and Dan [ph]; thank you for co-hosting with me, and we’ll have hopefully a good discussion.
I’m going to spend a little bit of time with NVIDIA to start and have Kimberly introduce what NVIDIA is doing in healthcare, what the business looks like, and how across different healthcare verticals they’re using AI computing to help companies speed innovation and do things that are disruptive and exciting. So anyway, then Dan and Steve will introduce the other panelists and we’ll kind of go from there.
So if you guys have any questions that are about healthcare, I’m not your guy, but if it’s anything about AI computing then we’ll go from there. But Kimberly, thank you. Thank you all so much for being on the panel. And if you want to spend a couple of minutes just introducing to this audience what NVIDIA is doing in healthcare, then we can go from there.
Kimberly Powell
Absolutely. So if you don’t mind, I will kick it off with as a reminder this presentation contains forward-looking statements and investors are advised to read our reports filed with the SEC for information related to risks and uncertainties facing our business.
Now that we’ve got that over with, thank you for joining us. I’m glad we have a packed house. I’ve been at NVIDIA for 15 years. And as Matt said, we are known as a gaming company. We aren’t known as being in the healthcare sector. But I started the practice 15 years ago because there was this revelation that said, in order for us to do efficient computing, computing that is going to take us through the next 10, 20, 30 years and beyond, Moore’s Law is ending and we need this idea of accelerated computing.
The problems and challenges that were coming to the industry needed a paradigm shift. So we took that gaming and graphics technology and we invented a new paradigm called accelerated computing. We’ve since gone through our next transition, which is into an AI computing company.
So think about the three most advanced computing approaches in the world today: computer graphics, accelerated computing and artificial intelligence. Our job in the healthcare business unit is essentially to make those accessible to healthcare, and healthcare broadly.
Some of the first things that we ever worked on were actually here in Boston. Radiologists were inventing new mathematical approaches to things like CT reconstruction. In order to reduce the radiation and make it clinically viable, where you could turn around these images while a patient is still with a doctor in critical care, you needed accelerated computing. And so it absolutely triggered us to say that what we are building is going to be applicable broadly.
So we made our first journey in medical devices, and it is the core of our business today. Brad and his team are inventing some of the most important new tools and platforms helping us understand biology. It’s such an exciting time because it takes something I’m deeply passionate about, which is imaging, and marries it to something the world has really embraced as a new tool, which is genomics and the insights we can pull out of biology. And I think the next 10 years that this technology is going to enable are going to take us to places we’ve never seen before.
In this journey as an accelerated computing company, we were discovered by the computer scientists working on deep neural networks, who realized this was the right platform, the right architecture, for this new thing called deep learning, reinvented for the second time. So we then had the opportunity to look at how deep learning was transforming computing and how it could be applied in healthcare.
We build, right now, the industry-standard tools for developing AI for all imaging types, whether that be radiology, pathology or surgical video. And it’s taken us along on this journey. Think about the number of applications: if you think about driving a car, for example, maybe you’re looking at about 12, maybe 24 algorithms to drive the car. That’s usually why you can get your license at 16. Well, the number of algorithms that a clinician potentially needs to use in order to do his or her job is on the order of hundreds, thousands, if not hundreds of thousands.
So we need to create the capability of giving the scientists, the subject matter experts, the clinicians, the ability to develop AI applications. And so we build those tools, software and now services to put that power in the hands of the healthcare industry. And then in this new realm, the ChatGPT realm if you will, it has been discovered and highly leveraged, Generate being a company who’s put it to fantastic use. We have the ability, invented many, many years ago, decades ago, to represent chemicals and proteins as sequences of characters that look just like the text you feed into a ChatGPT model. And these models can reason about them, they can generate new ideas, and they can bring us into yet the next, I think, paradigm shift in healthcare.
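To make the sequence-as-text idea concrete, here is a minimal, hypothetical sketch (illustrative only, not NVIDIA's or Generate's tooling) of how a protein and a small molecule can be written as plain character strings and tokenized the same way text is fed to a language model; the example sequences are placeholders.

```python
# Hypothetical illustration: biological sequences as "text" for a language model.
# The sequences below are arbitrary examples, not data discussed on the panel.

protein = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"   # amino acids, one letter per residue
small_molecule = "CC(=O)OC1=CC=CC=C1C(=O)O"      # aspirin written in SMILES notation

def char_tokenize(sequence):
    """Map each character to an integer ID, the way a character-level model would."""
    vocab = {ch: i for i, ch in enumerate(sorted(set(sequence)))}
    return [vocab[ch] for ch in sequence]

print(char_tokenize(protein)[:10])          # token IDs a generative model could train on
print(char_tokenize(small_molecule)[:10])
```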
And so these are some of the areas that NVIDIA focuses on. This is how we sort of understand the market: by listening to the innovators in the market and the academic communities, and building the hardware, but more importantly the software platforms and services, to put it into the hands of all of the passionate and innovative thinkers in the industry, so that we can really see a future we haven’t seen before, where we can start to take medicine as a science and more and more push it into the realm of an engineering practice.
So that is a bit of a summary of where we are and what we’re doing and happy to pass it on.
Unidentified Company Representative
Great. My name is Dan Brennan [ph]. I cover life science tools and diagnostics with Steve Mah and a third colleague, and I follow NanoString, who is up here at the podium. For those of you who aren’t in the healthcare field, NanoString is amongst a handful of companies tackling the new field of single-cell spatial biology. Ten years ago, single-cell sequencing analysis became a new technology and it’s really proliferated. But the spatial context of looking at single cells is brand new. NanoString is one of the leaders, rolling out their first platform a few years ago. And now they’re just embarking upon their newest platform, which is truly single-cell, in situ, in the native tissue, and it’s them and a handful of companies going after it.
So there’s a tremendous amount of excitement, but there’s also uncertainty from investors about where this really goes. Is it a cool discovery tool? Does it really lead to some new insights where you can find new drug signatures and really bend the curve, if you will? So it was interesting. I just hosted Brad on the prior panel, and they have their CosMx platform, and they’ve made a big, big deal out of their informatics approach to their single-cell technologies. It’s AtoMx, and it’s a cloud-based approach, whereas the other players really haven’t gone that route. So it’s very interesting to ask Brad now, with that long introduction, about AtoMx, which I believe is obviously incorporating NVIDIA.
Question-and-Answer Session
A – Unidentified Company Representative
So Brad, maybe walk us through a little bit about NanoString, kind of how you came to start working with NVIDIA. When did you make this decision? How did you decide that you needed to harness their technology? And what is it going to hopefully allow you to do with AtoMx?
Unidentified Company Representative
Yes. Thanks, Dan. Well, there are a lot of faces in here and I don’t know how familiar everyone is with us, so let me back up and explain a little bit about what spatial biology is. Spatial biology, as we said, is kind of the marriage of traditional imaging technologies, which look at tissue like anybody would through a microscope in high school, and genomic technologies, which look molecule by molecule at every RNA or protein in a cell. And spatial biology allows us to see how cells are talking to each other, all the different unique cell types, and what the physical architecture of a tissue is at a molecular and cellular level.
So think of it as if tissue is almost like it’s made of LEGOs. And every cell might look the same, but they’re actually unique LEGO blocks and they serve different functions. And in the past, biology basically ground up that tissue, built a big pile of LEGOs and said, is it mostly orange or green or red? It wasn’t even really looking at the shapes.
And then single-cell biology came along and allowed us to capture cells in droplets and look at each LEGO piece individually and say, there are cubes here and there are long skinny ones and there are some circular ones, and these are different; they serve different functions. Well, now spatial biology allows us to look at the tissue on an intact basis. You see all those LEGOs, how they fit together and what they’re doing.
The first applications of spatial biology are in research. It’s a new tool: discovery researchers are figuring out basic questions about how tissue works, and then later translational researchers will help take those insights and make them into new diagnostics and new drugs. But we’re at the very beginning of what will, without a doubt, be a 10-year revolution in the field of biology.
The customers who are applying it today are academic medical centers, like Harvard and the Broad here in town, biotech companies and the like. That’s who our customers are. So, what are some of the opportunities and challenges unique to spatial biology that got us into informatics? Our CosMx system is capable of taking a tissue of a million cells and visualizing 1,000 different RNAs in each of those cells with their X, Y and Z coordinates.
So it can generate a data set from a single one centimeter by one centimeter piece of tissue, like a mole that you might have had removed or a tumor that was biopsied, that’s about 0.5 terabyte in raw data. And that’s at 1,000 plex. We announced just last month that we’ll be going to 6,000 plex, 6,000 unique molecules across a million cells, which could be 3 terabytes to 6 terabytes of data for a single sample.
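As a rough back-of-the-envelope check on those figures, here is a proportional sketch that assumes data volume scales roughly linearly with plex and anchors on the 6,000-plex number quoted above; the scaling assumption and the 20,000-plex extrapolation are mine, not NanoString's specifications.

```python
# Illustrative arithmetic only: per-sample data volume under a linear-scaling assumption.
anchor_plex, anchor_tb = 6_000, 3.0      # ~3 TB per sample at 6,000-plex, per the panel
for plex in (1_000, 6_000, 20_000):      # today's plex, next year's, a whole-transcriptome target
    est_tb = plex / anchor_plex * anchor_tb
    print(f"{plex:>6}-plex: ~{est_tb:.1f} TB per sample")
```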
So that’s an unprecedented amount of data, and the nature of the data is such that the human mind can’t really make sense of all the different relationships and signals that are happening. And so we began to realize as a company several years ago that suddenly, as a bunch of engineers and chemists, we were embarking on one of the biggest informatics challenges in healthcare.
So we added two amazing women to our Board of Directors to help us with that: the Chief Data Officer of the Moffitt Cancer Center, Dana Rollison, and Janet George, who now runs the cloud computing portion of Intel and was at the time the AI/ML leader at Oracle, to help us think about how we are going to help our scientific customers actually get insights out of all the data and how we are going to prepare for that.
Around the same time, we embarked on two aspects of spatial biology informatics. One is building an unprecedented amount of compute power into our actual systems to do raw image analysis on the instrument, and then secondarily, streaming to the cloud to store, analyze and work together on the data sets using the elastic compute power and storage capabilities of the cloud.
So NVIDIA was the company whose GPUs we selected to build into our CosMx Spatial Molecular Imager. The GPU architecture they offered was five times faster at doing a very important function called cell segmentation: identifying the cell boundaries so that every RNA molecule can be assigned to the right cell. It’s an AI/ML-based algorithm that runs on an NVIDIA card inside our latest product, the CosMx Spatial Molecular Imager. And it’s fundamental.
If we assign the molecules to the wrong cell, we’ll absolutely get misleading results. So that’s critical. And now, once we get those terabytes and terabytes of data into the cloud, there’s a whole other set of opportunities to apply informatics and AI/ML to look at those data sets, to learn from them, to see patterns that would be hard to see without those tools. And we have an opportunity to work with NVIDIA on how we optimize those algorithms for NVIDIA processors in the cloud, taking full advantage of their road map of ever-increasing power in GPUs.
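For a concrete picture of the segmentation step Brad describes, here is a minimal, hypothetical sketch using the open-source Cellpose model (v2-style API) as a stand-in; it is not NanoString's proprietary pipeline, and the image and transcript coordinates are placeholders.

```python
# Hypothetical sketch: GPU cell segmentation plus transcript-to-cell assignment.
# Requires an NVIDIA GPU and `pip install cellpose`; Cellpose is a stand-in model here.
import numpy as np
from cellpose import models

# One field of view of a stained tissue image (random placeholder data for illustration).
image = np.random.rand(1024, 1024).astype(np.float32)

# gpu=True runs the segmentation network on the NVIDIA GPU through CUDA when available.
model = models.Cellpose(gpu=True, model_type="cyto")
masks, flows, styles, diams = model.eval(image, channels=[0, 0])  # masks: integer cell label per pixel

# Detected transcripts as (y, x) pixel coordinates with a gene name (made-up examples).
transcripts = [(512, 600, "EPCAM"), (130, 90, "CD3E")]
for y, x, gene in transcripts:
    cell_id = masks[y, x]            # 0 means background, i.e. not inside any segmented cell
    print(f"{gene} at ({x}, {y}) -> cell {cell_id}")
```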
Matt Ramsay
So now, Brad, maybe a question. You know, you’re only as good as what you’ve done today. And you’re talking about increasing the number of things you can look at six-fold over the next year, and you’re probably talking about tripling that again at some point to get to the whole transcriptome of 20,000 genes. So you’re not only doing more, but you’re also going to get a lot more insights, and you probably want to speed it up, which is one of the factors we hear a lot about, and it’s probably now on the informatics side. So, long way of saying, and maybe Kimberly too, what is the road map between what you’re doing today and how important is further progress, using the existing chips or future chips, that will allow you to scale and do kind of bigger, faster, better?
Unidentified Company Representative
Yes, I’ll start and Kimberly can build on it if she likes. So for us, the number one thing we can do for scientists is give them tools that extract the maximum amount of insight from the precious tissue samples they have. And so that means driving up the plex, from the one marker that you might look at with a stain or a microscope through conventional means, to the 1,000 we offer today, to the 6,000 we’ll offer next year, to the 20,000 where we cover the entirety of the human genome. That is our mission and we’re going to push it as far and as fast as we possibly can.
Each time we’re successful, though, we make the informatics challenge harder in a kind of geometric way. So we’re pushing the chemical and engineering and optical ability to extract data from the tissue, but we’re going to need Kimberly’s help in taking advantage of NVIDIA’s and others’ compute resources to make sense of all of that over time. So we’re experts on the first piece, but we rely on partnership and outside help on the informatics piece.
Kimberly Powell
Yes. I think the way we think about the problem is in what we call a full stack. You can think about NVIDIA as a layered cake. A lot of people know us for the chip and the chip architecture itself, and that is absolutely fundamental. For example, in our latest architecture we have hardware silicon called the Transformer Engine that is vital to being able to do these large language models like ChatGPT.
However, you need to be able to expose that technology not only all the way out to the application developer, but across GPUs in a single node and across nodes inside of a data center, and do it at 1,000-times scale.
So these models take many, many thousands of GPUs, 5,000 to 10,000, working in unison across an entire data center, networked with NVIDIA’s networking, doing that level of processing for weeks and months at a time.
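As a concrete, hypothetical sketch of what "GPUs working in unison" means at the software level, here is a minimal data-parallel training loop using PyTorch DistributedDataParallel over NCCL; it is illustrative only, not NVIDIA's actual training stack, and the tiny linear model stands in for a real transformer.

```python
# Minimal sketch of data-parallel training across many GPUs (illustrative only).
# Launch with: torchrun --nproc_per_node=8 train_sketch.py   (add --nnodes for multi-node runs)
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")                  # NCCL handles GPU-to-GPU communication
    local_rank = dist.get_rank() % torch.cuda.device_count()
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(4096, 4096).cuda()               # stand-in for a much larger transformer
    model = DDP(model, device_ids=[local_rank])               # gradients are all-reduced across every GPU
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):                                    # toy loop; real jobs run for weeks
        batch = torch.randn(32, 4096, device="cuda")
        loss = model(batch).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()                                       # gradient sync happens here across ranks
        optimizer.step()
        if dist.get_rank() == 0:
            print(f"step {step}: loss {loss.item():.4f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```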
And so, in order for us to give Brad and his team, who are not computer architects, and we don’t want them to have to be, the ability to access this, we have to look at these application challenges at every single level.
At the individual chip level, the system level and the data center level, and the ability for the applications they’re trying to run, like cell segmentation, to work at scale as he’s going to 60 times his data set.
So we take a full-stack approach. And it’s quite a unique position for our company to not only be designing and architecting the silicon itself but also to have all the components to go to data center scale. And then the software investments that NVIDIA has made over the last two decades allow this application development to happen at a very rapid pace.
So for example, I think we’ve all heard of Moore’s Law and the end of Moore’s Law. With full-stack optimization, we can see speedups of a million times over a 10-year period, where Moore’s Law alone would be many orders of magnitude lower than that.
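For a quick sense of the gap being described, here is the back-of-the-envelope arithmetic, assuming the common framing of Moore's Law as roughly a doubling of transistor density every two years (my assumption, not a figure from the panel):

```python
# Back-of-the-envelope comparison (illustrative assumptions noted in comments).
moores_law_10yr = 2 ** (10 / 2)      # ~32x in a decade from a 2x-every-2-years doubling
full_stack_10yr = 1_000_000          # the full-stack speedup figure cited on the panel
print(f"Moore's Law alone: ~{moores_law_10yr:.0f}x over 10 years")
print(f"Full-stack claim:  {full_stack_10yr:,}x, "
      f"roughly {full_stack_10yr / moores_law_10yr:,.0f}x beyond silicon scaling alone")
```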
And you can’t just do that on chip architecture alone. It’s just not possible anymore. So this full-stack optimization is super important. And then the other aspect of what Brad said is that machine learning and artificial intelligence are part of those X-factors.
If you had to compute everything in its full physics-based form, it would be too computationally intense and too expensive. But artificial intelligence can help us augment that; it can essentially be a function approximator for some of these more difficult calculations. That is also part of that X-factor.
And so we work with the leaders in the field on pioneering approaches that have never been done before, because they exercise our full stack more than we could ever do just sitting in our own back office.
And that is largely the charter of our team: to partner with innovators to make sure that we apply that whole stack for them to achieve their goal, which is spatial genomics and elucidating all the amazing parts of biology we don’t even understand yet.
Kimberly Powell
Steve, do you want to go ahead and introduce yourself and Molly and the Generate team?
Unidentified Company Representative
Sure. Yeah. So, Stephen [indiscernible], one of the analysts at TD Cowen covering life science tools and diagnostics, specifically covering data-driven drug discovery and also synthetic biology.
So it’s my pleasure to introduce Molly Gibson, co-founder of Generate Biomedicines. Our thesis, and this is derived from a recent multi-analyst piece we did, which is a primer on AI and data-driven drug discovery, so basically to distill down the thesis of that piece, is that the return on investment dollars using the traditional drug discovery model hasn’t been realized, right?
So even though there have been multiple advances in R&D technologies and multiple advances in terms of understanding biology, that has not been reflected in increased clinical success. So our thesis is that by applying artificial intelligence to these vast amounts of data that NanoString and others are generating, you can increase the probability of clinical success and deliver better drugs at a lower cost. So maybe with that kind of introduction, Molly, can you spend a few minutes on Generate and how you’re using AI?
Unidentified Company Representative
Absolutely. So at Generate, we focus on using ML and AI to generate novel proteins. We’re really interested in proteins because almost everything in biology happens because of proteins. If you think about it, DNA is really kind of the software, and proteins are how that software is executed. Historically, proteins have, for all intents and purposes, been found through this process of discovery. And it’s really that discovery work that is so interesting to us, from the perspective that it’s a really empirical process, it’s very bespoke. And like was mentioned, it really leads to challenges in economies of scale. It’s not a scalable approach.
And so if you think about what could actually give us the returns on investment that we want and actually lead to successful molecules, having an understanding of how a protein leads to a particular function and to a particular clinical outcome is really the aspiration. And that’s something that’s just not there today. And a lot of that comes down to the fact that we’re discovering these medicines through these really empirical, trial-and-error processes.
And at the end of the day, you get out whatever comes out at the end of the tunnel, not what you want, not what you’ve specified. And so to us the question is, can you actually flip that on its head? Can you actually say, this is the molecule I want; now what is the sequence, the protein sequence, the amino acid sequence, that would fold into the right protein and give us the right function?
And if you think about the combinatorics of this, it really blows up to a place where AI is the only way we can imagine being able to do that. A standard protein is about 100 amino acids, and each amino acid can be one of 20 different letters. And so if you think about all the different combinations of that, it’s more than the number of atoms in the universe. The combinatorics are just enormous. And so we’re asking the question: can you actually start to use ML to understand the rules by which these proteins function, be able to specify a particular problem, and then not just learn those rules but generate novel proteins?
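Here is the arithmetic behind that combinatorics point, using the numbers cited (100 residues, 20 amino acids) and the commonly quoted rough estimate of about 10^80 atoms in the observable universe:

```python
# Worked version of the sequence-space argument (standard back-of-the-envelope numbers).
residues = 100                              # length of a typical small protein, as cited
alphabet = 20                               # the 20 natural amino acids
sequence_space = alphabet ** residues       # every possible 100-residue sequence
atoms_in_observable_universe = 10 ** 80     # commonly quoted rough estimate

print(f"possible sequences: ~1e{len(str(sequence_space)) - 1}")
print("atoms in universe:  ~1e80")
print(f"sequences per atom: ~1e{len(str(sequence_space // atoms_in_observable_universe)) - 1}")
```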
And so we’re applying this to problems where we can actually say, I want an antibody, for example, like the antibodies that protect us against COVID, or like HUMIRA, and here is what we want that antibody to do. And because of that, we’re going to be able to dose this drug at a reasonable interval, maybe every six months, and we’re going to be able to give it very safely to patient populations that are high-need and high-risk. So more effective, safer molecules for people, faster.
And to me, the thing that gets really exciting about this is that one of the challenges in drug discovery is that we haven’t been able to see the same types of economies of scale that you see in the tech industry, and so you’ve not seen the kinds of returns that you have on technology. By putting the technology first in the biology, we’re able to flip that and think about the economies of scale that we want to see in drug discovery and, more importantly, the successes we want to see. A lot of the challenge in drug discovery today is not just discovering the molecules, but making sure that they’re successful. So being able to do this means more drugs, faster and more successfully.
Unidentified Analyst
Okay. Thanks for that. And maybe this is a question for the whole panel here. We’ve heard people say that they have the best algorithms. My personal opinion is that that is table stakes, and where the rubber hits the road is how you’re actually taking your functional data, or your data at scale, and using that to train your machine learning algorithms. I’m curious to get your perspectives on that. And especially with you, Molly, how are you generating data to train your machine learning algorithms?
Unidentified Company Representative
Yes. So, to me, this is kind of a conversation we have all the time, which is: out of all the components, compute, algorithms and data, what’s the most important? At the end of the day, it’s all of them, of course; you have to think about all the components together.
And so for data, we’ve thought about it this way: it’s not just any data that’s important. It’s the data that’s going to give the most information, the most high-quality information, at scale. And so, to us, when we think about the question of, can you get to how a protein functions in the body, we know that sequence dictates the structure of a protein, and the three-dimensional structure of the protein then dictates function.
And so the most important pieces of data for us are two things. One is structure, which is incredibly generalizable across any type of protein that you want. And so we’ve built out CryoEM, four different microscopes, one of the largest facilities in the country, to be able to produce structural data en masse to test the things that we actually generate [indiscernible].
So unlike structure prediction, where you already have the answer, the structure that you want to test against, for us, when you generate something new, there’s no answer. You don’t have the answer, so you have to actually test. And we’re doing that with our CryoEM microscopes.
And then there’s also function. So high-throughput functional data is the second thing that we want to be able to produce. So we’re constantly generating data in a fully automated lab where we can measure the function of the molecule itself. And so it’s not just that we have more data, it’s that we have the right type of data for the questions we’re answering, which is really important to us and to how we’re investing in data.
Kimberly Powell
Yes. Steve, to comment on this, there are a few fundamental things to think about. What Molly described is true of so many of the techbios, as we call them, who have realized that it’s not just a single algorithm. My version of what Molly said is two things: one, it’s about the method; and two, it’s about turning it into an engineering process. It’s repeatable, it’s auditable, it’s studiable. And that is really a fundamental difference.
And so, all three are absolutely important. The person with the most chips doesn’t win, the person with only the data doesn’t win, and actually the person with just an algorithm doesn’t win, because the algorithm and the data are ever-changing. I mean, as we know in healthcare, the platform technologies are not standing anywhere near still. And so the methods development is super, super critical across the gamut here.
And the other thing to realize, what’s different now is, not just that we have the ability to build something like ChatGPT, which honestly was able to be born because of the Internet, let’s say. But now through lab automation and through platforms and digital biology, we now have the necessary data feeders.
And so this is new. It’s within the last five years that terabytes of CryoEM data are coming out. These things are winning Nobel Prizes; new microscopes and new platforms are entering the market and getting more accessible all the time, and we’re able to automate them. And so the scale of the data is completely unprecedented. And then this meets the capability of generative models in AI, which are going to be able to help us make sense of it and reason through this data.
We can discover things nature has never seen before, right, which is what Chroma has done. Nature would go through four billion years of evolution and say, okay, that worked, on to the next job. It didn’t sit there and say, what other proteins might work? No, it just had to move on, right? It had to be very efficient. Well, here, because we can generate data, we can encode this data into these models and we can explore like we could never explore before. And not on a four-billion-year evolutionary time scale; we’re talking now within a week or two weeks or a month. So this is what’s super transformational: this new method, and the ability to generate data at this scale and at this resolution and fidelity.
Unidentified Company Representative
Just to build on that, I think both made great points. I want to go back to the question of the relative value and importance of data, algorithms and compute. Our tools are the data-generating tools. So obviously, if you haven’t tested in biology, if you haven’t tested the right sample with the right tool, the data doesn’t exist. There’s no way you can get any insights, no matter how much compute or how good your algorithms. But our goal is to help people generate the most interesting data and put it into the cloud, where, as algorithms evolve and as compute power evolves, it can be queried over and over and over again and have a long life of generating insight after insight.
So this is why we focus on having a data-intensive product, extracting the maximum amount of data out of tissue, and getting it to a place where it can be shared amongst collaborators all around the world. That place in the cloud is called our AtoMx Spatial Informatics Platform. And a critical aspect of it is that algorithm innovation is such a huge part of what’s happening in science on these new data sets.
We have a totally open-source approach. The algorithms that we provide on day one are open source; you’re going to be able to look at the code, it’s all Python, to figure out exactly what we’re doing. There’s no black box. And if you are an academic who invents a new method, you’re going to be able to upload it straight to our cloud and take advantage of the compute power and architecture and visualization tools, and the residency of the data, to innovate on algorithms.
NanoString is not going to be the algorithm inventor for the world; the huge academic environment is. And then the beauty of having it all in the cloud is that, as successive generations of technology and GPUs and CPUs come out, we can take advantage of those improvements in speed and processing power, et cetera, without a scientist ever having to throw out their old server and buy a new one. We’ve totally taken that out of their hands. They have access to NVIDIA’s entire road map of technology by virtue of putting it in the cloud and letting the data centers deal with the upgrade cycles.
Q –Unidentified Analyst
I think that’s a really interesting segue to something that I wanted to ask. I talk to NVIDIA’s founder Jensen Huang, who is sort of the founder of modern AI, so to speak, along with their Chief Scientist Bill Dally, and we talk a lot about pushing, what you guys have been talking about here, pushing the boundaries of more data and more compute intensity, higher up the AI stack. And then there’s a separate vector, which is democratizing AI and actually getting these compute-intensive, cost-intensive models to be accessible by the masses. And you touched on it briefly by putting things in the cloud. I remember going to demonstrations three or four years ago where they were taking ultrasounds, enhancing them through AI, and then piping them back down into the doctor’s office, things where you can infer 50, 100 times the resolution. And all of those things are really cool, but they’re deployed in how many hospitals in the world?
Just because most folks can’t afford a $300,000 supercomputer, let alone would they know what to do with it if they had it, right? So I guess what I want to talk a little bit about, Kimberly, is what NVIDIA announced a couple of weeks ago, and this has been coming for a while: DGX in the cloud, right? So imagine that example: instead of doing the ultrasound, piping it into the cloud and piping it back in real time, you take the ultrasound, save it as a video file, upload it with a few doctors’ notes, and then get something back from the AI that you can use, and that can be done on a subscription basis or a per-query basis.
Anyway, if you could spend a little bit of time on that other vector of democratizing AI, or on things we’ve maybe solved but that are deployed in a fraction of the places they should be deployed globally in the healthcare space.
Kimberly Powell
Yes, it’s a super important point, which is why our move to the cloud is so important. If you go back just five, 10 years ago, supercomputing centers were largely funded by large government entities, and there were 10 of them in the United States, programmable by another 10 people in the United States. And so that’s really been one of our core missions: how do we make that more and more accessible?
One way you make that more and more accessible is through software and applications. So, I’ve spent the last 15 years of my life with my entire team partnering with application developers, codifying what we can into libraries that are reusable by everybody, right? Working with the open source community such that these frameworks that helped develop all the algorithms are accelerated and are available everywhere.
So the software piece of it has been ongoing for a long, long time. We decided to put CUDA, which is essentially our way to program a GPU, on every GPU that NVIDIA ever made, all the way into our gaming GPUs. Back when we did it, I didn’t know you were going to run generative AI algorithms on your gaming rig, but now you do, because you can create gaming animation characters with it.
And so that was just an amazingly wise decision, and it’s part of the democratization, the ability for this large software ecosystem to abound and really democratize it from that regard.
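As a small, hypothetical illustration of what "CUDA on every GPU" buys a developer in practice, here is a sketch using the CuPy library, which runs the same code on a gaming GPU or a data center GPU through CUDA; it is illustrative only and not tied to any NVIDIA healthcare product.

```python
# Minimal sketch: the same CUDA-backed code runs on any CUDA-capable NVIDIA GPU.
# Requires an NVIDIA GPU and CuPy (e.g. `pip install cupy-cuda12x`); illustrative only.
import cupy as cp

# Allocate arrays directly in GPU memory and run the math on the GPU's CUDA cores.
x = cp.random.rand(10_000_000, dtype=cp.float32)
y = cp.random.rand(10_000_000, dtype=cp.float32)
z = cp.sqrt(x * x + y * y)           # element-wise kernel executed on the GPU

print(float(z.mean()))               # bring a single reduced value back to the CPU
```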
Now, the second thing that Matt described is that we invented the DGX, which is essentially our AI supercomputer. One, for our own AI scientists, but also for a lot of the academic godfathers in the world who said, in order to further this research, we need very dense, very high-performance computing.
And so AI has triggered this: not only does the astronomy team at Oak Ridge National Labs need a high-performance computer, but now, with ChatGPT-like generative AI, every enterprise needs access to this level of supercomputing.
So the only way you can do that is not by building more data centers. Actually, in some countries there is a moratorium on building more data centers, because we’re going to hit a power limit very, very quickly here. And so the question is, how do we make this supercomputing technology for doing now-common generative AI available in the cloud, so that once again it’s completely democratized to every enterprise? You don’t want just the one company who can afford the computer to win; we need all companies to be able to exercise this technology and push our industries forward.
So that was a big conscious move: to make the computer, because it is one of the three ingredients, tremendously accessible. And this architecture that we’ve built in DGX, our data center scale computing architecture, needs to be really available in the public cloud, and it is a business now.
Matt Ramsay
Totally makes sense. I don’t know, Molly, Brad, if you guys want to spend a little bit of time on how you’ve felt the process of onboarding with NVIDIA has been? I mean, is the stuff accessible? And have you had the experience of a competitor doing things with traditional methods rather than an AI compute method, and if so, why are they doing that? As a computer architect and a semiconductor person, it seems very, very obvious to me. I mean, we used to talk about the companies that had the data as the ones that had the advantage. Now it’s the ones that can actually munge the data and get conclusions out of it that have the advantage, rather than just collecting data for data’s sake. I’m just wondering what your experience has been onboarding with NVIDIA, and about competitors of yours that might not be, and why not. Molly, you want to kick it off, go for it.
Unidentified Company Representative
Sure. Yes. So this is something that we’ve been talking about a lot. One of the things that Generate is very good at, and where we’re experts, is building algorithms that allow us to generate novel proteins. And we can do that well with access to the hypotheses we have today. But when we think about scaling that generative capability beyond our pipeline to others in the field, academics that have hypotheses they want to test, doing this at scale and really allowing the world to have access to these types of technologies, being able to do that in a scalable way becomes a compute problem and a bigger engineering problem, where the type of experts that NVIDIA has and the type of hardware that NVIDIA has will enable us to get to that type of scale and capability: to not only generate 100 molecules that we could potentially test in the lab, but hundreds of millions of molecules that we can go and test across many different types of targets and many different types of diseases, and really start to tackle the drug discovery problem with engineering traction, which is kind of at the core of what we’re working on.
Unidentified Company Representative
Yes. So my engineering team says really great things about working with NVIDIA. I think the first way we began working with NVIDIA was the selection of one of their cards, I think it’s the 84K, as the GPU processor on our new instrument, which was selected because the CUDA instruction set was super easy to work with for the image analysis algorithms we needed to run right in the box. It was five to 20 times faster, depending on the situation, than anything else we tested, and there was the interchangeability and knowing that that kind of CUDA library set was going to be on every NVIDIA chip.
It gave us the idea that, hey, over time, we can actually slot out this version, slot in new versions, and upgrade the compute in a really seamless way on future generations of the CosMx system. So that’s been positive. I think the next step, which we haven’t fully realized, will be to take our cloud-based computing, work together to optimize the code for NVIDIA GPUs, and direct the compute resources toward taking advantage of those. And I think we’re just at the outset with AtoMx. I’m sure opportunities like that will present themselves over time. And I think NVIDIA has made it really easy for companies to engage.
I think one of the things is that we’ve been focusing on genomics and drug discovery here, but just to up-level the conversation and give you the opportunity to talk about the other verticals of healthcare that NVIDIA is attacking: I think you guys have identified something like 20 independent verticals and have teams that are working across all of those. Some of the things are similar in terms of onboarding, offloading things to the cloud, getting AI compute in the hands of all of these researchers, but I think each vertical market also brings its own unique industry challenges. And there are investors here, shareholders across companies in lots of verticals in healthcare. So maybe talk about some of the investments that your team is making. What’s the scale of the team now? And if folks aren’t using AI-based computing, or maybe are on the fence about it, just kind of give the picture of what NVIDIA is doing to set them up to onboard.
Kimberly Powell
Yes. Sure. I’ll start with the team of experts that we’ve been so lucky to hire. I think it’s a fundamental difference. We have lots of PhDs in genomics and computational chemistry. We have cardiothoracic surgeons on staff. And it’s really so that we can have really deep application-level understanding. There’s no way you can do the full stack if you don’t have somebody translating the biology application or the clinical application all the way down to that chip level, to really get these many orders of magnitude of X-factors.
So one of the things we’re really, really proud of is attracting this amazing talent, somebody who might have studied to be a surgeon her whole life but now can imagine how, instead of touching maybe 1,000 patients’ lives, she can touch 100,000 or one million patients’ lives by way of technology.
Some of the other areas, if I could draw some analogies: what Brad is describing to you at NanoString is exactly what any medical device company can and should be doing, and they are, at different paces I would say. And the idea is this: imaging is core to the entire healthcare delivery process, from screening all the way through to robotic surgery and image-guided therapy. And so there is so much AI enhancement that can be made, and/or optimization of the sensor technology itself.
Say we want to make an ultrasound machine that is super cheap, or an MRI on wheels like Hyperfine has done. You reduce the cost of the sensor technology, you reduce its footprint, you make it more accessible, but you have to apply a lot more computation on the back end to recover from the reduced sensor capability and also to guide the user, who is maybe not a trained sonographer or a trained technician. And to me, the two-thirds of the world that doesn’t have access to proper surgery or diagnostic medical imaging could now potentially have it.
And it’s this mix: the ability to put computational capability in the instrument itself and do things in real time, but always be connected to a cloud resource to enhance the analysis in an even more cost-effective manner and pipe that back down to the user. This is no different than the car industry. You do some decision-making on the car. You do some up in the cloud.
And so for medical devices, there’s this idea of creating that capability as one: essentially, how can I as a medical device become software-defined? How can I as a medical device become much cheaper, and still perform well, by way of using this technology?
And then thirdly, as a medical device maker, and many of these are large companies who’ve acquired lots of different companies, they now have an opportunity to optimize a lot of engineering. We have a general-purpose compute platform that can plug into any sensor on the planet.
We have a general-purpose software platform that can run any AI developed on the planet. And so they can now realize an AI platform, their own AI platform, and run it on the instrument or in the cloud, wherever they see fit for the application they need to support. And so I see that as becoming the real transition in healthcare: a lot of AI platforms. GE Healthcare, for example, their Edison platform is an initial realization of this. Siemens and their AI-Rad Companion, these are realizations of software-defined AI platforms as a service that can be connected to all of their devices and much more rapidly bring innovation to market. So that’s what I’m super excited about. And it’s a known architecture. It’s a known need, this hybrid compute in the sensor and in the cloud, and being able to draw real-time insights.
So another area that is very, very exciting would be exactly what we’re experiencing in our own consumer life with ChatGPT. I mean natural language processing and truly pushing the next paradigm of being able to sift through whether it be payer, provider or clinical trial information and draw new insights to be able to design and predict clinical trials on a much more effective basis, being able to predict readmission rates in your patient population and develop new operational efficiencies in the healthcare practice.
Being able to reimagine call centers and the processing of new payer systems. This natural language capability is going to be completely transformational. But we all know healthcare speaks a slightly different language than ChatGPT does today. And so that’s the whole idea: being able to customize these models so that they’re fit for function, for either the operational purpose or the clinical or biomedical purpose, is another super exciting area that cuts across all of healthcare in earnest and really can, I think, make these electronic health records and the operations of what’s happening in hospitals themselves truly transformational.
Matt Ramsay
Go ahead, Steve.
Unidentified Analyst
I guess one thing I wanted to ask about, and this comes up in the AI work that NVIDIA is doing in the automotive industry toward autonomous driving, right? The technology moves at one speed and the regulators move at a different speed, let’s just say, to be kind. And as I said at the start of this conversation, I probably know less about healthcare than most of the people in this room, but I’ve seen the way that regulation pushes back against innovation in the auto space, even with things as simple as taking away a visual rearview mirror and replacing it with a camera-based one, which doesn’t seem like that big of a deal, but got massive regulatory pushback in the auto industry and took a long time to happen.
There are a lot of things the three of you have been talking about, and the pace of computing innovation is going to accelerate massively, that’s my own view, in a lot of different verticals, healthcare in particular. So how do your teams interact with the regulators? This is a big, broad conversation, but, like, what is their view of a simulated protein versus traditional methods? And that can matter across a whole bunch of areas, right? The imaging, like, we have the AI machine that’s going to diagnose brain cancer rather than a radiologist actually doing it, and how does that pass the FDA? How does it hold up in liability lawsuits? I don’t know, maybe I’m opening up a big can of worms here, but I think that’s a super interesting topic that I get asked about in the automotive space a lot, and I’m quite certain it applies to healthcare. So anybody who wants to, take a crack at that one.
Kimberly Powell
Yeah, I’ll take it. It is a huge can of worms, without a doubt. But I think it behooves us and is incumbent upon all of us to have a relationship with the regulatory bodies. Just as we need to educate ourselves about different fields so that we can make a difference, we need to help educate the FDA and also be educated by the FDA. And so we’ve gone off and built processes into our own platforms to adhere to medical-grade hardware and software standards. A lot of that has to do with things like documentation. It sounds simple, but it is absolutely imperative to have traceability back to all of the software, which, by the way, is now a lot just to run a single algorithm; there’s a lot of software running underneath that.
And finally, documenting that and passing it on in a way that can be audited is something that we’re doing now. So we’re taking the onus upon ourselves, because we feel like otherwise we could be in the critical path. We can’t just have this whole layered cake that I just described and say, good luck, GE, or good luck, brand-new start-up that has an amazing idea. I mean, it’s about keeping pace with the innovation.
So a couple of things. One is learn it, and if you can apply yourself from a product perspective, you should. And the other thing, which I think the world is learning in real time and the automotive industry learned, is what other methods might be able to be used to help. One of the methods that had to be discovered for self-driving cars was simulation. Because NVIDIA creates games that simulate the natural world, you can apply that same technology to driving your car in a simulated, digital, physically accurate world.
So you can not only create the training data, synthetic data generation, to find that corner case of an unfortunate child who might run in front of the car, but also generate enough of those scenarios that you now have a lot more confidence that your AI is going to catch it. So, synthetic data generation. And then you replay that in a hardware-in-the-loop world: if my car actually saw this, how would it react? And so these systems are ways that we can get an auditable trail, enhance our data with the corner cases that the world doesn’t see very often, and build a more robust development system. So it comes back to that notion of the method.
And so our methods are going to have to evolve. We don’t just make an AI algorithm that can circle a lung nodule, because guess what, when people had COVID and they might have also had lung cancer, it presents completely differently to some degree. So you have to retrain the algorithms for COVID-presenting patients. And so it has to be this ever-evolving loop, and we have to figure out processes and tools in order to facilitate it. So one, I think, is just be active and learn in both directions; and two, think about how technology can actually be applied to make it easier.
Unidentified Company Representative
I think I’d expand on that and just kind of emphasize the education component of it. For us, as we’ve been thinking about developing and actually taking AI-generated molecules into the clinic and into people, a lot of what we’re realizing is that the FDA is just behind where the technology actually is. And so it’s not that they’re worried about anything in particular about an AI-generated molecule, and in a lot of ways we have lots of reasons to believe they’re going to be safer for people. It’s just a fear of the unknown. And so there’s a lot of education, and it goes both ways: us educating the FDA and the regulatory bodies, as well as learning from them what areas they need more confidence in. And so we’ve been working on those types of questions since the day we were founded. As you can imagine, if you’re creating new molecules, new proteins that have never been seen by nature before, our own immune systems have never seen those proteins before. So you could imagine a scenario in which you have a large immune reaction to these molecules. But if we understand that process, we can actually reduce that immunogenicity to a place well below that of molecules that nature has discovered.
So we’ve actually been able to show that we can learn how the immune system responds to proteins and reduce that response to safer levels than what traditional discovery methods have been able to achieve. And so, as long as we can demonstrate that in every possible preclinical model, share it, and continue to educate the regulatory bodies, we’ll be in good shape.
We actually believe that we should be safer in almost every way than a molecule that was discovered through traditional, random mechanisms. Today we’re using all of the same safety and regulatory processes, preclinical models and animal models that traditionally are required before drugs go into people.
But I can imagine a world in which we’ll change that completely, where you could actually get rid of animals and start to do things like simulations, or where you have human organoid models so you don’t need animals anymore. Through the combination of data, AI and new experimental methods, you can start to more faithfully represent what the human body is like. A mouse is not a human, but we use it all the time to simulate what a human looks like, and we can do better.
And so the more we’re able to have that conversation back and forth, as to what we should believe and what we believe these methods can do for us, and also learn where the potential pitfalls or concerns are so we can address them rapidly through this iterative process, the better off we’ll be.
Unidentified Company Representative
I’ll just say, in the life science tools industry we’re lucky enough not to be regulated. And our scientific customers are incredibly fast-moving and embrace innovation and change. So we’re probably one of the fastest areas of application within healthcare for these new compute and AI approaches.
Matt Ramsay
So, Brad, actually talking about that, I’ve heard of spatial being essentially used for companion diagnostics. Even if you use AI to discover the patterns or whatnot, there shouldn’t be any issues with that as long as you go through the regulatory process of validating the markers and...
Unidentified Company Representative
Yes, I think the next several years of spatial biology are largely focused on discovery. Discovering patterns in tissue that might predict who responds to a drug or who doesn’t. And AI will help us make those discoveries. They may or may not be required to scale them into diagnostics. I think we just don’t know that yet. But I think without a doubt it can be a helpful tool in the discovery endeavor and that of course has no regulatory issue whatsoever.
Matt Ramsay
Okay. And then a question for you, Kimberly. In healthcare, you have a unique perspective because you can actually see your customers, and I know there are some sensitivities potentially, but are you seeing most of the interest on the discovery side, or in translational, where you’re taking discoveries and trying to move them into the clinic, or is it in clinical trials, or…
Kimberly Powell
Yeah, I will say that early discovery and discovery broadly have been the very active part. But as of late, there are a couple of other areas that are picking up. So early discovery, which Molly and our platform are really all about: you can apply these generative AI methods to target discovery, to identification of a lead, to now the optimization of these interactions and understanding how they’re going to interact in the body. And so literally every stage of drug discovery has now been affected by the generative AI era. There are models being applied to each, and the combination of these models can potentially replace what’s known today as virtual screening.
And so for early discovery, I think it’s a well-known future that it will be heavily in silico, informed of course by wet lab, but you might go from a lot of the early discovery being, I don’t know what the ratio is, but 80% wet lab and 20% compute, to 80% compute and 20% wet lab. Now, some of the other tools that I was describing are becoming super useful later on, in clinical trials. I mean, you have pharma companies who are sitting on decades’ worth of images.
You can use these same approaches, where unlabeled data can now be fed into these algorithms and they can be trained to do pretty phenomenal things, things that are a lot cheaper than hiring a radiologist to annotate clinical trial data, and frankly probably more precise, because when you’re looking at whether patients are responding in their tumor response, the accuracy of that measurement can be the difference between a responder and a non-responder.
So there’s a lot of potential, I think, opportunity in the imaging space. And then, just as DALL-E and some of these other algorithms have shown us, being able to build multimodal algorithms from patient outcomes and their phenotypic image data is going to produce a lot more translational applications, as well as clinical trial opportunities and efficiencies. And improving that matters: we know that’s the most expensive part of developing drugs, and we know that we’re not very good there, with this 10% success rate after we get into patients.
So that’s partly a function of garbage in from early discovery and garbage out, but it’s also obviously a huge function of how we are choosing and monitoring the patients in the trial. And then I will go further to say, on the commercialization side, of course natural language processing is a huge tool there, to be able to constantly analyze what’s going on in the real-world data, discover the goods and the bads of your medicines, and feed that back into the whole loop.
So while today I think there’s a lot of pent-up focus on early discovery, the other two are really coming in earnest, especially after the ChatGPT moment has happened, and now with the ubiquitousness of being able to apply AI to images. And partially, I feel like we’ve made a great contribution with some of the open-source tools we’ve made in the imaging space, such that the field has really said, we’ve got a lot more we can do with the image data that we have.
Matt Ramsay
Right, I think we have just a couple of minutes left. And again this has been super interesting and I very much appreciate you guys all being part of this. And so I have to sort of ask the devil’s advocate question at the end just to make this fun. One of the things that we spend my team and Semis has spent a lot of time in the autonomous car arena. Different levels of automated driving and all of the different things that have gone into that as a science and as a technology application into a very slow-moving industry in the automotive space.
And if you had a conversation with NVIDIA’s founder Jensen, 10 years ago, I think he — his timeline for when we were all going to be sitting around and not having steering wheels and cars would have been a little bit off. He’s admitted that. And the realities of moving and transforming a huge incumbent industry take time, right?
So my gut tells me, there’s going to be some of that in the healthcare world. There’s huge, huge potential here. I think Jensen has described, AI and healthcare is the next billion-dollar business for NVIDIA. Just for all three of you, what’s like hindering the pace of adoption and innovation is it — do we have too much data and not know what to do with it? Do we not have enough data scientists? Is it the cost of the hardware?
Is it regulation? I don't know, how are you thinking about the factors that could make things move a heck of a lot faster? Because your two companies are making things move quickly, but you're the outliers rather than everyone moving at the pace you are. So let's use that analogy to the auto industry, which just took longer: what do you think are the impediments to AI innovation in healthcare?
Unidentified Company Representative
I mean, I think one of the things is that our customers are scientists, and the scale of data any individual scientist generates is pretty small. Machine learning algorithms need massive training sets, more, honestly, than can be generated by any one individual scientist.
So I think what's required to truly unlock machine learning's potential on scientific data sets is a paradigm of sharing and pooling data that, honestly, is new to that community. Most scientists view their data as precious to themselves. It's proprietary. They want to go back and look at it over and over again. The data they generate tends to be hoarded; they don't have a single repository where all of it goes.
So I think that's one of the rate-limiting steps in the scientific field. Now, one of the areas we're working hard on is creating a single repository for all the spatial biology data ever generated on our platforms to go into, to help create a situation where the data is all in one place and lets us get to that scale. But I do think that in the scientific world, finding data sets at the scale these algorithms need is part of what is rate limiting.
Unidentified Company Representative
Yeah, I'd agree with that. I think it comes down to a cultural component in the scientific community. And it doesn't just exist between companies; it persists even between individual scientists within the same company, which is probably quite surprising to people who aren't familiar with the field. There is this kind of ownership over data across every scientific area.
It's one of the things that, at Generate, we spent a lot of time building from the very beginning: the idea that data is a collective asset. Repeating that mantra over and over and over again, that you can't even really see your data until it goes into a central repository, doesn't sound like a big step. But in many biotech companies you'd look at, that's not happening. In many companies, data lives on an individual computer. It might be in an Excel spreadsheet, and if someone leaves the company, who knows where that data goes.
And so there are a lot of cultural components to it. I think that extends beyond the biotechs to the pharmas, which are ultimately the real commercial developers of the drugs. We spend a lot of time talking to pharma companies about their view of the technology we're developing and how they would use it.
One of the challenges is that, even though we're generating novel proteins, protein engineering has been a field for decades. There are people who have trained to engineer proteins for drug discovery, and one of the cultural changes is getting those people on board with the fact that there's a new way to do things, because they've spent the last 20 years training in their own way of doing it. So there are multiple cultural elements to this that I think will be bigger barriers than all of the other things that are still challenges, like generating lots of data and getting the right regulatory support, as you've seen.
But I think those things will move faster than getting the full community to shift to this tight integration between computation and experimentation, and to the view that there is a new way to do things in a field that has been around for decades.
Kimberly Powell
Yeah, and I guess I'll clean it up to some extent. On the healthcare delivery side of things, I think we can all recognize we have a bit of a systemic problem in the healthcare system itself. I was just talking about this with a friend on the soccer field. Last year, we helped achieve the Guinness World Record for the fastest sequencing diagnosis, with Stanford, on Oxford Nanopore's genomics platform.
That record helped a child who was going to be on death's doorstep soon because of heart failure; he was put on the transplant list and he is alive and well today. It helped a seizing child who was able to be sequenced within hours, early in his life, and learn he just needed a vitamin D supplement. And here is my dear friend on the soccer field talking about his mom going through this diagnostic odyssey; she actually has a mutation that is 80% treatable, but she hasn't known that for the last six months.
So part of it, to me, is that we're making this technology much more readily accessible, with the genome going down to $100, so why can't we make it more of a standard of care? That's why we continuously partner with these clinicians who have a dream of changing that standard of care, showing that it's actually plausible, and creating that change from inside the healthcare system. That's truly a challenge.
Another thought: one unique vantage point NVIDIA does have is that, because we work across so many different industries, you can see how a car and a spatial genomics platform are not that dissimilar in what they need and in how they can become more software-defined. As Brad was telling you, informatics is going to be his innovation engine in the not-too-distant future.
So you have these medical sensing and device companies, 40 or more years old, that have built the platform, but now how can they innovate in a software-defined way and realize huge economies? Some of that is a problem of reimbursement. If the reimbursement isn't there, what would drive them to disrupt themselves, the innovator's dilemma kind of thing? So on healthcare delivery, those problems are not easily solvable, but the more we can make these world records known and advertised so people really see the possibilities, the more you hope it can start to drive meaningful change in healthcare delivery.
And then I have to agree with Molly, speaking just as a tech company. I went to JPMorgan several years ago and had a conversation, I won't say with whom, with a large pharma whose CIO said, we're never going to do AI in-house; we're going to farm it all out. And Jensen, of course, respectfully disagreed and said, why would you ever say that? This is the biggest technology breakthrough of our time. How silly would it be for you not to become well versed in how to use this technology and apply it everywhere?
Of course, drug discovery, pharmaceuticals, is one of the most challenging industries on planet Earth, but get educated on how to use AI: invest in it to some degree and partner to another degree. If you're not doing some of it internally, how would you even pick a good partner? So that's a bit of a humorous story, but I think that mindset is starting to change pretty rapidly.
But it's that notion of change management and culture, tech first, biology second, in some ways. I know that may be a little hard for people to hear, and perhaps a little self-serving, but we believe in it and we hope to contribute to it. And I think the end of last year, in generative AI and biology, was absolutely a milestone, an eye-opening one, and I'm excited to see what this year brings.
Matt Ramsay
Well, thank you, everybody. Hopefully you found this conversation unique and interesting. Thank you to the three panelists, and Steve, for your partnership in pulling this together. And thank you all so much.
Kimberly Powell
Thank you.