Arista Networks, Inc. (NYSE:ANET) The JMP Securities Technology Conference 2023 Call March 6, 2023 12:30 PM ET
Company Participants
Martin Hull - Vice President of Engineering and Platforms
Liz Stine - Director of Investor Relations
Conference Call Participants
Erik Suppiger - JMP Securities
Erik Suppiger
Well, thank you for joining this morning. It's great to see a good crowd here. My name is Erik Suppiger. I'm the infrastructure analyst for JMP Securities. And kicking off this session, we have Arista Networks. To my right is Martin Hull, Vice President of Engineering and Platforms. And to the right of Martin is Liz Stine, Director of Investor Relations.
So, I want to encourage all of you to feel free to ask questions. The object here is to give you an opportunity to take care of any questions that you might have.
But first, I think I'll just let Martin introduce himself a little bit. Give us a little bit of your background.
Martin Hull
Absolutely. So, those of you who listen for more than a minute will probably realize I don't have the traditional American accent. Born and raised in the U.K., and I moved to the States in 2006.
I've been at Arista since 2011. And at that time, we were a relatively small, scrappy start-up competing with the large incumbents. I joined as an individual contributor in systems engineering and moved into product management, which was more where I wanted to be. Since then, the company has grown and gone through an IPO. And at this point in time, I run the Platform Product Management team. The way I like to think about that is everything that Arista makes that you can touch is my problem. Everything we make that you can't touch, which effectively is the software, I have a counterpart or a peer who's responsible for the software development and product management of that. So, if you can touch it, it's my problem. If you can't touch it, it's not my problem.
Question-and-Answer Session
Erik Suppiger
All right. So, let me just start off with a broader question. We've seen tech spending slow down across the sector. Just curious, should investors think of Arista as particularly impacted by tech spending? Or is Arista more insulated? What have you seen from a general demand trends perspective?
Liz Stine
Yes. I think from a general macro perspective, I don't know that we're seeing much difference in the business. If you look at the parts of our business with the cloud guys, we continue to engage with them and have these deep partnerships with them that continue to move forward. On the enterprise side, we are the share takers, the share gainers. It's about new logos. It's about building out the opportunities within those logos, and we continue to work with our enterprise customers as well. I'm not sure that we see much difference. We're very cognizant of it. Obviously, we're watching, and a lot will depend on how deep and how long this macro event may be. But I think from a day-to-day perspective, it's business as usual moving forward.
Erik Suppiger
Okay. Very good. Martin, on the last earnings call, I'm not sure if it was Jayshree or who, but somebody described Arista as basically at the start of the journey in its migration from best-of-breed products to best-of-breed platforms. Can you describe what that means?
Martin Hull
Yes. So, as I said, I joined Arista in 2011, and really the story starts from 2010, so slightly before I got there. If you go back to that 2010-2011 time, we were making high-performance switches, which were effectively single purpose, focused on some data center top-of-rack use cases and low-latency financial trading, the automated trading applications. So, we had effectively a set of relatively narrow, niche products.
But at that time, what we had was this thing called EOS, the Extensible Operating System software. The core of the company is based around that software architecture. That has been extended over the last 12, 13 years. So, now when you look at the products that we have and the use cases we address and the solutions we can offer to our customers, it's far broader. We have our management plane, which is called CloudVision, still based on EOS. We have switches and routers and security capabilities, again, all based on that EOS. So, what we've done over the last 12 years is we've grown from products to systems to solutions, and that has been a journey. And we see that journey continuing.
Some other companies out there in the industry have more of a bolt-on capability. They have different architectures, different software stacks, and they'll offer a different solution for a different vertical. I think we're alone in the industry, at this point in time, in having a single, unified operating system codebase. And why does that matter? Well, the code is high quality, it's reliable, it's scalable, it's open, it's programmable, all the things that customers come to us saying, "Have you got one of these?"
So, the quality, the underlying architecture, the EOS operating system, is what we've kind of built everything else around from products to systems to solutions.
Erik Suppiger
Why are you at the start? It sounds like this has been a strategy for a while. Is it the additional products that are being incorporated? Or what is incremental?
Martin Hull
We say it's a start, it's a continuing evolution. So, I don't think we're done, right? By no means is this game over, in any sense. We continue to grow the number of customer use cases that we address. So, we went from switching to routing. We've most recently expanded into corporate enterprise networks, the campus. So that evolution is part of that. But there are a number of other technologies that aren't yet in scope for Arista as a systems and solutions company, which over time I think you will see us start to address.
Erik Suppiger
Okay. So, one of the big emerging trends is around AI these days. Describe a little bit the opportunity that AI networks represent for Arista. And then, what technology advantage does Arista have in an AI environment?
Martin Hull
Yes. So, you can't go more than a couple of days without reading some new article about AI. AI? Never heard of it. So, yeah, AI is the buzzword of this year, maybe last year, but certainly AI is coming up everywhere. And AI is going to affect every one of us in our day-to-day lives. You can look at the investments that Microsoft or Meta are making, but you also kind of have to step back and say, well, AI is going to affect the creative arts. It's going to affect financial services. It's going to affect medical, healthcare, probably federal, defense, and if it's not already, we just don't know about it. So, AI technology, I think, is going to impact all of us in our day-to-day lives.
But if you then think about what it is that these large cloud titans are doing and the investments they're making, it is a fundamental shift, and we are very early in it. And yes, the articles are talking about the investments that they're making, but those investments have to get deployed and then show the use cases. So, we expect those customers to see the business value to them: in Meta's case, optimized recommendation engines, which get better granularity for the recommendations they're providing, which means that their customers then have a better product. If you look at the Microsoft use case, driving Bing-type automations, but also embedding it in their Azure stack to be able to give people better access to AI as a service, [AIAN] (ph). So, AI as a service, these are all growth opportunities for those two hyperscalers. And there are many other large tech companies out there that will be investing in internal AI clusters. In the financial services industry, it's likely they're going to have an internal AI cluster that works on recommendations, on analytics, on back-end workflows.
So, what does that do for networking? Every one of these AI clusters is a series of very high-performance compute nodes filled with GPUs and TPUs, but they all need to talk to each other. They do that at the beginning of the training process. They do it at the inference stage. There's a compute cluster going on. And these compute clusters are very expensive, but we don't sell them. What's very important is to run them at peak efficiency. So, how do you measure peak efficiency? Well, you've got to make sure that the network utilization is less than 100%. If the network gets in the way, the network is the bottleneck. So, what do we make? We make very high-performance network infrastructure. It isn't just a case of speeds and feeds; it's the architecture of the platforms that are built up into that. Very high density, very high performance, with features and capabilities that give visibility into the traffic and distribute the traffic flows, so that the AI clusters can run at peak efficiency.
And we've transitioned -- Liz and I have been doing this for a couple of decades now. We've transitioned from a world where customers would build an oversubscribed network because it was only going to hit peak utilization very few times. AI clusters are running at peak utilization more often than they're not. And so, it makes sense to actually overinvest in the network infrastructure, to build out more interconnect, more I/O between these compute nodes than they can use. If there's a failure, a link goes down, a system has to redirect its traffic patterns, there's more capacity built into the network. So, we're building out non-blocking, or essentially undersubscribed, networks.
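[Editor's note: the oversubscription arithmetic Hull describes can be sketched as follows. The port counts and speeds are illustrative assumptions, not Arista product specifications.]

```python
# A leaf switch's oversubscription ratio is its total downlink
# (server-facing) bandwidth divided by its total uplink
# (spine-facing) bandwidth. A ratio above 1.0 means the fabric can
# be a bottleneck at peak; 1.0 is non-blocking.

def oversubscription_ratio(downlinks: int, downlink_gbps: int,
                           uplinks: int, uplink_gbps: int) -> float:
    """Return downlink bandwidth divided by uplink bandwidth."""
    return (downlinks * downlink_gbps) / (uplinks * uplink_gbps)

# A traditional enterprise leaf: 48 x 10G down, 4 x 40G up -> 3:1.
legacy = oversubscription_ratio(48, 10, 4, 40)

# An AI-fabric-style leaf: 32 x 400G down, 32 x 400G up -> 1:1.
ai_fabric = oversubscription_ratio(32, 400, 32, 400)

print(legacy)     # 3.0 -- oversubscribed, fine for bursty traffic
print(ai_fabric)  # 1.0 -- non-blocking, for sustained peak utilization
```

The "undersubscribed" networks mentioned above would push this ratio below 1.0, leaving spare spine capacity to absorb link failures and rerouted flows.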
And then, if you look at the systems that we propose into these solutions, our 7800R Series is optimized for the most demanding workloads that are out there. We've said this since we first introduced the R Series, probably a decade ago. Back then, it was big data. It was Hadoop-type clusters. It was file-based systems for back-end media environments. So, if you're a film producer, whether you're rendering or doing 3D graphics, you're going to be able to take advantage of these high-performance systems. These same high-performance systems are now applicable to AI, and we're talking about 400 gig of interconnect. And just as soon as we're able to produce something that's higher than 400 gig, there will be an appetite to consume it.
Erik Suppiger
All right. I don't know if you saw 60 Minutes. Actually, it had a session about AI, and they interviewed Microsoft and some others. The question I have is, one of the things that they realized is there's a lot of wrong answers that come out of an AI environment like Bing. How do you see the investments in AI going at this point? Have they ramped up significantly this year with ChatGPT? How do you see the investment cycle developing?
Martin Hull
So, first of all, we're not responsible for the wrong answers. We just move the packets. This is a journey that I think those organizations have been on for a year or so. I don't want to comment on what they might be saying about their investment cycles. We have been on a journey with them to understand their requirements from a couple of years ago. We're now delivering the systems, and, as I say, we're talking about calendar year '23 into calendar year '24. Yes, we see a ramp-up of AI deployments. But some of those investments, some of those planning cycles, have been underway for, I would say, more than a year, and perhaps they're only now starting to become public.
Erik Suppiger
And they're starting to come out now.
Martin Hull
They're having the public conversations.
Erik Suppiger
Okay. All right. Any questions on the AI front? InfiniBand is something that has traditionally been perceived as a stronger protocol for really high-performance networks. Why is Ethernet getting adopted for AI, and what's changed there?
Martin Hull
So, I won't say that InfiniBand is not getting deployed for AI. For a relatively small or medium-sized AI cluster, either InfiniBand or Ethernet can get deployed. If you look out, there's an organization called TOP500, at top500.org. They rank the top 500 supercomputers in the world, and they do it every six months. And if you look at the infrastructure that is deployed in the top 500 supercomputers around the world, it's about a 50-50 split between Ethernet and InfiniBand. And that 50-50 split has been there for the last few years. So, InfiniBand and Ethernet compete, if you will, for the top 500 supercomputers. Those are dedicated, special-purpose machines that are effectively closed systems. As soon as you want to talk from one of those top 500 supercomputers to the outside world, you're using Ethernet. InfiniBand isn't going beyond the supercomputer infrastructure.
If you're deploying large-scale AI clusters, you're going to want to leverage the investments you've made in Ethernet for all your front-end networks, your edge, your core, the IP protocols, the Ethernet technologies. It's best-of-breed, multi-vendor, open, programmable, and has significant investment from multiple technology companies. InfiniBand is, relatively speaking, a niche backed by a single technology company. And so, I don't see InfiniBand going away, but I don't see it growing from here on out. Whereas as these AI clusters get larger and larger, and we're talking about thousands of compute nodes, you're building out multi-tier networks, you're probably going to prefer Ethernet.
Then the question is, whose Ethernet? And of course, I would say Arista Ethernet. Based on our product technologies, we do believe we've got some advantages, again around the operating system and around the internal capability of those systems to handle the workloads. But generally speaking, I think that Ethernet becomes preferred over InfiniBand for scale-out AI clusters. And then, I'll make my own pitch for Arista versus anybody else.
Liz Stine
If you think along those lines, if you look at some of the initiatives for these larger customers, they don't want to build single-use networks, right?
Erik Suppiger
Yes.
Liz Stine
They want to be able to expand this and have it be part of their general data center compute. And I think a lot of them have talked about that in driving efficiency. And when you think about it, if you're going to be deploying these AI workloads alongside your general data center compute, Ethernet is the commonality there. Being able to manage it, being able to tune it, et cetera, with the same tool sets is extremely beneficial in that case.
Erik Suppiger
Okay. Meta was a star customer of yours, went from less than 10% of revenue, I think, in 2021 to 25% in 2022. Talk for a minute about what changed and why that happened. And then Meta has made comments more recently that they've basically streamlined their future data centers to a new architecture, which will be more cost efficient. And they've talked about cutbacks on their CapEx. Is that something that investors should be concerned about?
Martin Hull
So, let's talk about how we got here first. Yes, in 2021, Meta was a less than 10% customer, so there was no direct reporting of what that revenue was. And this year, yes, it was 26%. What happened was the fruition of many years of work from people like myself, Anshul Sadana, and our sales and engineering teams, engaging with that customer to understand what their requirements were years ahead of needing the equipment. It's a joint development.
And if you go to shows like OCP, the Open Compute Project, you'll see joint announcements, or you'll see very similar types of products. Meta has their version of something, and then they have the Arista version sitting side by side with it. And you can tell the difference. But when they get deployed in their infrastructure, they are functionally the same, to the point where one can be taken out and replaced with the other. So, supply chain, opportunity, their growth, their investment, all of that led to that very large revenue number last year. We are fundamental and essential to their day-to-day data center operations.
Where I think Meta has spoken about their plans with CapEx, it's about efficiency. It's about efficiency, but it may also be about pulling back on some of their more nice-to-have-type projects. We believe we are key to everything they're doing that's essential to their business. And we've talked about AI, and they have said that AI is fundamental to their business. Yes, it's about efficiency. That also goes back to what Liz was saying in terms of having one consistent network architecture that can be leveraged for all the different business stacks that run over those data centers.
So, rather than building out islands where maybe the budget or the CapEx was, "Hey, you can do what you need to do, just get me the results," now it's, "Let's be more conservative with our CapEx, let's leverage the investments we've already made in Ethernet, in high-speed switching, and let's make sure any of the next-generation architectures we put out there can be used interchangeably for different use cases." So, there may be some pullback on the CapEx side of it. We think that our position within that share of the CapEx spend is relatively secure, but of course, we have to let the future play out.
Erik Suppiger
Does that mean that they reuse equipment in which case they would extend the lifetime of a switch that Arista might sell them?
Martin Hull
So, it's not so much reuse as extending the life cycle of that equipment. It's literally almost three years to the day since I was sent home from the office at the beginning of COVID; it was March 9, so we're not quite there. But what we've seen across a lot of very large customers is that the beginning of COVID was, "How the heck do we deal with this?" And then there was a scramble in enterprises, effectively a move to the cloud, not specifically for Meta's benefit, but a broad move to the cloud. And what we saw across a number of our customers was that they couldn't get their hands on enough infrastructure to be able to grow their data center footprints fast enough.
What they weren't doing was going back to existing data centers and refreshing them, taking out equipment that's three years old and replacing it. So, there has been a small amount of technical debt built up, in terms of equipment sitting there that's been installed for a number of years. When they look at that infrastructure and question what the useful life cycle is, adding a year or two years to that has probably already happened for a lot of the equipment. So, there will be some degree of technical debt they've got to go back and pay down to refresh the existing installed base. But that means the equipment they put in over the last three years will then have an extended life cycle as well, because you can only catch up at a certain pace. So, there will be an extension of the life cycles on some of the equipment that has been deployed. But at the same time, think about the efficiency gains and performance gains. If it is the year of efficiency, and I can produce a business case that says, if you allow us to refresh this location or this environment to the next generation, I'll get you a reduction in OpEx that will feed through to the bottom line, then those are the conversations these organizations are having internally about where to make the right investment to get the right upgrade cycle.
We're on the path to 400 gig. Meta, Microsoft, and the other hyperscalers have been deploying 400 gig networks and 100 gig networks for the last few years. That growth is still happening. If you've got existing 100 gig systems, it would be better if they were 400 gig. You have a decision to make: do I leave it in place for one more year and effectively depreciate that asset over a longer cycle? Or do I make the investment now to get the benefit that feeds through to the bottom line? So, there's no single answer to how they do this. There are multiple scenarios.
But again, we have the systems, our 7388 and our 7800R Series, that are optimized for deployment in these next-generation environments. And as they get to the refreshes based on that technical debt, we believe we're very well placed to take share there.
Erik Suppiger
So, we shouldn't necessarily think Meta improving efficiencies means Meta pushing out Arista equipment; it can actually mean incorporating the refreshed version from Arista?
Martin Hull
Yes, I think efficiency can take many forms. And not spending money isn't necessarily efficiency. It's making sure you apply that money in the right ways to get the peak efficiency. They've pulled back, I think, on some of the research-y projects, the nice-to-haves, the pie-in-the-sky, blue-sky projects, but the core fundamentals, I think, are still there.
Erik Suppiger
Okay. We're down to our last three minutes, so I want to make sure we take any questions. More than glad to field any questions here.
So, let me ask how much do you see of your competitors in the cloud titan space? Has that changed in the last couple or few years? And are there any areas within the cloud titans that your competitors have done reasonably well?
Martin Hull
So, the market itself is growing. And we don't believe we're losing market share within that sector, but that's not to say that the Tier 1 competitors aren't also growing their business. With top-line growth, we can all take a fair share of that. Where do we see them? We see them in a dual-sourcing strategy. Every one of the hyperscalers wants to make sure they're not single-sourced, whether it's compute, storage, networking, racks, power, geolocations. And so, there is a dual-sourcing strategy, so you're always going to see your competitors deployed alongside you, above you, below you. And we're an open-standards industry. We're an open-standards company. And so, we have no problem with them being in that data center. We'd just prefer there was less of them and more of us.
But it's a competitive world, highly competitive. We have seen one or two of the other competitors take some interesting approaches, in terms of selling effectively components rather than systems. And there's also been a churn in that some of those competitors come out with "an all-new architecture" that solves all the problems of the previous generation that they themselves chose to sell. So, when they deploy that into an existing customer, I'm sure they can call that a new design win. But really what they're doing is refreshing their own existing equipment.
Arista, going back to when I started, in the olden days, we were putting single-chip systems out there with 64 ports of 10 gig, 640 gigs in total. The latest state-of-the-art silicon is now 51.2 terabits. That's almost a 100-fold increase in the performance of the silicon that goes inside these systems, in a little bit over 10 years.
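[Editor's note: the silicon-bandwidth arithmetic behind that "almost 100-fold" remark works out as follows; the 11-year window is an assumption based on Hull's "a little bit over 10 years."]

```python
# Single-chip switch bandwidth then vs now, per the transcript:
# 64 ports x 10 Gbps = 640 Gbps, against 51.2 Tbps today.
old_gbps = 64 * 10      # 640 Gbps
new_gbps = 51_200       # 51.2 Tbps expressed in Gbps

growth = new_gbps / old_gbps
print(growth)           # 80.0 -- an 80x jump, loosely "almost 100-fold"

# Implied compound annual growth rate over an assumed ~11 years:
years = 11
cagr = growth ** (1 / years) - 1
print(f"{cagr:.0%}")    # roughly 49% per year
```

So the per-chip bandwidth has roughly doubled every two years or less over that span, which is the cadence the "arms race" comment below refers to.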
So, there's always going to be an arms race between us and our competition to get to market, to get tested, qualified, and deployed in these infrastructures. And if you stop the music and freeze at any point in time, one company might have its nose slightly ahead of the other, and then you roll forward six months and it may go the other way. So, it's always going to be an interesting market. Competition is what makes life fun. But I don't see our competition gaining market share from us at this point in time.
Erik Suppiger
All right. Well, we're down to our last seven seconds. So, I'm going to thank Martin for your time. And thank you, Liz, for your time. This has been a great help. And thank you all for joining us for Arista Networks. Thank you.
Liz Stine
Thank you.
Martin Hull
Thank you.