Before we get to our cloud news trends, take a moment to watch this interesting interview about Google Cloud.
Cloud Computing Trends 2022
Live from Boston, Massachusetts. It’s theCUBE, covering OpenStack Summit 2017. Brought to you by the OpenStack Foundation, Red Hat, and additional ecosystem support. >> Hi, welcome back, I’m Stu Miniman, joined by my co-host John Troyer, and happy to welcome back to the program Brian Stevens, who’s the CTO of Google Cloud. Brian, thanks for joining us. >> Glad to, it’s been a few years. >> All right, I wanted to bounce something off you. We always talk about, you know, open source. You worked in the past for what is considered the most successful open source company at monetizing open source, which is Red Hat.
We have posited at Wikibon that it’s not only the companies that sell a product or a solution that make money off open source. I said, if it wasn’t for things like Linux in general and open source, we wouldn’t have a company like Google. Do you agree with that? You look at the market cap of a Google; I said if we didn’t have Linux and we didn’t have open source, Google probably couldn’t exist today. >> Yeah, I don’t think any of the hyper-scale cloud companies would exist without open source and Linux and Intel. I think it’s a big part of the stack, absolutely. >> All right. You made a comment at the beginning about what it means to be an open source person working at Google. The joke we all used to make was that the rest of us are using what Google did 10 years ago; it eventually goes from that whitepaper all the way down to some product that they used internally and then maybe gets spun off.
We wouldn’t have Hadoop if it wasn’t for Google. Just some of the amazing things that have come out of those people at Google. But what does it mean to be open source at Google and with Google? >> You get both, right? ’Cause I think that’s the fun part, is I don’t think a week goes by where I don’t get to discover something coming out of a research group somewhere. Now the latest is machine learning, you know, and Spanner, because they’d learned how to do distributed time synchronization across geo data centers, like who does that, right? But Google has both the people and the desire and the ability to invest on the research side. And then you marry that innovation with everything that’s happening in open source. It’s a really perfect combination. And so instead of building these proprietary systems, it’s all about how do we actually not just contribute to open source, but how do we actually build that interoperability framework, because you don’t want cloud to be an island, you want it to be really integrated into developer tools, databases, infrastructure, et cetera.
And a lot of that sounds like it plays into the Kubernetes story, ’cause, you know, Kubernetes is a piece that gives you some commonality across wherever you place your data. Maybe give us a little bit more about how Google decides what’s internal. I think about, like, the Spanner program, where there are some other open source pieces coming up; it looks like they read the whitepaper and they’re trying to do some pieces. You said fewer whitepapers, more code coming out of people, what does that mean? >> It’s not that we’ll do fewer whitepapers, ’cause whitepapers are great for research, and Google’s definitely a research-strong, academically oriented company.
It’s just that you need to go further as well. So that was, you know, what I was talking about, like with gRPC, and creating an Apache project for streaming analytics, which I think was the first time Google’s done that. Obviously, we’ve been involved for years in the Linux kernel, compilers, et cetera. I think it’s more around what do developers need, where can we actually contribute to areas, because what you don’t want, what we don’t want, is you’re on-premises and you’re using one type of system, then you move to Google Cloud and it feels like there’s impedance.
You’re really trying to get rid of the impedance mismatch all the way across the stack, and one of the best ways you can do that is by contributing new system designs. There’s a little bit less of that happening in the analytics space now, though; I think the new ground for that is everything that’s happening in machine learning with TensorFlow, et cetera. >> Yeah, absolutely. There was some mention in the keynote this morning of all of the AI and ML, I mean, Google with TensorFlow, even Amazon themselves getting involved more with open source. You said you couldn’t build the hyperscalers without them, but do they start with open source, do you see, or? >> Well, I think that most people are running on a Linux backplane. It’s a little bit different in Google ’cause we’ve got an underlying provisioning system called the Borg.
And that just works, so some things work, don’t change them. Where you really want to be open source first is in areas that are under active evolution, because then you actually can join that movement of active evolution. Developer tools are kind of like that. Even machine learning. Machine learning’s super strategic to just about every company out there. But what Google did by actually open sourcing TensorFlow is create a canvas, that community we talk about here, for data scientists to collaborate, and these are people that didn’t do much in open source prior, but you’ve given them that ability to sort of come up with the best ideas and to innovate in code.
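To make that concrete, here is a minimal sketch of the kind of model that now gets shared and iterated on in the open with TensorFlow’s Keras API. The dataset, layer sizes, and training settings are arbitrary illustrations, not anything Google-specific.

```python
# Minimal, illustrative TensorFlow sketch: a tiny classifier on MNIST.
# Layer sizes and training settings are arbitrary choices for illustration.
import tensorflow as tf

# Load a small, well-known dataset bundled with TensorFlow.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

# A simple feed-forward network defined with the Keras API.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=1, validation_data=(x_test, y_test))
```

The point is less the model itself than that the whole workflow lives in open code anyone can fork, inspect, and improve.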
I wanted to ask a little bit about the enterprise, right. We can all make jokes about the enterprise doing what everybody should’ve been doing 10 years ago and only now getting to it. But on the other hand, Red Hat is a very enterprise-focused company, and OpenStack is service provider and very enterprise focused. One of the things that Google Cloud is doing… Well, I guess the criticism has typically been: how does Google, as a company and as a culture and as a cloud, focus on the enterprise, especially bringing advanced topics like machine learning and things like that, which to a traditional IT person are a little foreign?
So I’m just interested in how you’re viewing this: how do we approach the needs of the enterprise and meet them where they are today, while also giving them access to a whole set of services and tools that are actually going to take them into a business transformation stance? >> Sure. As a public cloud provider working with the enterprise, you end up having multiple conversations. You certainly have one of your primary audiences, the IT team, right. And so you have to earn trust and help them understand the tools and your strategy and your commitment to the enterprise. And then you have CSOs, right, and the CEO, who are worried about everything security and risk and compliance, so it’s a little bit different than your IT department. And then, with what’s happening with machine learning and some of the higher-end services, now you’re actually building solutions for lines of business.
So you’re not talking to the IT teams with machine learning and you’re not talking to the CSOs, you’re really talking about business transformation. And if you’re going into healthcare, if you’re going into financial services, it’s a whole different team when you’re talking about machine learning. So what happens is Google’s really got it segmented into three sort of discrete conversations that happen at separate points in time, but all of which are enterprise-focused, ’cause they all have to marry together. Even though there may be interest in machine learning, if you don’t wrap that in an enterprise security model and a way that IT can sustain and enable and deal with identity and all the other aspects, then you’ll come up short. >> Yeah. Building on that. One of the critiques of OpenStack for years has been that it’s tough. And one of the critiques of Google is, like, oh well, Google builds stuff for Google engineers, we’re not Google engineers, you know, Google’s got the smartest people and therefore we’re not worthy to be able to handle some of that.
What’s your response to that? How do you put some of those together? >> Of course Google’s really smart, but there are smart people everywhere, and I don’t think that’s it. I think the issue is, you know, Google had to build it for themselves, right; they built it for search and built it for apps and built it for YouTube. And OpenStack’s got a harder problem in a way, when you think about it, ’cause they’re building it for everybody. And that was the Red Hat model as well; it’s not just about building it for Goldman Sachs, it’s building it for every vertical.
And so it’s supposed to be hard. This isn’t just about building a technology stack and saying we’re done, we’re going to move on. This community has to make sure that it works across the industry. And that doesn’t happen in six years; it takes a longer period of time, and it just means keeping your focus on it. Then you deal with all the use cases over time and then you build; that’s what getting to a unified, commoditized platform delivers. >> I love that, absolutely. We tend to oversimplify things, and, right, building from the ground up an infrastructure stack that can live in any data center is a big challenge.
I wrote an article years ago about how Amazon hyper-optimizes. They only have to build for one data center, and it’s theirs. At Google, you understand what set of applications you’re going to be running; you build your applications and the infrastructure supports them underneath. What are some of the big challenges you’re working on, some of the meaty things that are exciting you in the technology space today? >> In a way, it’s similar, it’s just that at least our stack’s our stack, but what happens is we then have to marry that into the operational environments, not just for a niche of customers but for every enterprise segment that’s out there. What you end up realizing is that it becomes more of a competency challenge than a technology issue, because the cloud, the public cloud, is still really new.
It’s consolidating, but it’s still relatively new when you start to think about these journeys that happen in the IT world. So a lot of it for us is really that technical enablement of customers that want to get to Google Cloud, but how do you actually help them? And so it’s really a people-and-processes kind of conversation rather than how fast is your virtual machine. >> One of the things I think is interesting about Google Cloud is the role of the SRE that has developed. Google invented that, wrote the book on it, literally, is training others, and has partnerships to help train others with their SREs and the CRE program. So many of the people formerly known as sysadmins, in this new cloud world, some of them are architects, but some of them will end up being operators and SREs.
How do you see the balance in this upskilling of the architecture and the traditional infrastructure capacities, and of app dev versus operations? How important are operations in our new world? >> It’s everything. And that’s why I think, you know... What’s funny is that if you do this code handoff, where the software developers build code and then hand it to a team to run and deploy, the developers never become great at building systems that can be operationally managed and maintained.
And so I think that was sort of the aha moment. As best I understand the SRE model at Google, until you can actually deliver code that can be maintained and kept alive, the software developer owns that problem. The SRE organization only comes in at the point in time where they hand it off, and they’re software developers. They’re every bit as skilled software developers as the engineers that are building the code; it’s just that that’s the problem they want to solve, which I think is actually a harder problem than writing the code.
’Cause when you think about it for a public cloud, it’s like, how do you actually make a change, right, but keep the plane flying? And make sure that it works with everything in the ecosystem, at a point in time where you never really have a validation stage. Because in the land of delivering ISV software, you always had the six-month, nine-month evaluation phase to bring in a new operating system or something else, with all the ecosystem tests around that.
Cloud’s harder; the magic of cloud is you don’t have that window, but you still have to guarantee the same results. One of the things that we did around that was take a page out of the SRE playbook, which is how Google does it. What we realized is that, even though public cloud has moved the layers up, enterprises still have the same issue, because they’re deploying critical applications and workloads on top. How do they do that, how do they keep those workloads running, and what are their mechanisms for managing availability and service level objectives, with shared dashboards? That’s why we created the CRE team, customer reliability engineering, which is the SRE playbook, but they work directly with end users.
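The service level objectives Brian mentions come with some simple arithmetic behind them. Here is a minimal sketch of the error-budget calculation popularized by the SRE book, using made-up numbers for the target and request counts:

```python
# Illustrative error-budget arithmetic for a service level objective (SLO).
# The 99.9% target and request counts below are made-up example numbers.
slo_target = 0.999                 # promise: 99.9% of requests succeed
total_requests = 10_000_000        # requests served this month (example)
failed_requests = 4_200            # requests that missed the objective (example)

error_budget = (1 - slo_target) * total_requests   # failures we can "afford"
budget_consumed = failed_requests / error_budget

print(f"Error budget: {error_budget:.0f} failed requests")
print(f"Budget consumed so far: {budget_consumed:.0%}")
# If budget_consumed approaches 100%, an SRE or CRE team would typically slow
# down risky changes until reliability recovers.
```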
And that’s part of the how do we help them get to Google Cloud; part of it’s really understanding their application stacks and helping them build those operational procedures, so they become SREs, if you will. >> Brian, if you look at OpenStack, it’s really the infrastructure layer that it handles. When I think about Google Cloud, the area where you’re strongest, and you’re welcome to correct me, is really when we talk about data, how you use data, analytics, and the leadership you’re taking in the machine learning space. Is it okay for OpenStack to just handle those lower levels and let other projects sit on top of it? I’m curious where Google Cloud sits. >> I think that was a lower-level aha moment for me, even prior to Google, because I did have a lens and it was all about infrastructure. And I think the infrastructure is every bit as important as it ever was. But some of these services that don’t exist in the on-premises world and that live in Google Cloud are the ones that bring transformative change, as opposed to just easing the operational burden or easing the security burden.
It’s some of these add-on services that really change things here, that bring about business transformation. The reason we have been moving away from Hadoop, as an example, not entirely, is just because Hadoop’s a batch-oriented application. >> You could go to Spark, Flink, everything beyond that. >> Sure, and also now, when you get to real-time and streaming, you can have ingest data pipelines, with data coming from multiple sources, but then you can act on that data instantly. A lot of businesses require that, or ours certainly does and I think a lot of our customers’ businesses do; the time to action really matters. And those are the types of services that, at least at scale, don’t really exist anywhere else, and machine learning, the ability of our custom ASICs to support machine learning. But I don’t think it’s one versus the other; I think it brings about how do you allow enterprises to have both, and not have to choose between public cloud and on-premises, or doing (mumbles) services or (mumbles) services. Because if you ask them, the best thing they can have is actually to marry the two environments together so you don’t get, again, back to those impedance differences.
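Here is a hedged sketch of what such a streaming-style pipeline can look like, assuming Apache Beam (the open-sourced programming model behind Cloud Dataflow). The events, timestamps, and 60-second window below are invented for illustration, and a real job would read from a streaming source rather than an in-memory list.

```python
# Illustrative Apache Beam pipeline: windowed, per-key aggregation.
# The in-memory events and the 60-second window are invented for this sketch;
# a real streaming job would read from a source such as Pub/Sub or Kafka.
import apache_beam as beam
from apache_beam.transforms import window

events = [
    # (event_type, count, event_time_in_seconds) -- invented sample data
    ("checkout", 1, 5.0), ("search", 1, 12.0),
    ("checkout", 1, 64.0), ("search", 1, 70.0),
]

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "CreateEvents" >> beam.Create(events)
        | "AttachTimestamps" >> beam.Map(
            lambda e: window.TimestampedValue((e[0], e[1]), e[2]))
        | "FixedWindows" >> beam.WindowInto(window.FixedWindows(60))
        | "CountPerKey" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)
    )
```

Unlike a classic batch job that waits for the whole dataset, each window can be aggregated as its data arrives, which is the time-to-action point being made.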
Yeah, and I think that’s a great point; we’ve talked a bunch about OpenStack fitting into that hybrid or multi-cloud world. The challenge, I guess, is those really cool features that are game changers, that I have in public cloud but can’t do in my own data center. How do we bridge that? We’ve started to see the reach, or the APIs, that do that, but how do you see that playing out? >> You don’t have to bring them in. Because if you think about the fabric of IT, Google’s data center in that way just becomes an extension of the data center that a large enterprise is already using anyway.
So it’s through us. They aren’t going to see the lines of distinction; only we, and sort of the IT side, see that. It isn’t going to be seen, as long as they have an existing platform and they can take advantage of those services. It doesn’t mean that their workload has to be portable and the services have to exist in both places; it’s just a data extension with some pretty compelling services. >> I think back, you know, Hadoop was let me bring the compute to the data, ’cause the data’s big and can’t be moved. Look at edge computing now: I’m not going to be able to move all that data from the edge, I don’t have the networking connectivity. There are certain pieces which will come back to, you know, a core public cloud, but I wonder if you can comment on some of those edge pieces, how you see that fitting in? We’ve talked a little bit about it here at OpenStack, but ’cause you’re Google. >> I think it’s the evolution. When we look at just even the edge of our network, it’s in 173 countries and regions globally.
And so that edge of the network is full compute and caching. And so even for us, we’re looking at what sort of compute services you bring to the edge of the network, where low latency really matters and proximity matters. The easiest, obvious examples are gaming, but there are other ones as well, like trading. But still, if you want to take advantage of that foundation, it shouldn’t be one where you have to dive into the specifics of a single provider. You’d really want that abstraction layer across the edge, whether that’s Docker and a defined set of APIs around data management and delivery and security; that probably gives you that edge computing sell, and then you really want to build around that on Google’s edge, and you want to build around that on a telco’s edge. So I don’t think it necessarily becomes a question of whether it’s centralized or it’s the edge; it’s really what’s the architecture to deliver it.
All right. Brian, I want to give you the opportunity for the final word, things either from OpenStack retrospectively or Google looking forward that you’d like to leave our audience with. >> Wow, closing remarks. You know, I think the continuity here is open source. And I know the backdrop of this is OpenStack, but it’s really that open source is the accepted foundation and substrate for IT computing up the stack. I think that’s not changing; the faces may change and what we call these projects may change, but that’s the evolution, and I think there’s really no turning back on that now. >> Brian Stevens, always a pleasure to catch up with you. We’ll be back with lots more coverage here with theCUBE, thanks for watching.