
September Coffee Break

Ain't Nobody Got Time for Managing Storage: A History of Simplifying VMware Storage
This webinar first aired on September 8, 2021
00:00
I think we'll pull down the music. I know that Adriana will keep me honest if I left off any housekeeping items. We've had the pleasure of being together a bunch of times before. So, Andrew Miller, Principal Technology Strategist at Pure. There's your free kangaroo picture for the month.
00:14
I'm not going to reintroduce myself because we've got a special guest here, Cody. And actually, Cody, if you don't mind, I will just turn it over to you and let you introduce yourself while I grab a cough drop, because I think I'm gonna need it here. Absolutely, sure. And I always like that kangaroo picture, because I always like to crop you out of it and then say the kangaroo is Andrew Miller in the slide.
00:34
But yeah, it's great to be here. I'm fairly easy to find on the interwebs at @codyhosterman. There's only one other Cody Hosterman out there, and I'm not the 28-year-old woman from Wisconsin; I am myself. So it's fairly easy to figure out which Cody Hosterman I am.
00:54
At Pure, I am a director of product management, and that means two different things. I'm in charge of our VMware strategy, so when it comes to integrations, reference architectures, the solutions team, what we're doing with VMware from a technology perspective. And then I'm also in charge of our cloud offering in Azure and AWS, what we call Cloud Block Store.
01:14
But of course, today we're gonna be a little more focused on the VMware side of that fence. My background: I've been at Pure for getting close to eight years, and I've been in various parts of the VMware world at Pure for that time. Prior to that, I was at EMC, in Symmetrix engineering, focused on VMware best
01:34
practices and solutions on that storage platform for about six years or so. And I'm currently also on the SNIA board of directors, so, you know, helping the Storage Networking Industry Association with specifications, cross-vendor relationships, how we can really push storage and the technologies around it,
01:55
NVMe, et cetera, from an industry perspective and not just from a specific vendor. So that's been a fun thing. And for my spare time, which I don't have much of recently, Andrew: I just had my first child, my wife and I did. He is two months old, and so we haven't done a lot of backpacking.
02:13
But we did take him for his first hike, and as hikes go, it was a stroll around the park. You can go on some significant hikes there; we did not, but it was fun. That is something that my wife and I really like to do, and that's probably my defining thing over the past few years outside of work: being
02:30
a family and in the mountains as much as possible. There's a good number of Twitter pictures I think I've seen over the years, and I'm sure you're gonna be bringing your son. But yeah, it's the whole phases-of-life thing; what we do casually for fun just changes, it's not the same, and you're like,
02:45
huh, this is different now. OK. So, diving into the topic. Thank you. You're gonna notice already there's a history here that Cody's gonna be going through. It's in some ways a personal history as much as a VMware history, because he's lived so much of this, as you were mentioning back with EMC.
03:02
I think when we were actually preparing for this, there are some 400-, 500-, 600-page documents that have your name on them still floating around the internet somewhere. They're older in this case, and we'll talk about why that is. But to the topic: ain't nobody got time for managing storage.
03:17
You might know the meme that goes with that. There's a fun auto-tune out there; don't go listen to that now, please, you can listen to it afterwards if it makes you happy. But really, it's about simplifying VMware storage, because I think for many of us, from a career standpoint, I mean, whose career here hasn't been affected by VMware,
03:35
and overall positively, right, kind of thing. I mean, any human company and product has bugs and things, and that includes Pure too, but it's just such a pervasive force in the industry. So we want to follow a similar format to what we usually do; you're probably used to this somewhat. The first half is gonna be more educational. We're gonna talk through,
03:51
you know, looking back at the history of all things VMware and storage; there's a pile of things in there, and I won't steal the lede. Next, looking forward to where VMware and storage is going. And then third, we'll start to move into the Pure pieces roughly around the halfway point. You know, the goal is to make storage invisible,
04:09
invisible storage. I want to give a shout-out to Marsha Pierce, one of the first folks I heard that from, but it's a common concept inside Pure, as well as then all things Pure plus VMware. There are so many integrations; if you're hoping that we're gonna go deep on them,
04:22
it's not possible in 40 minutes, I think, because there are so many of them; they've grown over the years that you've been here, Cody. So, to start out. Oh, and please, we have folks to help with Q&A. Thank you, David and Don, and potentially other folks joining.
04:36
Please feel free to put questions, whether they're about Pure or VMware in general, in the Q&A section, and of course use the chat. We'll do our best to watch, but there are a lot of you here, so we'll do our best. But diving in: looking back at the history of all things VMware and storage. I almost feel like if we're not careful, we could go forever here.
04:59
There's everything from performance bottlenecks to queuing to UNMAP. I don't know where you want to start here, but I'm even thinking of, you know, when you first started at Pure and you're kind of coming in the door a little bit. But wherever you want to go from a historical standpoint, please.
05:19
Well, on the first day, man created SCSI. But, well, I think there's a variety of ways to start this conversation, and honestly, that's where it often starts. You know, we'll get into some of the newer technologies around NVMe over Fabrics and things like that, but I think understanding the value of what
05:42
things like that bring, it helps to have some historical context on what's been done in the past, where these bottlenecks were, what the problems are, right? And simplifying storage isn't just about performance, it isn't just about provisioning, it's also about overall management, right? Not just, oh, can I connect the volume,
06:00
how hard is it to create that volume, but how hard is it to get the storage to what matters, right, the application, whatever that might be. And so there have been a lot of common threads that we can kind of dig into. But I think a place that makes sense to start is performance, right? That's where a lot of these things
06:20
eventually come down to, I suppose. So, and we're talking about block storage in particular here, right? In the early days, there were limitations around VMFS. If you remember, the capacity limit was two terabytes minus 512 bytes. That was the limit, and why was that the limit?
06:41
Well, there were a lot of reasons. One of the reasons was locking, right? Many applications even today use something called SCSI reservations, and what a SCSI reservation does is, when you make a change to that file system, the host takes that reservation so no one else can make a change at that time. Right? Because they're like,
06:59
I wanna make sure that nobody gets in my way when I'm doing whatever I'm doing. Yeah, not great, right? You know, I get that question a lot, like, with active-active replication, how do we prevent hosts from writing to the same block? Like, we don't, right? You know,
07:12
that's what the file system is for, and that's what things like reservations are for. The problem with it is that it's a fairly large gun, in the sense that it locks the whole thing, and that was fine for individual applications using a block volume.
07:28
But it wasn't great for something like VMware, where there's a ton of VMs, right? Any time you powered one on, you made a change, you edited it, added a virtual disk, whatever, it took a lock. And while that lock was there, all other hosts in that shared cluster could not do anything to that volume.
07:47
And so VMs would pause and be stunned, and this is what limited cluster size, this is what limited how many VMs you could put on, this is what limited what you could do, how many concurrent operations; all these limits and things that were there, and many of them still are, really came down to that reason. Shared storage is hard, clustered file systems
08:06
are difficult, and that's really where it all started. We've been working from that problem for many, many years. I mean, I'm thinking now of VMFS-3, VMFS-4; there was sizing stuff there, there was locking, our best practices when I was hands-on implementing VMware around how many VMs per datastore and the optimal datastore
08:28
size and how many things you carve up. And all of that ended up adding some level of overhead, much less even thinking about how DRS and HA fit in there, and Storage DRS. But man, this has taken me back, in a good way, mostly a good way. Yeah, it's good;
08:47
it's always fun to reminisce, right? Especially about problems that we've solved. And speaking of that, this is where VMware introduced something, I was at EMC at the time, called VAAI, right? This was introduced in 4.1. I remember we did a technical preview hands-on lab, back when you did in-person, instructor-led hands-on labs at VMworld. And we actually did a hands-on lab on this
09:08
before it was released, and I don't think we got approval to do it. But we did it, I guess, you know. Yeah, who pays attention to these things? But anyway, we did a hands-on lab showing these introductions, and there were a couple of things: XCOPY and WRITE SAME and UNMAP.
09:24
But honestly the most important one, really, was something called atomic test and set. It has many names; people call it compare-and-swap, and hardware-assisted locking is the marketing term for it. But really what it allowed VMware to do is lock very small pieces of the metadata,
09:46
just the pieces of metadata that the host, ESXi, needed to be able to make its change. And what this allows is concurrent access, right? Concurrent access means more hosts, more VMs, higher density, et cetera, et cetera. But you know what this does, you know what the fundamental piece around performance management across the board is: you never really get rid of a bottleneck,
10:10
you just move it somewhere else, you make it someone else's problem. Now, that bottleneck might get smaller or the bottleneck gets larger, right? It depends on how you want to look at it, I suppose. But it gets moved somewhere else. And so what did VMware do? They're like, hey, 64-terabyte datastores,
10:25
isn't this fantastic? And the storage platform is like, no, no, no, no, we can't push the kind of workload that that many VMs might push to 64 terabytes. Not only because of, you know, spindles, right, what can support that behind it, because datastores were generally mapped to a certain number of spindles,
10:43
but also the complexity of provisioning the backend storage to meet an unknown workload, right? When you put however many VMs together, that workload is not known; it looks totally random. This was a problem. This was a problem. I'm thinking IO Blender; I like old terms like IO Blender.
11:02
I'm thinking, you already alluded to this, that all that we do in architecture is play move-the-bottleneck. That's basically the game that we play all day, every day, kind of thing. And sometimes we don't even know where we moved it to; it's like the bottleneck disappears and then you find it again in a different place that you weren't planning on.
11:17
Yeah, that's exactly, exactly what happens. And so this was a problem, and so I spent a lot of my time back then, and friends of mine at other companies at the time spent a lot of time, figuring out: all right, how do we write the best practices to say, OK, if you're doing this, you have this kind of workload,
11:33
this is how you configure your LUN, this is the RAID you should use, this is all that type of stuff. Kept me busy, gave me a great job, you know, whatever. And then eventually someone at Pure reached out to me, like, hey, do you wanna come to Pure and do some VMware stuff?
11:48
And I'm like, yeah, why not? Sounds fun. Let's do something different, let's make a change, right? And so I decided to go to Pure, and I would love to say that back then I was like, yeah, I knew Pure was gonna become what it is.
12:03
But honestly, you know what, I was in my twenties; I was like, sounds fun. That's about as much thought as I put into it. Sounds different, sounds fun, I like their marketing, let's do it, you know. I figured I could give it a try.
12:17
It'll be all right; even if it doesn't go well, I can find something else. I wasn't thinking... I mean, honestly, six years before that I was still working at a video store, so I hadn't put a lot of thought into my career at that point in time anyway. So it was kind of like, well, whatever comes. I should wear my Hollywood Video t-shirt;
12:32
I have that sitting around here somewhere. Another time. Anyway, so I came to Pure, and what was really interesting about Pure was that everything I knew about storage I had to throw out. All this stuff I spent all day long on, like what kind of RAID do I have, do I turn on compression or not, how do I design for a VDI reference
12:52
architecture, right, which is a whole conversation: it didn't matter, right? The performance wasn't tied to a volume, it was tied to the array. And so a given volume could offer up the performance of the array when and if that performance was needed, right? There was no manual reconfiguration and changes.
13:08
And so what arrays like the FlashArray from Pure did is, they're like, OK, VMware, point taken, right back at you, here's the next bottleneck, right? We can now take that performance, we can actually fit thousands of VMs on a single datastore. And so that, exactly, exactly. And so that really simplified it, and personally,
13:30
what this allowed me to do was, I was like, OK, I don't need to spend my time figuring out how to provision a LUN anymore, or rather telling people how to do it or teaching people how to do it, et cetera. I can focus on and learn new things, right? I can start learning about the vSphere stack and get deeper into how the VMware storage
13:49
pieces work, right? And I can find other ways to help our customers, and that kind of moved the conversation a little bit, from my career perspective, from performance management to efficiency. How can I improve capacity usage? How can I allow my customers to take advantage of the efficiency that things like the FlashArray
14:09
offer? So, I'm gonna tie a little bit of a bow on number one. I feel like we can't help it; all the topics are interrelated. We're gonna pause a little bit on the VMware plus Pure Storage pieces, because there's so much goodness there. But we'll set that to the side and come back to
14:27
it in a second; for anyone listening, like, hey, are you gonna go there? We will, I promise, we will. It's our job. But if we think about what the future brings from a VMware-specific standpoint, obviously there can be more around VMFS, but frankly it doesn't sound like that's as much of a bottleneck right now. I mean, VMFS is pretty solid.
14:43
But then we start thinking about, and we're going to go into a little bit of acronym heaven now: if you're listening, you're probably thinking about vVols, you're probably thinking a little bit about NVMe, maybe you're thinking about stuff in the cloud and the storage pieces there. Do you mind taking us forward into the future? And if I cut you off too soon on number
15:01
one, feel free to backtrack too. No, that's totally fine. You lobbed it right over the plate for me, and that's fine. Like I said, if you let me keep going, I'll just go in all kinds of weird directions, so you've got to keep me on it. So I appreciate that. So yeah, let's talk about what's on number two
15:15
before we get into some of those other details. Yeah, so performance, things like that: great, the bottleneck was thrown over the fence. But so what's the next step, right? Are we done? 64 terabytes, you can push everything on there? Absolutely not. No, not really.
15:29
There are other problems that we now need to focus on, right? What about integration across the growing ecosystem of VMware products? Do we need to create a plug-in for every little product, every small and medium and big product that they have, right? How do we integrate the provisioning and
15:45
management of the underlying storage infrastructure? How do we allow VMs to escape from some file system that they're arbitrarily put together around? And how do we open up the next level of performance bottleneck down to the storage, right? And so there are a couple of pieces here that are
16:02
all converging, right? One was around management, and this is where vVols came from. vVols was mentioned, I think initially in 2011 at VMworld; it was called VM-granular storage, like a preview type of thing. I remember sitting in that VMworld session thinking, this is perfect; this solves the questions I get asked
16:24
every single day, right? But one, it was just an idea back then, and two, storage platforms were also not ready, because enabling vVols, automated provisioning and policy-based management, required simplicity in the storage. Because one of the important things around VMware storage, in my opinion, is that I don't want to make VMware administrators also storage experts. I don't want to make them necessarily even
16:48
storage administrators. I want them to be able to leverage storage and use it, but I want them to focus on other pieces, right? So it's not just shifting the onus onto them, like, OK, now you figure out all the configurations in your policy; the storage has to also be simple, right? It's not just about getting the provisioning to
17:05
the person that's using it, but also making sure they don't have to understand every bit and byte and block size and RAID protection and feature that doesn't have to do with the essential protection level, right, like replication, beyond that. And so vVols is where VMware is investing. VMFS is fairly feature-complete, right? There's not much more you can do with it.
17:30
I always look at it like this: when you're looking at problems with an existing solution and you come up with a solution to those problems, here is a new way where these problems no longer exist. I think sometimes what happens, and I would say that admittedly this happened with vVols, is, OK,
17:52
vVols is fundamentally better in all these other ways, but VMFS still had some advantages, right? Around, well, it's supported by Site Recovery Manager and vVols wasn't; this wasn't done yet. But the question is, are those setbacks fundamental problems, or things that can be fixed by adding support,
18:11
right, an architecture thing or an implementation thing over time? Exactly. And that's where the problems were, right? Support for SRM and things like that. But the fundamental problems were with VMFS: the lack of granularity, the lack of integration into the storage plane. That's where the fundamental problems
18:29
are, and there's not much we could do about that. And so this is why VMware, this is why Pure, we're investing heavily in it. I think, you know, I don't think I've done this before, but I also want to draw a parallel there with vVols: OK, we're taking out that VMFS abstraction layer, kind of thing, and there's so much more
18:46
granularity there. And actually I think we can draw an interesting parallel to NVMe, in that we're taking out an abstraction layer, SCSI. We're letting things talk more directly and far more granularly, because you've got, I don't know, a bazillion NVMe lanes, something like that,
19:02
give or take, kind of thing. And that's been playing out in VMware's storage strategy too, right? Yeah, absolutely. NVMe is a critical part around performance and scale. I like to talk about it in a couple of ways, right? Putting SCSI in front of an all-flash
19:19
array is kind of like having a football stadium, soccer stadium, whatever you wanna call it, with one entrance, right? Could you fill that stadium with one portal, one gate, whatever? Sure, right, but it's gonna take some time. And if you have a game starting at one
19:37
o'clock, you wanna make sure everyone's in there, so you start loading it seven days beforehand. No; people start coming to the game like a minute before, sometimes, right? And so SCSI is kind of like that: having very limited lanes. What NVMe over Fabrics does, right, what NVMe does really when it comes to flash in general,
19:56
and NVMe over Fabrics extends it to the host, is that it basically provides not only multiple gates or entrances to that stadium, but every seat in the stadium has its own private entrance, right? And so NVMe over Fabrics is not just about performance density as much as it is about general latency and performance and CPU efficiency on the host side, et cetera, et cetera.
20:21
Mhm. And the cool thing there is, and now this is just a shameless Pure plug, but hey, we can toss one in wherever we want to, I think, is that part of the reason we like this is we've been experimenting with NVMe where we've had the need, even ahead of some of the specification being as developed or as pervasive, inside FlashArray, because we had needs for that, and inside FlashBlade too;
20:39
we actually had a need sooner inside our architectures for the problems that NVMe solves. But we always knew it was gonna go from NVRAM to the DirectFlash modules to out the front end; these principles were gonna apply. But that's the whole, I think it's Arthur C. Clarke, like, the future is here, it's just unevenly distributed, something like that. I just butchered the quote. But whoever wants to look it up, find it and
20:59
put it in chat; you know, you can find it. Yeah, I mean, it's really true, and it's NVMe and flash, right? They're really essentially meant for one another. And as I said, we did it across the stack because we were starting to run into the same
21:14
problem with our fast flash drives that we ran into with spinning disks as an industry, right? Because of what was in front of them, what was in front of the storage: in the case of the spinning disk, it was the spindle, right? In this case, it was the form factor and the protocol accessing it, such that we
21:34
couldn't take advantage of what it had to offer. And we're like, well, we have to start giving them more capacity just to get more performance. That's antithetical, right, to what we're trying to do with flash. And so removing that internally was key and critical, and then also getting that access out to the host was the next step.
21:52
And it's not just about the concurrency to the front end of the array, or the reduction of latency because of how it's architected. Honestly, a lot of the significant benefit from a host perspective is the reduction in CPU cycles spent waiting on the storage and managing that pathway. And so NVMe is important for a variety of
22:10
reasons, and depending on what you're trying to do, you might get different benefits out of it. There's one really cool application there, and I've seen this with NVMe in general, or even some of the stuff we do around Optane: anything that actually has a latency-reduction impact. Sometimes it's not just about, oh, latency, you know, put on our storage propeller heads; it's actually about reducing the amount of CPU
22:30
that's stuck in I/O wait state. I was thinking of Linux and top there, kind of thing. And if that happens, that may drop the number of CPUs you need, and for very expensive per-CPU-licensed applications, now we're into financial impact. That can be NVMe, it can be Optane stuff, whatever. But there's this interesting thread to
22:47
pull at a financial level there, beyond, like, oh, parallelism, cool tech, kind of thing. Mhm. I think with that... Oh, and actually, kudos to Steve; I'm always going to be careful about not saying people's last names, just in case, but he found the quote: the future has arrived,
23:02
It's just not evenly distributed yet. So, thank you. You know, this is the power of having a lot of people on and people looking stuff up. So, thanks for playing though. If we pull the V Valls thread, I'm actually gonna use that to bridge into section number three and we were kind of going here. I think I'm gonna toss you two softballs to
23:19
start it off, you know, what a surprise. First, there is this very cool quote from a VP at VMware, whose name escapes me, which is awful, about how Pure actually has the number one deployed vVols implementation, because we've actually been able to take a good number of the things that were challenges before and help make them better. As well, I remember
23:40
it really struck me as kind of humorous, Cody, when you said you started and you looked at how simple FlashArray was, and it was almost like, what do I do now? Kind of like, all the stuff I used to do, I don't need to do anymore. So with that, I think I'm gonna move into FlashArray, simple enterprise storage, and I'm gonna wander through this in whatever order.
24:01
So back to you. Yeah, no, it's true. I had an existential crisis around my job, right? I was like, well, I can't write 500-page documents about this anymore, so, like, what now? So what do I do, right?
24:16
And I realized, you know what, all these things I'd tried to figure out in the past, where I'm just like, well, I've got to solve one problem, the complexity of the storage, or maybe the complexity of the orchestration layer, and I didn't know how to use APIs and all that stuff; I'm like, well, I've got to focus on my job, so I can't really invest much time into that stuff, therefore very little progress is being made,
24:34
right? The simplicity around that really changed the story around what I could invest my time in. In the end, I think the value that I ended up bringing to Pure was greater than what I brought to EMC in the past, just because I could focus on higher-level things, right?
24:55
I could scale that stuff out, so that simplicity made a particular difference in my own career and my ability to contribute, because of where I could focus. And so that was, I think, a fairly enlightening moment for me, and it's one of the reasons I'm still at Pure, if you will, because I get to focus on a lot of fun things and do some cool things,
25:12
as you just noticed, like doing some cloud stuff as well, not just specifically focused on VMware today. So in that, there's two pieces there: one is how it was simple enough to make that true, and then all the cool stuff that you did with the time that came from that, when you were like, OK, this is what I'll do to justify getting a paycheck and staying here, kind of thing.
25:32
So if we start with the FlashArray being simple: now, if you're looking at this, you'll notice the cost-optimized, availability-optimized, performance-optimized headings; those are very intentionally chosen, because sometimes when you're looking at some software-defined storage, or pieces where you're the architect, you have to make explicit choices about which one you're optimizing for,
25:49
right? There are underlying features that you have to turn on and off for that, or it's just how they work architecturally. So I think I'll just put this up, and I'll try to follow along, if you don't mind, Cody, just walking through each one of these in order, and I'll mostly shut up here. So yeah, I mean, I think this,
26:06
I mean, this first one was the key thing for me, right? The FlashArray has a couple of important and critical architectural designs, right? One is that data reduction, dedupe and compression, is always on and it's always happening, right? And so whether you have 10 volumes,
26:25
one volume, 100 volumes, the overall data reduction is the same, right? It doesn't really matter what your block size is; you don't tune it to get better data reduction. I get the question a lot: hey, can my customer improve their data reduction?
26:39
It's like, no, that's what Purity does, right? It will do that. There is really nothing you can do to optimize or tune it; it is what it is, right? And generally that ends up being around 5:1; VDI goes closer to 10:1, 15:1, et cetera. And we do it a couple of ways. One, it's on a 512-byte sliding scale, meaning that we dedupe on a 32K
27:00
chunk, but we analyze every 512 bytes to find the best match, and it doesn't have to be exactly that size, but it's somewhere in that range, right? So basically we can analyze the data coming in: is this a hit? No? OK, let's slide it over by another 512 bytes. Is this a hit?
27:14
No? Let's slide it over, slide it over, right? And this allows us to do anywhere between 4K and 32K on that dedupe angle, by shifting in 512-byte increments. And that gives us really, really great dedupe. So whether you have old apps that are partitioned weirdly, or you have very different I/O or block sizes, it doesn't matter, right?
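To make that sliding-window idea a little more concrete, here is a rough, hypothetical Python sketch of the matching loop described above: fingerprint a chunk, and if it has not been seen before, slide the window forward 512 bytes and try again. The chunk size, hash choice, and data structures are illustrative assumptions only; the real Purity implementation is inline, uses variable chunk sizes, and verifies matches before deduplicating.

```python
import hashlib

SECTOR = 512          # sliding-window step size in bytes
CHUNK = 32 * 1024     # candidate chunk size (illustrative; real chunks vary from 4K to 32K)

def find_duplicate_chunks(data: bytes, seen: dict) -> list:
    """Slide a CHUNK-sized window over incoming data in SECTOR-sized steps.

    'seen' maps fingerprints to the offset where that chunk was first stored.
    Returns (offset, original_offset) pairs for every duplicate window found.
    Purely illustrative of 512-byte-granular matching, not how Purity works internally.
    """
    hits = []
    offset = 0
    while offset + CHUNK <= len(data):
        fingerprint = hashlib.sha256(data[offset:offset + CHUNK]).hexdigest()
        if fingerprint in seen:
            hits.append((offset, seen[fingerprint]))  # duplicate: reference the earlier copy
            offset += CHUNK                           # skip past the deduplicated region
        else:
            seen[fingerprint] = offset                # remember this chunk for later matches
            offset += SECTOR                          # no hit: slide over by 512 bytes
    return hits
```

The point of the 512-byte step is simply that a match can still be found when the same data sits at a different alignment, which is why odd partition offsets and mixed block sizes do not hurt the reduction rate.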
27:34
The array figures that out, and that simplifies it, as does the fact that it cannot be turned off; it's not a configuration you need to think about. The other side of this is around RAID and storage protection and groups: we don't have that on the array either. I mean, obviously we have RAID internally, but these are not options that you need to
27:54
configure and choose, right? All these things kind of conspire together: when you create a volume on the FlashArray, we ask you two questions, what do you want to call it and how big should it be? And that's been the same question we've asked from the very first day of the FlashArray to now, right?
28:12
No aggregate choices, no RAID groups, no pools, I'm trying to think of the terms, without being a putz about any other platforms. None of that, none of that. Yeah. And what that means is not only is it easy for you, me, whoever to create a volume.
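To show how small that provisioning surface is, here is a minimal sketch using Pure's Python REST client (the purestorage package). The array address, API token, and volume name are placeholders, and the exact client and method names can differ by Purity and SDK version, so treat this as an illustration of the two-question model rather than a reference.

```python
# Hypothetical example: provisioning a FlashArray volume takes only a name and a size.
# Assumes the 'purestorage' REST client is installed and the address/token are your own.
import purestorage

array = purestorage.FlashArray("flasharray.example.com", api_token="YOUR-API-TOKEN")

# The only two decisions: what to call it, and how big it should be.
array.create_volume("demo-datastore-vol", "10T")
```

Because there are no RAID groups, pools, or tuning knobs to pick, a script like this has nothing to get wrong, which is exactly the automation point that follows.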
28:27
But as importantly, or more importantly depending on how you look at it: automation. If the platform is easy to use, it is much easier to automate, right? You don't have to put the intelligence into whatever code someone's building, the script or whatever that is, to also have to figure those things out, because think about what would happen when it was complex.
28:49
When there are a lot of these choices, either you had to provide those options to the person running the script or using the plug-in or whatever the case may be, or you had to make assumptions, and one way or another someone's gonna be wrong, right? Either the person choosing it when they provision from the plug-in, or the person writing the automation, chooses the wrong thing,
29:09
and it becomes complex. So that simplicity is really important in a world where automation is more and more important. Look at Tanzu, right? Look at persistent volume claims: what do you ask for when you're provisioning from a storage class? Capacity and, essentially, a storage policy; a storage class maps to a policy. Having advanced configurations
29:29
in there is not ideal for some consumer of containers within Kubernetes, where they have to answer those types of things or create particularly complex storage classes based on that underlying complexity. So that automation is super important, and provisioning and managing storage, even up at the Tanzu layer, is also simplified.
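As a hedged illustration of that consumer-side ask, here is what a persistent volume claim looks like when submitted through the official Kubernetes Python client: just a capacity and a storage class, with the class standing in for the policy. The class name pure-vvol-policy is made up for this example; in a real cluster you would use whatever class your platform team publishes.

```python
# Sketch: from the consumer's point of view, storage is "how much" plus "which class".
# Assumes a reachable cluster via kubeconfig; the storage class name is hypothetical.
from kubernetes import client, config

config.load_kube_config()

pvc_manifest = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "demo-claim"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "pure-vvol-policy",          # maps to a storage policy
        "resources": {"requests": {"storage": "10Gi"}},  # the only sizing decision
    },
}

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc_manifest
)
```

Everything else, protection level, placement, replication, stays behind the class, which is the same simplicity argument as the two-question volume above.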
29:52
And the last bullet point there is something that I've always really endeavored to do. I have this kind of thinking around solutions: my team needs to get product X to work with Pure product Y, right, whatever. How do we do that? Well, the first step is: can we improve our product, or do we even need to, so that it works out of the box? Right?
30:14
If it works out of the box right away and in the best possible way, fantastic, let's move on. But if there is something that we can do, great, OK, then let's do it. And then let's tell people, hey, it works.
30:27
Maybe that's not everything. All right, so maybe something needs to change in the VMware layer; maybe VMware needs to understand it automatically, or it needs to have a dynamic setting, or we need to coordinate with them in another way. All right, let's work with VMware engineering, let's build it into their product
30:40
so these two things communicate and coordinate out of the box. If that's not the final solution, or it's not possible, or whatever the case may be, this is where we build an integration. And then the final step, whether all those things are true or none of them are true, that's where documentation comes in. Right. So let's document what was done and what you
30:59
need to consider. And so working with VMware to get these things done natively simplifies the solution and makes it much easier to support, right, which I think is always a key part of these tightly coupled solutions. When you're using FlashArray with VMware, when you talk about them, you're usually talking about them both in context with one
31:18
another. I mean, the whole idea that simplicity takes a lot of work, and I love the concept, you've brought this out sometimes when you've presented, we were talking before about, you know, best practices. Actually, I don't want to totally jump the shark, but a best practice also almost means something is broken. Like, the best practice would just be how it
31:32
works out of the box; maybe you have some things you have to tweak, but it's not like here's this laundry list of things to make it follow best practices. That means we're not doing... I'll put this on you, actually: you're not doing your job if that exists, kind of thing. Yeah, I mean, it's true.
31:45
I mean, best practices, I kind of joke about this a little bit, but it's somewhat true: I view best practices as bugs, right? If we have to tell you... Yeah. Yeah. Right. It's like, if we have to tell you you need to do these 19 things to make sure it works together
32:01
perfectly, or correctly, or the right way for the vast majority of folks, we're doing something wrong, right? And granted, is that 100% true? Are there no things that we tell you, oh yeah, think about this, here are some considerations? Of course there are; we all have jobs, there are still things to do.
32:17
There are still best practices out there. But I do see that as a task list for my team: what can we do to remove this? A common one was Disk.DiskMaxIOSize, right? SCSI had some limitations around I/O size, and so when you would use UEFI boot on VMs, right, secure boot, whatever you wanna call it,
32:42
it would issue a 7 MB read, and that 7 MB read would fill the buffers and it would purple screen, right? Not great. Or actually, technically it was a Windows thing. So, right, not great. And so what you had to do was change Disk.DiskMaxIOSize to 4 MB, so ESXi would split the I/O, that 7 MB
33:03
read, into a 4 MB and a 3 MB, so it wouldn't fail. And so instead we worked with VMware, and now ESXi looks at the VPD information, the vital product data of the storage, and says, tell me your max supported I/O size. Oh, it's 4 MB. Let me split it, right? So those settings, while they're there, are stopgaps, right, as much as possible.
33:26
And this is a design principle we have in general on the FlashArray: we sometimes set these things called tunables for customers, like, OK, this behavior isn't perfect for you, let's change it. But that tunable is a placeholder; it gets spun up as, hey, engineering, how do we make sure that Purity understands this situation and behaves like this when that
33:45
happens? So this is a continuous process. So, I'm watching the time a little bit here, and I knew this was gonna happen at some point, because we just go, and there's cool stuff, even like the VPD details; it's been a long time. So I'm gonna summarize the rest of this so I can get to section number four, and then I'll give you a chance to put in anything I left out.
34:04
So if you're thinking about cost-optimized, availability-optimized: six nines of uptime, and actually I need to update this slide, we're now at seven nines of uptime, which is about three seconds per year. That means more customers actually have 100% uptime, because if we have any downtime, it's probably more than three seconds. Always-on N+2 data protection:
34:21
think of it like RAID 6, but not from a performance standpoint. Data-at-rest encryption is always on, and we actually get that without performance impact, because it helps with randomizing the data for wear leveling. You're gonna get sub-millisecond latency, period, on FlashArray//X. If you're thinking about FlashArray//C, that's, you know,
34:38
a different economic price point, in a 2-to-4-millisecond range. QoS for noisy-neighbor stuff, to keep any one workload from boxing the others out. And, very importantly, 100% performance through failures and maintenance: we're reserving performance capacity, so when there are software or hardware upgrades, there's not a drop in performance.
34:57
That actually could kind of fit in that management section too, Cody, you know; it's the whole, do I have to manage my CPU to keep it below 50%? No, we're going to take care of that for you, so you don't have to think about what's gonna go sideways. I did that in turbo mode so we can make sure you get the last section, highlighted much more
35:15
efficiently than I would have gotten to it. So I did it a little bit and I was like, hmm, OK. So, the last section, because we'll make sure we get this in now. That was how FlashArray makes storage invisible; that was some of the, OK, walking in like, what do I do now, because it's so simple?
35:32
OK, that's kind of proving some of that out. But then really, the last six, seven, eight years, you've been wandering into cloud and Cloud Block Store and other pieces, but you've also been building out, and having a team, not just you but a team, that is building out so many different VMware integrations. I'm gonna give you an almost impossible task here:
35:53
which couple of these do you want to cherry-pick? Because there's so much stuff here, and if you want to hear more about it, reach out, we'll happily talk to you in person. But I'll let you cherry-pick. Yeah. So I think, you know, I just realized that there are new things, new things after this,
36:07
that I didn't even have to add to this slide; like, I think Tanzu is a really important one. But let's talk about one that's on the slide: VMware Cloud Foundation, right? I think VCF has been a really kind of fun project to work with VMware on. Like, we had a
36:22
design partnership on getting vVols as principal storage, and we worked with them on getting other storage options as principal storage as well, or extending it, things like that. The questions always come around: how do I deploy my VMware environment? How do I deploy one quickly here?
36:42
How do I deploy one quickly there? How do I change and move my workloads? And I think VCF is really a great tool for getting these VMware environments up and running and managing their life cycle, but also being able to use your external storage, right? That's really the nice thing about VCF: it's not a completely locked-down solution
37:00
where you're like, well, if you're using this, you can't use that, you can't use that, you have to use this, you have to use this, right? And so VCF, once again, is really a good example of VMware's partnership with storage vendors like Pure: like, yeah, here, let's build the automation,
37:14
let's build the lifecycle management, and let's give the storage vendors those hooks, right? Because the question I get a lot is, does Pure Storage support that, right? And the answer is, yes, we do. Actually, you know what that means? vSAN is an implementation of storage policy-based management, right?
37:31
It is VMware's implementation of SPBM for internal storage; vVols is VMware's implementation of SPBM for external storage, right? And so that's what our support of SPBM is, right? vSAN is, you know, your server vendor's support for SPBM, and vVols is your storage vendor's support for storage policy-based
37:50
management, right? They basically use the same internal APIs, the same interactions, but in one, VMware is doing the data-path stuff, and in the other, we are, right? And VCF is a really great example of that partnership, extending the options to the end user for how they want to use it. Yes, sorry, go ahead.
38:10
Oh, I was just gonna toss in: I think the other theme I might pull on is that there's an operations piece, that's vRealize Automation for deployment and automation; of course, you've got to have a pile of vRO workflows underneath that, I think it's 100-plus vRealize Orchestrator workflows, and then integration with so many different replication pieces. That's Site Recovery Manager,
38:30
that's our stretched storage cluster work, that's even some of the vVols datastores. Yeah, vRealize Orchestrator, I think, is always a funny one, right? Because it's the oldest product in the suite that I can think of, and it's still the core one, and it comes up almost daily for us: how do I build,
38:50
how do I add workloads into VMware Cloud Director, vCD? How do I get these workflows into vRA? How do I get these workflows just into vCenter itself, right? And there's extensibility there. vRO is a great tool where, hey, we use vSphere plug-ins to integrate our stuff into the VMware UI, but what if I
39:07
want to do it in other places? What if I want to change that behavior? That's what our vRealize Orchestrator plug-in is really about: here are all the workflows that you use in our UI plug-in, but you can copy them and change them to your heart's content, right? And with vRealize Orchestrator, one of the nice things about the 8.x release is that they've extended it to
39:24
things outside of JavaScript, right? Traditionally, it internally used JavaScript to script your workflows, or you used built-in workflows and the plug-ins. They've now added PowerShell support, Python support, Node.js support into vRealize Orchestrator. So if you want to use these newer languages, right,
39:42
I think PowerShell to this day is still my favorite scripting language, that really extends that tool quite a bit. I'll bring in the last comment here, and then we will do the drawing and a little bit of advertisement: we almost left out just the core piece there, vCenter Server, the vSphere, I'm gonna get my terms wrong,
40:02
the vSphere plug-in. You can almost literally do all of your day-to-day management out of vSphere, plugged into FlashArray. I mean, the FlashArray UI is really simple and not very hard, but if you want to live inside vCenter all the time and do everything there, you can. It's pretty cool. Yeah, I think there's one point I want to make about that, and then one other overall
40:21
one and then I'll hand it back over to you, right? This is this is why Vols has been so important to me. Not only is I uh internally here, I'm like, hey, we built features based on volumes, um data stores. V fest. This is like it's an arbitrary construct, our fault.
40:38
our features have nothing to do with the VM, they have to do with these arbitrary datastores. That's a cool piece of it. But honestly, the bigger part of it sometimes is that vVols is integrated into the vCenter engine, and the vCenter engine informs everything to the right of it, right, everything.
40:55
And so integrating into that engine is more than a UI plug-in; it's all of the things, right? Getting the features, getting the policies, getting that management into any tool that you're using is that much easier, because it's kind of a single point of integration and then everything comes out of that, unlike some kind of UI tool.
41:14
And the other thing I do wanna mention, and I've seen some comments and things like that come up in the previews and so forth: is everything feature-complete? Does NVMe over Fabrics support everything? No. Like I said, these things are converging: vVols, NVMe over Fabrics. This is where the investment is going, this is where the storage companies, this is where VMware, this is where we are investing.
41:34
This is where the innovation is happening. Is it going to provide every feature and integration that you need today? No, there's certainly more work to be done; my team is not out of things to do, and neither is VMware, and neither are the storage vendors. So yeah, definitely pay attention to what it can do and what it cannot do.
41:50
We're aware of these things; we have weekly meetings with VMware engineering on a variety of different topics, right, driving some of these remaining gaps that we want to fix. So certainly look at your vendor, look at what they offer, look at the features that VMware supports around it,
42:04
and understand what that means for your environment. But I think at the very least you should be thinking about these things, because this is where things are going, and sooner rather than later we're gonna start hitting things where it's only working with that, right? And so it's important to at least be
42:18
including it in your architecture, or your new data centers, or your updates, or whatever the case may be. I'm gonna ask everyone listening for the indulgence of an extra minute or two. Cody, thank you. I knew it would be fun, and I mean, we planned it out, but I didn't know exactly where we'd go, and that's good. Hopefully you feel that you've gotten a good bit
42:35
here from a historical standpoint, a future standpoint, how Pure can help you today, as well as from an integration perspective. If we go back to a little bit of the meme: ain't nobody got time for managing storage. Next month, please do make sure...

Andrew Miller

Lead Principal Technologist, Pure Storage

Cody Hosterman

Sr. Director, Cloud Product Management & Virtualization, Pure Storage
