55:49 Webinar

Building the Extremely Efficient Data Center

Discover how you can obtain 40% higher overall performance and up to 80% more energy efficiency.
This webinar first aired on August 10, 2023
The first 5 minutes of our recorded webinars are open; however, if you are enjoying them, we'll ask for a little information to finish watching.
00:00
So let me introduce our speakers today from Pure Storage. We have Don Corman, Technical Evangelist, and we have Eugene Glick, Technical Marketing Engineer. So now I'll turn it over to the Pure team to get started. Thank you. Good morning, everybody.
00:16
I'm Don Corman. Like you heard, I'm a technical evangelist for FlashArray at Pure Storage. I'm joined by Eugene Glick, who is a technical marketing engineer for FlashArray. Who, if you're looking closely enough, clearly has a better beard than I do. I have the saddest beard in the world, and Eugene is just nothing but machismo with his
00:37
beard. So I'm a little bit jealous. But fortunately, we're here to talk about extremely efficient data centers, not facial hair, powered by Pure FlashArray XCR4. That lead-in video I thought was kind of interesting. Eugene, the robots were a little creepy. Did you,
00:52
did you get a little weirded out by what those looked like? I was getting, you know, flashbacks to one of those Will Smith movies. Yeah, I like the video, but I'm like, does this mean we're going to replace people with robots in data center operations? I hope not, because there's a lot of stuff that needs to happen to get to the efficient data
01:15
center. So fortunately, today we're going to be covering a piece of efficient data centers. When we say efficient, it's a very broad term, but it is related to the technology deployed into the data center, whether it's for power, whether it's for efficiency of operations, things along those lines. And today we're going to focus specifically on XCR
01:41
four, which we announced at Accelerate a few months ago out in Las Vegas. Eugene, were you part of the announcement? I wasn't at Accelerate, but I was working a lot on all the XCR4 content. I liked how Matt, the program manager for it, had the big gold chain with the XCR4. It was like the gangster-rapper form of it all.
02:07
So that was kind of neat that we did that. He was representing Colorado. OK. Colorado is known for its big gold chains; I'll have to remember that. Awesome. Excellent. So what we really want to talk about within the next 45 minutes or so is Pure's approach to data storage. I think it's important to recap some of the pedigree things
02:31
that Pure laid down when it first started as a company. We'll cover a little bit of the architecture of FlashArray, then we will get into the meat of it all. The conclusions of efficiency really are driven into the design and overall execution of XCR4. We'll talk about some of the statistics, the performance metrics that come out of it.
02:52
But I think the efficiency message needs to be enhanced with a few things, which is: you love the efficiency of FlashArray and you want to get there. What does that look like if you are in a FlashArray environment and you want to get to XCR4? And I want to round out the conversation specifically with some software stuff.
03:16
I know this is meant to be a hardware type of thing, but the software that we bring to bear with Purity and Pure1 and a couple of other things I think is matched perfectly to the architecture we've developed with our platform, specifically FlashArray, and then we'll wrap things up. I've got some poll questions in the mix that I'm going to throw out there as we move on, to make sure everybody's paying attention.
03:41
So let's get started with a quick recap. I won't talk too long about Pure's history as it relates to Pure being started back in the day and what its strategic direction is. Pure really looks at modern data management challenges in three dimensions.
04:04
Quite honestly, there needs to be an all-flash transition. We've been saying this from the beginning, Eugene, that flash memory and flash storage is really where the puck is going. Yes, there's spinning disk out there and there's a lot of legacy storage that's still reliant on spinning disk, but just like tape, I think its relevancy is starting to fade. That
04:27
has to do with price points, it has to do with manufacturing, and it has to do with the software working inside the storage OS with that flash. So the all-flash transition is already on its way. We heard our CEO say the other day that disk is dead and within five years disk will be gone. It's a great goal to set.
04:49
And I actually think that it's pretty attainable once you really understand what Pure is doing with its platform and the software it's putting into its ecosystem. Cloud operating model: this is important as well, because I've always said, and I don't know if you agree, Eugene, but I've always said cloud is not necessarily an architecture as much as it's an experience. You know,
05:14
when somebody goes to the cloud, yes, they want to swipe their credit card and get what they want. But behind the scenes, there's a very automated flexibility that comes with how a workload is deployed, where it's deployed, and ultimately how it's paid for, quite honestly; costs are very upfront in the cloud model. It doesn't mean the cloud model is wrong. The cloud model is relevant, but where you
05:41
operate it and how you operate it, I think, matters. So that's another big piece of Pure's approach to the market, and we'll talk a little bit about that with Evergreen and storage as a service later on. And then modern applications. Obviously, we've been saying modern applications since we were banging the drum about cloud as far back as 2008, as far as I know.
06:03
And I know we are always trying to turn the corner into more modern application development with Kubernetes and containers and things along those lines. But there's 80 years of legacy application build-up that I don't think we're going to turn the corner from very soon, which means a storage vendor like Pure has to be able to dance in both camps. We've got to be able to continue to innovate
06:30
for the interest of legacy applications but also develop products in a direction that supports modern applications as well. So you'll see, from Pure's perspective, we really are thinking multidimensionally about where we're headed as the tech world evolves. We aren't just saying all-flash storage is the solution to everything; we have other things we are working on to address
06:58
the other dimensions that we know are out there. If you look at the history of the all-flash data center, and that really is our thesis statement, the all-flash data center is where we're headed. It's not just because the speed is there; it also has to do with power, it has to do with cooling, and it has to do with the efficiency of getting more out of the usable space in a rack. Basically, that's a lot of what we're
07:24
interested in doing. And in 2012, we said, look, the enterprise all-flash array is a possibility. This was 11 years ago; we were staring down a roadmap where hard drives were obviously very prevalent. And our roadmap, as we evolved, reflected going after the hard drives that were prevalent out there, such as 15K
07:49
drives, which is what you would consider your tier-one applications running on. And I don't know if you remember, Eugene, back in the day with hard drives: I remember when I got to EMC, one of the first things I learned database people were doing was short-stroking hard drives to get faster performance out of them for databases.
08:12
Had you heard of that? Did you know what that was? I wasn't that close to it, but people were doing funny things with hard drives. Yeah, it was a fascinating concept. It's based on physics and geometry, actually. You know, if you had a spinning platter of data, part of it was faster for the head to get to.
08:38
So you could increase IOPS on a hard drive by reducing the surface area the head had to travel. Yeah, you were sacrificing a good 50% of the drive's capacity, but when you were looking for performance, this is what mattered. So short-stroking drives was a big thing the first time flash came into the EMC portfolio when I was there.
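The short-stroking trade-off can be sketched with a toy model. The seek and rotational figures below are illustrative assumptions for a 15K RPM drive, not measured numbers:

```python
# Toy model of short stroking: confining the head to a narrow band of tracks
# shortens the average seek, raising random IOPS at the cost of capacity.
# All latency figures are hypothetical, for illustration only.

def hdd_iops(avg_seek_ms: float, rotational_latency_ms: float) -> float:
    """Approximate random IOPS as 1000 ms / (avg seek + rotational latency)."""
    return 1000.0 / (avg_seek_ms + rotational_latency_ms)

# A 15K RPM platter takes ~4 ms per revolution, so ~2 ms average rotational latency.
full_stroke = hdd_iops(avg_seek_ms=3.5, rotational_latency_ms=2.0)   # whole platter in play
short_stroke = hdd_iops(avg_seek_ms=1.2, rotational_latency_ms=2.0)  # narrow band only

print(f"full stroke:  {full_stroke:.0f} IOPS")
print(f"short stroke: {short_stroke:.0f} IOPS "
      f"({short_stroke / full_stroke:.2f}x, giving up roughly half the capacity)")
```

The point of the hack is visible in the denominator: seek time is the only term you can shrink, and you shrink it by abandoning most of the platter. Flash removes the mechanical terms entirely, which is why the hack became unnecessary.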
09:00
And subsequently, I think prodded on by Pure, there was this realization that flash storage could get those kinds of shade-tree-mechanic hacks out of the way, and you could seriously look at flash as a primary tier of storage. And I'm a believer that Pure drove that mentality. By doing that, obviously, we were able to get our feet dipped into structured data,
09:27
block storage, and to some degree virtualization, bringing higher performance with the whole drive being used, as opposed to a small part of the drive, with flash. Obviously, as we continued down the path of evolving, we wanted to go after unstructured data. And when we talk about unstructured data, what we're talking about is big,
09:49
big sets of directory structures with documents, files, what have you, discretely stored underneath those directory structures. FlashBlade and the DirectFlash module were our entry points in 2016 for that. FlashBlade is not what we're going to talk about today; that's more of a scale-out model versus the scale-up model of FlashArray.
10:12
But it does show that in 2016 we made a very intentional move that said: look, we're going after unstructured data, which is not block data; it's more like file data, which we'll talk more about later on with unified storage. 10K hybrid drives as well as nearline storage: that's when we start talking about the deep archive,
10:32
the tier-three stuff that is put away and very rarely accessed. We brought out FlashArray//C for that. The idea there is a different grade of flash going into the DirectFlash modules, which brings the price point in a little bit at some cost to performance; but not everything needs to run at 1,000 miles an hour.
10:55
And that's what FlashArray//C really brings to bear. FlashArray//E, which we announced at Accelerate, takes it even further by bringing the price point of an all-flash array very close to an inflection point with storage that still generally leverages spinning disk. So you can see that Pure, out of the gate, yes, flash storage was a great idea that we
11:21
kind of clung to, but we knew where the puck was going. We knew there was going to be an evolution over time that wasn't just about performance; we saw that power consumption could be affected, and that the efficiencies of software could be, too. We thought about that as we evolved our product and our platform, and I think you're seeing it today
11:43
that we've pretty much taken the gloves off on the entire data center need, saying: look, we're best for block, we're best for unstructured, we're best for file, and all of it is going to land on flash, taking away the complicated needs of tiering storage, or managing into tiers, or gaming the system to give some applications better performance than others. So I think there's
12:10
a really great story Pure has evolved into, with all-flash as its cornerstone. The unique technology advantages, and Eugene's going to talk through a lot of these, start with the DirectFlash module. I absolutely love the DirectFlash module, because what we did from the very beginning at Pure is we said: look,
12:33
traditional drive sizes are fine; 3.5-inch and 2.5-inch drives are great with their SATA and SAS interfaces. But if you really want flash to realize its true capacity possibilities, you have to rethink what that hard disk is doing inside of a storage array, and that's where the DirectFlash module comes in. The DirectFlash module is nothing more than a bunch of flash with an NVMe interface to it.
13:01
And we're stacking the cells on top of each other to make those DirectFlash modules ever bigger as far as capacity is concerned, which is a neat concept because we've got a lot of room to grow. And I think that's going to give us a big advantage as unstructured data and other data continue to grow at the rates they do. Evergreen:
13:23
I think that's another one we won't talk much about, but that cloud operating model is really where Evergreen lands, with this idea of flexible consumption models aligning with either opex or capex budgets. Pure is uniquely positioned to do that, in large part because we manufacture our own hardware, like the DirectFlash modules and the arrays themselves.
13:48
And the advantage you gain with Evergreen, being able to do storage on consumption or an opex type of arrangement, is that we don't focus on just the software as where our revenue is generated; the hardware is a part of it as well. And because we can blend all of those together, we've got this unique ability with Evergreen to offer a variety of consumption models to customers.
14:13
Today we're going to focus on FlashArray, which is our scale-up storage array; FlashBlade is for a different time. And Fusion and Portworx we won't talk much about either, but those have to do with more modern application creation. And you can see, again, this slide really does align with what we felt
14:34
our strategic direction was on that beginning slide, which was: it's not just a technology thing. There's a consumption model, and there's a direction people are going with applications, and we need to be more flexible than legacy providers in delivering a storage ecosystem that matches where most data centers are going, which is a lot different than they used to be 80 years ago with mainframes.
14:59
So, Eugene, it's time for you to jump in. We're going to talk about FlashArray. Eugene is the big hardware guy, and I'm just the color guy, so I'm going to sit back and move the slides for him. It's all you, Eugene. Sure. All right. So let's just talk about how we evolved,
15:17
essentially, and what this is. What this slide is uncovering is what came before this chassis. As you can see, we upgrade our controllers through the generations; we go to the latest Intel processors and the cycle repeats. So what I want to point out here, besides the fact that we try to keep up with the
15:41
latest and greatest, is that what you're looking at is a single chassis, aside from the FlashArray//XL on the bottom. This chassis started with FlashArray//M, and it continues now with FlashArray//X R4 and //C R4. Many customers are running the same identical chassis from the //M days: they upgraded to X R2, then to X R3, and now they're upgrading
16:08
to X R4. And it's the same story with //C. So we are absolutely developing our controllers with the latest processors, the latest technology available at the time of development, but some things stay the same. When we did this development, we focused a lot on the chassis itself, and it has survived almost a decade now and multiple
16:32
array generations. So that's a neat piece of history there as well. I guess, next slide. Yeah. One question for you: as we look across the Intel platforms we've used, and Sapphire Rapids is the newest one with XCR4, is our intention to stay with Intel? I know the answer;
16:55
I'm throwing you a softball question, but there is a reason we're sticking with Intel, because I know we probably get questions in the field: why not AMD? Why not Arm? So what's the take on the Intel thing? It's just the perfect fit for us. It works for us. It's been working for us, as you can see
17:13
from this slide, and it's easy to work with. Yeah. And I think the other thing here is that our Purity operating system is built and tuned to work with the Intel platform as well. So if anybody out there is asking, hey, why not Arm and all of that:
17:33
It's just a matter of it's the perfect tool for what we're trying to do with where we're going. So here's your direct flash module, I'll get out of the way. Sure. So what is the, what is our direct flash module? Why does it exist? So when we first uh put our, they had uh they had S SDS in them.
17:56
And what an SSD is, essentially, is a black box. It has its own controller; it manages its own wear leveling; it manages a lot of its own things, and you can't control any of those processes from the outside. So what we did here was say: you know what, a lot of flash is wasted, because there's a lot of flash held in reserve inside of an SSD.
18:22
A lot of latency is incurred because each device is its own separate device, and if you need to write something, it becomes a latency-intensive process. So we said we want to manage all of this at the array level. We don't want individual devices
18:43
managing their own thing; we want the array to manage the flash, and we want all the flash to be available to the array. So we developed DirectFlash modules. What happens here is that all of the flash media is available to the array and the Purity operating system to work with. We no longer have to rely on an SSD's controller to pick the most
19:09
efficient way to do a write. We know what the latency will be, because the array and the operating system manage how the writes happen and how the reads happen. And it's a lot easier to work with flash media, especially a large amount of it, when you have access to all of it. These things are really dense: we have 2.2s,
19:34
we have 4.4s, we have 9.1s, we have 18.3s; these are all in production right now. We introduced the 49, and we're going to keep growing these. And there are actually multiple flavors of these; I don't think we're touching on DFM-Ds anywhere, Don. Well, the other thing we did with these arrays:
19:54
they need NVRAM functionality, and on FlashArray//XL we introduced modules which also house NVRAM, which allowed us to go even denser. So instead of having separate space occupied by NVRAM, DirectFlash modules went into that space and we got a much denser system; I
20:19
believe it's a 20% density improvement on //XL. So that's why we have them. Yeah, I think it's an important piece of the overall equation, because we rethought a fundamental part of the storage array and said: let's take a chance and do it differently.
20:42
And I believe it's paying off quite well. Go ahead. Yeah, what I was going to say is this is not just us printing a PCB and shoving flash onto it; a lot of physics and material science and intellectual property went into developing this. This is not just us saying:
21:02
hey, let's strip an SSD of that controller the flash hides behind. A lot of work went into these. It is fascinating and amazing. Yeah, and I think this plays into the efficiency theme we've got going on in this webinar: these drives don't
21:23
change their footprint size, yet flash can be stacked on them to let them grow in place as new releases come out. You're going to see, and Coz mentioned this at Accelerate and some other places, that these DFMs will eventually reach 300 terabytes. So imagine an array with 28 of those 300-terabyte DFMs in it as far as
21:50
raw storage is concerned. It really just becomes our ability to adjust the Purity operating system to see the larger sizes. I'm oversimplifying, obviously, but the efficiency stays self-contained when we're the ones controlling how quickly the DFMs scale, and they're going to scale far, which I think is neat. Yeah. And as far as energy efficiency goes,
22:19
if you think about a hard drive, there are multiple motors: the one spinning the platters and the one moving the heads. Think about how much energy it takes to keep all of that running, and how much of it you actually need. And our flash modules are not the only secret sauce here, because raw flash is
22:44
expensive. What we do really well is compress and deduplicate, and I know this is a hardware session, but when you start thinking about efficiency and how much actual equipment you need, our software plays into it a lot. On average, we see something like 5-to-1 data reduction on normal VSI workloads.
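The capacity math behind a data-reduction ratio (DRR) is simple division; the 500 TB dataset below is a hypothetical figure used only to show the arithmetic:

```python
# How a data-reduction ratio (DRR) shrinks the raw flash you need to buy,
# power, and cool. The logical dataset size here is a made-up example.

def raw_needed_tb(logical_tb: float, drr: float) -> float:
    """Raw capacity required to hold a logical dataset at a given DRR."""
    return logical_tb / drr

logical = 500.0  # TB written by hosts (hypothetical)
for label, drr in [("no reduction (1:1)", 1.0), ("typical VSI (~5:1)", 5.0)]:
    print(f"{label}: {raw_needed_tb(logical, drr):.0f} TB raw")
```

Less raw flash on the floor means fewer modules drawing power, which is where the software efficiency feeds directly into the energy story.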
23:05
We see much higher data reduction on VDI, and that goes right into your energy efficiency, because you need less to do more. Yeah, I know where you're going. I'm writing up a blog on the efficiency thing, and I was doing some digging on data center power
23:31
consumption. Of course it was United States-centric, but data center power consumption in the United States accounts for 1.8% of all power produced in the United States, which seems like a lot. I mean, yeah, we're a power-hungry country, but 1.8% seems like a lot. And 25% of that data center cost is in the storage part,
23:55
which in itself is crazy when you consider all the heating and cooling and electricity needed to run a data center. So if we have a way to impact that 25%, the largest single part of this power consumption, there is a lot of potential impact, in my mind. And, just to take it out of the US context:
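Multiplying the two figures quoted above gives storage's rough share of total US power. These are the speaker's numbers, reproduced only to show the arithmetic:

```python
# Storage's approximate share of all US power, using the figures from the talk:
# data centers ~1.8% of US power, storage ~25% of data-center power.

dc_share_of_us = 0.018      # data centers' share of US power production
storage_share_of_dc = 0.25  # storage's share of data-center power

storage_share_of_us = dc_share_of_us * storage_share_of_dc
print(f"storage ~ {storage_share_of_us:.2%} of all US power")  # about 0.45%
```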
24:21
if people on the webinar have heard about the regulations in Germany, specifically from late last year: they were calling one of those energy efficiency laws the data center killer, the law that was going to stop all new data centers from being built in Germany. Just so you know, we are doing a lot to stay up to date with worldwide
24:47
regulations to make sure we meet everybody's power efficiency requirements, and we are doing things right now to be, well, essentially allowed to operate in the EU. Yeah, and I have an appreciation for that at Pure, because I don't get the feeling that we're just stamping green on things and saying, hey, we're green; we're legitimately interested in matching what the European countries want.
25:15
I lived in Germany, and I know they are very cognizant of that kind of stuff, and I'm happy that they're driving that kind of mentality as well. So, yeah, that's the DFM for you: no matter how big it gets, it's still going to have the same low power draw, which aggregates into the larger picture. Yeah. We talked a little bit about this already; did you want to talk any more about NVRAM
25:40
and its role? Because I know it's shifting in XCR4, with tiers and things like that. Where do you see NVMe being a part of this? Because this leads into a conversation about the interconnect between the controllers and the DFMs as well, right? We've gone all in on NVMe over Fabrics.
26:01
I would hope you talk about that a little bit. Yeah, we'll talk about that as well. I just want to talk a little bit first about how we design our systems. There's always a bottleneck you're going to hit somewhere. First of all, we strive for it not to be our array,
26:16
but even within any storage array, you're going to find that there are bottlenecks, and you design the system to hit a specific one first. We try for that to be the CPU; we never want to bottleneck on our storage, which is why a lot of research and a lot of development went into the DFMs. There's a lot lost in
26:45
SSDs when you do it the traditional way. The 20% of flash, I believe I mentioned it before, the 20% of flash that is hidden on a regular SSD is unlocked here, because we're managing all of that at the array level: we manage wear at the array level, wear leveling at the array level.
27:08
So if something wears, you don't have to hold 20% of each storage device essentially hostage, waiting for something to fail; you need a lot less. And all of that reserve would otherwise be powered on and consuming power, so we're finding savings even there.
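The over-provisioning saving can be sketched numerically. The 5% array-level reserve below is an assumed illustrative figure, not a published spec; the module size echoes one of the capacities mentioned earlier:

```python
# Usable capacity when flash reserve is held inside every SSD (~20%) versus a
# smaller pooled reserve managed at the array level (5% assumed for illustration).

def usable_tb(raw_tb: float, reserve_fraction: float) -> float:
    """Capacity left for data after holding back a reserve fraction."""
    return raw_tb * (1.0 - reserve_fraction)

raw = 28 * 18.3                           # e.g. 28 x 18.3 TB modules (hypothetical config)
per_ssd_reserve = usable_tb(raw, 0.20)    # reserve trapped inside each drive
array_reserve = usable_tb(raw, 0.05)      # one pooled reserve, assumed figure

print(f"raw {raw:.1f} TB -> per-SSD reserve: {per_ssd_reserve:.1f} TB usable, "
      f"array-level reserve: {array_reserve:.1f} TB usable "
      f"(+{array_reserve - per_ssd_reserve:.1f} TB reclaimed)")
```

Every terabyte reclaimed this way is flash that was already powered and cooled but unusable for data, which is the energy angle Eugene is making.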
27:32
Yeah, I think that's an important piece to pull out of there, because I remember in the early days of SSDs that reserve was even more than 20%, as I recall; flash degrades or goes bad over time, and you needed to keep that much in reserve. I had heard at one point it was 50%. So 20% still seems high; the fact that we're squeezing it even further is neat.
27:55
Yep. That's awesome. That's good. So it's obviously proven at scale; we manufacture our own DFMs. But let's talk a little bit about failure rates, because I know people are going to say: well, they manufacture their own stuff, and it's not industry mainstream, or whatever you want to call it.
28:14
But talk a little bit about manufacturing and how we manage annual failure rates, because trying to find a mean-time-between-failures number at Pure is impossible. And I'm an old storage guy; I can't find it. I mean, the numbers are fairly low. Everything can fail, but most of the time when we see something fail,
28:38
it's because something happened to it in transit. For the most part, these things run for a very long time; we right now have arrays that are still running media from 2012. What we do here is we have a lot of telemetry. We analyze the entire fleet. We look for fingerprints of issues, and usually
29:01
we know that there may be an issue happening somewhere, and we'll just proactively replace that component rather than wait for something to fail. But that doesn't happen often at all, because we are very proactive. We monitor the entire fleet, and if we see an issue and we can fix it in firmware, we will do so, and then we will recommend firmware upgrades and Purity upgrades.
29:28
So things like drive-firmware-driven issues are not really a thing with us. We find and fix them proactively, which is why we don't have a high failure rate. Yes, there are instances where some customers manage to wear their drives out; there are workloads that are extremely write-intensive, but that's
29:55
just a fact of life, and that is a very small percentage, probably sub-1%, of the workloads out there. For the most part, failures are very rare. So let's stop for a second and throw the first poll out there, because we've been talking a lot. I want to make sure nobody's asleep. But, you know,
30:18
talking about workloads and thinking about FlashArrays: how many workloads do you put on a single array? Obviously more helps from a cost perspective, but performance matters. So I'd like to see, specifically: how many workloads are you guys
30:36
running? Wow. We've got some people with nine-plus workloads; those have to be chugging along quite well. Yeah, this is playing into what I was thinking: it was going to be 1-to-3 workloads, then 4-to-8, then nine-plus. I'm not surprised at how these numbers are coming in.
30:57
And it's interesting to think about the workloads that are on the arrays today versus the workloads that are going to be on the arrays tomorrow. A lot of what we are doing proactively with DFMs, fixing issues in real time as they come in, gives us flexibility to adjust as workloads evolve, and things have a tendency to compress in the data center.
31:27
So thank you for answering that poll, guys. Not surprised: 1-to-3 workloads, nine-plus workloads. My hat's off to those of you running nine-plus workloads. So let's go to the next slide here and really get into XCR4. Again, before we turn Eugene loose: this
31:49
webinar is really about FlashArray XCR4, but there's a bigger picture for efficiency. The XCR4 story is so compelling in its improvement over R3 that its impact on overall data center efficiency is, in my mind, unstoppable. So, Eugene, I'll turn it back over to you. So there's not much to talk about, right? This is what XCR4 looks like,
32:17
which is a lot like X R3 as well. I think one of the things that's kind of cool about it, and this is my only input, then I'll let you talk from here, Eugene: we have a new bezel. It's a unified bezel, so the bezel is the same across X, C, and E. And, you know, I've never seen so much interest and focus, when I worked Accelerate, about
32:39
people wanting to know what the new bezel was going to look like. It's actually a pretty cool bezel, and the only thing I know about it is that it's made with reclaimed material, which, again, the green thing matters to us. So I'll switch it over to you now.
32:53
Yeah. Before I got into this position, I did a lot of work in data centers with arrays, and that bezel, everybody is asking: well, can I buy a spare bezel? I want to put it on my office wall. And I just go, I appreciate it, but why?
33:13
But yeah, we should really start selling them separately. What I want to mention here, the difference between the R3 generation and the R4 generation, is that we have redesigned the controllers. We went from PCIe Gen 3 to PCIe Gen 4. And we've changed the way networking works on the controllers.
33:41
For instance, the ports on our controllers now support NVMe over RoCE and NVMe over TCP. We added different cards. The other thing we did was go from eight-lane PCIe slots to sixteen-lane PCIe slots. So things like 64-gig Fibre Channel, 100-gig Ethernet, and quad-port cards with 25-
34:12
gig Ethernet, all of that is now able to run at essentially line rate, all the way to whatever other bottleneck there may be. Like, if you're running an X20 array, you're probably going to hit your CPU before you hit the networking. So a lot of work went into the R4 generation, but also into the protocols that we
34:37
serve out of our arrays. For instance, NVMe over TCP was introduced just recently. We do NVMe over RDMA; we do NVMe over Fibre Channel. And you need to have DirectFlash modules in the array, because if you are running an older array with SSDs, you don't have the NVMe protocol, right? What that allows you to do is have end-to-
35:04
end NVMe from your initiator all the way to your array. And NVMe over RoCE, for instance, was expensive, because you need special switches, you need special NICs, and you need to do a lot of configuration. Why? Because, for people that don't know what
35:29
RoCE is: it runs over UDP. Imagine running storage over UDP; you need to make sure you have a lossless network, and that's on you, whereas TCP handles that for you. So when we rolled out NVMe over TCP, this opened up a very easy avenue for anybody on iSCSI to transition to NVMe over TCP. It gives you 50% lower latency;
35:52
It's giving you better performance. You're really losing nothing by going to it, unless you're using some features that aren't compatible. But essentially these NVMe drives are directly accessible to your hosts. So let me, let me pull up the poll real quickly, and it's gonna lead me to a question for you.
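As a hedged aside, the host-side move from iSCSI to NVMe/TCP that Eugene describes usually comes down to a discovery and a connect step on the initiator. A rough sketch from a Linux host's perspective (the portal IP and subsystem NQN below are hypothetical placeholders; exact steps depend on your OS and array):

```python
# Hedged sketch: moving a Linux initiator from iSCSI to NVMe/TCP.
# The portal IP and subsystem NQN here are hypothetical placeholders.
def nvme_tcp_connect_cmds(portal_ip: str, subsystem_nqn: str,
                          io_port: int = 4420) -> list[str]:
    """Build the nvme-cli commands an admin would typically run."""
    return [
        # 8009 is the well-known NVMe discovery service port.
        f"nvme discover -t tcp -a {portal_ip} -s 8009",
        # Connect to the subsystem; 4420 is the default NVMe/TCP I/O port.
        f"nvme connect -t tcp -a {portal_ip} -s {io_port} -n {subsystem_nqn}",
    ]

for cmd in nvme_tcp_connect_cmds("192.0.2.10", "nqn.2010-06.com.example:array1"):
    print(cmd)
```

In practice you would also persist the connection across reboots with whatever mechanism your distribution provides, and multipathing is handled by native NVMe multipathing rather than dm-multipath.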
36:17
Actually, specifically, Eugene, let's throw the next poll up, which has to do with the network flex. Here's where we all brag, right? OK, what do we have for our storage network? You guys can brag all you want out there. And the reason I'm asking on this poll is: what does it mean when you talk
36:37
about NVMe over TCP? Does it have to be 100 gig? Can it be 25? Can it shrink down to 10? Because, I mean, we're talking almost quasi-DMA-level stuff here, right? And it's got to go fast. The latency has to be extremely low.
36:53
So what are our requirements for NVMe over TCP? The requirements are: you're using a compatible operating system, and you're using an array that's running a minimum version of Purity that supports it and has NVMe media. And once you have those pieces in place, as low as
37:21
10 gig and as high as 100 gig. You just need your normal, normal networking. Like, if you're switching from iSCSI to NVMe over TCP, you don't even need to touch your switches. Oh, wow. OK. Yeah, because it all happens at the stack, right? So there needs to be, like you said,
37:41
application compatibility with what we're listening for, obviously. And I think I remember, from my reading for Accelerate, that VMware does NVMe over TCP now with, I think, ESXi 8, and I think Windows has some NVMe over TCP as well, if I'm not mistaken. But the point is that the application has
38:03
to be able to communicate over the stack as well. Yep. Awesome. Well, that was a good poll. The poll obviously didn't tell me anything new; the winner is gigabit, everybody still has a gigabit network out there. Good to see the 10 gig, and the people who have the 100 gig,
38:21
I'm jealous, because that's gotta be some fun, fast stuff that you're doing out there. So thanks for answering that poll, guys. It's given us some really good information. So, talk about the tale of the tape here for //X R4. You mentioned DDR5 and some PCIe Gen 4. Is there anything you want to highlight here specifically? Sure.
38:44
So, because of a scheduling coincidence, this array kind of jumped a generation of processors. That's why we see such a large performance gap between R3 and R4: up to 40% performance gain. Again, I am saying
39:13
up to; that means not everything. But we do see really big performance gains across the line. The other thing I want to highlight, which we also developed, is the DirectCompress Accelerator card. It's present on our higher models, the X and C 70s and 90s.
39:38
And what that does is it takes our compression algorithm, which we've been doing in software, and we've been doing it really well in software. But on arrays that are really busy on CPU, which is usually our higher models, we developed a hardware card which offloads that functionality onto a dedicated card. It doesn't take CPU cycles now, and, you know,
40:04
just makes everything better as far as compression goes. What I also want to mention is non-disruptive upgrades, and for people who are not familiar with our non-disruptive upgrades, I'll talk more about them later. But when we launched the fourth generation, you could already, from day one, do a non-disruptive upgrade from the previous generations to R4.
40:31
Why is that important? People who are on the right level of support get their controllers renewed every couple of years, and all of that happens non-disruptively. Actually, Donald, my takeaway from the previous Accelerate, not the one we just had: I met a lot of our existing customers who didn't actually understand our non-disruptive upgrades,
40:59
because everybody thinks upgrading an array is difficult, that it requires a lot of downtime, requires maintenance windows. Well, I'm not saying you shouldn't schedule a maintenance window, but I have personally done hundreds of these, and they are truly non-disruptive. Support does a check, we do a check, and unless something goes
41:22
horribly wrong, which it usually doesn't, we manage to do a controller upgrade like this in under two hours, all while your workloads are running, and with little to no performance degradation either. So what happens is, especially if you're running something like VMware, just a couple of paths go down to a datastore and then they come back up, and we have NPIV, and it's,
41:49
it's very seamless. And the whole system is built, especially on //C, to be three-petabyte-plus capable, especially with these new drives coming soon. Yeah, I was gonna say, the stacking of the flash is gonna make the drive sizes go up exponentially with every platform release, quite honestly. Um, you know,
42:11
again, we've talked about this new bezel design and all of that stuff. I know you're itching to get into the non-disruptive upgrade stuff, as am I. So we're kind of repeating here what we've already talked about, you know, DFM sizes and all that. These are things you can look up on websites and determine, I think. Do you want to touch on the no-SAS installs, or
42:30
whatever? Sure. So we now consider SAS media, which is SSDs in our world, to be obsolete. We do not intend to sell any more SAS media. We want everybody running DirectFlash modules in Pure arrays, because it's so much more efficient, and it's so much easier to upgrade. Let me just take a step back and describe
42:58
what a capacity upgrade looks like on an array running DirectFlash modules. Let's say you have an array that's running 2.2-terabyte modules, ten of them. And then, you know, you're gonna do a huge migration, and you now need 18.3-terabyte drives. What you can do is simply swap them one by one. You pull one out,
43:18
put the 18.3 in, and you watch in the GUI for parity to rebuild it. Rinse and repeat, and for each drive the entire process takes, I'd say, anywhere from 10 minutes to maybe an hour and a half. But it is really non-disruptive. It's in place. You're not migrating anything anywhere. Again, done these a bunch of times too. It's beautiful.
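The swap-and-rebuild loop Eugene walks through can be sketched as a toy model (purely illustrative; on a real array, Purity drives the rebuilds and the admin just watches the GUI):

```python
# Toy model of the in-place capacity upgrade described above: swap DirectFlash
# modules one at a time, waiting for parity to rebuild between swaps.
# Purely illustrative; a real array manages the rebuilds itself.
def rolling_swap(slots: list[float], new_size_tb: float) -> list[list[float]]:
    """Replace each module in turn, recording the slot map after each
    rebuild, so at most one slot is ever mid-swap."""
    history = []
    for i in range(len(slots)):
        slots[i] = new_size_tb      # pull the old module, seat the new one
        # ... wait for the GUI to show parity rebuilt on this module ...
        history.append(list(slots))
    return history

# Ten 2.2 TB modules upgraded to 18.3 TB, one rebuild at a time.
states = rolling_swap([2.2] * 10, 18.3)
print(states[0])    # only the first slot upgraded so far
print(states[-1])   # all ten slots upgraded
```

The point of the one-at-a-time loop is that the array never has more than a single module out of the parity set at once, which is why the process stays non-disruptive.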
43:41
That parity rebuild speed. That parity rebuild speed is insane, man. I mean, I come from a legacy storage provider where, as the drives got larger, we fooled around with RAID 6 and stuff to avoid the double-drive failure and all of that. But that was because it was using traditional methods of rebuilding.
44:01
That's an insane speed for rebuilds. Oh, it definitely is. I mean, I ran my own data centers for a decade before coming to Pure. I've felt that, sitting there scared that my hot spare in the RAID 5... that that rebuild is actually gonna trigger other drives to fail. Yeah. Right. Exactly. The whole... Yeah.
44:23
Yeah. And then you might as well get your resume out and just send it out to, yeah, Monster.com or whatever. And I'm not even, you know, playing the Pure card here. That's reality. You're sitting there, and it's a three-terabyte drive, for instance, and you're sitting there waiting:
44:39
is it gonna rebuild fine, or is the whole thing gonna fail? You're gonna have to lean over to your friend and say, hey, can I bite your nails for a while? I'm tired of biting mine. Yeah. Yeah. So this is kind of where the no-SAS thing is coming from. We have the next-generation thing.
44:56
Why would we go back in time? Totally agree. And again, we've always been a company that's focused on where the puck is going, not where the puck is, and not on how we make the best of what's existed for 80 years. And I think your description of everything is perfect in really showing that intention.
45:16
All right, Eugene, you've got five minutes to talk about NDU. I want you to dig in with me, because this is huge, and I'm gonna stage it from the perspective of having been at a legacy provider. You've worked with legacy storage. Everything you do when you move from one platform to another is nothing more than heavy-lifting the data from one place to the other, a gigantic robocopy.
45:36
And I don't mean just if you're changing vendors. If you're upgrading within a generation of legacy storage, same thing: plop the new one down next to the old one, move the stuff, sit there taking things offline to be able to do that. And from the side most technical folks don't see, the financial aspects of that are
46:02
terrible too, because I'm paying for both at the same time. And if you're talking really big amounts of data, you're talking six to seven months that you're paying for two things that should be only one in your data center. So depreciation is affected, your budgets are affected, all of that is affected for that length of time. That's what we've managed to fix with NDU, and now I'll turn it over to you.
46:26
Yeah. And I just wanna bring this up again from an efficiency standpoint: you don't have two things running at the same time. But there's also the space aspect. A lot of customer sites I've visited simply don't have the space to house multiple storage systems. Space is at a premium.
46:45
And I know a lot of folks are in colos or in smaller cages, and I've been through a lot of these; these are personal pains that I've lived. You just don't get the space, you don't get the time, and there's somebody screaming at you about how expensive this is. So let's talk really quickly about what this is.
47:07
First of all, this is not limited to just one chassis. We have customers that are running the same array identity from when they were on, say, an FA-405, when this was like a SAS shelf and two server heads, that then non-disruptively went to the new Long Life chassis, upgraded to an //M, then inside that chassis upgraded to an X R2, then to an X R3, and now going to be upgraded to
47:35
an X R4, all of that while the workload is running. So we have arrays that have been around for essentially ten years but have been through four different generations of hardware. This has been Pure's goal from day one, to be able to do it this way.
47:58
Right now, we've also put out FlashArray//XL, and we're able to upgrade from, let's say, an X70 to an XL170, again non-disruptively. The only caveat is around connectivity. We're also able to replace a chassis non-disruptively. That's another thing: we can take a chassis
48:21
that has failed, or is about to fail, or something is going on, and just replace it non-disruptively. Amazing. So the way that works with the two controllers: the performance envelope is slightly oversized to accommodate things like non-disruptive hardware upgrades. And the hardware piece that is the most amazing here...
48:51
Oh, we lost Eugene. He's on a hotspot, but I can jump in and tell you where he was going with this. It's the NDU process he was just talking about. Oh, he's back. I was gonna cover for you.
49:12
Sorry, guys, I figured I would get dropped. I hope my audio didn't get garbled too much. No, you were fine. You were talking about being able to swap all of the guts of a legacy FlashArray into the more modern stuff, so I'll let you go on. Yeah. So most of our upgrades are sub two hours. For instance, an X R2
49:38
to an X R3 is usually right around the two-hour timeframe. On this slide, we're doing an NDU from FlashArray//X to FlashArray//XL. Now, that is slightly more involved, because we're going from one chassis to another chassis. There is a lot more staging, there's a lot more
49:57
cleanup, but it's still non-disruptive. And it's under six hours to migrate an entire array's worth: you might have, you know, 1.5 petabytes of effective capacity in there, and it's migrated in under six hours, all while everything is still running. So that is the fascinating piece of how the hardware NDU works, because, yeah, software upgrades, most people can get those right.
50:23
This is different. This is not a migration. It is truly the same array identity, the same WWNs; there's no rezoning, no re-anything, literally nothing for you to do except make sure the paths come up once we're done. But we're also monitoring that. Our support is monitoring that, the professional services
50:48
engineers are looking; we check how many paths you had, and then we make sure we have the same number. So, I could probably talk about the technical side of this for hours. We don't have hours, but it's an amazing process. If you get to be in a data center while Pure is doing a non-disruptive hardware upgrade, take the time and watch.
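The path-count check described here can be sketched as follows (host names and counts below are hypothetical; in practice the numbers come from multipath or initiator tooling on each host):

```python
# Sketch of the path-count check: record how many live paths each host sees
# before the controller swap, then verify the same counts afterwards.
# Host names are hypothetical; real numbers come from multipath tooling.
def paths_restored(before: dict[str, int], after: dict[str, int]) -> bool:
    """True only if every host sees exactly as many paths as it did before."""
    return all(after.get(host, 0) == count for host, count in before.items())

before = {"esx-01": 8, "esx-02": 8}
print(paths_restored(before, {"esx-01": 8, "esx-02": 8}))  # True
print(paths_restored(before, {"esx-01": 8, "esx-02": 6}))  # False
```

The idea is simply that the upgrade isn't declared done until every host's path count matches its pre-upgrade baseline.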
51:10
It's amazing. Yeah. One of the things to call out on the non-disruptive upgrade on this slide specifically, too, is that big red Epic on the right there. You know, that's a patient management system that a lot of hospitals use. So hospitals are very dicey about that system being down for whatever reason, because then you have to go to paper records for patients and all of that stuff.
51:33
So showing the non-disruptive process staged against Epic says a lot: we truly do mean it's non-disruptive. I know everybody's gonna wanna go with maintenance windows and everything, but the amount of heavy lifting to prep for it is almost nonexistent, because, as you said, we don't change WWNs, we don't change zones; all of that stuff doesn't change.
51:57
So the NDU is definitely one of our strengths, and I could talk about it for hours as well, simply because I've been a part of the heavy lifts in the past. So, we're down to the last five minutes. I'm gonna do a real quick touch point on some things that we also feel are a part of the efficient data center. You know, what workloads should you consider for //X? All of them,
52:19
of course. It means more robust IOPS and better latencies for databases and VMware and XenServer. And I hope we've painted the picture well enough to show that the inner workings of the FlashArray, combined with the new networking changes, mean we really have brought to bear a platform that can bring all of these applications together onto a single ecosystem, which in
52:47
itself is efficient, right? And a lot of that has to do with the fact that we're also unified now. So a FlashArray is capable of providing not just block storage but file-based storage too, for NFS and SMB shares. So if you're in a data center that's using NetApp or a dedicated NAS provider along with a dedicated block provider, you can combine those functions onto our arrays and have all of the
53:14
same deduplication, data reduction, and availability, DFMs, all of that stuff, in the same platform, because it's all built on Purity//FA, which is the operating system itself. The block services and the file services are peer services within Purity//FA, which means they take advantage of all the same subsystems. Which is really cool, because if you're deduping and compressing in block, that is reflected
53:43
over in file as well, so you'll get a synergy of deduplication and data reduction between the two separate services themselves. Another thing to bring up as it relates to efficiency is Pure1. So it's not just the hardware; it's the software, and then it's the management ecosystem. And Pure1, to us, is our AI-operations type of thing that, when you deploy the edge
54:08
services that report to Pure1, gives you everything that you see on the bottom there, between real-time monitoring, analytics, and the ability to move your workloads, or at least determine that you need to move your workloads, because there are some technologies behind the scenes that can enable that. A lot of policy-based management, too: being able to go to Pure1 and do self-service upgrades to all of your arrays,
54:31
looking at things as a fleet, as opposed to a hodgepodge collection of arrays. So Pure1, for certain, is another one of the big things that's out there. So, wrapping things up: it's multidimensional, this whole efficiency thing that we've talked about for the last hour or so. Yes, we focused a lot on the hardware, and the hardware matters.
54:53
The power savings have ripple effects, you know, from the DFM all the way through to the other things that we do. Fewer power whips, less battery backup, generator capacities: all of that changes when you move to an all-flash data center. And we've got to be able to leverage that hardware with a more modern storage operating
55:12
system, which is what Purity//FA is to us. And then that bigger picture is obviously what matters most when you're talking about efficiencies, because you have to change processes and visibility and all of that stuff. I do want to do a shout-out to somebody in chat who called me out and said that my poll forgot about Fibre Channel on the speeds-and-feeds one where we were flexing.
55:36
So somebody had Fibre Channel out there. I hope it's 16 or 32; whatever your Fibre Channel is, that's fine by us as well. I'm an Ethernet guy, and I apologize for my oversight on the poll question.

Donald Poorman

Technical Product Marketing Manager, Pure Storage

Eugene Glik

Technical Marketing Engineer, Pure Storage

Knowledge is Power. If you want to get more from modern data, Pure’s Tech Talk series can supercharge your storage savvy. Led by Pure solution experts and industry guests, these lively discussions explore features, live demos, and best practices to address the use cases you care most about. You’ll come away informed, inspired—and ready to unleash the full power of modern data for your business. Uncomplicated Data Storage, Forever.

Performance and Efficiency. It's what's needed to drive your business forward, but the two are typically at odds. You want to help your company innovate but are trapped in a cycle of day-to-day operations, just keeping all the different systems you use to manage data running. Isn't it time to break free and never worry about legacy storage systems again?

Pure is revolutionizing the storage industry by delivering the largest-ever advancements in performance, efficiency, and security with our next-generation FlashArray platform.
