40:42 Webinar

Are You Dealing with Unconstrained Growth in Unstructured Data?

In this session, see how an all-flash storage architecture from FlashBlade delivers the unstructured data outcomes that storage leaders need—with unparalleled performance, scalability, resilience, and power efficiency.
This webinar first aired on June 14, 2023
Transcript
00:00
Thanks for coming to Accelerate. This session is "Are You Dealing with Unconstrained Growth in Unstructured Data?" Now, that's a lot of words to go along with our uncomplicated data storage message. But today, myself and my colleague Lana are going to walk you through some of the whys behind
00:22
FlashBlade and how it was designed to uncomplicate unstructured data. We talk a lot about what FlashBlade is, and you heard a lot of talk about the product this morning. But I think at the end of the day, what matters to a lot of people is how it works in your environment, why we made the architectural decisions we did when we built the product, and what impact that
00:45
has had over the last seven or eight years that the product's been out in the field. My name is Justin Emerson. I'm a principal product manager for FlashBlade. I've been with FlashBlade now for more than a year in the business unit; before that, I actually worked in the field with FlashBlade customers. Joining me in just a bit will be my colleague, and I'll give her a proper introduction,
01:08
but I want to set the stage for you on how complicated unstructured data was about two decades ago. For those of you who were around unstructured data back then, scale-out systems were always sort of the realm of those who were brave enough to wade into the complexity and the management and
01:34
all that kind of stuff. Scale-out systems were hard; scale-out systems were complicated. Some of the big reasons for that were the technology that was available at the time. Many of these scale-out systems were designed around commodity servers, commodity hardware, and this was certainly in vogue at the time:
01:51
this was the period when Intel was iterating on CPUs at a rapid clip, and everybody was saying, why should I build anything custom anymore when I can just put my stuff on commodity x86? So there was a big drive towards the commoditization of these scale-out systems. The next thing was that networks were slow.
02:10
And so you ended up having lots of complicated networks: multiple networks, front-end networks, back-end networks; sometimes they were Ethernet, sometimes they weren't. And the other key thing was that all of the systems that existed at the time really existed in a world of file and file alone. While object storage
02:30
did exist, and the concept of object storage is not that recent, at the time the big scale-out systems were all focused on a very file-centric architecture. And to be perfectly fair to all of those systems that existed two decades ago, this was a perfectly reasonable approach to take.
02:48
But less than a decade later, a lot of these architectural decisions became problematic; they became counterproductive. In order to achieve performance in a scale-out context, most of these legacy systems were designed around spinning disk, around the constraints that spinning disk puts on the engineering of a system. When you have spinning disk, your number one
03:14
enemy is seek time, so many systems were optimized around that. And of course, as you know, flash has no seek time. So that's a very, very different set of architectural decisions and design constraints that you're designing for when you have all flash.
03:30
The other thing is that, although this wasn't new at the time, unstructured data around this time was really exploding. This is where you were seeing the collection of data move out of the realm of the so-called Web 2.0 companies and into the mainstream; suddenly everybody was trying to collect everything. The famous Economist front cover, which said data is the new
03:57
oil, was around this time. And so everybody was saying, I have to keep storing everything; I'm not really sure why, but I've got to store everything. And so the growth in unstructured data really took off. The other thing that was happening around this time is a huge shift in the unstructured
04:14
data space from file-centric to both file and object. A lot of this was driven by the proliferation of applications in the cloud, where it became much more economical to use object storage, but also because, as we hit these huge amounts of unstructured data, we started hitting the scalability limits of what files and file systems could afford.
04:36
And so with all of these changes, there was a big gap in the market that we saw as Pure Storage. Around the middle of last decade, Pure Storage had been around for a few years, and we made the decision to build essentially a startup within a startup. And that was because, while we had this FlashArray product,
04:59
which is this absolutely revolutionary all-flash scale-up platform, we recognized this growth in unstructured data and our belief, which hopefully, as you heard this morning, has been borne out, that eventually flash was going to replace disk across the whole stack. We knew that we needed to approach designing a scale-out system to solve for these problems.
05:23
We needed to design a scale-out system for this new era of media, this new era of data, and this new era of access methods. So the question then is, how do you design a system like this? And I'm very privileged to have with me Lana, who was one of the original engineers on the FlashBlade product.
05:46
And actually, I found out she was at the Accelerate when we introduced the product back in 2016, so you may have seen her. She's a returning champion. Please welcome my colleague, Lana. Thank you so much, Justin, for finding such a kind way of saying you've been here forever, right? I have worked on FlashBlade since it was still
06:07
a code name. I still remember the times when our FlashBlade sales engineer, and there was just the one, was searching for alpha customers. Because there wasn't really much to show, what he would do is walk the future potential customers down the rows of engineers and say, look, as soon as they finish typing, we'll have something to put into your environment,
06:31
right? And following him was our VC, who was actually staying very close to the product. He would also make his rounds and offer his timeless words of encouragement. Are you ready for it? Type faster. And fast we did; it was a very fast and dynamic environment. We were actually putting in the
06:55
principles for years to come. And in this environment, when everything is so dynamic and fast moving, you have to code to principles. The three principles that we were following were scalability, efficiency, and simplicity. As you heard Justin say, we knew that we were just at the beginning of the explosion of
07:17
unstructured data. So whatever we were capable of putting together for FlashBlade in the beginning, we knew that the capacities we were going to be dealing with later on were going to be so much larger. And not just capacities: everything that comes with it, the number of file systems, the number of files, the number of objects, the amount of metadata that we are dealing with, all
07:36
of that was going to scale. We knew that the entry point for us was going to be in high-performance environments, and we also knew that the requirements from customers were going to be different. We knew that we needed to hit the smaller requirements as well as expand to the largest amount of compute that customers were going to
07:55
throw at us. With efficiency, we're actually very proud that the DirectFlash technology that you are hearing a lot about at this conference actually originated from this early work that FlashBlade was doing. And the idea behind it was that we wanted to grab the absolute latest in flash NAND technology and have direct access to it, to benefit from it.
08:21
And then finally, we were the second product that was coming onto the market. FlashArray was already wildly successful with its simple approach, so we had a high bar to hit there. I want to talk more about scalability, and the best way to talk about scalability is by actually taking you through the story of one of our customers.
08:41
They started with us in 2017; they grabbed the 17-terabyte blade that we had just released at that point in time. They started with a nine-blade configuration. They were building out a deep learning pipeline for an internal use case, and basically this was the small configuration to just dip your toes in and see how it goes.
09:02
The next year, they came back because they needed more performance, so we expanded them non-disruptively from nine blades to 12. By the end of the year, they were successful, and as they did the calculation, they realized that they needed a lot more capacity. So what we did is we non-
09:21
disruptively upgraded them in place from 17-terabyte to 52-terabyte blades. And they kept going with us. Once they reached the limit of a single chassis, we upgraded them to a two-chassis and then to a three-chassis configuration. Overall with us, they have gone through six expansions, increasing their capacity 16x in that timeline.
09:49
What has allowed us to be able to do that is the modular design of our hardware. In our original FlashBlade, you can expand by adding blades. With FlashBlade//S, what we are doing is improving that capability: now you can expand the compute by adding blades, and you can expand the capacity by adding drives. Now, there is a third aspect that you generally think about as you're thinking about scale and
10:17
distributed systems: it is compute, capacity, and networking. Luckily, for FlashBlade, you do not need to think about that, because what we have done is integrated networking. If you turn the chassis around, at the back of it is a fabric module, and you can sort of see what happens when engineers are allowed to name something.
10:41
Let me just explain a little bit. Basically, with the networking you have the front-end, back-end, and control networks. What we have done is software networking that encapsulates all of it, so it becomes a fabric; hence, fabric module. Yeah, you can tell we're not allowed to name anything anymore. So, because of the integrated
11:07
networking, you can scale by adding blades and adding drives. You do not need to plumb through the networking part of it. The only time that you need to touch the cables is when you expand from a single chassis to multiple chassis; then, as you can tell, we add on the XFM, the one at the top right, and you interconnect the chassis into that XFM, creating a single
11:30
networking space. And over here on the right, you can see a picture of a multi-chassis FlashBlade in our lab environment, and I hope you can tell just how much effort we have put into having as few cables as possible in this environment. Let's talk capacities for a little bit. We can start:
11:52
the smallest configuration is seven blades with one 24-terabyte drive each, and the largest is actually 10 chassis, fully loaded. So let's explicitly talk about capacities. I want to draw out the timeline of our improvement in capacities. We started back in 2017 with 120 terabytes; that was based on our eight-terabyte blades. And then we kept on improving the capacity.
12:18
We went to 52-terabyte blades, we went to five chassis and then 10 chassis. FlashBlade//S came in with its improvement in how much capacity we can do, and we are here today at 9.6 petabytes; that is five chassis of fully loaded FlashBlade//S, and we can keep going. In the near future, we can go to 10 chassis.
12:45
And you have heard an announcement today about 75-terabyte drives; that will bring us to 30 petabytes. And the flash technology is already there so that we can build out drives up to 300 terabytes; that would put us at 120 petabytes raw. I hope you get goosebumps, because I absolutely do.
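As a quick sanity check of those headline numbers, here is an editorial back-of-envelope sketch; the per-chassis blade and per-blade drive counts below are assumptions about a fully loaded FlashBlade//S-class system for illustration, not figures stated in the session.

```python
# Back-of-envelope raw-capacity math behind the figures quoted above.
# Assumed layout for illustration: 10 blades per chassis, 4 DFMs per blade.
BLADES_PER_CHASSIS = 10
DFMS_PER_BLADE = 4

def raw_capacity_pb(chassis: int, dfm_tb: int) -> float:
    """Raw capacity in petabytes for a fully loaded system."""
    drives = chassis * BLADES_PER_CHASSIS * DFMS_PER_BLADE
    return drives * dfm_tb / 1000  # 1 PB = 1,000 TB

print(raw_capacity_pb(5, 48))    # ~9.6 PB: five chassis of 48 TB drives (today)
print(raw_capacity_pb(10, 75))   # ~30 PB: ten chassis of the announced 75 TB drives
print(raw_capacity_pb(10, 300))  # ~120 PB: ten chassis of future 300 TB drives
```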
13:13
We started at 120 terabytes. I still remember the around-the-table discussions: should we be shipping 52-terabyte blades? Would that be too much? Let's just ship it and see what the market tells us. And the market told us: we need more. So we have increased 80x already, and we can increase another 12x.
13:34
So please tell us: do you need this? Do you need even more? We have talked about our scalability in capacity; let's talk about our scalability in performance, coming back to the customer with the deep learning platform.
13:51
Not only did they increase capacity, but they also improved performance 6x. Now, where did the 6x come from? First, they went from nine blades to 15; that's about a factor of 2x. And then another factor of 3x came from going from a single chassis to three chassis. I hope you're putting this together: we are able to scale close to linearly with our
14:15
performance. And that's because we have invested significantly in distributing everything as much as possible: distributing the connections, the file systems, the files, the sub-parts of the files, so that there isn't a single point of communication and so that, as we get more hardware to work with, we can continue scaling.
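To make the "no single point of communication" idea concrete, here is a minimal, purely illustrative sketch of spreading files and file sub-parts across blades with consistent hashing. It is not FlashBlade's actual data layout, just the general technique the description above gestures at.

```python
# Illustrative only: a toy consistent-hashing ring that spreads file-system
# objects across blades so no single blade becomes a fixed hotspot. Adding a
# blade remaps only a fraction of keys, which is the property that makes
# non-disruptive expansion practical in systems built this way.
import bisect
import hashlib

class Ring:
    def __init__(self, blades, vnodes=64):
        self._points = []
        for blade in blades:
            for v in range(vnodes):
                h = int(hashlib.md5(f"{blade}:{v}".encode()).hexdigest(), 16)
                self._points.append((h, blade))
        self._points.sort()
        self._keys = [h for h, _ in self._points]

    def owner(self, key: str) -> str:
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        i = bisect.bisect(self._keys, h) % len(self._points)
        return self._points[i][1]

ring = Ring([f"blade{i:02d}" for i in range(15)])
# Sub-parts of one large file land on different blades, spreading the load.
print({ring.owner(f"/fs1/model.ckpt#chunk{i}") for i in range(8)})
```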
14:36
Another two areas that I'd really like to highlight our investment in are small files and metadata. There, we have significantly invested in our distributed engine that is powered by our underlying technology. And then lastly, DirectFlash: with the ability to access the flash directly, we can squeeze the most out of the capabilities of the drive.
15:01
One good way of telling whether the FlashBlade OS environment is able to scale is to see what happens as you upgrade the hardware. On FlashBlade, we have recently undergone an upgrade from the original FlashBlade to FlashBlade//S. In there, we have gotten more CPU, more networking, more memory, and QLC flash. Well, you know, from a performance perspective,
15:23
QLC flash really shouldn't be put in the advantages column. So let's just say it like this: with better networking, with more CPU, with more memory, and in spite of QLC flash, we were actually able to achieve 2x more performance across the different metrics of performance, whether it is throughput, whether it is small files or metadata, or whether it is single-stream performance.
15:46
We were able to achieve 2x performance. And this is not just coming from our lab measurements; this is coming from real customer data. This customer, they're an EDA customer, and they were hitting the limits of a two-chassis FlashBlade. They wanted to see whether FlashBlade//S would be
16:05
able to offer more for them. We gave them a POC of a two-chassis FlashBlade//S200, and they were able to observe between 1.8x and 2.5x more throughput on their workload. We have talked about scalability in capacity and in performance. Let's talk a little more about efficiency.
16:31
Now, the last couple of years have been rough for everybody. We are hearing again and again in conversations that customers are running out of space or out of power in the data center, or, just plain for everybody, energy costs are rising. We have been paying attention to efficiency for a while, and we hope that we can
16:55
assist you on this journey. The way we think about efficiency is in three domains. We think about architectural efficiency: how well are we using the hardware that we are working with? The physical space efficiency: what is the physical footprint per petabyte? And the power efficiency: what is the power draw per petabyte that we are using?
17:19
With architectural efficiency, there is really a lot that I could be talking to you about, but the one that I want to talk about the most is DirectFlash. Basically, what is the advantage of DirectFlash over SSDs? Well, think about it: an SSD is essentially a mini storage array that is making its own decisions. So what is the advantage of making global
17:42
decisions over local decisions? For one, think about it from the scheduling perspective: the global view lets you decide, for example, between front-end operations and background operations, and how much to give to each one of those at every point in time.
17:58
In terms of reliability, think about it: every system needs to clean up at some point. When you have a global view, you can decide the most efficient item to clean up as well as the best time. You can delay the clean-up as much as possible with that global knowledge, which allows us to improve reliability significantly.
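Here is a toy model of why that global view pays off for cleanup; it is an editorial sketch of the general idea, not Purity's actual garbage-collection algorithm.

```python
# Toy illustration: with a global view, cleanup can target the least-valid
# erase block across ALL drives, so fewer still-valid pages have to be copied
# per erase than when each SSD can only clean within its own local view.
import random

random.seed(1)
# valid_pages[drive][block] = still-valid pages that must be relocated on cleanup
valid_pages = [[random.randint(0, 256) for _ in range(8)] for _ in range(4)]

def local_pick(drive: int) -> int:
    """Each SSD cleans its own least-valid block (local decision)."""
    return min(valid_pages[drive])

def global_pick() -> int:
    """A system-wide view cleans the least-valid block on any drive."""
    return min(min(blocks) for blocks in valid_pages)

print("cost of local picks :", [local_pick(d) for d in range(4)])
print("cost of global pick :", global_pick())  # never worse than any local pick
```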
18:23
And then finally, in terms of space efficiency, we only need to over-provision to hide certain aspects once, not twice as you would with SSDs. With physical space, I would like to hit three points. Number one is that our blade technology allows us to have the compute in a much more compact representation. With DirectFlash,
18:50
we are laying out the flash ourselves. And then finally, as can be evidenced in the FlashBlade to FlashBlade//S transition: even though the original FlashBlade was 4RU and FlashBlade//S is 5RU, so just a 25% increase there, in terms of capacity we are able to increase over 2x, resulting in significant physical space savings.
19:16
With our roadmap of being able to go up to 300-terabyte drives, we will be able to continuously improve the physical space efficiency of our product lines. And then finally, with power efficiency, I would like to hit two things. One is that we get significant savings from using flash instead of hard drives. And number two is that we actually invest quite a lot in development to ensure that we use
19:43
as little power as possible, resulting in 1.3 watts per terabyte on our product line. Now, we have talked about the original FlashBlade and FlashBlade//S, and what we were really optimizing for there is performance. Where was the, I don't know what happened there. I guess I'll go on. Sorry, please continue. With
20:14
FlashBlade, the original FlashBlade and FlashBlade//S, what we were optimizing for is performance. But what happens if performance is not the most important thing? You still need performance, but that's not quite the most important angle. What if you optimize for a fourth thing, a fourth efficiency that I haven't listed out: cost efficiency?
20:36
Justin will be able to tell us what we were able to come up with once we changed the optimization target. Yeah. OK. Let's see if we can do this without blowing everybody's ears out. Awesome. Thank you,
21:00
and apologies for that. OK. So as Lana was talking about, when we entered this market, we entered it with an eye toward flash penetrating these performance-oriented workloads first. But we knew that there was going to come a crossover point where we could address workloads that weren't that performance sensitive, even with a flash platform.
21:25
So that's what we introduced earlier this year. The reason this is so important, especially as we talk about energy efficiency, is that in the world today, including data living in public clouds, analyst estimates are that 90% of data, of bits, still lives on spinning disk. If we anticipate that unstructured data is going to continue to grow at a rate similar to
21:51
what it has over the last several years, we're going to see something like 10x the amount of unstructured data by the year 2030 that we have today. Now, if we need to take disk-based solutions and scale them up by 10 times, that's a real problem. It's going to be too much power, too much complexity, too much space.
22:12
So we have to undergo this transition to flash if we want to be able to do this sustainably. And that's where FlashBlade//E comes in. FlashBlade//E is the newest member of the FlashBlade family. It is built on the same second-generation hardware platform that FlashBlade//S is based around. But what FlashBlade//E does is,
22:37
instead of optimizing for performance and efficiency, it optimizes for both efficiency and cost. And so FlashBlade//E is designed to go after, very explicitly, those workloads which until recently could only have lived on spinning disk for economic reasons. And so by taking the same architecture that
23:07
Lana talked about, that we built with the original FlashBlade and then with FlashBlade//S, what we've done is we've taken a scale-out system which scales symmetrically and started scaling it asymmetrically. What's important to note, one of the things that Lana said, is that we wanted to make sure that everything in the system was able to scale. And so there are many components in the system.
23:31
When I say components, I mean little bits of software; you can think of these as services or microservices that compose the system. In FlashBlade//S and in first-generation FlashBlade, those components all scaled symmetrically; every blade in a FlashBlade//S cluster is homogeneous. But with FlashBlade//E, what we have done is we've changed that paradigm, and now we are
23:59
scaling the system asymmetrically. So now, instead of every blade being the same, there are two kinds of blades and two kinds of chassis. While they are physically the same, one chassis, which is called the control chassis, is populated with EC blades, and then subsequent chassis, which we call expansion chassis, are populated with EX blades.
24:21
And as you can see here, while the EC blades run all of the services, the same as they do on FlashBlade//S, the expansion chassis are only running our DirectFlash software, the part of Purity that interfaces with that DirectFlash. And that allows us to cost-optimize, and it allows us to build large flash systems that can offer predictable performance, performance that's better than disk,
24:49
but at a competitive acquisition cost. And hopefully, for all the reasons that we discussed this morning, a better TCO and, most importantly, a better customer experience. We've had actual customers that moved to FlashBlade from spinning disk, and this is before we even had FlashBlade//E, because they literally had somebody whose job it was to run around replacing failed drives.
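To make the symmetric-versus-asymmetric distinction concrete, here is a small editorial sketch; the service names and blade counts are invented for illustration and are not Purity's real service model.

```python
# Toy sketch of symmetric vs. asymmetric scale-out: control (EC) blades run the
# full service stack, expansion (EX) blades run only the flash-management
# service, so capacity can grow without paying for the full compute stack.
ALL_SERVICES = {"protocols", "metadata", "placement", "directflash"}  # invented names
EX_SERVICES = {"directflash"}

def build_cluster(ec_blades: int, ex_blades: int):
    blades = [{"type": "EC", "services": set(ALL_SERVICES)} for _ in range(ec_blades)]
    blades += [{"type": "EX", "services": set(EX_SERVICES)} for _ in range(ex_blades)]
    return blades

symmetric = build_cluster(ec_blades=20, ex_blades=0)    # every blade identical
asymmetric = build_cluster(ec_blades=10, ex_blades=30)  # capacity-heavy expansion

for name, cluster in (("symmetric", symmetric), ("asymmetric", asymmetric)):
    full_stack = sum(1 for b in cluster if b["services"] == ALL_SERVICES)
    print(f"{name}: {len(cluster)} blades, {full_stack} running the full stack")
```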
25:15
The result of this, and I'll take questions at the end, but I do see you, is the advantage that you get from a power standpoint. Again, these EX chassis actually use less power than the EC chassis, which means that the larger the system, the more power efficient it gets. And so with FlashBlade//E, we're seeing even greater power efficiency.
25:38
And when you compare that not to all-flash systems but to all-disk systems, the difference is pretty staggering. And so coming back to the original design goals that Lana talked about: she talked about the scalability aspect, and we've talked about the efficiency aspect, both in terms of performance efficiency,
25:59
space efficiency, architectural efficiency, and now cost efficiency. With FlashBlade//E, we really do have a single platform, a single scale-out platform, that can touch all the different capacity-to-performance points with a single architecture. But we have to keep things simple, right? We don't want to start rebuilding those complex
26:27
systems of the past. We don't want to build the same thing that was there before. So let's talk about remaining simple. Throughout all of this, we have tried to keep FlashBlade as self-tuning as possible. Now, there are classes, and I've actually talked to some people who've been to them,
26:46
some introduction courses to FlashBlade. But I want to be clear that we have built the platform to be as simple to use as possible. The greatest example I can think of for this is that when you create a file system, there is only one required parameter, and that's to give it a name.
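As an illustration of that "name is the only required parameter" point, here is a minimal sketch against the FlashBlade REST API; the API version in the path, the auth header name, and the array address are assumptions for illustration, so check the current FlashBlade REST API reference for the exact shape.

```python
# Illustrative sketch only: the endpoint version, path parameters, and auth
# header below are assumed, not authoritative. The point being shown is that a
# file system needs nothing but a name; size, tiering, and erasure-coding
# layout are handled automatically by the system.
import requests

ARRAY = "https://flashblade.example.com"        # hypothetical array address
HEADERS = {"x-auth-token": "SESSION_TOKEN"}     # assumed auth header

resp = requests.post(
    f"{ARRAY}/api/2.8/file-systems",            # assumed API version/path
    params={"names": "analytics-scratch"},      # the only required input: a name
    headers=HEADERS,
)
resp.raise_for_status()
print(resp.json())
```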
27:07
If you don't give it a size, it'll just assume you want the whole box. You don't have to say what tier it is in, you don't have to say what disk group or what RAID type or what erasure coding setup you want; all of that stuff is done automatically by the system. And lastly, we need the scaling to stay simple. This is a picture from one of the original FlashBlade decks a long time ago, where we were
27:31
looking at other scale-out systems in data centers and the numbers of front-end networks and back-end networks; it just got really gross. And so we continue to integrate one of the most complex parts of any scale-out system, the networking, and do it automatically. That's an entire area of expertise that you don't have to learn.
27:55
You don't have to think about it, because the system has all of that software-defined networking built in. So simplicity really does matter. But, as Matt said this morning, don't mistake simplicity for unsophistication. We have continued over the last seven or eight years to deliver incremental
28:18
improvements in terms of both our performance and our feature function. A great example of this: in the most recent Purity release, we just introduced even more replication options, including fan-out object replication. We continue to focus on additional security features, including the introduction not only of SafeMode several years ago, which was first introduced in the FlashBlade product,
28:44
but now with object lock, bringing that same, even greater, level of granularity to immutable snapshots. But all of this is to say that we deliver additional features over time, and we do it non-disruptively and without charging you extra for a new feature; nobody paid for replication when we introduced it. That was just part of the system.
29:11
So today, with our FlashArray portfolio and now with our FlashBlade family, we really do have what we think is a portfolio that can address all of your unstructured data needs. And hopefully, what you've seen is that over time, that advantage is only going to increase.
29:39
Charlie said at the keynote that if we are three years ahead now, we will be three years ahead in three years. And that's because we are really investing in this scalability, we're continuing to focus on this efficiency, and we keep trying to make sure that we keep it simple.
30:01
That's part of the whole design philosophy behind FlashBlade. So hopefully, that's given you a perspective on why the system works the way it does and how we designed for it. If you want to learn more about FlashBlade, we'll be taking questions shortly after this, but you can also find more at purestorage
30:23
.com/flashblade. I hope you all give it a look. And for those of you who are FlashBlade customers: first of all, thank you, and second of all, I hope that we have really endeavored to uncomplicate your data storage. So thank you very much. We'll be taking questions;
30:41
you can ask myself or Lana. I guess I'll run the mic around, as it were. Do we have any questions? I know we had one back there. What's your question? Yeah. Two questions. One: the scaling on the different types of blades, with the EX and the ECs.
31:07
Can you scale both ways with it, or is it only EX blades that scale? Great question. So the idea behind FlashBlade//E and how it scales is to meet a minimum performance target at a particular capacity. If you look at the part numbers for FlashBlade//E, there are three: there's a small, medium, and large today. And going forward,
31:33
we will continue to have that simplicity, where what we're selling is a capacity point. The only number in that SKU is the number of terabytes. So while we are explaining how the system works, how we're able to achieve this asymmetrical scale, we're not adding that configurability to the system; the system is designed to meet a certain performance-to-capacity ratio.
31:57
So as you add chassis, there will be configurations in the future that have multiple control chassis, but that complexity is not something that we are foisting on customers. Customers don't have to think about that. What you have to think about is, how many petabytes do I need, and that's the one question. So does it kind of read like a DirectFlash Shelf on the FlashArray side of the world?
32:19
So the question is, is it like a DirectFlash Shelf? A DirectFlash Shelf very explicitly adds additional capacity to the existing controllers. There are still blades in the expansion chassis, and those blades still do add additional compute. That's because, at a certain point, you're going to run out of things like PCIe lanes or the compute ability of those control
32:40
blades to address additional storage. So there are actual blades in there. But if you want to think about it conceptually, you can think about the control chassis as a controller and the expansion chassis as shelves, but that's sort of imperfect, because we can have additional control chassis in the future. So both aspects can scale out. So rather than it being a scale-up system,
33:04
I like to say, as I said here, it's an asymmetrical scale-out system. Any other questions? We were told to keep the presentation to 30 minutes and leave 15 minutes for questions, so we were diligent about that. I have several customers looking at the E, the largest version of it, for a multi-petabyte use case.
33:28
And then you have the scale going to 30, 60 petabytes. What do you imagine the change would be for a customer like that, that goes the largest route that you have today, to scale up to the largest in the future? You want to take that one? No. OK. All right. So, today, what we're shipping
33:50
are chassis of 48-terabyte drives. And so the largest configuration that we're shipping today, and it will be larger soon, is about eight petabytes; it's a four-chassis system. When we go to larger DFMs, those are going to be introduced into the E family, for sure.
34:10
One of the advantages of having an asymmetrical architecture, where we're not overly focused on performance and linear scalability, is that we anticipate the ability to have mixed environments, where somebody will have chassis of 48s and chassis of 75s, or in the future chassis of larger DFMs.
34:35
And that's because with FlashBlade//S we want everything to be the same, because we want to have that predictable performance profile, so that a customer, if they so desire, can scale linearly on a particular performance per terabyte. But E isn't designed that way; E does not have that as a sort of design tenet. So we will be able to have mixed configurations in the future.
35:01
But if a customer does want to, say, for example, upgrade those existing systems, that's probably more of a conversation around something like Evergreen//One, which is available in our new UDR tier, which means that whatever hardware we deliver is our problem and not the customer's. So that's another way to approach that as well.
35:26
And just to say it explicitly: you have seen that the customer that I was showing upgraded from 17-terabyte blades to 52-terabyte blades. That capability will also be possible with FlashBlade//E as we release the larger sizes of drives. So even though today it would be 48, once we release the 75 and, later, larger ones, there will be a capability to upgrade in place and maintain the same physical footprint.
35:53
Question? Yeah. How power efficient is FlashBlade//E compared to the original FlashBlade? And would that be for the same part? That's great. OK, so what I'll first start off by saying is that
36:25
the successor to the first-generation FlashBlade is FlashBlade//S. Most FlashBlade customers are probably using FlashBlade because they care about the performance of their unstructured data, and so our expectation is that most original FlashBlade customers will want to move to FlashBlade//S, and we have programs to do that.
36:50
We'll be introducing fully non-disruptive upgrades from our first-gen FlashBlade to FlashBlade//S later this calendar year. That said, there are customers that may be using FlashBlade that have workloads on there which they've put there because of its simplicity, its ease of scalability, and its simple management, and that don't need the performance. So those customers may be looking to move those
37:10
workloads to something like a FlashBlade//E. FlashBlade//S is more than two times as power efficient as the original FlashBlade, and FlashBlade//E is even better. So I don't know, I'd have to do the math on it at some point, but you can certainly look at it this way: the energy efficiency of the original FlashBlade
37:32
is definitely surpassed by FlashBlade//S. That's for a multitude of reasons: not just better CPUs and so forth, which give us better performance per watt in the compute layer, but also the move from lower bits per cell to QLC technology, which gives us much more efficiency in the NAND layer, and then a bunch of other architectural efficiency improvements that the team has
38:00
worked really, really hard on implementing. So you absolutely should expect FlashBlade//E to be much more power efficient than our first-gen platform, because E is more power efficient than S, and S is more power efficient than the first-gen platform. I think it's something like nine to one, but I'd have to check that. Does that answer it? Was there one here, the next one?
38:27
Yeah. So that goes right to one of those quotes that was on Lana's slide. We talk to a lot of customers that are just out; they're not out of space, they're out of power. How many people here are out of physical space in their data center? How many people here are out of power in their data center?
38:50
So, more hands; that's what I would expect. Most people, and it's not true in every case, obviously, but most people run out of power before they run out of space, or rather they say, I'm out of space because I can't put anything in the empty space in the racks because I'm out of power. The thought process behind a lot of what Charlie talked about this morning is that, by replacing that
39:14
legacy storage, and even replacing first-gen FlashBlade or older FlashArrays with newer generations, you're going to be able to achieve significantly better power efficiency. That means that you can fit more storage in the same space, or you can use less power in the same power envelope, or you can deliver the same storage, probably higher performance at the same capacity, in a smaller footprint.
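As a rough illustration of that power trade-off, here is a back-of-envelope sketch using the 1.3 watts per terabyte figure quoted earlier in the session; the 8 W/TB figure for a disk-based system is a hypothetical placeholder for comparison, not a measured number.

```python
# Back-of-envelope only: how much raw capacity fits in a fixed power budget.
# 1.3 W/TB is the figure quoted earlier in this session; 8.0 W/TB for a
# disk-based system is a hypothetical placeholder, not a measured value.
POWER_BUDGET_W = 10_000  # e.g., a 10 kW allocation in the data center

def capacity_pb(watts_per_tb: float, budget_w: float = POWER_BUDGET_W) -> float:
    return budget_w / watts_per_tb / 1000  # TB to PB

print(f"all-flash at 1.3 W/TB           : {capacity_pb(1.3):.1f} PB")
print(f"disk-based at 8.0 W/TB (assumed): {capacity_pb(8.0):.1f} PB")
```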
39:42
There's probably some complexity in how you have them both running at the same time in order to make that transition. But the end state should be that with our newer platforms, whether you're comparing them to our older platforms or, especially, to other vendors' legacy platforms, you should have significantly more efficient usage of the power
40:04
that you do have. Yeah, I recognize you. I think you were in the video last year, right? Yeah, the Mississippi Department of Revenue. All right, awesome.
40:27
So, yeah, the gentleman over there is one of our beta customers for FlashBlade//E. So thank you; appreciate you being here.

A high-performance, file and object storage system can manage your analytics, AI, ransomware recovery, and technical computing workloads. Disk-based repository workloads can now be on flash, too. In this session, see how an all-flash storage architecture from FlashBlade delivers the unstructured data outcomes that storage leaders need—with unparalleled performance, scalability, resilience, and power efficiency.
