
Flexible Disaster Recovery for Hybrid Cloud

Come learn the benefits and tradeoffs of cloud-based DR with Pure to complement your DR portfolio.
This webinar first aired on 28 July 2022
Transcript
00:00
Now let me introduce our presenter from Pure Storage. We have David Stamen, Principal Field Solutions Architect. David, we'll go ahead and turn it over to you. All right, thank you very much, Emily. So like Emily said,
00:14
my name is David Stamen. I am a Principal Field Solutions Architect here at Pure, and I focus on our platforms. So that is all of our solutions and how they integrate with cloud providers, whether it's on premises, so VMware or Hyper-V, or whether it's some of our cloud-based offerings such as Equinix Metal, Amazon Web Services, and Azure. You can find me active on social media,
00:35
whether that's Twitter, LinkedIn, or GitHub, and I also have some active blogs on all things virtualization as well as storage. So when we think about the agenda today, we're going to talk about what Pure can help with in the cloud, some of the ways that we can make it a little more efficient, and also provide you some of this flexible disaster recovery. Because really, in this day
00:56
and age it's getting really complicated for a lot of customers to do disaster recovery, because either their sites are no longer there, they cannot get the equipment for those secondary sites, or that equipment is aging out. So I'm going to provide some alternatives where we can leverage the public cloud for disaster recovery.
01:15
And so when we first think about your data in the cloud, it really is going to be a little bit different than what you think about on premises, because on premises you have these arrays that are there with their built-in snapshots. When you think about the cloud, resiliency works a little differently: you have to pay for protection, pay for global replication, and pay for high
01:34
availability. When we think about efficiencies on premises, well, here at Pure Storage we're very efficient: we thin provision and compress all of our data globally across the entire array. When we think about efficiencies in the cloud, they're really not there, and I'm going to dive deep into why native cloud storage is not really efficient. And when we think about cost
01:53
considerations, a lot of the costs are fixed when you have that on premises, but in the cloud there are really a lot of cost considerations. From a disaster recovery perspective, we're going to think about egress charges; ingress data is free, but if we can move less data in less time we can make that more efficient. We'll also talk about cold versus warm DR; that's also going to be a consideration from the
02:14
cost side. And overall, when you think about native cloud performance, IOPS and bandwidth can be very easily wasted. And so when we think about the cloud, it's really a different location with a lot of the same problems that we once had on premises; there are all of these trade-offs that we have to think about when we're trying to pick the right disk,
02:36
In this example we think about managed disks, so that's going to be an Azure managed disk or an EBS volume in AWS. When we think about this, a lot of the decisions we have to make are going to be capacity versus throughput, capacity versus IOPS. Why this is important is because for each disk, from an Azure or EBS perspective, whether we think about GP2, GP3, or Premium SSD, the performance comes with capacity. And so the larger the disk
03:04
you have, the more performance you have, and the smaller the disk you have, the less performance you have, so you tend to have to over-provision disks just to be able to get performance. And if you think about this from a DR scenario, if you're not using that disk all the time, you're having to size that disk for the worst-case scenario and you end up with a whole lot of wasted space.
03:25
When you think about performance versus latency, that's also a consideration, because we can have higher performance at lower latency. When we think about features, you might have some disks that require multi-attach, and not all disk types in the cloud provide that. And even if you think about AWS, there's a lot of flexibility there, but there's not really a lot of durability on those volumes,
03:46
so for disaster recovery, that might be fine. But when you start running production environments out of there, that is a big consideration. If we think about Azure, think about disk resizing: in Azure there is no online disk resizing, so if you ever need to resize that disk, well, you have to deallocate that virtual machine and then make those changes.
04:09
So there are a lot of considerations that we need to think about here. And if we think about snapshot capabilities, there's not a whole lot that's available in the cloud; I would say they're just basic snapshots. The snapshots will be stored either with Azure managed disks or EBS, or within their blob platforms. But when you think about restoring that snapshot, it's not going to be done instantly, and
04:31
there's going to be a write penalty behind that, because it's having to copy data behind the scenes. So there are some considerations you have to think about when it comes down to this. If you think about our Purity platform, that's what our FlashArrays run on, and there are really a lot of features that are just available out of the box.
04:49
So our best-in-class data reduction with thin provisioning and compression, our ransomware remediation with SafeMode, all of our replication, whether it's our ActiveCluster synchronous replication, our async replication, or our ActiveDR, and if you think about our high availability, our snapshots, and our always-on encryption: when we utilize our Cloud Block Store solution, all of this is going to be available to you on
05:12
top of each one of the public cloud providers. And so if we think about the challenges that we're solving today, well, obviously cost is going to be the biggest thing we think about in the cloud, because we want you to be able to pay less for your storage. When we think about disaster recovery, that's where we're going to dive into the individual use cases and find out what solution makes sense for you.
05:34
Because not only is cost going to be a factor in that, your RPOs and your RTOs are also going to be a big factor in identifying what solution we're going to use for that disaster recovery. Are we going to use our replication, or are we going to use something such as our CloudSnap offload? And when you think about that, I'm also going to dive into how we can utilize that disaster
05:54
recovery data for maybe your fast and scalable dev/test environments, or your analytics, getting better use out of your data. And then we're also going to talk about when you do migrate your data, what that's going to look like; I'm going to touch on that a little, but really the disaster recovery and dev/test will cover the bulk of it.
06:13
So when we think about cost efficiency in the public cloud, as I mentioned earlier, there really is no cost efficiency when you're using native storage. So what can we do to help reduce some of that storage? Well, we do deduplication and compression, so when we think about data being stored, we're going to consume less data at rest.
06:34
We do pattern removal and thin provisioning, which means we're only going to store the unique blocks that you actually consume. And when we think about our snapshots and clones, for one, they're instant, and they're all pointer-based, so they don't actually consume any additional storage outside of the changed data between snapshots. Which means as you're doing your disaster recovery and you're
06:54
replicating up multiple points in time, the data that you're storing is going to be greatly reduced, because you're only storing the changes. So given a 10% rate of change, each daily point in time will only be 10% of that data on top of the base snapshot. If we think about our costs and what it takes to get your data into the cloud,
07:16
whether this is from an efficiency standpoint, this cost is a big one. So if we think about egress traffic, right, this is when you're failing back from the cloud: the cloud providers will bill you for each byte that you transfer out. So if you can transfer less data out, that's where we're going to see a huge increase in savings. Think about a 100-terabyte example: there are 100 terabytes that you need to replicate into the
07:38
cloud. If you're doing that with a native public cloud provider or a native toolset, you have to replicate 100 terabytes of data into the cloud, and say that takes you 10 days. Well, with Purity, we only replicate our unique data. So if that 100 terabytes is at 5:1, that's only going to be 20 terabytes of data that we have
07:59
to replicate. Which means instead of that taking 10 days, we can do it in five times less time, so that's going to take us, if my math is correct, two days. Right. And so the effort to get that data into the cloud is going to be reduced. There's not necessarily a cost associated with it at an infrastructure level, but there's a man-hour cost there.
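As a quick sanity check on those numbers, here is a back-of-the-envelope sketch in Python using the figures quoted above (100 TB, 5:1 data reduction, a 10-day native seed, 10% daily change). The figures are illustrative only, not a sizing tool.

# Replication-savings math from the talk's 100 TB example.
dataset_tb = 100            # logical data to protect
reduction_ratio = 5         # assumed 5:1 data reduction with Purity
native_copy_days = 10       # time to seed 100 TB with native tooling
daily_change = 0.10         # assumed 10% daily rate of change

reduced_tb = dataset_tb / reduction_ratio                  # 20 TB actually sent
seed_days = native_copy_days * reduced_tb / dataset_tb     # ~2 days to seed
failback_tb = reduced_tb * daily_change                    # ~2 TB of egress on failback

print(f"Seed: {reduced_tb:.0f} TB in ~{seed_days:.0f} days "
      f"(vs {dataset_tb} TB in {native_copy_days} days natively)")
print(f"Failback egress: ~{failback_tb:.0f} TB instead of {reduced_tb:.0f} TB")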
08:20
The same thing applies with that example: if we took that 20 terabytes in and only changed 10% of the data, when you have to fail back your data you're only having to replicate two terabytes of data instead of the full 20 terabytes. So a lot of savings there. And then preserving replication bandwidth: we preserve this through our compression and deduplication, which means as you size these replication pipes,
08:43
whether it's a Direct Connect or an ExpressRoute, we're going to reduce the link size that you actually need to be able to handle that data consistently. So really the idea here is that we want to make a product that serves the majority of your applications and helps with scale, because performance and capacity are typically bundled, resulting in that over-provisioning, and that's where we come in and optimize
09:08
this. So, I said I was going to dive into why we can be more efficient. Think about AWS GP2: it's the most popular disk out there, the general-purpose disk. With that, if you have a 100-gigabyte disk, well, you only get 300 IOPS. If you want to be able to get the max IOPS of that disk type, you actually need to provision roughly a five-terabyte disk to
09:29
get that 16,000 IOPS. And so in this case, think about it: you only need 100 gigs of data, but you need the performance, so you have to provision a five-terabyte disk. With GP2 and GP3 there's also low durability, only two to three nines, which means AWS says you should have a second copy of that data
09:46
so it can be protected. And so if you think about that, you have 100 gigabytes of data that you need to be performant and available, and you're having to store 10 terabytes of data within AWS to get that. So not really a lot of efficiency there from a savings point of view.
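A small sketch of that gp2 sizing math, based on gp2's published baseline of 3 IOPS per GiB (minimum 100, capped at 16,000):

# Why gp2 forces over-provisioning: baseline IOPS scale with capacity.
def gp2_iops(size_gib: int) -> int:
    return min(max(3 * size_gib, 100), 16_000)

def gp2_size_for_iops(target_iops: int) -> int:
    # Capacity (GiB) needed just to reach the target baseline IOPS.
    return -(-min(target_iops, 16_000) // 3)   # ceiling division

print(gp2_iops(100))              # 300 IOPS for a 100 GiB volume
print(gp2_size_for_iops(16_000))  # 5,334 GiB (~5.3 TiB) for max IOPS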
10:06
If you think about io1 and io2: io1 was kind of the same thing, where it was still fast but it wasn't durable. io2 built upon that and added durability, but it's very expensive; it can scale IOPS and bandwidth independently, but it also requires specific VM types to use it. If we think about Azure, on the other hand, capacities are limited to binary increments, so 4, 8, 16, 32, all the way up to 32 terabytes, which means there's a whole lot of wasted
10:30
capacity. Say you have a 4.1-terabyte database that you're trying to replicate up to the cloud. Well, you can't; you actually have to make that an eight-terabyte volume, and so you're almost doubling your cost of storage for that machine while having almost 50% wasted allocation. And the same thing applies:
10:48
you have to provision capacity to be able to get performance with those Premium SSDs. And if you need performance, well, that's where Ultra Disk comes in, but you have to provision for peak IOPS and there's a whole lot of wasted capacity there. So, let's think about our Pure Cloud Block Store offering:
11:02
it's built upon native AWS and Azure storage. It is a virtualized storage platform, which means that you don't have to worry about any physical equipment; it's not shared, and it's deployed on demand within your own infrastructure. IOPS are no longer provisioned on a per-volume basis, because it's done at the level of the entire array, so you can combine writes and reads from different workloads and reduce the amount of performance
11:27
that you actually need. And so when we think about data mobility, right, I covered what this looks like: moving data around in the cloud isn't always free. Going from an on-prem environment into the cloud, well, that is free.
11:43
But once it's in the cloud, if you need to move that data to another region or availability zone, there are costs associated with that. Luckily for us, as long as you're transferring data within the same AZ, from a VPC perspective, that's free. But think about it at scale: if you're moving data into the cloud, free; if you're moving data between availability zones,
12:04
it's normally about two cents per gig for the network transfer. If you're moving data between regions, it almost doubles, to about four cents per gig. So just think about how you architect that. With our Purity replication, we reduce those egress and ingress data fees, so again, reducing that spend with no impact on your data reduction.
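To put that in concrete terms, here is a tiny cost sketch using the ballpark per-gigabyte rates quoted above (cross-AZ at about $0.02/GB, cross-region at about $0.04/GB); actual cloud pricing varies by provider, region, and direction, so treat these as placeholders.

# Ballpark transfer cost for a failback, at the rates quoted in the talk.
def transfer_cost(gb: float, rate_per_gb: float) -> float:
    return gb * rate_per_gb

changed_gb = 2_000   # ~2 TB of changed data after reduction
full_gb = 20_000     # ~20 TB if the whole reduced data set had to move

for label, rate in [("cross-AZ", 0.02), ("cross-region", 0.04)]:
    print(f"{label}: changed-only ${transfer_cost(changed_gb, rate):,.0f} "
          f"vs full copy ${transfer_cost(full_gb, rate):,.0f}")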
12:26
If we think about our replication and how it reduces bandwidth, the same thing applies there: our storage footprint is reduced, lowering those cloud costs, but the data transfer costs and network link utilization are also minimized. I talked to a customer where they were trying to size the replication link to get
12:41
their data into the cloud, and they said, well, we need a 10-gig link for our Direct Connect. When we actually looked at it, the only time the replication link was really taxed was during that very initial data sync. Other than that, they were replicating almost no changes; they were at something like 40 megabytes per second every hour.
12:59
And so they were able to reduce their replication link and, again, reduce that spend. So really it makes replication time shorter, and again that's going to be dependent on your RPO, whether you're moving less data more often or more data less often. Some considerations there. But let's talk about disaster recovery, if we think about disaster recovery into the cloud.
13:24
Well, we have four options. We have two forms of asynchronous replication: our asynchronous periodic, as well as our asynchronous continuous, also known as ActiveDR. Asynchronous periodic, which is our most common one and is compatible with all of our storage objects, vVols, RDMs, et cetera, is going to have an RPO as low as every five minutes. If we think
13:46
about that, and I don't know if you want to pull up the first poll question here, but our RPO there is as low as every five minutes. When we think about our ActiveDR, that's going to have RPOs that can be as low as one second, because we're doing continuous replication to another environment and we're not having to
14:05
wait for that acknowledgement. Obviously, with hybrid cloud we're not going to be able to use ActiveCluster, where we can stretch a volume between your on-prem and your cloud environments, but it can be used to protect your data stretched on premises or, once you're in the cloud, to provide that high availability. And then the last one that we're going to talk about is CloudSnap.
14:25
CloudSnap provides the ability to offload snapshots asynchronously to the cloud and free up capacity, and this is where we're going to think about a cold DR. And so when we think about this, let's think about those RPOs and RTOs: are you looking for business continuity, disaster recovery,
14:44
a near-term archive, or a long-term archive? If we think about ActiveCluster, again, zero-RPO, zero-RTO replication. We can provide this within the cloud, and we can provide it on premises, but not across those environments. ActiveDR provides very low RPOs and low RTOs across any distance or cloud.
15:09
One thing to remember is that ActiveDR is not supported with virtual volumes, so if you are using this for hybrid recovery, it does need to be a bare-metal volume or an RDM. What the majority of our customers are using is going to be our asynchronous replication. It provides minutes of RPO rather than seconds, but still only seconds to minutes of RTO to recover, because all we have to do is take that snapshot and turn it
15:33
into a volume, and I'm actually going to do a demonstration of this later on. Another flexible option is going to be utilizing our CloudSnap: while it still works across any distance or any cloud, it's going to give you higher RPOs and higher RTOs, but at a greatly reduced cost. And when we think about the simplicity, right, a lot of which solution you choose is
15:59
going to be dependent on your total cost as well as your RPO. So if we think about CloudSnap, that gives us the ability to offload our snapshots to Azure Blob or S3 stores at a minimum of every four hours, so our RPO is hours, and our RTO could also be hours. But again, it's the lowest total cost, because we're only paying for that S3 or blob storage.
16:23
Async replication is, again, very low complexity to set up, medium effort to recover, but there's lots of automation out there to do it, and the RPO can be low to high; you have the ability to set whether you want that to be minutes, hours, or days. But it can be more expensive to run, because you have CBS, the Cloud Block Store, continuously running and storing your data;
16:45
on the other hand, it's going to provide you a warm DR site that's ready to go. ActiveDR is going to be a little bit more complex, not by much, but still provides you RPOs as low as seconds; it's still costly because you have to have that Cloud Block Store running, but it provides your lowest RPOs and RTOs. And then ActiveCluster is complex because of how it is set
17:05
up, but again, zero RPO and zero RTO in a fully automated failover. So let's talk about what this would look like with periodic replication: you have an on-premises FlashArray and you're setting up that replication up to the cloud, replicating the data as often as every five minutes,
17:22
which means that you can actually go ahead and take that data and use it. We bring the data in, then present it to your EC2 or Azure VMs. What we can then also do is reverse that replication and again reduce those ingress and egress costs out of the cloud, providing the flexibility to fail it back over the same way without having to worry about utilizing any additional tools.
17:48
If we think about ActiveCluster, this is where we truly have that single volume stretched between two sites, which provides that sort of metro cluster and a seamless failover. But the next thing that we're going to talk about is really a feature that most customers like, and again, this is going to be the cold DR poll question, Emily: CloudSnap allows you to have
18:11
a low-cost archive. Because, from this perspective, from any array running Purity, we have the ability to offload our snapshots to an S3 or a blob storage. The RPO here for cold DR, I'm interested to see how everybody gets this.
18:27
Okay, everybody's getting this right; it is hours. And so the difference between cold DR and warm DR is: warm DR can be done as low as seconds to minutes; cold DR, with our offload, has a minimum RPO of every four hours. And so that is going to be the biggest consideration when you're thinking about this
18:48
solution. But it works across our FlashArray and our Cloud Block Store, and what that means is you can take those snapshots, offload them to an S3 or a blob target, and have them there for long-term retention or cloud archive. And what I've been talking to a lot of customers about lately is ransomware: they're looking at these insurance policies, but they may not have a secondary site, and they need a
19:10
way to offload their data somewhere else. And so that's where Cloud Block Store and CloudSnap come in and provide them that flexibility to be able to handle this data. It's a way to provide that off-site copy, and in the event they do have a disaster recovery scenario, they can either pull the snapshots back down to any FlashArray or any product running Purity, or stand up a Cloud Block Store on demand and hydrate their
19:35
snapshots out of the cloud and be able to consume that data. So it really provides that flexibility without any type of feature licensing, right? No additional costs; from a FlashArray perspective you're just paying for the underlying storage. And at that point, once you hydrate that data to your Cloud Block Store,
19:53
that's where you will then present the volumes to your guests, to your VMs. And so when we think about what this actually looks like from a ransomware remediation perspective, think about our FlashArrays: we have SafeMode. SafeMode is really going to prevent permanent loss of critical data from either malicious attacks or admin errors.
20:16
And what's happening now is that bad actors aren't just going into your environments and deleting your data or encrypting your data; they're going in there, finding your data, and deleting or encrypting your snapshots and your backups. And so what SafeMode allows you to do is disable eradication, blocking destruction for up to 30 days.
20:35
And so it prevents actual manual destruction by requiring a PIN code that two members of your team need to call into our support to use, which means that in the event they do encrypt your data, you can have a full recovery from ransomware within seconds, because we're actually taking those snapshots and converting them back into volumes. So some really great stuff there.
20:58
So, a question from Kevin: do we have any performance data for Pure on AWS or Azure? We do have some material that can be shared under NDA, so please work with your account teams and we can definitely dive deeper into those. Great question. So, I talked about some other ways that we can use that disaster recovery environment, because again, we want to be a little bit more cost
21:22
effective. Think about cost: when you're using DR, it's sitting there doing nothing. And so think about the way our snapshots and technology work. Our FlashArray is great for production workloads, and Purity is very efficient because it's the same environment, but think about snapshots and clones: if you're using that DR
21:40
data and you're replicating it somewhere, it's sitting there not doing anything; it's consuming space but not consuming performance. So think about cloud-based test environments: you can spin up your test, UAT, and staging, and make copies of your production data. So you're going to be getting a really large data reduction: if you're getting 4:1 and you have four environments, that should be 16:1 data
22:00
reduction across those environments. And while you're doing that, the storage at rest is essentially free, because we're deduplicating it globally, but you're now able to take advantage of the performance that is available there, and the replication or refreshes can happen as often as you want; it can be done as low as every five minutes.
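A quick sketch of that clone math, under the assumption that pointer-based clones add logical copies without adding physical capacity:

# Effective reduction when clones share the same deduplicated blocks.
base_reduction = 4      # 4:1 reduction on the production copy (assumed)
environments = 4        # e.g. prod + test + UAT + staging clones

physical = 1 / base_reduction        # physical space for one reduced copy
logical = environments               # logical copies presented to hosts
effective = logical / physical       # 16:1 in this example

print(f"Effective reduction across {environments} environments: {effective:.0f}:1")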
22:19
And if you're thinking about other native tools, a lot of that can only be done at much higher RPOs and RTOs. These environments can be rebuilt very rapidly. We had a customer who had an environment that would be cloned many, many times, and it would take them almost a week to get those environments stood up.
22:35
And so when they looked at moving to Cloud Block Store, not only were they getting really large data reduction, 80 to 100:1, they were actually able to get these environments stood up in minutes, saving them a lot of extra time. So when you think about what this actually looks like: well, we have that master data set that sits on premises, say that's your SQL database.
22:54
We then just do our basic replication up to the cloud, clone it, and then present that to multiple VMs. And so you have the ability to have a kind of cloud-burst scalability and have it done instantly at near-zero cost. So really big efficiencies there. And when you think about data migration, the same thing applies:
23:15
right, we take that data set and present it to the cloud: we replicate it up, we take the snapshots, turn them into volumes, and then present those to your machines. And so this is how we have the ability to get that data running in the cloud. So I'm going to jump into an example in the demo coming up, and if there are any
23:36
particular questions, I know we've had a couple come in, but we can also address those towards the end. So the environment here is that I have a VM that is currently sitting on premises today, and I need to replicate that up to Azure from a DR perspective. One of the things that I highlighted and discussed is the mobility of data, and today there is a requirement that cloud-based VMs have to boot
24:04
off of cloud-based storage, which means from an operating system perspective we cannot do that natively today with Cloud Block Store. There are some things coming with the Azure VMware Solution where we'll be able to replicate a VMFS, but we need to be a little bit more granular today. So what this solution looks like is: we have our VMs that are running on VMware, or
24:28
really any platform, and we have our data volumes that exist on the FlashArray. We want to take those and replicate them up to Azure. So if we think about what it looks like to get our OS disks up to the cloud, well, we have to use some type of external tool; there are a couple out there. We can use Azure Site Recovery, we can use Azure Migrate,
24:51
or you can use any of your third-party vendors, Veeam, Commvault, Rubrik, Cohesity, and what that's going to do is set up the replication of those OS volumes; it's going to take that VM and convert it to a cloud-based VM. The data volumes can be handled either through a cold DR or through a warm DR. If it's cold DR, that's where you're taking those volumes and replicating them up to the cloud with CloudSnap,
25:17
or you're going to be taking the data volumes and replicating them directly to CBS. In the first case, with the blob storage, you'll be at the minimum RPO of every four hours and the RTO will be much higher; if you're going directly to CBS, then your RTO will be seconds and your RPO will be as low as we covered. So we take that data and replicate it to the
25:41
cloud. We can either replicate it and have it run from there, or we can have it presented to CBS directly. If we think about what the actual DR workflow entails, we have a couple of options. In this case, as I covered, we can use Azure Site Recovery for the OS volumes, but I tend to say it depends on whether it's a modern
26:03
application or a legacy application. If it's a modern application that can be very rapidly deployed through code, or it's a file server or a SQL server, the OS volume doesn't really matter too much; all of the data you really need is in your data volumes. So from an OS perspective, you don't necessarily need to do anything with your OS: you can actually utilize an Azure template and deploy a pre-
26:27
configured SQL VM and just mount your data. If it's a file server, you can just use a base Windows system. But if it's a legacy application that might take hours or days to configure, that's where you're probably going to want to use a tool that will orchestrate the failover of the OS volume. So once you kick off that DR failover, you take that Azure VM,
26:48
you fail it over, and it will deploy that Azure VM. If you didn't have CBS running for a warm DR, this is where you would then stand it up, which takes about 10 to 15 minutes, and recover those snapshots into Cloud Block Store. Once you're there, you set up the connection of your data from Cloud Block Store to that Azure VM, and then you're off and running.
27:14
So, what do we do now that we're running here? You can either stay here forever, or obviously you might want to get your data back on premises. This is where you would use Azure Site Recovery or, again, your other third-party tool, to replicate the OS data back on premises if you need to. And what we're then going to do is take our Cloud Block Store replication and set up a
27:38
direct async replication back on premises. Again, this is where we're going to reduce the cloud spend needed by reducing the amount of egress out of the cloud. And so now you're back on premises, and you can retire the cloud-based VMs and revert back to how it was originally run. So with that, let's pause the share here for a moment and jump into
28:06
what this actually looks like in a live environment. So the best part about Purity, whether it's our FlashArray or our Cloud Block Store, is that it's the exact same interface. This is one of my FlashArrays that sits on prem, and this environment, which I'm going to log into, is going to be our Cloud Block Store. Once I'm in, it's going to be the
28:33
exact same user interface, with the exception that over here we have cloud-based icons instead of a physical FlashArray. On the machine I am on here, I have a volume that is mounted up, and within this I have my data volumes that are sitting here; I have a database that exists here on my machine.
29:02
If I look at the Azure-based VM I currently have deployed, so this is going to be more of a warm DR, I have a pre-configured VM sitting here that just happens to have a disk that is empty, and this is already set up and, from an iSCSI perspective, connected to our Cloud Block Store. From a replication point of view, I am replicating my volumes up to Cloud Block Store.
29:27
So this is where we have our protection groups; I'm replicating a volume up every hour. So what we need to do first, and again, this can be very easily automated through PowerShell, REST, or Python, is I need to come into my machine and offline the disk. What I'll then do is take the snapshot, sorry, the Teams pop-up is covering up
29:54
what I need to highlight, and take that volume and overwrite my volume. It doesn't matter if this is one gig, 10 gigs, or one petabyte, that copy is only going to take a second. Now when I come in here, all I have to do is online that disk, and when we explore it we can see that the data has been instantly refreshed.
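A minimal sketch of that refresh, assuming the purestorage Python SDK (the FlashArray/Cloud Block Store REST 1.x client); the array address, API token, volume names, and the Windows disk number are hypothetical placeholders for this demo environment.

# Offline the guest disk, overwrite the volume from the replicated copy,
# then online the disk so Windows re-reads the refreshed data.
import subprocess
import purestorage

CBS = "cbs.example.local"       # Cloud Block Store management address (placeholder)
TOKEN = "api-token-here"        # placeholder API token
SOURCE = "onprem-sql-data"      # replicated source volume name (placeholder)
TARGET = "azure-sql-data"       # volume presented to the Azure VM (placeholder)
DISK_NUMBER = 2                 # disk number as seen by the Windows guest

def set_disk_offline(disk: int, offline: bool) -> None:
    state = "$true" if offline else "$false"
    subprocess.run(["powershell", "-Command",
                    f"Set-Disk -Number {disk} -IsOffline {state}"], check=True)

array = purestorage.FlashArray(CBS, api_token=TOKEN)

set_disk_offline(DISK_NUMBER, offline=True)        # 1. offline in the guest
array.copy_volume(SOURCE, TARGET, overwrite=True)  # 2. instant, pointer-based copy
set_disk_offline(DISK_NUMBER, offline=False)       # 3. online and use refreshed data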
30:21
Think about what this looks like for your database environments or other environments that you need to rapidly refresh or fail over in a very well-thought-out, orchestrated fashion. The same thing applies now that I'm running here: I can come in and set up a protection group and say that I am going to protect my volume, so I'm going to add in the data volume that exists here,
30:48
I'm going to replicate it back to my on-premises FlashArray, and I can do that at any interval, or, if I need to, I can say that I'm just going to replicate it now. And when we replicate this data, if I didn't change any data, then that reverse is going to go very quickly; in this case it's only going to replicate the unique data I've changed and bring that back on premises.
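A sketch of that reverse-replication setup, again assuming the purestorage SDK against Cloud Block Store; the protection group name, target array name, and the exact keyword arguments are assumptions, so check them against the SDK/REST version you run.

# Create a protection group on CBS that replicates back to the on-prem
# array, add the changed data volume, then schedule or trigger replication.
import purestorage

cbs = purestorage.FlashArray("cbs.example.local", api_token="api-token-here")

PGROUP = "failback-to-onprem"          # placeholder protection group name
ONPREM_ARRAY = "flasharray-onprem"     # already-connected replication target (placeholder)

cbs.create_pgroup(PGROUP, targetlist=[ONPREM_ARRAY])
cbs.set_pgroup(PGROUP, addvollist=["azure-sql-data"])

# Replicate on an interval (seconds), or kick off a one-time "replicate now";
# only the unique changed blocks travel back on premises.
cbs.set_pgroup(PGROUP, replicate_enabled=True, replicate_frequency=3600)
cbs.create_pgroup_snapshot(PGROUP, replicate_now=True)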
31:14
So it's really a great way to get that data up and store it very efficiently, because again, with global deduplication, I have about 11 gigs used, again, not too much, but I'm getting 3:1 data reduction, which means instead of storing 30-some gigs I'm storing 11, and as we do this at scale, a lot more savings will occur. So, I see two questions. In an environment where we have mixed endpoints and devices backed up to
31:40
the cloud, how does this fit in with the workflow? If you think about endpoints, it's going to depend on what they are: if it's endpoints as far as Linux versus Windows, the workflows are still going to be roughly the same. If you think about any of the hosted scenarios, well, those may not work if you're using something like an RDS solution. So we do have the ability to
32:02
provide that for individual use cases; I'll talk about some examples we have, and if you are interested in learning more, there are many ways to connect with your account teams. Another anonymous attendee asked: when referring to data protection with SafeMode, is there a Pure document that can be presented to an insurance company that will provide proof
32:23
of recovery? Again, that is something I would work with your account team on, to make sure that any of the appropriate documents you need are there, and we should have something that can be provided in response to that. So, great question there.
32:39
So let me go back and get my PowerPoint up and running, and then we'll have time for some last-minute Q&A; let's go and do our share over here. So as we wrap this up, this is a solution that can be PoC'd just like an on-premises FlashArray. Our solution is deployed through the Azure or
33:14
AWS marketplace, and what we can do is provide a free trial license for you to go out and deploy this. We have two forms of doing a PoC: what we call a no-license trial, as well as a proof-of-concept trial. A no-license trial can be done without any intervention from us; you just use the CBS trial license key, which is documented
33:36
in our documentation and in the actual deployment as a tooltip. What you'll do is enter that license key and then go out and deploy it. Within this PoC, you only pay for the underlying cloud infrastructure that it takes to run our solution. If you're wanting to learn more about what Cloud Block Store can do,
33:55
this is a link to our platform guide, with links to our documentation, our KBs, our walkthroughs, individual use cases, and more detailed walkthroughs of how to do some of that DR failover with CloudSnap, Azure Site Recovery, or AWS migration services, really any of those solutions that are out there. So I really thank you for your time, and now I'll open it up to any Q&A. I do see that we do have some pending Q&A,
34:22
and also, if anybody wants to ask any questions, please put them in the Q&A, or I'm assuming we may be able to unmute people as well if needed, if you want to raise your hand. So, the question we have from Shane is: if we need to offline the disk at the target system, is that where the refreshes occur? So yes, that is going to happen at the target.
34:43
Every time you need to refresh a volume, it does need to be offline prior to the refresh, because if you try to refresh an online volume, Windows and Linux are not necessarily smart enough to understand that the data has changed. So while you can overwrite it while it's online, it will still require an offlining and onlining of the disk for the operating system to understand that the data has changed.
35:08
We do have some built-in workflows that can be very easily scripted with PowerShell. An example for SQL, my favorite one: what it'll do is remotely log into the database server and offline the database. It'll then offline the volume, log into the FlashArray and refresh that volume from the latest snapshot, then online the volume from a guest perspective and online the database, and that operation, while automated, can be done very quickly.
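A hedged sketch of that SQL refresh workflow, assuming the purestorage SDK plus sqlcmd and PowerShell's Set-Disk in the guest; the server, database, volume, and disk identifiers are placeholders, and the snapshot-selection logic is just one simple way to pick the newest point in time.

# Offline the database and disk, refresh the volume from the newest
# snapshot, then bring the disk and database back online.
import subprocess
import purestorage

SQL_SERVER = "sqlvm01"          # placeholder SQL Server host
DATABASE = "SalesDB"            # placeholder database name
DISK_NUMBER = 2                 # placeholder guest disk number
VOLUME = "sqlvm01-data"         # placeholder FlashArray volume name

array = purestorage.FlashArray("flasharray.example.local", api_token="api-token-here")

def sql(query: str) -> None:
    subprocess.run(["sqlcmd", "-S", SQL_SERVER, "-Q", query], check=True)

def disk_offline(offline: bool) -> None:
    state = "$true" if offline else "$false"
    subprocess.run(["powershell", "-Command",
                    f"Set-Disk -Number {DISK_NUMBER} -IsOffline {state}"], check=True)

sql(f"ALTER DATABASE [{DATABASE}] SET OFFLINE WITH ROLLBACK IMMEDIATE")
disk_offline(True)

# Pick the most recent snapshot of the volume and overwrite the volume with it.
snaps = array.get_volume(VOLUME, snap=True)
latest = sorted(snaps, key=lambda s: s["created"])[-1]["name"]
array.copy_volume(latest, VOLUME, overwrite=True)

disk_offline(False)
sql(f"ALTER DATABASE [{DATABASE}] SET ONLINE")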
35:31
So it's a very easy way to get that done, but yes, offlining the disk is required; the operating system requires it. As far as plans to integrate with GCP: today there is no short-term roadmap to create a software-defined solution with GCP. We do have the ability to do a
35:58
colo model with GCP and Equinix, where we can have a FlashArray there. If that's something you are interested in, please work with your account team so we can make sure that we are aware. But as far as GCP, there are no short-term plans to be able to integrate that solution today as a software-defined solution, but there are other alternatives out there
36:19
that we can help with, which would be our cloud-adjacent architectures. Great question. Let me check the chat to see if there were any questions there. Yes, I saw Kevin's question; I think those were all answered.
36:40
So, for a ransomware scenario, we have different ways to protect against that. Obviously, if you're trying to protect against ransomware and we're doing that locally, that's where our SafeMode will come in, and we also have rapid restore. So in the event that your data got encrypted, all we would have to do is go in, take those volumes, and refresh them from the latest
37:04
snapshot that you knew was not affected. And with SafeMode we have the ability to disable eradication for up to 30 days, which means that you can have 30 days of local snapshots. What you can also do is tie that into other solutions from our portfolio: we have our FlashArray//C, which is our capacity-optimized storage, so you can offload and replicate data from your primary
37:28
array to that FlashArray//C for on-site, longer-term retention. We can also take those same snapshots and offload them through NFS or S3 to one of our FlashBlades, so we do have the ability to protect against it there. And if you look at any of our rapid restore partners, such as Cohesity, we can use our flash recovery solution to very quickly recover in the event you have to go to a backup instead.
37:55
So there really are going to be a lot of solutions there. Some really great questions. And it all depends on what you're doing: if you're wanting to be able to recover to the cloud because your entire on-premises environment is down, then that's where you would use a Cloud Block Store solution and either use that as a direct replication target or pull the data out from the CloudSnap target.
38:20
We have a question about backup as a service and our hybrid cloud data strategy. I would like to give a more detailed answer, but I would say that it is going to be part of our hybrid cloud data strategy, and if you are wanting to learn more, work with your account team to schedule a call with myself or one of our product managers, and we can give you insight into what's happening in the future with some of that
38:48
hybrid cloud data management, because there are some unique things that we are working on that may fall in line with that potential solution and the question you're asking. Great questions. I guess we can pause there. Emily, I think you also mentioned we have that giveaway of the air purifier. Yeah, absolutely.
39:18
I can go ahead and go into that while people type in some more questions. So, I randomly drew a raffle prize winner today for the home air purifier, and I chose Cynthia B. from New York. So Cynthia, I will email you with some more information. And as always, we hope you check out purestorage.com to view our exciting and
39:42
upcoming webinars. We have some very cool webinars coming up: on August 18 we have an "I Want to Break Free" event with a Queen cover band, so that will be super fun. So, purestorage.com to view our webinars. At the end of today's webinar there will be a short six-question survey that pops up, so we would really appreciate your feedback, and you will receive a follow-up email with a link
40:09
to this recording. We will stay on for a couple more minutes if you have any more Q&A you want to type into the panel for David. Otherwise, thank you for joining us today on our weekly Tech Talks webinar; we hope you have a great Thursday. Thank you, Emily. And as we did that, we had a few more questions come in.
40:29
So Diane asked: can we air gap the solution by shutting the network down? One of the requirements today for Cloud Block Store is that the system interfaces do require external internet access; that's how we do phone-home of data and our logs. So today the answer is no, we cannot air gap this solution. In the future we are working on a dark-site version, which will have the ability to run in a truly isolated
40:57
environment and which would also be FedRAMP compatible. So the answer is not today, but it will be coming. Michael also asked: are the AWS and Azure Cloud Block Store functionalities currently equivalent? And that is the case. The best part about Cloud Block Store is that it's based on our Purity. So while the underlying infrastructures are
41:17
completely different, they're both storage, but one's an apple and one's an orange. When we build our Purity operating system on top of that, they are 100% equivalent, which means the same features you have on premises you have in Azure, the same features you have in Azure
41:34
you have in AWS, and if you're doing any automation, the same APIs are available across the platform. So I would say: between Azure and AWS, the functionality is 100% equivalent; between on premises and the cloud, I would say the functionality is about 99% equivalent. With the features we have, pretty much the one that we don't have is
41:57
NFS offload, but we do have the S3 and Azure Blob offload there. So, really great questions here. I definitely want to thank everybody for taking the time to join this webinar; definitely catch the replay and take a look at our content. Someone asked about Apple/Android third-party compatibility, and I'm not sure if that is specifically in regards to this, because we are working with cloud
42:27
providers, not mobile devices. Awesome, I guess with that we can wrap it up. I'm always available on social media; you can find me at davidstamen on Twitter, and you can also find me on LinkedIn if you have any follow-up. And I always say, reach out to your account team and schedule a call; I'd be
42:50
more than happy to discuss anything with anybody, one on one, whether it's our virtualization or cloud-based solutions.
  • Hybrid Cloud
  • Tech Talks
  • Enterprise Applications
  • FlashArray//X

David Stamen

Enterprise Technology Strategy Director, Pure Storage

When you need disaster recovery (DR), you need it now. It is not about performance; it is about uptime. For some applications you can worry about performance at a later date. A hybrid architecture using both Pure hardware and Pure Cloud Block Store on AWS or Azure is a cost-effective and flexible solution. Come learn the benefits and tradeoffs of cloud-based DR with Pure to complement your DR portfolio.

During this webinar, David will discuss: 

  • Pure Cloud Block Store enables a variety of DR solutions to meet varying business objectives
  • Although manual steps are required, scripting and other methods can be used to simplify the recovery process  
  • The features that enable cloud based DR are included in the Purity operating environment that powers both FlashArray and Pure Cloud Block Store