Webinar

5 Things Your SaaS Backup Must Do


Speakers

Henry Baltazar

Research Director @ 451 Research


GRAX Team


About this talk

Most enterprises rely on SaaS applications, such as Salesforce, to run critical aspects of their business. But 451 Research says nearly 1 in 3 businesses don't actually back up their SaaS application data. And 52.8% rely solely on the same SaaS application vendor to provide backup and recovery services — which means they don't actually own their data.

As we all know, inadequate protection and accessibility can result in significant financial and productivity losses. Just as important, it can also stifle business insights, hurt compliance efforts, and derail growth. In fact, 451 Research reported that 92% of organizations have experienced negative impacts due to bad backup (corruption).

Join 451’s Storage Research Director, Henry Baltazar, and GRAX, as they discuss:

  • The top five capabilities to consider when evaluating SaaS backup
  • Why these capabilities are critical to avoiding pitfalls
  • How your backup strategy stacks up and more!

Now is the time to take back control of your SaaS backup data. Submit the form to discover the five must-have capabilities your SaaS backup needs to protect your SaaS application data and get the greatest value from it.

58 min.

Transcript

DIANE: All right, let's get started. Hello, everyone. We are so excited to have you join us today for our webinar on five things your SaaS backup must do. Today, we will be discussing the top five capabilities your SaaS backup must have in order to properly protect your data while getting the most value from it. And then we will end with an interactive Q&A session.

So I'm Diane from product marketing here at GRAX, and I will be your moderator for today's webinar. Leading off today's discussion will be Henry Baltazar, who is a research director at 451 Research. And joining Henry is Chris Shakarian, GRAX's VP of marketing.

So just a quick few housekeeping items before we get started. First, at any time during this session, you can submit a question for the Q&A portion of this webinar by dropping it into the question box. We will do our best to answer all of your questions during the live Q&A portion.

Second, the presentation slides are available for download in the resources section in the console. And finally, the on-demand version of this webinar will be available for download once the webinar concludes. And with that I'd love to hand it over to Henry to kick things off.

HENRY BALTAZAR: Thanks, Diane. And thanks to you all for joining us today. For today's presentation, we'll be talking about five different categories of requirements for SaaS backup going forward. The first, and this is still surprising to me because I feel like I've been doing SaaS backup webinars for years now, is that there's still a bit of a misconception about whether you even need SaaS backup. So we'll be talking a little bit about that in the first segment.

The second part we're going to talk about is that it's not enough just to have a backup, especially if it's just a daily backup. More and more organizations are looking to have much better, more rapid and more granular recoveries, to make sure you're not losing data and to make sure operations are not disrupted.

The next thing we'll be talking about is going beyond backup. We also need to make sure we're doing the right kinds of testing and verification, because there's no point doing a backup if you're not 100% sure that when something does go wrong, you'll be able to get your data back and your workloads up and running in a meaningful fashion.

The last two areas, to me, are probably the most exciting things I'm thinking about in terms of the future of technology and where we're going. The next thing we're going to be talking about is the importance of data portability, especially in this age where a lot of workloads are being shuffled around between on-premises environments and the cloud, and with the increasing importance of SaaS and public cloud based applications running these workloads.

So with those things coming into play, it's going to become much more important for us to be able to move our data while maintaining the second part of it, which is accessibility. Because one of the chief complaints we continue to hear from stakeholders and people who deal with data every day is that they have difficulty trying to find and access that data. So it's definitely going to be a key issue that we'll continue to see over the next few years, I believe.

So, moving on to SaaS data protection. As I was mentioning, when we start thinking about key storage product pain points, the biggest issue, and the thing that we have to keep in mind, is data growth, and how data growth makes all these other issues that we're about to talk about worse and worse year after year.

Now, when I look at the surveys and what we hear from customers as we talk to them in interviews, it's pretty common to hear customers saying that their data is growing anywhere from 20% to 40% per year. But what I think is really scary is when we ask them, well, how fast is your budget for storage growing? That's closer to 10%.

So when we start looking at that, it becomes very apparent very rapidly that we're not in a sustainable situation. We need to do things much more efficiently. Any dollar that we spend towards storage, or data protection, or data management has to give us a lot of value going forward. And that's a message we continue to see throughout the industry and throughout all different sorts of data management areas.

But let me dive a little deeper into the inputs and what's relevant here. One of the top areas, and it continues to be a big one, is what's going on with SaaS recovery and data protection. That continues to be a big pain point, and will continue to be one going forward.

Another key aspect is performance, especially as applications slow down and as customers have higher expectations for the level of service they're getting from the different workloads that are running, whether they be in the cloud or on-premises. Another key area is compliance. People are concerned about what data is accessible and where that data is residing, and about making sure that they're not losing compliance-sensitive data, so they don't wind up having to pay massive fees or suffer other consequences from having downtime.

High cost on a capex basis isn't really a big concern from a SaaS perspective, but it continues to be a big issue. And overall, storage management cost and storage cost will continue to be key issues and key areas where I think we can make improvements going forward. So as I was mentioning earlier, I think one of the key problems, and one that I hope goes away, is the fact that there are still a lot of organizations that don't think they need to back up SaaS.

Right now in the survey, we have about 30% of respondents who said that they're not backing up their SaaS applications or applications running in public cloud environments, which to me is pretty scary. The other 53% are saying, it's running in the cloud, we're going to rely on our cloud vendor for backup and recovery. Which to me is a dangerous proposition and definitely not the best way of doing things.

I think part of the misconception comes into play because when people think, I'm going to move this work to the cloud, or I'm going to use this SaaS workload or this cloud workload, they're not really thinking about traditional things like: OK, what happens if a hard drive dies? What happens if a server goes down, or what happens if the network goes down? Because those are being handled by the cloud provider.

The thing is, a lot of other bad things can happen that are not at the hardware level, and those you won't be protected from. For example, if somebody gets access to a machine or access to an account, and winds up corrupting or deleting data, that's not a hardware issue. That's another kind of issue that could be affecting your data repository.

So again, there are other elements that we need to think about and other reasons why we have to deal with SaaS backup. I think the other thing that was interesting is when I look at the other options, all these other options are pretty painful. I mean, you have 11% of respondents saying, hey, we manually copy our data to our own data center. While you could do that, it's definitely not the easiest thing to do. And it also puts a lot more burden on you in terms of doing the automation and the verification to make sure that's happening.

Other customers also potentially copy their data to another cloud, and I think that will wind up becoming more prominent in the future. But right now, it's a very small number of people that are moving their SaaS data to another cloud at this point. Again, I think that's an opportunity for us to improve things.

Getting back to why we need this. This is one of the scary things I saw when we ran the survey earlier this year. We had 30% of respondents who had already experienced a significant outage in past years. An outage that led to lost revenue, an outage that led to lost productivity. And I expect that number to only go up, and up, and up.

We still have 32% who said that they've never experienced an outage or had [INAUDIBLE] events. But quite frankly, chances are something will happen in the future for most organizations. And it's better to make sure that you're prepared now, or suffer the consequences.

And when we talk about consequences, one of the key things is cost. From that same survey, what we found was that 49% of those outages wind up leading to losses of over $100,000, with 7% of respondents saying that their outage cost their company over a million dollars.

So when we start looking at justifying why we need to do this, to me this is one of the key things to show. If you're trying to fight for a budget, you definitely need to make sure that the stakeholders or the budget holders understand this is what's at risk. There are a lot of different things that can happen if we don't take care of the SaaS data protection problem right away.

Another key thing to think about. The obvious consequence of losing data would be, OK, we've lost worker productivity. Whatever you were working on before, you're going to lose because of that outage. That continues to be a big pain point, and an obvious one.

The other part, which would also seem to be obvious, is the lost revenue from lost business opportunities. If your workloads are down and you can't process transactions, that's going to immediately lead to lost revenue for your organization. And that's what a lot of companies want to avoid as much as possible.

But the thing I'd like to emphasize here is that there are also other downsides in terms of what happens to your company's reputation and what happens to your brand as a result of some of these outages. 35% of organizations reported that an outage led to damaged reputation for their company, and another 19% said, hey, these issues also led to lost customer loyalty.

So there are a lot of other tangible issues beyond just losing a transaction or just losing data because of backups. We need to keep those things in mind. The other tangible issue, and something that we've heard mostly in the financial sector especially, is the penalties related to compliance. If they can't run their workloads in time, they can't process what they have to do based on SLAs. Those can be hefty penalties, and as [INAUDIBLE] said, they could be even higher than what you're going to spend on backup.

So as we start looking at these potential consequences, it's clear that we need to start taking the SaaS backup part of this much more seriously and make sure that we have all our bases covered going forward. And with that, I'm going to kick it over to Chris Shakarian. He's going to talk about what he's seeing on his side of the market.

CHRIS SHAKARIAN: Hi, everyone. Thanks. I'm Chris Shakarian. I run marketing at GRAX. So you're probably going to heavily discount everything that I say. Rather than talk to you about GRAX, I'd love to tell you a few stories throughout this presentation about what customers are doing, what companies and organizations like you are doing in this space, and why that's stretching the boundaries of what traditional SaaS backup, or even backup tools generally, do for organizations.

And really, our goal here is not to make you buy GRAX, but rather to help inform the decisions that you should make around SaaS backup and how you can now demand more from vendors in the space. Ultimately, we see SaaS data as your organization's history. It's the history of changes happening in your business.

And so many companies are operating inside of SaaS applications that those SaaS apps become really high-value targets for data consumption and reuse. It goes beyond just an insurance policy and making sure that your SaaS apps are protected, to actually taking strategic advantage of that data.

The truth is, as you see on the screen, a number of organizations come to us, and we work with some of the most iconic companies in the world, Fortune 50 organizations. They come to us initially, sometimes, for that peace of mind: I need to protect that SaaS application and the sensitive data contained in it. But inevitably, they start to think about, what else can I do with that data? Can it improve my archiving strategy? Can I stretch the imagination of what I'm doing with development, or security, or reuse that data for new kinds of applications, whether it be machine learning or just basic business intelligence and analytics?

So there's definitely a maturity curve and a graduation path that happens. Some organizations come in right out of the gate and say, we get it, we just want to consume the data in many places. And others say, we have a real need here; we've realized, to Henry's point, that we need to actually protect our SaaS applications, and they start there. So with that, back to Henry.

HENRY BALTAZAR: Hey, thanks, Chris. Now that we've established that, yes, you probably need SaaS backup, the next part we can talk about is that the SaaS backup has to be pretty good. Because what we continue to see is customer expectations going through the roof. End users, stakeholders, they don't want to see outages, they don't want to lose work.

They don't want to be told, hey, that thing you spent hours working on yesterday afternoon, you guessed it, we lost it. People don't want to hear that. When we start thinking about backup, it's not enough to just have the bare minimum. And for more and more organizations, the concept of just going with a daily backup itself is not good enough.

So we have to start thinking about what the other options are, whether that's going to more frequent snapshots or going to something that can capture transactions. I think that's where we're going to start looking in terms of these other types of capabilities. Because again, as these things start happening, you want to be able to roll back transactions, especially if you're trying to find something or find an abnormality within your data sets.

Another key thing is, as I was mentioning earlier, tolerance for downtime is becoming more and more minimal. Let me give a quick explanation of what we did in this study: we basically asked customers about their RPOs and RTOs for various types of workloads.

For those of you who are not familiar with the terms, Recovery Time Objective, or RTO, is basically the time it takes to recover an application after an outage. So it's basically, how long am I willing to wait for that workload to come back up and become usable again? The RPO part, the Recovery Point Objective, is really key as well, because that's basically a measure of how much data am I willing to lose if something bad happens.

As in the scenario I was just talking about earlier, if you settle for something like daily backups, well, guess what? Your RPO for that daily backup basically means you're willing to lose everything between last night's backup and tonight's backup. For some organizations, that's really not going to be acceptable; in the mission-critical world, very few are willing to accept that.
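The arithmetic behind that point can be sketched in a few lines. This is a minimal, hypothetical illustration (the function name and the change rate of 500 changes per hour are assumptions, not figures from the survey) showing how the worst-case data loss window scales directly with backup frequency:

```python
from datetime import timedelta

def worst_case_records_lost(backup_interval: timedelta, changes_per_hour: float) -> float:
    """Worst-case loss under a fixed backup schedule: every change made
    since the last completed backup is unrecoverable, so the exposure
    window equals the full backup interval."""
    hours_exposed = backup_interval.total_seconds() / 3600
    return hours_exposed * changes_per_hour

# A hypothetical org making ~500 record changes per hour:
print(worst_case_records_lost(timedelta(days=1), 500))      # 12000.0 with daily backups
print(worst_case_records_lost(timedelta(minutes=15), 500))  # 125.0 with 15-minute snapshots
```

Moving from a daily backup to 15-minute snapshots shrinks the worst-case exposure by roughly two orders of magnitude, which is the gap the "purple zone" respondents in the chart are reacting to.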

Only about 3% were saying, it's OK, I can lose a day of data. And I'm sure that if that 3% knew what that day of data was going to cost them, they'd probably say no the next time I ask the question. But as you look at this chart, there are a lot more people, especially on the mission-critical side, in that purple zone who want the ability to recover data, to make sure they're not losing data within a day, and to make that incremental window of data loss and recovery as small as possible.

Things get a little bit more lenient as we go to non-critical and business-critical workloads. But even on the non-critical side, things like test/dev and other elements where it's not production, where you're not going to wind up in the newspaper if it goes down, even for those elements, more and more organizations and more and more stakeholders are not willing to lose that data.

You still have a lot of purple here in terms of, I need to make sure this is recovered, at worst, in the next couple of hours. So at a minimum, it's clear that even right now (and I expect these requirements to get more stringent), snapshots are probably the bare minimum. At this point, daily backups are probably not good enough for most organizations, probably not good enough for most stakeholders.

CHRIS SHAKARIAN: We definitely see this in our customers as well, who really look at rapid and granular recovery as being tied to rapid and granular backup, the capture mechanism for how you captured the data. The truth is, more and more organizations are realizing, especially for SaaS applications, that DIY doesn't really cut it anymore. And when they clear that hurdle and realize that there's a lot more they could get from doing it with the set of tools that are available now in the industry, what starts to happen, as Henry pointed out (and keep me honest here, Henry), is that the RPO starts to shrink closer and closer to near real-time.

People are starting to get more ravenous with their consumption of change data. And it's also fueled by the expectations that are set in the SaaS space. When you log into your SaaS application, there's that instant gratification that we've all been trained to expect, and the same expectation is starting to extend into recovery.

Well, if the app guarantees uptime, shouldn't I be able to roll back my data with one click, instantly, with any level of granularity I possibly can? And the last thing that we see creeping up here, which is really, really interesting, is data ownership. You'll see this is a theme throughout, and we'll talk about what that means in the context of a couple of marquee customers that we work with.

But data ownership is really, really important. With some vendors, there are methods of bringing that SaaS backup data back into your own data ecosystem, making it part of your overall cloud infrastructure and an active participant in your data warehouse and other strategies. That's becoming more and more critical, so you'll hear data ownership come up as a more and more important thing. But I think the headline here is exactly what Henry said: rapid and granular recovery is putting more and more pressure on more rapid consumption and capture of change data in SaaS applications.

HENRY BALTAZAR: Yeah, I definitely agree. It's going to become more and more painful going forward. And again, it's about customer expectations. If you're responsible for this in your organization, you don't want to let your coworkers down by saying, hey, you know what? You lost this, or you lost these transactions. Nobody wants to hear that.

So we need to make sure that we're doing a much more comprehensive job of that. And that's actually a good segue into the next section, on backup testing and verification. Again, it's not enough to just have a backup that's automated and running.

We need to make sure that those backups are actually going to work when we need them. To me, the testing rates and the levels of testing we do are just not enough at this point. At a minimum, you should be testing your backups more than twice a year.

Right now, only 54% of respondents in our survey said that they were doing that. The reason I'm saying the testing rates have to go up is that when you start thinking about your organization and what's going on with your infrastructure and workloads, we're at an age where workloads and infrastructure change constantly, especially in the world of cloud. You have data moving between different applications and different workloads.

We want to make sure that as these different changes happen, whether I'm creating another server here, or using containers there, or putting data in another data warehouse somewhere else, the data protection keeps up with those changes. Because if it doesn't keep up with the changes, then you're not going to have a proper recovery.

And if you're not going to have a proper recovery, you throw away all those benefits that you're expecting in terms of a really strong RPO and RTO. To me, this is definitely an area where we need to make some improvements. And one of the interesting things we saw here is that we also split the results between organizations that were more automated and the ones that were more manual.

And not much of a surprise, the organizations that were more automated tended to do a little bit better job of testing more than twice a year: 58% versus 54%. Again, testing is going to be a key issue. It's not a glamorous thing to talk about, but it's something that's necessary, especially, again, because our environments are changing so rapidly.
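The automated verification being discussed here can be sketched very simply. This is a hypothetical illustration, not any vendor's actual tooling: after a test restore, compare the restored records against the source by count and by an order-independent content checksum. The record shapes and function names are assumptions for the example.

```python
import hashlib
import json

def checksum(records):
    """Order-independent checksum over a list of record dicts."""
    digests = sorted(
        hashlib.sha256(json.dumps(r, sort_keys=True).encode()).hexdigest()
        for r in records
    )
    return hashlib.sha256("".join(digests).encode()).hexdigest()

def verify_restore(source_records, restored_records):
    """Basic restore verification: record counts and content must match."""
    if len(source_records) != len(restored_records):
        return False, "record count mismatch"
    if checksum(source_records) != checksum(restored_records):
        return False, "content checksum mismatch"
    return True, "ok"

# Hypothetical spot check of a restored Salesforce object,
# where restore order may differ from the source:
source = [{"Id": "001A", "Name": "Acme"}, {"Id": "001B", "Name": "Globex"}]
restored = [{"Id": "001B", "Name": "Globex"}, {"Id": "001A", "Name": "Acme"}]
print(verify_restore(source, restored))  # (True, 'ok')
```

A check like this is cheap enough to run on a schedule, which is the point of the automated-versus-manual split in the survey: automation makes testing more than twice a year practical rather than heroic.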

Backup has to keep up with that. Our data protection and data management policies have to keep up with what's going on with these different changes. Switching to the next section: another key thing that comes up a lot these days is what's going on with environments and how they're changing, especially as data moves back and forth between them.

One of the key things (and this is definitely a difficult conversation that I had with an organization, but probably something that happens commonly): this particular telecom company ran into a wall in terms of the limitations of their storage capacity and budget versus what was going on with data growth. In this particular case, they wound up having way more data than they expected. And because they didn't have the budget or the resources to move it to the cloud or a cheaper tier, they wound up throwing away years' worth of data.

That data wasn't compliance-sensitive data. It wasn't data that they, quote unquote, "had to keep on site or had to retain," but a lot of organizations want to be able to mine this kind of data in the future. So after they deleted the data, they instantly regretted it. What was the value of that data? What could we have learned, maybe, from a business intelligence perspective or other insights, from data that we wound up having to throw away because we didn't have the resources to deal with it?

So as I see it, this really drives home why not only do we have to protect data, but we need to make sure that data is portable, and that we have a better way of keeping and retaining that data at a cheaper cost if we can't keep it on our primary or production system. It's going to continue to be a big issue. When we look at what's going on with workloads, this is from our cloud and hosting survey on workloads and key projects.

Within this survey, what we were seeing was that today, in 2020, as shown on the left, we're clearly seeing an expected expansion in terms of workloads primarily being executed in public cloud environments, public cloud being SaaS-hosted apps, with more IaaS and PaaS going into the future. We also see that obviously coming at the expense of on-premises and traditional IT, with that dropping from 46% of workloads in the environment today in 2020 down to only 17% in 2022. So as we start looking at this, I think the clear message is that a lot of these workloads, a lot of this data, is shifting, and to me, that's scary.

There's a lot of work that has to go into moving workloads and moving data. But it's also a great opportunity to start revisiting what we're going to do for data management and what we're going to do for data protection. It's just like when you move to a new house. You don't just say, OK, we're going to be in the new house.

Day one, you wind up having to clean things out; you wind up trying to make sure that you're prepped before you go into that new venue. I think the same thing has to happen with your data. And to me, that's why this is a great time to start investigating these things. Because as you start looking at these new environments, you can't jump into them without having data protection, compliance, and other elements covered.

So as you start looking at this, it's a great opportunity to me, because these things are happening. It's going to be a great opportunity for us to facilitate this data movement between the different silos and make sure that data is accessible and protected at the same time. When we dive a little bit deeper into the data protection segment itself, we're already at a point where the majority of customers are using some sort of cloud within their data protection.

In the most recent survey we had, 40% of respondents said that they were doing a hybrid cloud type of deployment, where their backups are run locally on-premises and long-term retention is pushed to a cloud, probably some less-expensive object storage service from one of the public cloud providers. And we had another 21% that were solely using things like online backup or DRaaS SaaS applications to protect some of their data today. We still have stalwarts on the on-premises side.

You have this 39% that continue to run things on-premises all the time. A lot of them are focused on doing that because they have security concerns and compliance concerns. But more and more, we're starting to see those customers transition away from that, especially as the cost of object storage continues to drop, and especially as people start thinking about the accessibility advantages of having things in a cloud service as opposed to having things on tape sitting in a bunker somewhere, considering how long it takes to recover that data and how long it takes to make that data accessible.

So again, right now we're seeing a lot of transitions in place. It's going to be important for us to make sure that as we start doing this, we're preparing the data management and protection part along with it. As far as why, and what we need to do about outages, the other element we have to think about is the different areas these outages could affect.

And I think that's also a key thing we're going to have to solve when looking at why we need portability. Here, we had about 50% of organizations say they experienced a facility power failure, which wound up in an outage. Clearly, if that happens, you can't really control it in some cases. You're going to want to make sure that these elements are available somewhere else.

- Nice.

[INTERPOSING VOICES]

- Sure.

CHRIS SHAKARIAN: Thanks. We're starting to get into a groove here. It's just natural. So with that, I wanted to share two customer examples that were really, really interesting. One is really about compliance and regulation. There are many, many reasons for data portability, as Henry just mentioned. And it really depends on the organization what the top three, or even the top one, reasons are; that's always changing.

This one happens to be a very prominent payments processing provider; you've probably processed a payment with a vendor through this organization in the last week. And when they looked at this space, they said, well, of course we have to protect our SaaS data, but we're under heavily regulated security and governance procedures, everything from WORM to FINRA, and all the things that go with financial organizations.

And the truth is, the elephant in the room with SaaS applications in the finance space is that in order to really be compliant with these heavily regulated requirements that are coming their way (and this applies to health care as well), organizations increasingly need to take more ownership and control of their data. For this company in particular, WORM requirements actually required them to store archived and historical data in their own, already-secured data facility.

This organization invested tens of millions of dollars in building this on their own infrastructure and creating a cloud environment that was secure and financially compliant. And their only choice was to take that backup and archive data and move it into this cloud. So that's exactly what they did.

And really, it's emblematic of this narrow approach of, we need to solve compliance needs, and that in itself puts pressure on organizations to think about data ownership and archiving strategy in a very different way for SaaS applications than they would for traditional apps. This next example is also very interesting, a very different use case. This is a health care organization, a regulated medical device company.

They are a wearable device company, regulated by HIPAA and all the other things that apply in that industry. So again, data ownership is important, but it wasn't necessarily the driving factor in their decision to take their archive and backup data and put it into their own storage facility, into their own cloud.

This organization makes wearable devices that I bet some of you have on your wrist or in your pocket. And what they realized was that the support ticket volume they had historically was a great source of insight from their users about what they could improve in their device. So taking those support interactions, tickets, and emails out of production essentially amounted to shutting the door on continuous improvement of their product.

So what they started to do was archive that data into their own cloud while keeping it available in production. And that's one of those things: when we talk about data portability, it's not only about pulling the data down into your own data warehouse, but also being able to access that data in many other places. And cuing up the next point that Henry is going to talk about, portability and access go hand in hand. The truth is there are many different reasons to do this, from compliance to actively mining historical data and using it to improve your business, retain customers, grow your business, or mitigate security risks.

- Thanks, Chris. So rounding out to the last segment, we talked a bit about the need for data accessibility. And this is definitely a key area we're going to continue to monitor going forward. And I think we're going to see a lot more innovation and more business model changes because of this going forward into the future.

I think one of the key issues we're starting to hear about, and one of the things a lot of organizations are starting to do, is creating what we hear people call self-service data access within their environments, especially to help QA and developer people use production data to build their applications. And of course, that's going to be a difficult issue as well. Because you have to make sure that as you present data, you're presenting data that's not compliance sensitive, and you're making sure you're not creating a data leakage event by doing these things. To me, this is a key area we have to concentrate on.

Because even before we started talking about SaaS and a lot of this innovation that's happening in terms of the shift to cloud, even in more traditional areas, organizations were already having a tough time here. As you can see in this particular example, they have difficulty trying to define and set up a test and QA environment just within their core infrastructure. And now, they're seeing an even more daunting task. Because as more and more of that data is out of their reach physically-- it's in the SaaS cloud, or public cloud, or whatnot-- they understand they need to get a hold of it. Because if they can't control the data, it could lead to much more difficult issues in terms of data leakage and other areas like that.

And the other area we'll talk about shortly is, if you can't find data, that also has a massive impact on your productivity. But what we're seeing is, with that changed environment, again, we need to make sure we're extending data management so it not only covers all our traditional data silos and [INAUDIBLE] but also extends and links together what we're doing in SaaS, what we're doing in cloud, and what we're going to be doing next in the future.

One of the key things we saw-- this is from our data and analytics survey. One of the questions we asked was, hey, what are the pain points that data scientists and business analysts are facing? And not too surprisingly, on the business analysts' side, we saw a lot of them wasting way too much time just trying to find and prepare data, which clearly is not what you want them doing.

You want them analyzing data, you want them delivering data value, you want them finding these new insights. But here, you've got 47% saying, hey, we're spending over 50% of our work hours trying to find and prep data. That's really not acceptable when you start to think about it-- I mean, these are essential employees within the organization.

They deliver the really powerful insights you need to drive your business, insights you need to make business decisions. But here, 47% of them are spending half their time just trying to find and prep data. So that's really not sustainable. It's a deficit that we can try to improve on. But it continues to be a big problem going forward.

And to me, I look at this and think we need to fix that. Because all those hours they're spending on this, they're not spending doing what they're supposed to be doing. The other thing we were looking at, and another key issue we see, is that though organizations have backups and they have compliance applications, at this point only 45% of those organizations said, OK, we've got backup and we've got compliance applications, and we feel that the data we have in the backups is accessible to those compliance applications and the other tools that need to look at that data and process it.

To me, this is a key area where we can have some improvement going forward. Right now, there are 26% that say they have only partial access to stored backups. Going forward, we need to do a much better job of this. Because again, it's about value, right? A lot of organizations are not going to be happy saying, I spent this money, and all I got was backup.

Organizations want to get more out of that data, whether it's for a test dev use case like we were talking about earlier, or for business analytics and other use cases, trying to get insight out of it. We need to get more out of our data. But right now, it seems like we're at a point where there is a bit of a silo, and we need to break through those silos to make sure that backup data is accessible. Right now, we're going to head over to Chris.

- Thanks, Henry. I mean, that's a perfect tee-up for this use case. This is a Fortune 50 tech company. And they came to us. And right out of the gate, they got it. They realized that, yes, of course, we need to protect the SaaS applications that we have. But also, we need to extract as much data as we can out of those applications for many, many other data consumers inside of our organization.

And data consumers can be data scientists, developers, as Henry mentioned, business operations teams. In fact, for this customer, all of those teams ended up, over time, hitting the end application, the SaaS application itself, directly with APIs for that data, for their specific use cases. And what happened was the inevitable: API limits were hit.

The organization had many data replicas. I mean, this is a real problem. They had redundant data. You have your backup data, which is your insurance policy-- traditionally, it's thought of that way. And then these various teams replicate portions of that data. And that not only creates operational overhead and puts pressure on the end application that you're now hitting from multiple angles, it creates a lot of consumption issues that can also be tied to compliance liability.

Imagine GDPR. How do you enforce a GDPR request when your data from your SaaS application has been replicated across a myriad of groups internally? It's challenging. And those types of scenarios, those compliance scenarios are very real, and we see them in the industry today.

So this organization realized that. And what they started to do was say, well, why don't we just create one data facility in one of our clouds? They use the myriad of clouds that most organizations do, from AWS, to Azure, to GCP. They even had their own environments.

And why don't we pull down our SaaS application data and archive data into that facility and have everyone hit that? Because the cost structures there are great, we can have near real-time consumption of that data, and we're not hitting the end application. So all of the business consumers of that data are going to be able to get what they need without hitting all those roadblocks that we just talked about.

So they quickly realized that we can meet the tactical obligation of doing a backup of our SaaS applications and have our cake and eat it, too-- consume that data now in many, many different scenarios. And what happens naturally after that, it's funny, is the desire to consume more and more real-time data. So now, that same customer is coming back and saying, OK, can you give us more here, more real-time there, faster there?

And it's really, really interesting to see. When you bring the cost structure, and the access, and portability down to near zero in terms of the threshold for access within the organization, you start to see organizations really engage. People swarm around the data from a data ops standpoint and mine what they can to improve the business, to optimize the business, to improve the security profile, all those things. It's kind of a core tenet on the data ops side of things, and we see that definitely starting to take hold in the SaaS backup space. The headline here is that SaaS backup should no longer be used just as an insurance policy. Organizations that use it for more are going to gain strategic advantage in their markets.

- Thanks, Chris. So just to round out what we've covered, in terms of the key takeaways, the main things I'd like to focus on are-- the first part is what we've talked about a lot since the start of the presentation. Data protection is a key requirement, and we have to make sure it becomes more and more granular and more rapid.

Again, customer expectations are going through the roof. So wherever the bar is set today, expect it to be raised going forward. And that, to me, should be a key sign that we need to do things differently. Because many of you are already experiencing this. You're already seeing that your environments are changing.

You're already seeing new innovations come into play in the way you do things and in the next-generation workloads. Those things are going to have to have data protection too. They're going to have to have really strong data management as well. We need to make sure we're doing those things correctly. So again, data protection has to be a key focus. We need to make sure it's granular and rapid.

Next, as we talked about, is data portability-- how we move these things between different silos and what goes on there. That's also going to be a key issue going forward. And I think that's something we need to make sure of-- a lot of organizations are worried about it, because of the potential for data loss. And a lot of organizations, I think-- earlier, we were talking about some of the organizations that don't want to move.

A lot of them don't want to lose control. So I think the best way to address this is to have strong data portability capabilities in place. Then organizations will be more trusting and more willing to move some of these workloads around. To pick up another key issue-- we didn't really talk about this.

On the portability side, especially with backup, some of the complaints we hear are about third-party vendors that take a backup and move it up to the cloud, but store it in a proprietary format, which is not accessible. While that takes care of the data protection part of it, if you don't have things accessible in another format, accessible in a more open format, or accessible in a way that applications or processes could use on the other side, well, you've lost a lot of the value proposition of your backup.

So as we look at portability, it leads to the next part, which is accessibility. As you make this data portable, it also has to be accessible. Because as we mentioned earlier, on the accessibility side, there are stakeholders-- a lot of them important analyst people, a lot of them data scientist people-- struggling to find the data. And that's becoming more and more of an issue going forward.

And it's becoming even more painful. Because what winds up happening, in some cases, is they wind up causing other problems, because they can't find their data correctly. One of the organizations I was talking to earlier this year saw it firsthand. A data scientist or analyst looking for a data set couldn't find it, so they wound up creating clones of other data, or they wound up doing rapid recoveries of other data just to try to find it, which wound up spawning more and more wasted storage capacity and more and more resources.

So as you start looking at accessibility and long-term data management, we also have to think about how we're going to tag data, how we're going to make that data much more discoverable for organizations, especially as they try to deal with these things. So they can spend more time doing work and less time trying to find things.

And when we boil all of this down to, what are some of the technical underpinnings or requirements that organizations are looking at for any technology, including backup-- we see this time and time again, this idea that if you can bring your own storage or bring your own facility and capture your SaaS data in that facility, you can actually take back ownership and control of your data. And it's kind of funny. It precipitates a cascade where you say, well, now that I have the keys to the data, I can probably let other folks, according to my policies and my public cloud, have access to it and consume it more.

And so now they can come to this data lake instead of trying to hit applications in a myriad of ways. And what that inevitably does is create the need for more granular capture and a granular capability to ingest more and more data. Because that change data becomes really valuable to more people than just an insurance policy.

And as you saw here in a couple of the stories that Henry mentioned and that I spoke about, we see organizations go through this maturity curve. I mean, certainly, some come right in and say, look, we just want-- we're already there. We want to optimize our business based on this historical data, because we see that SaaS is the new frontier for change data that's really valuable to our organization, more so than other sources.

So we want to ingest it, maximize it, use whatever we can out of it. But a lot of organizations start with, I just need basic backup. And I think the one takeaway I would leave you with is, wherever you are on that spectrum-- from I just need basic backup to I'm ready to optimize my business-- make sure you're making the right decision and looking at that long-term potential. Because if you don't have the optionality to do more with that historical data later, you're going to have to rip and replace and do other things later, when you find that your competitors are doing it already. So with that, I think that's a wrap. I'll hand it back over to Diane.

- Well, thank you so much, Chris, and thank you, Henry, as well. And thanks to you both for walking us through the five things that your SaaS backup must do. And with that, we are open for audience questions. Just a quick reminder, you can submit your questions in the question box.

And it looks like we just got our first question in. And this question is for both Chris and Henry. It has multiple parts and is about RPO. They are asking, where are things trending with RPO? Are companies tightening RPO over time, or is it hovering around an hour? Also, how do you balance that with storage costs? Henry, would you like to take it first?

- Sure, I'll take a stab at that. I mean, as I mentioned earlier, expectations are going through the roof. People don't want to lose data. They don't want to lose transactions, they don't want to lose their own productivity. Expect to see more and more of this, even on things that are not production-- even things like test dev and whatnot, testing and other elements like that. We need to make sure we're doing a better job of recovering data fast and making sure we lose as little data as possible going forward. So I only see that problem getting worse and worse.

- We definitely see that with customers as well, where, when they find more cost-effective cost structures in their public cloud, there's an immediate jump in consumption-- I want a higher frequency-- which is inevitably tied to RPO. And it can be because of compliance, or it can be because I want to consume the data. But there is a tipping point where it can get quite expensive if you're capturing, let's say, up to every single change, which we allow our customers to do.

It's really just a personal preference and settings. But there is still that pull toward near real-time. When you get closer to real-time, the data does grow exponentially, and it's something to be cognizant of. But the overall trend-- I'll just echo what Henry said-- is moving to near zero, I think, because of that training people have for instant gratification in SaaS apps, and extending that into recovery. We definitely see that as well.

- I want to go back to one thing, Chris, that we probably should have spent more time talking about. There is another key advantage to being able to do that portability, moving data out, and making it more accessible. And it's not really intuitive at first, but your archive could actually be a performance booster for you.

I mean, there is a case like that. Because as many of you already know, especially using Salesforce today, as you get to certain levels, you wind up seeing those instances slow down, because you have too much data. So again, as we start thinking about the portability elements here and moving that data from a SaaS platform, where storage is probably really expensive, to something that's much cheaper, you're going to get a cost advantage, and you're potentially going to get a performance advantage by slimming down and still keeping things accessible and recoverable.

- Great. And we have our second question in. And this person is asking, what's the best way to make my SaaS application data accessible to my intelligence and data science team?

- Yeah, I'll take a quick stab. I mean, I think it goes back to the point I made a little bit earlier in terms of portability-- making sure that as we move that data somewhere else, it's still accessible. It's not just this secured copy in a proprietary format that people can't access or deal with. That winds up being a big issue.

Because again, as we talked about earlier in this presentation, we want to get to these environments where we have self-service data access. Somebody creating applications should be able to get some sort of copy, some sort of dev data set that's at least similar to the production data, to make sure that the applications they're trying to create actually work. So there's going to be more and more pressure on the infrastructure side of organizations to make data more accessible, whether to the QA people, the dev people, the DevOps people, or the developers who want to create these applications. So I think that's going to be a key issue.

- I mean, to echo that, I'd say we saw this in that Fortune 50 company example. A shared data lake with data in a non-proprietary format that the organization can actually access and hit-- it's a core principle of data ops that we see in that space as well. And this is why we built our tool the way that we did. Really, the biggest no-no of every SaaS application is letting your customers truly own their data.

And that's exactly what we did, because we never wanted to lock our customers in with their data. It's their data. They could pull GRAX out at any point in time and still have access to it, because it's in their cloud. But all of that aside, what really matters is that customer owned shared data lake. And the control over that is a big, big, big deal when it comes to accessibility and just making the most out of that data.

DIANE DENG: Perfect. And this actually leads very well into our next audience question. They're asking, why is ownership important? I thought when I buy backup, I own the data, because I pay for the service.

[INTERPOSING VOICES]

- Yeah, I can talk about it. I mean, ownership is this thing where every SaaS application will tell you that you own your data in it. But when you look at ownership and possession-- which, in legal terms, is nine-tenths of the law-- it's all about where the data traverses, where it stays, how accessible it is to use.

So the truth is, when you start to look under the hood, you start to realize that while SaaS applications will tell you you own your data, when it comes to a compliance audit or more scrutiny, you have less control over it than you think you do. And so it's really critical these days-- in the global environment we find ourselves in, it resonates on a personal level and on a company level-- letting organizations and people take ownership and control of their sensitive personal data is really critical, and it's starting to get supported more and more in the SaaS application space.

But it's also hard to ask these SaaS apps to instrument that in. Not only is there the question of the incentive mechanism, it's also not trivial to actually build that capability into every SaaS application. So there's that cognitive dissonance: people think they own it, but they don't really. When they try to get at it and they try to switch tools, they hit a lot of roadblocks. Some of them are arbitrary, some of them are purposeful. But either way, the end result is the same: ownership is really a cornerstone of maximizing what you can get out of your history and your business.

- Yeah, I definitely want to emphasize that, too. I mean, I think that's a great point. But when we look at this, it's about not just owning the data in the SaaS platform but owning that data in terms of how I choose to process it in the future. Because there are all kinds of innovations going on beyond where your data is created. Let's say Google, or Amazon, or whoever comes out with a new microservice that you want to be able to leverage or tap into-- a new AI innovation, a new ML innovation, new applications, or things that I want to do. Maybe it's a compliance-level thing, a security thing, or a visualization type of thing.

Well, that's my data. Why can't I do that? I need to be able to. If my stakeholders and the people building applications want to take advantage of those things, I can't do that unless I own my data. Even though the provider says you own the data, ownership really means, actually, I can use the data somewhere else.

So again, that's why we emphasized portability and accessibility so much in the presentation. Ownership is about what I can really do outside of where that data might have been created, because the value proposition is shifting around constantly, as I showed earlier. And a lot of things that are traditionally on-prem might go to PaaS, they might go to SaaS, they might go to other places.

And more and more, as we start looking at the innovation cycle, we can't control where innovation is going to happen or which cloud it's going to be in. But we also don't want to miss out on these innovations just because we happened to be sitting here. So yeah, that's going to be a key issue. And that, to me, is going to be a fascinating thing to keep looking at in future years.

- Perfect. And it looks like we have our first question directed to Chris. They're asking, which capability out of the five is the most critical to your customers? And why is that one above the others? Chris?

- Yeah, absolutely. I mean, I think that it's really-- well, if I can cheat and say it's portability and access, because I feel like those go hand in hand, right? That is probably the biggest one, because it kind of flips this industry on its head, and the way people think about backup is changing. It's emblematic of the fact that people are thinking, how do I do more than just buy an insurance policy?

The traditional motion is, pull it down, put it on the shelf. If something goes wrong, take it off the shelf, put it back in. And it's kind of an archiving strategy thing too. And when you start to think about it, well, that's kind of a cost center, and that's a tough sell internally. At the same time, you have all of these teams wanting to do so much more with that data.

So I would say portability and access are really critical for the long term. Because if you have that in mind when you're selecting your solution, plus the flexibility that Henry just talked about that it unlocks, then you're going to make the right choice now-- as opposed to a year or two from now, when you realize you can get more value, or competitors are moving in and mining their data more effectively. Then you might have to rip and replace and start to hit record again with a new tool or a new product. And then you're kind of starting from square one.

- Great. And Chris, you just touched on the data archiving strategy. So we actually have a question for Henry. They're asking, what do you think is the most important functionality for a data archiving strategy?

- Diane, for an archiving strategy, I mean, to me, especially in this competitive age where there are all of these different price wars, and other areas like that, and other concerns in terms of where that data is sitting, locality-wise, I want to have the freedom to put the data where I want to put it. If my organization is very much traditional and we're not going to cloud, we're not going to do, blah, blah, blah, I want to have physical ownership of that data.

Maybe I care about where it is, what country it resides in. Well, I need to have that type of freedom-- to be able to move in and set up an area with those different types of service providers. So to me, it's going to be about making sure we have choices in terms of where we put that data. Because we want to take advantage of-- again, we don't know where the next innovation is going to be.

We don't know necessarily-- maybe somebody comes out with a new service that's super, super cheap, or has new innovations that reduce the cost of storage. Or maybe it's something in the opposite direction, where they have new innovations that drive performance up and allow you to do different types of apps. We don't know what that's going to be. So to me, it's going to be about having really strong portability.

And again, having that ability to decide where I can choose and the ability to potentially move things when they make a mistake. Because unfortunately, that is another condition we see in the cloud too. We talked to organizations where, oh, yes, we're going to cloud.

The CIO says we're going to cloud, move all the data to cloud. Now, guess what? They had latency issues, and they had to move it all back. So again, we need to maintain that flexibility to choose where the venue is going to be and how we process that data in the future.

- And one of the things that-- I have to chime in, because we obsess about this quite a bit, and our customers are very, very interested in this. Archiving is another one of those things where you're sitting there trying to make hard decisions about what do I keep in production and what do I pull off to the side? And inherent in that is, whatever I take out of production, I lose access to in my production environment.

So I mean, that's archiving traditionally. But what we're starting to see more and more is organizations are saying, wait a second, that's not like a cloud native approach. We should be able to have our cake and eat it too, right? Why can't I take the data out of production into my own data facility, and then build a digital bridge to that data facility, and have it resurface in the end application?

So this is like a little bit of bleeding edge. But I do want to say that even in this archiving realm, there's an emergence of a new model that is really about organizations saying, well, no, hold on a second, I shouldn't have to sit there and make decisions. I should be able to flexibly move my archive data to a more cost-effective cloud structure, and then peer back into it with my application.

I can take the pressure off my end apps and still have my cake and eat it too-- consume it in production and consume it in my data lake for analytical scenarios. It's a bit of a bleeding-edge case, but we definitely see that with some of the more complex customers. And these are organizations that understand inherently that the rules in SaaS and cloud are different than they have been historically with backup, and on-prem, and those types of things. And you can do more things like that, things that really fly in the face of what we've been trained to think of as the traditional backup and archiving model.

- I agree. I mean, I think that 100% speaks to why storage data management right now is so, so challenging. Because you really have all these pressures: OK, I need you to make sure you reduce costs, I need you to make sure you put it in this place. And then on top of that, oh, yeah, and by the way, it has to be accessible.

A lot of organizations are-- all these pressures are coming at the same time. We need to make sure we're doing a better job of doing this and getting more value. Because the bar is going to continue to be raised, and we need to meet those challenges if we're going to go forward and really support our innovators going into the future.

- Perfect. Well, that was, unfortunately, our last question, because we are pretty much out of time. So that concludes our webinar. I wanted to say thanks, everyone, for the excellent questions. And of course, thanks again to Chris and Henry for such a great and engaging webinar today.

As a reminder, the on-demand version of this webinar will be available shortly. And on behalf of GRAX and 451 Research, thanks again for attending, and have a great rest of the day, everyone. Take care. Bye.

- Thanks, everybody.
