Webinar

Master Your Salesforce Data Archive Strategy

The #1 Secret Global 100 Companies Swear By


Speakers

GRAX Team


About this talk

Salesforce contains dizzying amounts of mission-critical business data. But, storing every bit of that information generated inside of the application can be prohibitively expensive for organizations. Additionally, too much data often slows down performance.

Watch this webinar, where our data archive experts show how to overcome the challenges of archiving Salesforce app data and create a winning archive strategy. They discuss:

  • When to back up and when to archive data
  • Key considerations for shaping your archive strategy
  • How real companies used archiving to solve challenges
  • What the future of data archiving looks like

See how easy it can be to boost your app performance and slash costs through an effective archive strategy. Plus, we share how Global 100 companies are ensuring their archived data remains available and accessible for all their business needs.

48 min.

Transcript

- All right. We're going to get started. Thanks so much again, everyone, for joining us today for our webinar, Master Your Salesforce Data Archive Strategy, The Number One Secret Global 100 Companies Swear By. So we're going to be talking about what to think about with your Salesforce data archive strategies and tips and tricks. And then there will be a reveal with what we've found is the number one secret that companies do swear by for their archive strategy.

So a couple of housekeeping items. This webinar is being recorded. We will share that tomorrow. So keep an eye on your inbox if you want to access that. And if you have any questions at any point during the webinar today, use the panel that you see on the side to enter your questions in the Questions tab. We'll be answering those at the end of the webinar today.

So I would be remiss if I did not start with who is GRAX. Who are we and why are we talking about data archive today? So my name's Laura. I'm on the marketing team here. And I'll be your host today.

But who is GRAX? We are a SaaS data value platform. Essentially, you can think of us as a data recorder for Salesforce. And we let you back up, recover, archive, and reuse your data in other applications, anywhere and any time. How we're different from other players in this space is our three key pillars of data ownership, data capture, and data access.

So that's all the pitch I'll give you before we move into our content. And let me introduce our speakers. So Anna is the vice president of customer success here at GRAX. She ensures that our customers have a seamless implementation and make the most out of their use of GRAX. And she'll be talking to you today about things she's seen organizations do with Salesforce, with their data, and with their archive strategy and the return they've gotten from that.

- Hey, everyone.

- She has years of experience leading high-impact, high-visibility projects. And she's delivered millions of dollars in value while managing international, multifunction teams. And our other speaker, Sohil. Sohil is a senior product manager here at GRAX for our Salesforce backup and archive solutions.

Prior to GRAX, he managed various client service engagements for LogMeIn's Xively platform, which was acquired by Google. He also has extensive experience guiding companies through software implementations, defining, mapping, and streamlining business processes atop platforms like Salesforce and AWS. He really enjoys helping organizations leverage their SaaS solutions. And he's a Salesforce certified administrator, platform app builder, and Sales Cloud consultant.

So thank you so much, Sohil and Anna, for joining us today. And we're going to get started with a poll question. So what is your biggest archive pain? Choose one of the following: your storage costs keep increasing, your performance today is terrible, both A and B, you're not sure, or other.

We'll give a little bit more time to vote. What's your challenge today? And if it's other, throw that into the chat or into the Questions tab, because I'd love to hear what you're struggling with. Give it a few more seconds here, and then we'll close this out.

All right. 5, 4-- get those answers in-- 3, 2, 1. And let's look at those results. Not sure or other are actually taking the lead here. Well, I'm glad you're here with us today. As we talk about it, hopefully, a few things will resonate. And if it was another situation, definitely let us know what you're thinking about or what you're struggling with in the chat or in the Questions tab.

And both A and B coming in second. And that's what we see the most of. Storage costs keep increasing, you're constantly buying more storage in Salesforce. Or you have so much data that your app is slow and really struggling with performance. Awesome. Thanks so much for giving us your feedback there. We'd love to hear more.

And Sohil and Anna, I'll turn it over to you. I'll turn my camera off for now so you can focus on our wonderful speakers today. And I'll come back at the end to answer any questions and help there. Bye, guys. Take it away.

- All right. Thanks, Laura. Hi, everyone. All right. So why archive, right? Having worked in customer success and talked to many customers, I've heard customers express these two pain points very, very frequently, right? Either the increase in app storage costs or declining app performance, and both of those are really what we're going to talk about today, right?

So everyone is worried about their app performance degrading as their app storage grows. And of course, everyone is, therefore, worried about the corresponding app storage costs. So that's why we're here today. And that's what we'll talk about.

So let's define data archive. Data archiving is the act of backing up and then deleting data from your production environment. To archive is to delete data from production. So that's kind of a big deal.

- Thanks, Anna. So here at GRAX, we think data archive is fundamentally broken. So what do I mean by that? If you're responsible for implementing an archive process and policies, you essentially have to make no-win decisions about which data stays in production and which data you can live without, right?

And you're making those decisions in the context of: what if I archive this and get it wrong, or someone wants to see that historical data later, and we lose all that valuable information? So what ends up happening, oftentimes, is you'll go to a lot of different stakeholders to see what data people need in production. It can quickly turn into a coordination nightmare.

So then what we'll see is sometimes you'll try putting in a custom tool or API to archive older data out. And again, you're coordinating with other teams, building a solution that's going to involve endless maintenance. And we've seen a lot of companies try to build homegrown archiving solutions and then manually repeat that process at different intervals.

So essentially, if you find that you do need older data for historical perspective, then everything changes, right? And one thing I've seen over the years working with a lot of customers and their enterprise data strategies is, whatever you think you might need or even not need today will, 99% of the time, be different a year from now, or even six months from now. So your business will have signals and data points that are constantly evolving. So there's a lot of value in having the full history readily accessible, regardless of what you're actually archiving.

So I think what happens is, at the end of the day, a lot of customers will throw their hands up and just buy more storage from the app provider, which really is just kicking the can down the road. So as great as saving money and optimizing app performance with data archiving is, we do think it's fundamentally broken and frustrating, given these three things that we just spoke about: the internal politics, the manual processes, and having to put your data on ice.

So if you Google data archive, these are actually the top three considerations that you'll see over and over. We actually think they're flawed and not relevant anymore. So let's take these one at a time.

What data needs to be archived? So this is based on the premise that you actually have to lose access to the data once it is archived. And it immediately turns this question of what data needs to be archived into that broken process that we just spoke about on the last slide, with the assumption that you'll need to spend a lot of manual time coordinating and building things up front before you lose the data forever.

The second one: is my archive data safe, with its integrity preserved? This is based on the fact that most third-party tools actually save archive data into their own cloud environment, not yours, and then basically rent access back to you.

Which, if you think about it, is completely backwards. And it forces you into this coordination and manual process of validating when, where, and how the provider is storing and using your data. And at that point, you're focusing on the wrong things.

Finally, who can access the archive data? So this is based on that first premise, which says you have to lose access to the archive data in the first place. We think you should be spending your time on creating helpful feedback loops and revenue with your data now that it's available and accessible, even after archiving.

All right. So as you're thinking about what you actually need to consider for data archive, it's important to make sure the solution that you choose actually addresses your pain point. So let's take a look at those pain points. And the first one, which we've touched on briefly, is that pain point of app storage costs.

So if you take the time to find the right archive tool, you can solve for the pain of paying for that ever-growing application storage and paying for overages when you go over capacity. And these costs will add up. They'll eat into your IT budget.

In fact, as you can see here, right, it can cost up to almost 110 times more to store data in Salesforce than in your own cloud storage. And we know Salesforce and many platforms can charge an arm and a leg for not just data but also file storage. And we don't think you should be running your business, making strategic decisions with this as the major factor looming over you.

- That's a huge multiplier cost, Sohil. An example of this pain point that was resolved with data archive is this instance of a customer who filled up 80% of their Service Cloud storage with attachments. And as Sohil just said, IT just kept having to buy additional storage in Salesforce because they needed to stay in compliance and maintain access to their data.

So a reminder, archiving is deleting out of production. So maintaining that access is key. But after implementing an archive solution, they were able to set and forget their archive policy and continuously and drastically reduce their app storage costs, while also remaining compliant and, as I mentioned, without losing access to their archive data.

- So let's take a look at the second pain point. And that deals with app performance. So your app is likely overloaded with vast amounts of data, and it may no longer be performing well. Many of us are probably all too aware of the Salesforce report timeouts, endless page loads, things like that.

Well, if you consider the fact that over 20% of Sales and Service Cloud data is actually unused and organizations don't have the time to delete it out, it's easy to see why the application can get bogged down and not work as well as it could. And of course, Salesforce best practices will tell you to design and develop your architecture, put your schema in place, and set up ETL and other data strategies with these limitations in mind. But the fact of the matter is, things will build up over time.

And data, especially, will grow at an exponential rate as your business captures more and more information. So again, all that to say you shouldn't really be making business decisions around data strategy based on something like reports not loading in Salesforce and costing the entire company time. So we think there's a better way to essentially remove this from the equation altogether.

- Let's talk about that app performance. Let's take a look at this customer; after they implemented GRAX, we helped improve this pain point around degraded performance, right? So this customer is in a heavily regulated financial industry. And they were experiencing degrading performance in large part due to the fact that they had 35 million tasks and emails that were negatively impacting their production performance.

After implementing GRAX, they were able to automate the capture and deletion of their data that was older than one year. That was just the line in the sand that they drew. But that allowed them to vastly reduce their in-app storage. In addition, this allowed them to also radically optimize their Salesforce performance by putting their archive strategy on autopilot.

So they knew that every time data hit that one-year mark, it would be archived. And therefore, their app performance would remain stable. Ensuring that they maintained their app performance standards, and that all of their data was stored in their own FINRA-compliant data lake while remaining available in their Salesforce production, was key to solving this pain point of degraded Salesforce performance.

- And last but not least, the third pain point, losing access to archived data. So it can be a headache trying to figure out which data you can delete and afford to lose access to. As we mentioned, there are multiple stakeholders who care about cost and when it's gone.

But there's also users who need access to that data in production. And then of course, there's every other little detail that you have to consider before actually deleting it out of your production environment, such as the other mechanisms that users may be accustomed to in order to access the data. So you don't want to look back in a year and regret the data archiving decision that was made because you had to compromise between budget and access, right?

- All right. So here's an example of how implementing GRAX for archiving allowed this financial services customer to archive and maintain access to that archived data. So specifically, this customer needed to archive their customer support team's Service Cloud emails and text messages. And they needed to be stored in a FINRA/WORM-compliant data facility, WORM standing for Write Once, Read Many. OK.

So what does that mean to them and the fact that they needed to archive this data? So the ability to have this read access was crucial to adhere to this stringent regulatory compliance need that they had. But by implementing an archive tool, they were able to control the data store and have flexible data access across multiple internal systems. So they didn't have to choose, as Sohil just gave the example of, between what data to archive and what data to keep in their Salesforce. Because they knew that they would have access to the data regardless of whether it was archived or not.

- All right. So that brings us to the answer for what you're really here for today, the question of what is the number one secret global 100 companies swear by for their data archive strategy? So it's just that. It's the ability to retain ubiquitous access to your archive data in your Salesforce application and beyond.

So in order to compete like these global 100 companies, your archive data should live in your own storage, such as your S3 bucket, so that you can retain full ownership and access to your data. And also if you need, it can live adjacent to your other valuable business data in your data lake. So this is real data access. We've talked about it. And it's the true way, we think, to master your Salesforce data archive strategy.

So let's talk a bit more about archived data access. So losing access to archived data shouldn't be a problem today. You should have that ability to retain archive data access and visibility, both in app and downstream, in analytics tools and data warehouses. So this combination of both production availability and downstream access is what makes GRAX very powerful. So essentially, you want to get to a place where you're feeding all of this into your data pipeline in a seamless manner.

So let's take a look at what that future actually looks like. So if you're choosing the right archive tool for your organization, you can get more value from your mission critical SaaS app data. So you can see here from this chart how the value increases over time, the quicker you can move from something like basic data recovery into that more strategic data op space.

So experiencing improved app performance and cost reduction benefits, that's really just the beginning. The secret of the ubiquitous data access that we talked about, it's getting all the way to this data reuse downstream. And to be clear, it should be reuse of your own data. It's about exposing the archive data where it's needed but also when it's needed.

- All right. So archiving your data and still having access to it, the data access. So that to me sounds like having your cake and eating it too in this situation, right? But don't take our word for it. This GRAX customer was able to immediately reduce their app storage costs.

However, the real headline is that they were able to archive out of their app storage but still maintain the access, like Sohil was just talking about. And they were also able to reuse their data to drive product innovation. So this has opened the door for them to develop and deliver a better user experience for their wearable technology by being able to make more informed decisions on their product development, using all of the data that they had archived and the new data that is still continuing to come in, and being able to see those both side by side.

- So we've seen how data access is crucial and can be the secret behind your archiving strategy. But there are some other considerations for you to take into account. So let's review some of them. As we mentioned, archiving no longer means losing all access to your data inside your org.

One thing that's important is ease of use and deployment. So look for a tool that has nearly no learning curve and can be easily implemented. With archiving, especially, you want to make sure that you have an intuitive experience that guides you along the important considerations and puts you on a path to success. So I think this is even more true with archiving because of some of the complexities involved with such an intricate database operation, like Salesforce hierarchical deletes, for example.

The next consideration is the ability to put your archive on autopilot and stop wasting time setting up and manually running the same job repeatedly. So you'll want to streamline that process by precisely targeting what data you want to archive out of your existing Salesforce, using something like a report or SOQL query, so that you can essentially use the tools that you already know and understand to do things like establishing a rolling date and gracefully archiving as things age out, for example.
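For readers who want to see the rolling-date idea in concrete terms, here is a minimal sketch of selecting archive candidates with a date cutoff. It assumes the simple_salesforce Python library, placeholder credentials, and a hypothetical three-year cutoff on closed cases; GRAX configures this kind of filter through its own report and SOQL options, so this only illustrates the pattern, not its implementation.

    # Minimal sketch, not GRAX's implementation: find records that have aged past
    # a rolling cutoff, the way a scheduled archive job would target them.
    from datetime import datetime, timedelta, timezone

    from simple_salesforce import Salesforce  # assumed library for illustration

    sf = Salesforce(
        username="admin@example.com",   # placeholder credentials
        password="password",
        security_token="token",
    )

    # Hypothetical rolling cutoff: closed cases older than three years.
    cutoff = datetime.now(timezone.utc) - timedelta(days=3 * 365)
    cutoff_str = cutoff.strftime("%Y-%m-%dT%H:%M:%SZ")

    soql = (
        "SELECT Id, CaseNumber, ClosedDate FROM Case "
        f"WHERE IsClosed = true AND ClosedDate < {cutoff_str}"
    )

    candidates = sf.query_all(soql)["records"]
    print(f"{len(candidates)} cases are past the rolling cutoff")

Because the cutoff is computed relative to today, re-running the same job tomorrow naturally picks up whatever has aged out since, which is the set-it-and-forget-it behavior described above.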

Your archive tool should allow you to control user access to your organization's archived data. But then who really wants to set up and maintain user permissions for another tool? Rather, we think you'll want to consolidate your ID landscape, so your archive tool should allow you to carry over a lot of those user permissions from your SaaS app and utilize that existing permission framework within the confines of things you already know, such as the Lightning App Builder and conditional visibility of Lightning components based on Salesforce profiles, permission sets, fields on the user object, things like that.

And finally, a real archive tool won't just dump a CSV in your lap to upload it back into production. So make sure your archive tool does data restoration correctly and restores your data and its relationships back into your SaaS app. So there will be times when you do need to unarchive and put select records back into Salesforce as native Salesforce records. But the key there is that you want to have that visibility and access up front so that you can do all of this easily when it's necessary.

- So if these are considerations for what a great archive tool should include, take a look at the GRAX offerings up on the screen, right? We've got Backup and Restore, Time Machine, Data Archive that we've just talked about, and coming soon is our Data Hub feature. So this type of return on investment that we have discussed today is only possible, really, because GRAX is the only tool in this space that allows you to view and access your SaaS app data in production, whether it's backed up or archived.

- All right. Hey, everyone. I'm back. Thanks so much, Anna and Sohil. We're going to move into our Q&A portion. So if you have any questions about any of the material you saw today, or about GRAX, or about anything we didn't touch on about archiving, please let us know. Ask your questions. You can throw them into the Questions panel.

And while we're letting some of those come in, if you want to learn more, I mentioned there will be a follow-up email tomorrow with the recording and some additional resources you can take a look at. Feel free to visit our website, grax.com (G-R-A-X), or follow us on LinkedIn and Twitter. We're always sharing cool new things that happen there. All right. So let me take a look at the questions. All right. So first question, how can data archiving support my compliance needs?

- I can take that one. That's a good question. So specifically, in the context of archiving and GRAX archiving, you can use filters, as I mentioned, a Salesforce report or SOQL query. So a lot of this ability to granularly target specific records based on filters will come in handy to meet your compliance needs, right?

But the other thing you'll need to think about, which we mentioned, is the access and permissioning after that archiving has taken place. So a lot of times, you'll see companies need to balance Salesforce operations and policies with some general company data compliance or industry regulation. So that's where, after you perform the archive and you've set your archive on autopilot, access and optionality really become key. And that's what, I think, we touched upon a lot in this presentation.

- Awesome. Next question. I'm currently using AWS for storage. Can GRAX work with my existing account?

- Sure. So I can take that one. So yes, absolutely. So GRAX can use your existing S3 bucket for storage of anything that you need to archive or just back up out of your Salesforce environment. In fact, for those of you on the call who may not use S3, GRAX is pretty agnostic in terms of backing up to cloud storage.

- Great. And this one's an easy one. We could all take this one. Where can I learn more about how you guys archive?

- Yeah. I think the first place you can go is our documentation site. There are some specific articles around archive and best practices. Or as you can see here, just get in touch with us for specific use cases. I think archiving, especially, can be dependent on your use case, your objects, your schema. So if you have questions about anything specific to your use case, just get in touch with us.

- Yup. Right on our website, actually, if you hit Get Started, that form that pops up will let you book a meeting directly with our team to get your questions answered directly. So if you have a specific use case, that's a great feature that will pull up their calendars instantly, and you can get your answers live. Next question. So let's say emails are archived. If a user looks at a case where emails are archived, will the archived emails still show in the feed in the same email related list?

- Yeah. I can take that one. So we see this use case a lot, right? So the case hierarchy is a big one. You have cases. You have case comments. You have email messages.

Those are some of the big ones. You have tasks. That's another big one. In terms of visualizing any of those related lists back into Salesforce, there's a few different ways you can do that.

And again, I won't go into too much detail here. But typically, what we'll recommend is using a GRAX-provided, out-of-the-box Lightning component. So you can put that on your page layout using the Lightning App Builder just like you would any other Lightning component. You can use Salesforce's permissions and visibility. And we'll automatically pull back all the email messages or all the case comments, for example, that are related to that case that your user is actually viewing.

So one good thing about this feature is that you can customize it and filter those email messages, for example, even further. So with Salesforce native related lists, you're going to get all the related records. You can't do too much around that in terms of filtering. With the GRAX Lightning component related list, you can actually filter further.

- Awesome. Thanks, Sohil. Can you talk a little bit more about how to plan when you archive data? How do you think about when and how to plan that whole process?

- Yeah. I can take that one, and feel free to hop in if you have any other thoughts after. But essentially, what I would say is, typically, the path that we go through with customers and what we see them go through is, when they do leverage GRAX archiving, they'll start in a sandbox. Because they'll typically want to create a full sandbox and mock out certain scenarios to understand their schema and their hierarchy.

So when you're doing an archive, one of the most important things-- and this will vary for every single environment and every customer, so there isn't necessarily a one-size-fits-all in terms of what you need to archive, whether for performance or storage costs. It will depend.

And the way it works is-- you'll see in some of our documentation is that you essentially pick a root object, so something like case. And GRAX will automatically return the hierarchy underneath that, so you'll be able to see children, grandchildren, everything that's related to the case. Because when you do delete case data, it will cascade downwards.

So based on how you've set up your Salesforce relationships, deleting a case is not just deleting a case. It may be deleting the case comments. Salesforce might automatically delete some custom objects that you have that are master-detail to the case. So there are a lot of different considerations. And we have some best practices and steps on the documentation site for how we see customers typically go about this, right?

What we recommend is to start with-- figure out that hierarchy. Start with a small record volume in your sandbox and then progress onward to bigger and bigger data sets as you need. And you'll be able to use the date filters, the SOQL query, and the reports to essentially target exactly what you need and do your smoke testing in your sandbox before you move to production.
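To make that hierarchy step more concrete, here is a minimal sketch of inspecting a root object's child relationships with the standard Salesforce describe metadata. It assumes simple_salesforce and placeholder credentials; GRAX presents this hierarchy for you in its UI, so this is just the do-it-yourself equivalent of that review.

    # Minimal sketch: list the child objects hanging off a root object (Case),
    # including which deletes Salesforce will cascade automatically.
    from simple_salesforce import Salesforce  # assumed library for illustration

    sf = Salesforce(
        username="admin@example.com",   # placeholder credentials
        password="password",
        security_token="token",
    )

    describe = sf.Case.describe()  # standard sObject describe call

    for rel in describe["childRelationships"]:
        print(
            f"{rel['childSObject']:<30} via {rel['field']:<25} "
            f"cascadeDelete={rel['cascadeDelete']}"
        )

Running this against a full sandbox shows exactly which children and grandchildren a case delete will touch before you point any archive job at production.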

- So if I could just add to that, to give an example, like during the presentation, we did have one customer who started by archiving, out of their production environment, only cases that were three years and older. And what they found was that their app performed much better, but also that they were slowly starting to save on storage costs.

And I got a call from this customer just about six months ago saying that he had managed to archive out all of the cases older than six months, that he was now tracking on a rolling six months back, and everything else was being archived. And so customers really do see what works and then get more active in their archiving once they establish what works for their organization.

- Awesome. Thank you both for that. I want to go back to one of the responses we got for other in the poll. This person said that maintenance and restoration were two of their critical areas. And I'm just wondering if you guys had anything to add about that. How to-- we touched on it a bit in the presentation, but any more color you think would be helpful around how to deal with maintenance and restoration concerns.

- Yeah. I can take that one. In terms of maintenance and restoration, I would say, the way GRAX does archiving, you can set up your jobs so that-- once you've gone through the sandbox, once you've gone into production, as Anna was just talking about, typically, you're setting up this criteria on something like a rolling date. So there isn't too much maintenance involved from the perspective of having to go back into your job and reconfigure it, right?

If you want to increase date ranges, or if you see how it's been working for a month or two and want to take action, then you might go in and edit or create another job, something like that. But in terms of the maintenance of the actual GRAX archive job, typically, we don't see too much of that. It's usually an up-front task to understand that hierarchy.

Set the job, and then just get email alerts or check in on it, and see how many records you're archiving each day. Run some summary charts, which we make available, to see it over time, right? You might want just a simple bar chart to see how many records have been deleted for a specific object in the past three months, right? So you're kind of tracking how that performance is potentially being enhanced and how storage costs should be going down.

The other question was around restoration. We touched on that a little bit. The key concept there is this concept of unarchive. You might hear a lot of other companies use that as well. So essentially, if we are archiving your data as I mentioned, that's not just, for example, the case record, right?

When you archive that data, we're really taking an entire hierarchy, backing everything up, double checking, and verifying that we've backed up every record in that hierarchy. And then GRAX will come in and delete all the records within that chosen hierarchy, from the bottom up. Because that's the most efficient way to delete, at least in terms of Salesforce.

So what you'll want to make sure is, if you are archiving a complex hierarchy, that you can essentially put that hierarchy back when you have certain cases-- which would be the top-level object, let's say, that you chose in your hierarchy-- that you want to restore back into Salesforce. So how that will work is, maybe you archived a case, and it still might be relevant, right?

Especially because, a lot of times, you'll expose a lot of that archived data back into Salesforce. So we'll see service reps, or sales reps for opportunities, potentially viewing archived data, and that might still be relevant, right? And there might be a need, in certain scenarios, to restore that record and its children back into Salesforce. So GRAX is tracking that hierarchy. Because if we're removing it, we feel we're responsible to allow you to put that hierarchy, essentially, back the same way it was and recreate those relationships for the selected records.

- Great. Last question here, unless anyone wants to put another one in. We still got some time. The last question, does GRAX only work with Salesforce, or could you use it as a data warehouse?

- I can take that one. So in the context of GRAX archive jobs, most of our customers are using them against their Salesforce data. But if you remember, there was a slide midway through where we talked more about that downstream consumption of data. So that's actually something that we do make available. And that's probably an entirely separate webinar.

But if you remember that slide, it talked about archiving data. So let's think about archiving data out of Salesforce. And you might set it up to back that data up to your own S3 bucket. And then from there-- a lot of customers already have a data lake strategy or data warehousing strategy in place, where they'll use tools such as AWS Glue to get a lot of their data lake data and capture that downstream into something like Redshift, for example, and eventually even a lot of BI tools, so think of Tableau, or Power BI, or AWS QuickSight, things like that.

So that is 100% a need that we've recognized, and something that we actually want to empower a lot of customers to do more of. So just because you're archiving Salesforce data, that doesn't and shouldn't mean that you can't essentially do whatever analysis your data team might need. Take that data. Put it adjacent and alongside a lot of the other data that you might be capturing in your warehouse, and make use of that.
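As a rough illustration of that downstream consumption, here is a minimal sketch that reads archived records out of your own S3 bucket into pandas for analysis. The bucket name, key prefix, and CSV layout are hypothetical, not GRAX's actual storage format, and in practice many teams would instead crawl the bucket with AWS Glue and query it from Redshift or a BI tool, as Sohil describes.

    # Minimal sketch: pull archived CSV exports from your own S3 bucket and run a
    # quick analysis locally. Bucket, prefix, and file layout are hypothetical.
    import io

    import boto3
    import pandas as pd

    s3 = boto3.client("s3")
    bucket = "my-archive-bucket"      # hypothetical bucket you own
    prefix = "salesforce/Case/"       # hypothetical key prefix

    frames = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            if obj["Key"].endswith(".csv"):
                body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read()
                frames.append(pd.read_csv(io.BytesIO(body)))

    archived_cases = pd.concat(frames, ignore_index=True)
    print(archived_cases.groupby("Status").size())  # example: cases by status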

- And that was my call to action for a last question. We had a couple more trickle in. You guys are great. Does Salesforce have a built-in archival solution, or do you need to use tools like GRAX?

- So there are a lot of different ways to approach this, right? So mostly, what we see is, as I mentioned, a lot of that custom archiving approach. Because Salesforce is such an open platform, they have the APIs out there, and there's a good level of knowledge out there, I think a lot of companies tend to try to do something themselves, like writing their own scripts using the APIs to store that data and then deleting it.

If you just take a simple use case, as a lot of you might be aware, Salesforce does have a Data Export feature. It's pretty basic, but they allow you to select certain objects. And they'll spit out tons of CSVs, zipped up, periodically to you. And then you'll need to download those CSVs and figure out where to store them.

But essentially, that's kind of the basic level that we see customers doing: let me just try to back up as much data as I can, now that I know I'm periodically capturing that data, and then let me try and delete it. Because one of the important things with the GRAX archive tool is, we're not doing some sort of secretive special delete behind the scenes.

We're leveraging the same tools out there. Except that when you have GRAX set up, you have this entire sort of pipeline of this data being automatically backed up and sent to your S3 bucket. And only then will we come in and delete it in the most efficient manner.

So you are still deleting according to some of the Salesforce APIs, the REST API, the Bulk API. There are other tools, like asynchronous Apex, that we can leverage to be the most efficient. But there's nothing proprietary that we have other than the actual logic behind the scenes; we've done this for a long time, and we've seen a lot of these scenarios.

So GRAX is equipped to actually delete a lot of these complex hierarchies. What we find is, as customers start doing it themselves, you will run into a lot of different restrictions, and errors, and complexities with the Salesforce hierarchy. So it can turn into a nightmare trying to extract data and then making sure you have the data and deleting it afterwards.
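For context on what that homegrown approach looks like, here is a minimal sketch of the "back up first, then delete through the Bulk API" pattern Sohil describes, deleting child records before their parent. It assumes simple_salesforce, placeholder credentials and Ids, and that the records have already been verified in a backup; it also glosses over the error handling and hierarchy verification that make the DIY route painful.

    # Minimal sketch of a DIY archive delete: children first, then the parent
    # Cases, via the Bulk API. Assumes the records are already safely backed up.
    from simple_salesforce import Salesforce  # assumed library for illustration

    sf = Salesforce(
        username="admin@example.com",   # placeholder credentials
        password="password",
        security_token="token",
    )

    case_ids = ["500000000000001AAA"]   # placeholder parent Case Ids
    id_list = ",".join(f"'{i}'" for i in case_ids)

    # 1. Delete child records first (bottom-up), e.g. emails and case comments.
    for child_object in ("EmailMessage", "CaseComment"):
        children = sf.query_all(
            f"SELECT Id FROM {child_object} WHERE ParentId IN ({id_list})"
        )["records"]
        if children:
            getattr(sf.bulk, child_object).delete(
                [{"Id": rec["Id"]} for rec in children]
            )

    # 2. Only then delete the parent Cases themselves.
    sf.bulk.Case.delete([{"Id": i} for i in case_ids])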

- Follow up to that. Sohil, why is it bad to upload a CSV of archived data back into production? Is that how every Salesforce backup and archive tool does it?

- No, I wouldn't say it's bad. It's just a more manual process. So Salesforce has Data Loader. They have tools available where you can upload CSVs, essentially hitting the same APIs that they make available to everyone. I think the point there was just-- it's a manual process. It can involve a lot of error.

And then as you get into that hierarchical upload, it becomes very difficult and even more manual. So if you're literally talking about one object that has no relationship to anything, it might be doable, but that's a very rare use case, especially in the Salesforce world where everything's kind of interlinked, especially in the Sales Cloud, Service Cloud, things like that. So that was the point there.

- Awesome. The other thing I would add, being on the marketing side: Salesforce had a solution called Data Recovery Service, which, as Sohil has been hinting at, was super manual. It took six to eight weeks. It was $10,000 a pop.

Basically, it was your emergency scenario if a user deletes data or something happens in your Salesforce org. They ended that Data Recovery Service last summer. And then actually, based on user feedback, they just started it up again a couple of weeks ago. So we've heard feedback from a lot of our customers.

And Anna, you can probably speak to this a little bit more. But it's not really a great service. We're recording this, so maybe I shouldn't have said that about Salesforce.

But six to eight weeks and $10,000 to recover your data. And then you get a CSV. And you have to go through this manual process to get it back in. So really, really painful and not a great solution.

- Yeah. And one more thing I'll just touch on: we've been talking mostly about intentional archiving. So you have a strategy. You know what you want to delete out of Salesforce. There's also that other side of the coin where you have unintentional deletes, whether it's an admin or an integration process that accidentally deletes thousands of records. So that is something that GRAX actually tracks as well.

So let's say you're backing up all your data in your org via GRAX, and you have it set to run daily. So each day you incrementally capture all the new and modified records. Once you have all those records captured, if any of them are deleted, even if they're not deleted by a GRAX archive process-- if a user just goes in and manually deletes-- GRAX will capture that as well and, essentially, flag the data in your GRAX data vault as deleted. So you'll know that it was deleted manually as opposed to by a GRAX archive job that you set up.
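As an illustration of how those daily increments and deletions can be surfaced, here is a minimal sketch using Salesforce's standard getUpdated and getDeleted endpoints, which report the record Ids created, modified, or deleted in a time window. It assumes simple_salesforce and placeholder credentials; GRAX's own change capture and deleted-record flagging happen inside its service rather than through a script like this.

    # Minimal sketch: one daily incremental pass over Case using the standard
    # getUpdated/getDeleted endpoints (a 24-hour window ending now).
    from datetime import datetime, timedelta, timezone

    from simple_salesforce import Salesforce  # assumed library for illustration

    sf = Salesforce(
        username="admin@example.com",   # placeholder credentials
        password="password",
        security_token="token",
    )

    end = datetime.now(timezone.utc)
    start = end - timedelta(days=1)

    # Ids created or modified in the window: these are what you back up today.
    updated = sf.Case.updated(start, end)
    print("updated Ids:", updated["ids"])

    # Ids deleted in the window, whoever deleted them (admin, integration, or an
    # archive job): these are what you would flag as deleted in your vault.
    deleted = sf.Case.deleted(start, end)
    for rec in deleted["deletedRecords"]:
        print("deleted:", rec["id"], "at", rec["deletedDate"])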

- So one more follow up there to that response. So to confirm, GRAX allows me to view and access my archived data in production. So I do not have to restore it back into production for team members to view it. Is that right?

- Yeah, exactly. I mean, that's the entire point. We provide the ability to unarchive and restore back these hierarchies. But it's not something that we would recommend you have users do all the time, especially with restore. Typically, we'll see customers put in permissions around that.

And we have some built in GRAX permission sets, so you can control who can actually restore it, who can access the search interface, search all of the data, decide what they want to restore. But the entire point of this is, for a lot of teams, visibility and access to that data is enough. You might occasionally need some of that data in a native Salesforce format. But that's rare, especially for some of the older data that you already know that you can archive, and you don't use it, or it's unused data from years ago.

- All right. And we're back to the last question. Thank you guys for all of the great questions and the follow-ups. Really great. Is your archived data available in Reports?

- So yeah. That goes back to the downstream consumption use case that I had mentioned. So there are ways to expose this data in Salesforce reports. And we can certainly send more information on that. What we've seen and the truth on that is, Salesforce actually has a lot of limitations around how much data they allow you to expose from external sources back into the native Salesforce reporting platform.

And for most companies, those limits are so low that it doesn't really make sense to even go with Salesforce reports there. There are a lot of other limitations around how you can do joins. And usually, the way most people think about Salesforce reports, it's not as fully functional when you're talking about external data.

And that's one of the reasons that we actually encourage a lot of our customers to integrate with their downstream processes and get a lot of this data since you own it at the end of the day. Integrating it with your strategy, getting it into your data warehouse is typically the most efficient way to view that data. And it buys you a lot of other functionality as well and insights into other data sources that you might be capturing in your warehouse as well.

- Follow-up there. I see, so use Power BI instead?

- Yeah, exactly. Or Salesforce's Einstein tool, what they used to call Einstein and are now calling Tableau CRM. So that's a simple use case that would be much more effective than using something like native Salesforce reports.

- All right. With that, that was our last question. If you guys have any more questions, we'll send a follow-up email tomorrow with the recording and some additional resources you can look at.

And head to our website grax.com, G-R-A-X.com. Right in the top corner is a Get Started button. You can put in your information and book time to talk to a member of our team right away. And they can get you specific answers for your specific use case.

So again, thank you so much Sohil and Anna. Thank you guys for being a great audience. We hope you learned something today. And if you have any additional questions, please don't hesitate to reach out to us. Have a good rest of your day.


Join the best with GRAX Enterprise.

Be among the smartest companies in the world.