Product Demo Video

GRAX Demo

Backup, archive, recover and access your cloud application data anywhere, anytime

Speakers

GRAX Team

About this talk

Take control and ownership of your data in 3rd party apps with GRAX.

  1. Backup all your Salesforce orgs
  2. Keep archived data 100% accessible in production
  3. Capture up to every single change
  4. 360° Global Visibility

GRAX is the new way that businesses preserve, recover and act on their historical data. Replacing traditional point-in-time snapshots that miss 99% of all changes and store sensitive data in 3rd party clouds, GRAX captures an irrefutable, recoverable record of every single change that happens to data, storing it in the customer’s own environment and making it available for analytics alongside live data. This approach creates a modern, unified data fabric that helps companies understand and adapt to changes in their business faster.

Complete the form to watch a demo video of how GRAX can effortlessly preserve, recover and analyze your historical data right inside of Salesforce - with no code required.

24 min.

Transcript

[MUSIC PLAYING] PETER CLIFFORD: Good afternoon. This is Peter Clifford from the GRAX product team. And I'd like to spend a few minutes going through some of the top-line features of the product. I'll start off by looking at the documentation set. If you do have questions after this demo, this is a great resource for searching information related to any of the details you'd like to dive into. So this is documentation.grax.io. You can click on Getting Started.

The one page I wanted to focus on here is this: GRAX does have a number of components, but the primary component that most personas will be interacting with is the Salesforce managed package. That gets installed into your Salesforce instance, whether production, UAT, or sandbox, through a managed package install. This is provided to you as a link post purchase. It's a quick, probably 5 or 10 minute install into your Salesforce instance. Once that's done, we're now going to navigate to an org instance that has GRAX installed. What you'll find is a new GRAX managed package app on the App Launcher menu.

And so when you come into your newly installed org, you'll see GRAX there as a new application that you can interact with. 100% of the GRAX UI is essentially used through the Salesforce interface. So the personas here are really Salesforce administrators and Salesforce developers, who are going to be managing the Salesforce instance for their end users. They're going to be the ones using GRAX at an administrative level, as well as implementing some of the end user features that are available within the product and enabling those for their end users.

We find this is important because most of the folks who are intimately involved with data, file storage, and knowledge of the instance are the Salesforce administrators and developers, who have the most domain knowledge. So [INAUDIBLE] we do configure some settings here, which is pretty straightforward. But once that's done, you're essentially ready to start creating your backup jobs.

There are two primary ways to replicate or back up your data from Salesforce using GRAX to your customer-owned data lake. The first is through a scheduled process, where you schedule periodic jobs to run and your data is replicated from Salesforce and backed up into your customer-owned data lake. That enriches the amount of data you have available, not only for business continuity, so that data can be restored at some point in the future if you've got some sort of overwritten data or a disaster within Salesforce, but also so you have the ability to access that data from within your own AWS, GCP, or Azure account for potentially mining some of that data: things like opportunities, cases, and so on.

So the more data you push out to that data lake, the richer and more valuable it becomes. Let's go through some of the basic setup here for creating a quick full object backup of your Salesforce instance. What I would do is go to Object Backup. I'll click on Create, then click on Object Backup. And this will bring up a really straightforward dialog for doing a full backup of your Salesforce system. So I'll give this a quick title or description for the backup.

Then the system queries Salesforce and comes back with all of the metadata objects that are contained within this specific instance. So you'll see here that there are a lot of standard and custom objects, as well as some system objects. I do have a meter reading object in here, which is a custom object. Everything is contained in here, including third-party managed packages; all of their custom objects would be included in this list as well.

If I wanted to back up specific pieces of the objects, I could click on these individually. I can also click at the top level to just select everything in the org, and this is what we're going to do for our full backup. You can choose a cutoff date and time. I could go back in time, say 10 years, to do a sort of snapshot of data. In this particular case, I want to just remove that date altogether, which will gather everything from the beginning of time within this org.

Under the Schedule section, this is where you would set up a series of jobs that would run on a periodic basis. You set them up once, and then they run periodically. So here, I could set this to run weekly if I wanted a weekly backup. You can also set up monthly or daily. You can even go to custom and specify the specific days of the week, or you can actually get down to one-hour increments if you had something like a specific object that you're looking to back up very frequently. Let's just set this for weekly.

So that's set for a weekly process. You can then also specify an incremental toggle here, which will run this full backup initially, and then after one week it will just capture changes between the last time the job ran and the current time. That's pretty helpful: you can set this to run a full backup initially and then do incrementals. You can also get notified. So you can notify end users, and they will receive a report delivered to their inbox, which will show them all of the records that have been backed up and some statistics around the duration the backup took.
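The full-then-incremental pattern described above boils down to a simple selection rule: take everything on the first run, then only records modified since the previous run. Here is a minimal, hypothetical sketch of that logic (not GRAX's actual implementation):

```python
from datetime import datetime, timezone

def select_records(records, last_run=None):
    """Pick the records an incremental job would back up.

    On the first run (last_run is None) every record is included, which
    is the full backup; on later runs, only records whose last-modified
    timestamp is newer than the previous run.
    """
    if last_run is None:
        return list(records)
    return [r for r in records if r["last_modified"] > last_run]

records = [
    {"id": "001", "last_modified": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"id": "002", "last_modified": datetime(2024, 1, 8, tzinfo=timezone.utc)},
]

full = select_records(records)  # first run: everything
delta = select_records(records, last_run=datetime(2024, 1, 4, tzinfo=timezone.utc))
```

The first call returns both records; the second returns only the record modified after the last run.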

So essentially, this is as simple as it gets to create a full backup for your org. I'm going to click that, and that Full Org Backup will then be scheduled to run on a weekly basis. You can also run this on demand: over on the right-hand side, you can click on Execute Now, and it will kick that off immediately.

So let's also dive into the ability to archive data. Archiving is done with what's called our hierarchical process. So in here is where we're going to create a hierarchy process. What I'll do here is an account, and maybe a case backup. Maybe the instance has a lot of cases that you really need to manage, so let's actually do an archive, a case archive.

These are use cases where you may have a lot of cases that have built up over time that you just want to archive out of the Salesforce org, to optimize performance within it and to keep that data outside of Salesforce while still having visibility into the information. We'll go through a use case of how we can actually display some of this archived data inline inside of Salesforce a little further on in the demo.

When I select that top-level object, we make an inquiry into the metadata within this Salesforce instance, and we capture all of the master-detail relationships by default here. I can also choose to include additional hierarchies. So if I want to include cases related to cases, I can do that. You can see here that if I'd like to include email messages, events, feed items, and tasks, all those objects would get captured in this process. So this is a great way to capture all of the related objects, especially when you're doing an archive.

So then once you've specified that object hierarchy map, you would then want to specify the criteria. We do have some [INAUDIBLE] criteria based on a date cutoff. We also have the ability to include a report: you would include a report here with a GRAX_ prefix, and that would include that report there. And then you can embed advanced SOQL queries in this as well, to build more sophisticated criteria for selecting child objects to use against the case. So if a case contains email messages with a particular attachment size or something like that, you could use that criteria to back up or archive those cases.
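The selection criteria just described combine a date cutoff on the parent with an optional condition on child objects. A hypothetical sketch of that filtering, with invented field names and thresholds purely for illustration:

```python
from datetime import date

def archive_candidates(cases, cutoff, min_attachment_kb=None):
    """Select cases closed before `cutoff`; when min_attachment_kb is set,
    also require at least one related email message with an attachment at
    or above that size, as a stand-in for child-object criteria."""
    selected = []
    for case in cases:
        if case["closed_date"] >= cutoff:
            continue  # too recent to archive
        if min_attachment_kb is not None and not any(
            msg.get("attachment_kb", 0) >= min_attachment_kb
            for msg in case.get("email_messages", [])
        ):
            continue  # no qualifying child email message
        selected.append(case)
    return selected

cases = [
    {"id": "C-1", "closed_date": date(2019, 5, 1),
     "email_messages": [{"attachment_kb": 2048}]},
    {"id": "C-2", "closed_date": date(2019, 5, 1), "email_messages": []},
    {"id": "C-3", "closed_date": date(2024, 1, 1), "email_messages": []},
]

old_cases = archive_candidates(cases, cutoff=date(2020, 1, 1))
big_attachments = archive_candidates(cases, cutoff=date(2020, 1, 1),
                                     min_attachment_kb=1024)
```

With the date cutoff alone, both old cases qualify; adding the attachment-size condition narrows it to the one case with a large email attachment.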

Similarly, you'd have the ability to specify a schedule, which we covered in the first backup options. And then lastly, this toggle here is the important one for archiving: I can click on this toggle to create this job as an archive job. Based on the criteria I specified above, this would archive any of the data included in that criteria set. So that's a good way to archive data out of Salesforce and into your data lake.

So I've got an account backup and a case backup here. It's pretty straightforward once those things are all set up. I'd like to switch gears now and hop over to our Object Time Machine functionality. We just covered scheduled jobs, which run on a periodic basis to back up or archive [INAUDIBLE] data.

We also have a real-time use case, where you're looking to capture event-based changes. As soon as a save event happens in Salesforce, the data within that object is captured and stored in the data lake, capturing that version and any other versions going forward. This is what's called our unlimited data capture functionality.

So in this particular case, to show this, I'm on a case object, and I'm going to update some fields on this case to demonstrate the feature. So I'll update a few of these fields. All right, click Save. And that's saved; it was obviously saved out to Salesforce.

So I have enabled this case object with Object Time Machine, using a Visualforce page that comes with the managed package. I'll toggle over to this tab. You can see here I've included a few fields, and we can see that I just made this change at 12:56. This [INAUDIBLE] changes made. You can see that it was made by this user. And then I can click on the Details icon here to show all of the fields that were changed on this save event.

So with Object Time Machine, you don't have to specify ahead of time a specific number of fields that you want to capture. I know Field Audit Trail in Salesforce is a great product, but you do have a limited set of fields that you can capture with it. With Object Time Machine, you do enable it on an object-by-object basis, but you can capture an unlimited set of fields. So if this object had 800 fields, it would capture all 800, and any changes that occur on all of those 800 fields.

You can choose to see all of the fields on the record; we capture both the before and after sets on every save event. So you do have a full set of details of the record from before and after, and this is really helpful if you ever have to go back for compliance reasons to pull out this data.
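Because both the before and after field sets are captured on each save event, working out what changed is just a field-by-field comparison of the two versions. A small illustrative sketch (the field names are made up, and this is not GRAX's internal representation):

```python
def field_diff(before, after):
    """Return {field: (old, new)} for every field whose value differs
    between the before and after sets captured on a save event."""
    fields = set(before) | set(after)
    return {f: (before.get(f), after.get(f))
            for f in fields if before.get(f) != after.get(f)}

before = {"Status": "New", "Priority": "Low", "Subject": "Panel fault"}
after  = {"Status": "Working", "Priority": "High", "Subject": "Panel fault"}
changes = field_diff(before, after)
```

Only the two fields that actually changed show up in the diff; the unchanged Subject field does not.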

You also have the ability here to restore the original values. So here, I've got these new values that were just saved with this recent change. If I wanted to fall back to the older values, I essentially have a rollback option here with Restore Original Values. If I click this button and then click on OK, it kicks off a quick process, which rolls back those changes that were saved out into the data lake and applies them back onto this record. It essentially rolls the fields back to their previous state.
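The rollback just described amounts to copying the previously captured values for the changed fields back onto the current record. A minimal sketch of that idea, with hypothetical field names (the real feature writes through Salesforce, which this plain-Python illustration does not model):

```python
def restore_original_values(current, previous, fields):
    """Return a copy of `current` with the listed fields reset to the
    values held in the previously captured version of the record."""
    restored = dict(current)
    for field in fields:
        restored[field] = previous[field]
    return restored

current  = {"Status": "Working", "Priority": "High", "Subject": "Panel fault"}
previous = {"Status": "New", "Priority": "Low", "Subject": "Panel fault"}
rolled_back = restore_original_values(current, previous, ["Status", "Priority"])
```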

So you can see all those fields were updated to include what was changed. That's our Object Time Machine functionality. Now, I'd like to take a look at the ability to actually visualize archived data: data that has been archived to the data lake and no longer exists in Salesforce, but that you have the ability to pull into an object in Salesforce and show to your end users.

So the use case here is a solar-based company. I'm on an account object, and they have a lot of IoT data flowing into this org through what's called a meter reading object. This is capturing kilowatt-hour readings on a monthly basis.

The problem was it was a tremendous amount of data, and they had to archive the data for previous years to optimize performance in Salesforce. So you can see here this is 2020. But they were finding that their sales reps and field service reps were having a tough time troubleshooting this rooftop solar when they couldn't see the full trend from previous years.

So what I'm going to do here is edit this page layout on the account. With our GRAX managed package, we have this Lightning component that I'm going to drag and drop onto the page. This allows us to create a related list and gives us visibility into data that's out in the data lake, even though it no longer resides in Salesforce.

So this is a really unique feature of GRAX. It allows you to leverage the data that you no longer have in Salesforce, but would like to continue getting use of [INAUDIBLE] as time goes by. So here, I'm just making some changes to this component. The critical thing I want to change is the query filter at the bottom. This allows you to specify a filter, where I can filter the data that's out in the data lake and just display what is important to my end users. This could be a filter on country, or it could be case type for cases.

In this particular case, I want to show records on this account object that have been deleted from Salesforce. So I'm going to specify deleted equals true as the criteria. I'll save those changes to the component, click Back to come back to the page layout on this account, and now I'm going to scroll down.
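The component's query filter is conceptually a simple field-equals-value predicate applied over the data-lake rows. A hypothetical sketch of that filtering (record shapes invented for illustration):

```python
def query_filter(lake_records, field, value):
    """Keep only the data-lake records whose `field` equals `value`,
    mirroring a component filter such as deleted = true."""
    return [r for r in lake_records if r.get(field) == value]

lake = [
    {"id": "MR-01", "kwh": 1400, "deleted": False},  # still live in Salesforce
    {"id": "MR-02", "kwh": 1250, "deleted": True},   # archived out of Salesforce
]
archived_rows = query_filter(lake, "deleted", True)
```

Only the archived row survives the filter, which is what lets the related list show end users data that no longer resides in Salesforce.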

And I'll see the meter readings from the current year; they are here. But as I scroll down to the component I just added, I can see records from 2017, which is showing the period ending date. Back in 2017, I had an archive job that archived this data. And then I can actually see the kilowatt hours here, which look like they have been steadily declining since 2017, down to only about 1,200 kilowatt hours for the current month.

So it looks like this might be a good candidate to send a field service rep out to take a look at what's happening. That's an example of GRAX's ability to render data out of the data lake. The important thing here is that this data no longer resides in Salesforce, but it appears to your end users as if it did. So that is our GRAX Lightning component, which can pull data from the data lake dynamically. Let's go back and look at restore.

So if you're looking to restore data, we'll go to the Search tab; this is really Search and Restore. What happens here is we can come down and take a look at what is in the data lake. If we wanted to get a summary of the data that's out in the data lake, we could also go to the Summary tab.

So let's just take a quick look here. For this particular org, we can click on a summary, and we can see that there are 2,000 accounts, plus cases, feed items, and tasks. This is a relatively new developer org, but I've performed a few backups.

So if I click on this Advanced tab, I can look at specifically when these backup jobs ran. I can see that this was an account hierarchy backup that ran, and it ran a couple of times. I can click here and see all of the records that were written out during this backup job.

Accounts, cases, feed items, and opportunities. Then, to get the details of that specific run, I can look at the details here and select all those objects. We can see that there's some data that's been written out. If I look at this case backup, I ran it a few minutes ago, and it backed up cases, feed items, and tasks. So we can see that there's some data out in the data lake.

So if I'm looking to restore, or to look at the data to be restored out in the GRAX data lake, what I would do is come to the Search tab. I'll click on Account; you would start with a top-level object, or an object that you're looking to restore. You'd include the names of the fields that you'd like to include in the report below.

And then I'll click on the Retrieve button here, which will pull back a full set of the data that actually resides out in the data lake. You can see here that these records are backed up and still reside in Salesforce. This view can show backup records, archived records, or both. If there were archived records here, I could click on the Archived toggle and click Retrieve, and that would pull back just the archived records.

I don't have any archived records in this specific org, but that's what you can do. You can also specify criteria. So if you had a set of records that maybe got corrupted, or that you were looking to bring back and restore, I could come in here and choose some fields to narrow down the filters.

So I might say Account Type is equal to Customer, or let's say Partner. Click on Retrieve. We originally had 1,000 records here, and now we've got 286. I can also put multiple levels of criteria here. So if I want to look at, say, account name, when you're trying to find a particular record, you can include multiple sets of filter logic.

And you can also use Boolean AND and Boolean OR here; click on these too. So that was equals; I want to say it contains, and then I can narrow this down to a specific record that I found. Maybe a user had updated it incorrectly and I want to restore it.
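The multi-clause filter logic just demonstrated, with operators like equals and contains joined by Boolean AND or OR, can be sketched as follows. This is a hypothetical illustration of the concept, not the product's query engine:

```python
OPERATORS = {
    "equals":   lambda actual, expected: actual == expected,
    "contains": lambda actual, expected: expected in (actual or ""),
}

def matches(record, clauses, combine="and"):
    """Evaluate (field, operator, value) clauses against one record,
    joining the results with Boolean AND or Boolean OR."""
    results = [OPERATORS[op](record.get(field), value)
               for field, op, value in clauses]
    return all(results) if combine == "and" else any(results)

account = {"Type": "Partner", "Name": "Acme Solar"}
both = matches(account, [("Type", "equals", "Partner"),
                         ("Name", "contains", "Solar")], combine="and")
either = matches(account, [("Type", "equals", "Customer"),
                           ("Name", "contains", "Solar")], combine="or")
neither = matches(account, [("Type", "equals", "Customer")])
```

Switching from equals to contains, or from AND to OR, changes which records pass, which is how you narrow a retrieval down to one specific record.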

So if I'm looking to restore, I can then click on Restore and click on that record. You can also use Restore Selection and Restore All. But let's go into the restore options and cover these. This is where you'd choose whether to restore just the parent object, or the parent and the children.

And there may be different use cases for this; it can be used for restoring data into a sandbox as well. So you can essentially choose whether you want the parent, or the parent and children. You can also choose to upsert this record. This would be the case if the record still exists in Salesforce: you would perform an upsert, which just does an update based on the data that's out in the data lake.
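Upsert semantics are worth spelling out: if the record's Id already exists in the org, its fields are updated from the data lake; if not, a new record is inserted. A toy sketch of that behavior, with the org modeled as a plain dictionary purely for illustration:

```python
def upsert(org, record):
    """Write a backed-up record into an org keyed by Id: update the
    existing row if the Id is present, otherwise insert a new one."""
    existing = org.get(record["Id"], {})
    org[record["Id"]] = {**existing, **record}
    return org

org = {"001": {"Id": "001", "Name": "Acme", "Phone": "555-0100"}}
upsert(org, {"Id": "001", "Name": "Acme Solar"})  # existing Id: update in place
upsert(org, {"Id": "002", "Name": "Globex"})      # new Id: insert
```

Note that the update merges fields, so values not present in the restored data (the phone number here) are left untouched.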

And then the next thing we can do is perform some data mapping. If you had backed this data up a couple of years ago and there were changes to the data model in Salesforce, you could do some field remappings here on the fly when you run the restore process. So you can remap fields from the data lake into fields that are in Salesforce, and this can be done on all the fields; these are all the fields that are specific to that account object.

We can also do some masking and defaulting. Say I had a name field that I wanted to mask; this is particularly useful when you're going to restore into a sandbox, or for sandbox seeding. I can perform multiple masking or defaulting options in a single restore. So if I wanted to also default the billing state to MA, you could do things like that. This allows you to do those kinds of masking operations when this data is being pulled into a sandbox or back into your production org.
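The three restore transformations described above, remapping field names, masking sensitive values, and forcing defaults, can be sketched as one pipeline over a backed-up record. The field names and mask format here are assumptions for illustration, not GRAX's actual configuration:

```python
def prepare_for_restore(record, remap=None, mask_fields=None, defaults=None):
    """Remap field names, mask sensitive values, and force default values
    on a backed-up record before it is written back, for example when
    seeding a sandbox."""
    # 1. Remap data-lake field names onto the current Salesforce data model.
    out = {(remap or {}).get(field, field): value
           for field, value in record.items()}
    # 2. Mask sensitive fields (hypothetical fixed-width mask).
    for field in (mask_fields or []):
        if field in out:
            out[field] = "********"
    # 3. Force configured fields to a fixed default value.
    for field, value in (defaults or {}).items():
        out[field] = value
    return out

backed_up = {"Name": "Jane Doe", "Region__c": "NE", "BillingState": "NH"}
restored = prepare_for_restore(
    backed_up,
    remap={"Region__c": "Territory__c"},  # data model changed since backup
    mask_fields=["Name"],                 # mask PII for the sandbox
    defaults={"BillingState": "MA"},      # default billing state to MA
)
```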

So once I've made those changes, I can click Save, click Close. Then I can click on Restore Selection. And this will then kick off a job to restore this back into Salesforce. It's that simple.

Also, to cover the Audit tab: the Audit tab integrates with our Object Time Machine technology. The Object Time Machine functionality we looked at earlier showed the ability to track changes on a specific object, and that page layout was for a specific object.

If I want to pull that data in on a global basis, I can look at cases and see globally what changes are happening across my Salesforce instance wherever Object Time Machine is utilized. So here, I can see all of the cases that have Object Time Machine enabled.

And this will show you, on a global basis, all of the changes that have been made within this org. You can then click on the Details icon to see specifically what was changed on each of those save events. So this is, again, another good auditing feature that's part of our Object Time Machine functionality.

We do have the ability to load data up from external sources, so we can bulk load data. This is used in some use cases where you're sunsetting a Salesforce org and there are objects you really don't want to carry over into your production org: custom objects that are really just not being used much anymore. You have the ability to load those object types, or that data, into the GRAX data lake, and then render it using that Lightning component that we saw for the meter readings.
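Matching externally loaded rows to Salesforce records comes down to a join on a shared primary key. A minimal sketch of that idea; the key name `External_Id__c` and the row shapes are assumptions for illustration, not a GRAX requirement:

```python
def related_lake_rows(sf_record, lake_rows, key="External_Id__c"):
    """Return the bulk-loaded data-lake rows whose shared primary key
    matches the Salesforce record, so they can be shown as a related
    list on that record."""
    return [row for row in lake_rows if row.get(key) == sf_record.get(key)]

account = {"Name": "Acme Solar", "External_Id__c": "EXT-77"}
lake = [
    {"External_Id__c": "EXT-77", "LegacyField": "keep me"},
    {"External_Id__c": "EXT-99", "LegacyField": "other org"},
]
rows = related_lake_rows(account, lake)
```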

So in a similar fashion, as long as there is a primary key between the object out in the data lake and the object in Salesforce, you can tie those together and render them in the system. So that's what we have for today. There are a lot more features in tier two and tier three, but I was focusing on the tier one feature sets today, which are our backup, archive, and restore feature sets. Thank you for joining.

[MUSIC PLAYING]
