Product Demo Video

GRAX History Stream

With Snowflake

Speakers

GRAX Team


About this talk

Take control and ownership of your history in Salesforce with GRAX.

  1. Back up all your Salesforce orgs
  2. Keep archived data 100% accessible in production
  3. Navigate your history
  4. Reuse your historical data

GRAX is the new way that businesses preserve, recover, and act on their historical data. Replacing traditional point-in-time snapshots that miss 99% of all changes and store sensitive data in third-party clouds, GRAX captures an irrefutable, recoverable record of every single change that happens to data, storing it in the customer's own environment and making it available for analytics alongside live data. This approach creates a modern, unified data fabric that helps companies understand and adapt to changes in their business faster. Complete the form to watch a demo video of GRAX History Stream + Snowflake in action.

3:33 min.

Transcript

PRESENTER: Welcome to the next evolution of GRAX History Stream. History Stream is the latest DataOps innovation that helps you unleash the value of your historical Salesforce application data. It has always been GRAX's mission to protect your data and the value of your data. That's why GRAX was built on the pillars of data ownership, data access, and data reuse. By owning your data in your cloud environment and retaining access to all of your historical data sets, you are now able to actually reuse your history downstream.

Did you know that your history starts at backup? Once you start capturing your data over time with GRAX, you start feeding your history into your GRAX data vault. Let's take a backup here with the case and opportunity objects that we've set up. Once securely stored, you can see the historical versions of your data stored as Parquet files in your own object storage bucket, such as an S3 bucket. These versions of your history will continue to be captured automatically each hour. History Stream eliminates API and ETL challenges by using the industry-standard Parquet format to make cloud application data readily available virtually anywhere.

History Stream was built to align tightly with how ETL and ELT data processing works in the modern enterprise, so GRAX leans on well-established and well-understood formats, frameworks, and conventions. With that in mind, let's take a look at reusing this historical data set in Snowflake, where we've used a native, scalable Snowflake pattern to load our GRAX History Stream data into the well-known data warehouse. Specifically, as we can see, with a few clicks we've set up a database, a table, an external stage, and a Snowpipe. The table will store our History Stream data in raw format to provide maximum flexibility for common Salesforce schema changes. The stage will monitor the History Stream data at a specific S3 location, or other cloud storage location.
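To make the hourly capture concrete, here is a minimal sketch of how downstream code might interpret the Parquet files History Stream writes to your bucket. The key layout, path segments, and function name here are illustrative assumptions, not GRAX's documented schema:

```python
from datetime import datetime

# Hypothetical key layout (illustration only; the real GRAX layout may differ):
#   <prefix>/<object>/<YYYY>/<MM>/<DD>/<HH>/<file>.parquet
def parse_history_key(key: str) -> dict:
    """Split an object-store key into the Salesforce object name and the
    hourly capture window it belongs to."""
    parts = key.strip("/").split("/")
    obj, year, month, day, hour, filename = parts[-6:]
    return {
        "object": obj,
        "captured_at": datetime(int(year), int(month), int(day), int(hour)),
        "file": filename,
    }

info = parse_history_key(
    "history-stream/Opportunity/2023/05/01/14/part-0001.parquet"
)
# info["object"] is "Opportunity"; info["captured_at"] is the 14:00 hourly window
```

A layout like this is what lets an external stage and downstream query engines prune by object and capture hour without opening every file.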
The pipe leverages Snowpipe, Snowflake's continuous ingestion service, which monitors the stage and loads the data into the table, and additionally provides a notification channel we can use to trigger the process. To that end, we've set up an S3 event on our bucket which triggers the process: it sends a message to a Snowflake-provided SQS queue upon specific event types, all out-of-the-box native functionality that works seamlessly with GRAX. As we can see, we now have all of our recent opportunity and case historical backup data automatically piping into Snowflake, where our data team can leverage it to create views and tables and answer questions for end users and anyone who wants to report on the data. Since I've been backing up and archiving my Salesforce CRM data at the highest fidelity, I can leverage that history to answer my business questions confidently and simply, in a 360-degree approach, alongside other key business data. Ready to unleash your Salesforce data's value? Just click the Get Started button in the upper right-hand corner to get in touch, or email us at sales@grax.com.
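The event-driven hand-off described above can be sketched in a few lines: S3 publishes a JSON notification to the SQS queue for each new object, and the consumer (here, Snowpipe) acts only on object-creation events. This sketch uses the standard AWS S3 event notification message shape; the bucket name, key, and helper function are made-up examples:

```python
import json

def new_object_keys(message_body: str) -> list[str]:
    """Return the keys of newly created objects from an S3 event message,
    ignoring any non-creation event types."""
    event = json.loads(message_body)
    return [
        rec["s3"]["object"]["key"]
        for rec in event.get("Records", [])
        if rec.get("eventName", "").startswith("ObjectCreated:")
    ]

# Example message as S3 would deliver it to the queue (names are illustrative)
sample = json.dumps({"Records": [{
    "eventName": "ObjectCreated:Put",
    "s3": {
        "bucket": {"name": "my-grax-bucket"},
        "object": {"key": "history-stream/Case/2023/05/01/14/part-0001.parquet"},
    },
}]})
```

Filtering on `ObjectCreated:` is what keeps deletions and other bucket events from triggering an ingest, so only fresh History Stream files flow into the table.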

Try History Stream now!

Get your free 7-day trial of GRAX and start piping your Salesforce data into Snowflake and anywhere else Parquet is supported.

Get started