About this talk
Google BigQuery is a powerful SaaS data analytics platform that enables users to process and analyze business-critical datasets. With BigQuery and GRAX, users can quickly transfer data from other sources, such as Salesforce. Once the data is available in an organization's Google Cloud Platform (GCP) environment, they can easily reuse their Salesforce backups as a data source, piping data from Salesforce into BigQuery or into any other analytics tool of their choosing.
Businesses looking to become more data-driven need to stream their Customer Relationship Management (CRM) data from Salesforce to Google BigQuery in order to derive actionable insights. Once the customer data found in Salesforce is available for analytics, you may want to gain insights from it in BigQuery, such as:
- Improving Sales Performance: Consuming Salesforce leads and opportunities data can help monitor overall sales team performance in real time by tracking key performance indicators (KPIs) like sales revenue, conversion rates, and deal size to identify trends and patterns.
- Creating a 360° Customer View: Analyzing contact and account Salesforce objects can help organizations segment customers by behavior, preferences, and needs. This dataset can help enterprises better understand the customer journey so they can position products and tailor messaging more effectively in the future.
- Bettering Outcomes with Predictive Analytics: Organizations can train and apply machine learning models to predict future outcomes by streaming and analyzing key Salesforce objects. This aids the business with strategic planning and prioritization.
- Reducing Churn Rate: By reviewing case objects, organizations can analyze customer behavior to identify churn risks, improve the user experience, and reduce churn rates. Case data can help them measure customer engagement and customer satisfaction (CSAT) to take proactive measures to retain customers and improve customer loyalty.
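The sales-performance KPIs in the first bullet can be sketched as a simple aggregate query. The snippet below is only an illustration run against a tiny in-memory SQLite table standing in for the Salesforce Opportunity object; the table and column names (`opportunity`, `stage_name`, `amount`) are assumptions for the example, not the exact schema History Stream produces, and the same SQL shape would run in BigQuery against the real dataset.

```python
import sqlite3

# Illustrative stand-in for the Salesforce Opportunity object.
# Column names are assumptions for this sketch, not the real schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE opportunity (id TEXT, stage_name TEXT, amount REAL);
INSERT INTO opportunity VALUES
  ('006A', 'Closed Won',  50000),
  ('006B', 'Closed Lost', 20000),
  ('006C', 'Closed Won',  30000),
  ('006D', 'Closed Lost', 15000);
""")

# Win (conversion) rate across closed deals, and average won deal size.
row = conn.execute("""
SELECT
  ROUND(100.0 * SUM(stage_name = 'Closed Won')
        / SUM(stage_name IN ('Closed Won', 'Closed Lost')), 1) AS win_rate_pct,
  AVG(CASE WHEN stage_name = 'Closed Won' THEN amount END) AS avg_won_deal
FROM opportunity
""").fetchone()
print(row)  # (50.0, 40000.0)
```

In BigQuery you would point the same query at the external table over your History Stream output instead of a local database.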
Enterprises can accelerate data analysis and maximize the value of their data by piping and reusing Salesforce data in BigQuery with GRAX History Stream. Watch this video to see how GRAX makes downstream data consumption easy by loading the data into Google BigQuery, making it available in real time for SQL queries and analytics.
Good afternoon. Joe Gaska, Head of Product at GRAX. Today I'm going to demonstrate how quickly you can consume data directly into BigQuery within GCP.
What I'm going to be doing is replicating data directly in using a data transfer job. I'm going to create a transfer job and replicate data directly from S3 into GCP.
We highly recommend segmenting your analytics deployment from your GRAX deployment. So whether it's running on GCP or AWS, replicating your data directly into your analytics environment for consumption segments your data and access controls.
So what I'm doing here is I'm actually building a transfer job that I'm going to transfer data directly from AWS into GCP.
And what I'm doing here is just setting up a replication job, and I'm going to choose to create the Google Cloud Storage bucket. I'll quickly do that, click through the next steps selecting all the defaults, and go ahead and create this transfer job.
And as you can see, the transfer job is actually going to run. It's queued right now, and it's actually going to reach out to AWS S3 and replicate all of the data down.
So as you can see right now, it's in progress. Again, regardless of whether your GRAX deployment is on GCP or AWS, replicate the data into your analytics environment so you can segment access controls and privacy within the data.
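The transfer job created in the console above can also be expressed as a Storage Transfer Service API request body (`transferJobs.create`). This is a hedged sketch: the bucket names and project ID below are placeholders, and the AWS credentials block is deliberately omitted, so treat it as the shape of the request rather than a complete, runnable job.

```python
# Sketch of the S3 -> Cloud Storage transfer job as a Storage Transfer
# Service request body. All names below are placeholders, not values
# from the video.
def make_transfer_job(project_id, s3_bucket, gcs_bucket):
    """Build a transfer job spec replicating S3 data into Cloud Storage."""
    return {
        "description": "Replicate GRAX History Stream output from S3",
        "projectId": project_id,
        "status": "ENABLED",
        "transferSpec": {
            # In practice awsS3DataSource also needs credentials
            # (an access key or a role); omitted in this sketch.
            "awsS3DataSource": {"bucketName": s3_bucket},
            "gcsDataSink": {"bucketName": gcs_bucket},
        },
    }

job = make_transfer_job("my-project", "grax-history-stream", "grax-analytics")
print(job["transferSpec"]["gcsDataSink"]["bucketName"])  # grax-analytics
```

You would submit this body via the Storage Transfer Service REST API or client library; the console wizard shown in the video builds the same job interactively.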
So again, GRAX is a source of truth. History Stream is outputting the data, and now this is replicating in for consumption.
So again, you can see that was successful in just a few minutes. So I'm simply going to go over to BigQuery.
Now that the data has been replicated and I have nothing deployed in the environment, I'm simply going to click add data.
I'm going to say Google Cloud Storage and select the bucket where it's deployed. You can see here all of the datasets that are available.
I'm simply going to choose 'opportunity'. I'm going to say `*.parquet` because I want to consume all the Parquet data. I'll simply say select, and you can see it automatically recognizes the file format as Parquet.
I'm going to go ahead and create a new dataset. I'm going to name this opportunity, and again, I'm going to select the region.
I'm simply going to do South Carolina. So I've created a dataset. I'm going to give the table a name and simply create the table.
So this is automatically going to look in that directory and infer the schema from the data, thanks to how Parquet and History Stream work.
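What the console wizard does here can equivalently be written as BigQuery DDL: an external table over the Parquet files History Stream wrote to the bucket, with the schema inferred from the files. The helper below just builds that statement as a string; the project, dataset, table, and bucket path are placeholder assumptions, not the names used in the video.

```python
# Build the BigQuery DDL for an external table over Parquet files in
# Cloud Storage. All identifiers passed in below are placeholders.
def external_table_ddl(project, dataset, table, gcs_uri):
    return (
        f"CREATE EXTERNAL TABLE `{project}.{dataset}.{table}`\n"
        "OPTIONS (\n"
        "  format = 'PARQUET',\n"
        f"  uris = ['{gcs_uri}']\n"
        ")"
    )

ddl = external_table_ddl(
    "my-project", "opportunity", "opportunity",
    "gs://grax-analytics/opportunity/*.parquet",
)
print(ddl)
```

Running this DDL in the BigQuery SQL editor produces the same external table the UI flow creates, which can be handy for scripting or infrastructure-as-code setups.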
And you're going to see how quick this is. So now the opportunity dataset has been created. You can see all the fields.
Again, this is the schema that has come directly from Salesforce. I have linked it up to the bucket and I can simply preview the data as it comes in.
It's just refreshing. I'm going to refresh the data and come back to the preview in a minute. As you can see, BigQuery has read the Parquet data, and if you simply click on Preview, you can see all the data.
You have every version of all of your opportunity data directly within BigQuery for quick queryability and access. And not only that, you can export it directly into Google Sheets or into Looker Studio.
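Because History Stream keeps every version of each record, a common pattern is a window-function query that picks only the newest version per record Id. The snippet below demonstrates that pattern locally with SQLite; the column names (`id`, `modified_at`, `stage_name`) are assumptions for the sketch rather than the exact History Stream schema, and the same `ROW_NUMBER()` query works in BigQuery.

```python
import sqlite3

# Tiny stand-in for versioned opportunity data: record 006A has two
# versions. Column names are assumptions for this sketch.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE opportunity (id TEXT, modified_at TEXT, stage_name TEXT);
INSERT INTO opportunity VALUES
  ('006A', '2023-01-01', 'Prospecting'),
  ('006A', '2023-02-01', 'Closed Won'),
  ('006B', '2023-01-15', 'Qualification');
""")

# Keep only the newest version of each record.
rows = conn.execute("""
SELECT id, stage_name FROM (
  SELECT *,
         ROW_NUMBER() OVER (PARTITION BY id ORDER BY modified_at DESC) AS rn
  FROM opportunity
)
WHERE rn = 1
ORDER BY id
""").fetchall()
print(rows)  # [('006A', 'Closed Won'), ('006B', 'Qualification')]
```

Dropping the outer filter gives you the full change history instead, which is what enables the time-series and churn analyses described earlier.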
So if I click on Looker Studio, you can go straight there to build a visualization, or you can export into Sheets for access control, right from here.
Again, that's BigQuery Data Transfer. Within five minutes we had the data up and running. Thank you very much. Have a great day.