Jack Henry Data Hub

Every financial institution wants to make data-driven decisions, but before any decision can be data-informed, you need the right data to back it up. Jack Henry Data Hub delivers data sets from sources such as core, Banno, and the Jack Henry Platform (with more product groups to come) to your data warehouse of choice. Having the data in your warehouse lets you run comparative queries across data sets and connect your data to third-party tools, all while keeping complete control over who can access your data.

How Does It Work?

Jack Henry Data Hub is a service that pipelines data from your Jack Henry products to your data warehouse of choice, where you can analyze it and combine it with third-party data sources.
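As a sketch of the kind of comparative query this enables, the snippet below uses Python's built-in SQLite as a stand-in for your warehouse. The table names, column names, and rows are illustrative placeholders, not the actual Data Hub schema:

```python
import sqlite3

# In-memory database standing in for a warehouse; the tables and rows
# below are hypothetical, not the real Data Hub schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE core_accounts (account_id TEXT, balance REAL);
    CREATE TABLE banno_logins  (account_id TEXT, login_count INTEGER);
    INSERT INTO core_accounts VALUES ('A1', 2500.0), ('A2', 90.0);
    INSERT INTO banno_logins  VALUES ('A1', 14), ('A2', 1);
""")

# Comparative query across two product data sets: join core balances
# with digital-banking activity to find engaged, well-funded accounts.
rows = conn.execute("""
    SELECT c.account_id, c.balance, b.login_count
    FROM core_accounts AS c
    JOIN banno_logins AS b USING (account_id)
    WHERE c.balance > 1000 AND b.login_count > 10
""").fetchall()

print(rows)  # [('A1', 2500.0, 14)]
```

The same join-across-data-sets pattern applies in BigQuery or any other warehouse destination; only the SQL dialect and connection details differ.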

We currently offer a few different delivery methods and are working to make delivery as rapid and seamless as possible:

  • You can receive your data in Google BigQuery in your own Google Cloud Platform (GCP) project.
  • Data can be egressed to alternate destinations such as AWS or your local environment.
  • Data can be shared through platform-specific mechanisms for Azure Fabric and Snowflake (in development).

Data Delivery to BigQuery

Financial Institution-Owned GCP Project

These are the high-level steps for having your data provisioned to your financial institution's own GCP project.

  • Create your own GCP project. Note: Creating your own GCP project does not require an enterprise-level agreement or contract with Google.
  • After setting up your GCP project, send your Google account name to Jack Henry so that Jack Henry can provision your data through a Google BigQuery shared listing.
  • You will then subscribe to the listing, which allows you to receive your data in your private BigQuery instance.
  • You can control who has access to what data at all times through your GCP project’s Identity and Access Management controls.
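As an illustration of that last step, BigQuery datasets carry a list of access entries in their metadata. The sketch below builds one such entry in plain Python using the REST API's representation (a role plus a member); the dataset name and email address are hypothetical, and in practice you would manage access through your GCP project's IAM console or the BigQuery API:

```python
# Sketch of adding a reader to a BigQuery dataset's access list, using
# the REST API's access-entry shape ("role" plus "userByEmail"). The
# dataset ID and email below are hypothetical examples.

def grant_read_access(dataset: dict, email: str) -> dict:
    """Return a copy of the dataset metadata with a READER entry added."""
    entry = {"role": "READER", "userByEmail": email}
    updated = dict(dataset)
    updated["access"] = dataset.get("access", []) + [entry]
    return updated

dataset = {"datasetReference": {"datasetId": "datahub_core"}, "access": []}
dataset = grant_read_access(dataset, "analyst@example-fi.com")

print(dataset["access"])
# [{'role': 'READER', 'userByEmail': 'analyst@example-fi.com'}]
```

Because these entries live in your own GCP project, Jack Henry never controls who inside your institution can read which data sets.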

Data Egress Options

If your financial institution wants to receive its data in another platform such as Azure, AWS, or Snowflake, Jack Henry Data Hub offers data egress options to help you get your data to your desired destination.

  • Seamless data egress from Jack Henry Data Hub. Leveraging powerful services such as GCP Dataflow, Cloud Data Fusion, and Pub/Sub, we can deliver batch and streaming data to other public cloud services like Azure SQL or Azure Data Lake.
  • Batch delivery. We support secure, scheduled batch delivery to external destinations through our Jack Henry File Transfer service, exporting data in industry-standard CSV, AVRO, or Parquet formats that are ready to load into platforms like Azure Fabric or Snowflake.
  • Native data egress through platform-specific sharing services (in development). Initial services will target Snowflake Data Sharing and Azure Fabric Data Factory.
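As a minimal sketch of what a CSV batch export looks like, the snippet below builds a small batch with Python's standard csv module. The column names and rows are hypothetical, and actual delivery runs through the Jack Henry File Transfer service rather than a local buffer:

```python
import csv
import io

# Hypothetical batch of records; in practice the rows would come from
# your Data Hub data sets, and the finished file would be delivered via
# the Jack Henry File Transfer service.
rows = [
    {"account_id": "A1", "posted_date": "2024-05-01", "amount": "125.00"},
    {"account_id": "A2", "posted_date": "2024-05-01", "amount": "-40.25"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["account_id", "posted_date", "amount"])
writer.writeheader()
writer.writerows(rows)

batch = buf.getvalue()
print(batch)
```

A header row plus one line per record in this shape loads directly into BigQuery, Snowflake, or Azure targets with their standard CSV import tools; AVRO and Parquet exports carry the same records in a columnar, schema-embedded form.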

Please see our JackHenry.Dev page for more technical information.

FAQ


Where can I find the data schema for each data set?
Data schema for Jack Henry Data Hub data sets can be found on JackHenry.Dev.
How does Jack Henry Data Hub differ from other data solutions at Jack Henry?
Jack Henry Data Hub is a cloud-native data accessibility solution designed for financial institutions that want to dive deeper into their Jack Henry product data. It offers near-real-time streaming on select data sets, fast query results, third-party data access options, and top-tier security standards.

Unlike other data products, Jack Henry Data Hub does not offer reporting or visualization capabilities. Its main purpose is to consolidate raw data from products across Jack Henry so that financial institution employees can decide which data sets make the most sense for their institution and consume only the data sets they need.


Related