Sharing data products from Snowflake to Databricks has become one of the most common and powerful data team workflows today. As organizations scale, teams inevitably find themselves needing to surface governed, production-grade data inside Databricks for analytics, AI, and machine learning. Whether it's refining ETL pipelines, building feature-rich models, or enabling real-time reporting, that seamless bridge between Snowflake’s warehouse and Databricks’ lakehouse is essential.
But let’s be honest: setting up that bridge can feel like assembling IKEA furniture without the manual. Between configuring JDBC or ODBC drivers, handling connector versions, juggling secret managers and IAM roles, and chasing dependencies across multiple clouds, those integrations can quickly spiral into a major DevOps headache.
What if you could bypass all that glue code and cloud plumbing? That’s where Amplify Data’s platform truly shines. Built specifically to alleviate the friction of modern data product distribution, Amplify empowers teams to publish curated datasets directly from Snowflake and let downstream systems (like Databricks) self-connect, without manual credential hand-offs or SDKs.
With Amplify, publishing curated data products from Snowflake and consuming them in Databricks happens without writing custom connectors or managing IAM roles and secrets flow, making it a powerful alternative to hand-rolled JDBC integrations.
If you’re interested in the nitty-gritty of how it works behind the scenes, keep reading: the detailed steps are below.
But if you just want to save time and focus on your data, check out Amplify Data and skip the complexity entirely.
The simplest, and often the most robust, way to share data is to export from Snowflake to cloud storage and read the files in Databricks:
✅ Works well for batch and large datasets.
✅ Decouples compute.
🚫 Not real-time.
First, unload the table from Snowflake:

```sql
-- Unload the table to external storage as Parquet
-- (assumes a storage integration or credentials are configured for the bucket)
COPY INTO 's3://your-bucket/path/'
FROM your_snowflake_table
FILE_FORMAT = (TYPE = PARQUET);
```
Then read the files in Databricks:

```python
# Read the exported Parquet files directly from cloud storage
df = spark.read.parquet("s3://your-bucket/path/")
df.display()
```
You can also convert the export to Delta Lake format if desired:

```python
# Rewrite the Parquet export as a Delta table
spark.read.parquet("s3://your-bucket/path/").write.format("delta").save("<delta-output-path>")
```
Databricks supports the Snowflake Connector for Spark, which lets Databricks read from and write to Snowflake directly over JDBC, with query pushdown.
✅ Good for interactive and real-time queries.
✅ No intermediate files needed.
🚫 Can incur Snowflake compute costs for each query.
Attach the Snowflake Spark connector to your cluster (for example, Maven coordinates net.snowflake:spark-snowflake_2.12:2.11.0-spark_3.1; the exact version depends on your Spark version), then connect from a notebook:
```python
# Connection options. In production, read credentials from a Databricks
# secret scope instead of hard-coding them in the notebook.
sfOptions = {
    "sfURL": "<account>.snowflakecomputing.com",
    "sfDatabase": "<database>",
    "sfSchema": "<schema>",
    "sfWarehouse": "<warehouse>",
    "sfRole": "<role>",
    "sfUser": "<user>",
    "sfPassword": "<password>"
}

# Read a Snowflake table into a Spark DataFrame
snowflake_df = (
    spark.read.format("snowflake")
    .options(**sfOptions)
    .option("dbtable", "your_table")
    .load()
)

snowflake_df.display()
```
You can also write data back to Snowflake the same way.
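For example, here is a minimal write-back sketch using the same connection options (the target table name is just a hypothetical placeholder):

```python
# Write the DataFrame back to a Snowflake table via the same connector
(
    snowflake_df.write.format("snowflake")
    .options(**sfOptions)
    .option("dbtable", "your_target_table")  # hypothetical target table
    .mode("append")
    .save()
)
```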
If you want to share a managed data product through the Snowflake Marketplace or directly with another Snowflake account, Snowflake's native data sharing handles the Snowflake-to-Snowflake side; a rough sketch is below.
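This sketch shows the provider side run through the snowflake-connector-python package (an assumption for illustration; you could equally run the same statements in a Snowflake worksheet). All object and account names are hypothetical placeholders:

```python
# Hypothetical sketch: create a share and grant a table to it from Python
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>",
    user="<user>",
    password="<password>",
    role="ACCOUNTADMIN",  # creating shares typically requires an admin role
)
cur = conn.cursor()
cur.execute("CREATE SHARE IF NOT EXISTS my_data_product_share")
cur.execute("GRANT USAGE ON DATABASE your_database TO SHARE my_data_product_share")
cur.execute("GRANT USAGE ON SCHEMA your_database.your_schema TO SHARE my_data_product_share")
cur.execute("GRANT SELECT ON TABLE your_database.your_schema.your_table TO SHARE my_data_product_share")
cur.execute("ALTER SHARE my_data_product_share ADD ACCOUNTS = consumer_org.consumer_account")
```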
However, native shares only reach other Snowflake accounts, so if you still want that data to end up in Databricks, you will need one of the export or connector approaches above (or a platform like Amplify to broker the hand-off).
Snowflake and Databricks both offer solid connectivity via Spark connectors, JDBC/ODBC, external tables, Delta Sharing, and more, making this one of the most common patterns in modern data architectures. The traditional manual integration, however, leaves your team owning the drivers, connector versions, credentials, IAM roles, and cross-cloud dependencies described above.
🎯 In short: It's powerful, flexible, and production‑grade, but requires significant DevOps, engineering coordination, and ongoing maintenance.
Amp up your workflow with Amplify Data (now part of Monda) - a platform built for sharing governed data products across systems without all the plumbing:
✅ Simplifies sharing through a governed, tracked data marketplace offered to internal and external consumers.
🚀 Frees up engineering time for higher-value work, so your team can focus on data innovation, not connector maintenance.
Skip the technical steps and share data products from Snowflake to Databricks easily using Amplify (powered by Monda). Find out more or get a demo.