How to Share Data Products from Snowflake to Amazon S3

Imagine this: you’ve built powerful data products inside Snowflake - cleaned, transformed, and ready to consume. Now your clients or data consumers want those results delivered - fast - to their own infrastructure. And that infrastructure is often Amazon S3, the go-to destination for shared analytics.

Exporting data from Snowflake to S3 is one of the most common workflows you’ll encounter in modern data pipelines. You use the reliable COPY INTO command, set up external stages, configure IAM permissions, create storage integration objects, and schedule tasks or ETL jobs. That flow is rock-solid. But let’s be honest: from policy attachments and IAM roles to cryptic SQL and automation scheduling, it’s a pretty tech-heavy setup. One minor permission mismatch, and your data pipeline grinds to a halt. Not exactly the breezy “zero friction” flow that non-engineer teams crave.

⚡️ The Simpler Way: Amplify Data

If you’d rather skip the cloud‑infra gymnastics, there’s great news: Amplify Data is built for exactly this scenario.

Amplify’s no‑code platform lets you securely expose curated Snowflake Data Products and have them delivered to destinations like S3, without writing COPY statements or wrestling with IAM policies. As a data provider, you can:

  • Configure filters, row‑column visibility, and file formats through a clean point‑and‑click UI
  • Choose S3 as a target without touching a line of infra code or credentials
  • Track usage, delivery success, and customer engagement in real time, right from your dashboard

In short, Amplify shifts the heavy lifting of data operations to a no‑code platform, freeing your team to focus on building data products, not managing infra.

🧭 Technical Guide to Sharing Data Products from Snowflake to Amazon S3

If you want to share data products from Snowflake to Amazon S3 without using Amplify, there isn’t a direct “one-click” export of a Snowflake Data Product / Data Share to S3 because Snowflake Data Sharing is meant to stay within the Snowflake ecosystem.

However, you can absolutely extract data from Snowflake and deliver it into Amazon S3 in a secure, automated way.

Here is how you can do it, step by step:

📄 Options to share Snowflake data to S3

1. Unload data to S3 using COPY INTO

Snowflake has a built-in COPY INTO command that exports table data or query results to an external stage, such as one backed by an S3 bucket.

Steps:

✅ Create an S3 bucket and ensure you have an AWS IAM user or role with access to it (the storage integration sketch below assumes an IAM role).

✅ Create a Snowflake External Stage that points to the S3 bucket.

✅ Use COPY INTO to export data.
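
The stage in the example below references a storage integration named my_s3_integration. If you don’t have one yet, here is a minimal sketch, assuming the same my-bucket-name bucket and an IAM role you’ve created for Snowflake (the role name and ARN are placeholders):

```sql
-- create a storage integration that wraps the IAM role Snowflake will assume
-- (replace the placeholder ARN with your own role)
CREATE STORAGE INTEGRATION my_s3_integration
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = 'S3'
  ENABLED = TRUE
  STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/snowflake-s3-export-role'
  STORAGE_ALLOWED_LOCATIONS = ('s3://my-bucket-name/snowflake-export/');

-- shows the AWS IAM user and external ID to add to the role's trust policy
DESC INTEGRATION my_s3_integration;
```

The DESC INTEGRATION output is what you take back to AWS: add the listed IAM user and external ID to the role’s trust policy so Snowflake can assume it.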

Example:

```sql
-- create an external stage pointing to your S3 bucket
CREATE STAGE my_s3_stage
  URL = 's3://my-bucket-name/snowflake-export/'
  STORAGE_INTEGRATION = my_s3_integration;

-- unload data from a table to S3
COPY INTO @my_s3_stage/orders/
FROM orders
FILE_FORMAT = (TYPE = CSV COMPRESSION = GZIP)
OVERWRITE = TRUE;
```
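
To double-check what Snowflake actually wrote without leaving your worksheet, you can list the stage (the orders/ prefix matches the COPY INTO path above):

```sql
-- list the files written under the orders/ prefix
LIST @my_s3_stage/orders/;
```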

You can also export query results:

```sql
COPY INTO @my_s3_stage/orders_export/
FROM (
    SELECT order_id, customer_id, total_amount
    FROM orders
    WHERE order_date >= '2024-01-01'
)
FILE_FORMAT = (TYPE = PARQUET);
```

The exported files will then appear in your S3 bucket under the prefixes you specified.
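
Note that COPY INTO unloads in parallel by default and can split the output into several files. If your consumer expects a single file, the SINGLE and MAX_FILE_SIZE options give you control; here’s a sketch based on the query above:

```sql
-- write the result as one named file instead of several parallel chunks
-- (5368709120 bytes = 5 GB, the upper limit for a single unloaded file on S3)
COPY INTO @my_s3_stage/orders_export/orders.parquet
FROM (
    SELECT order_id, customer_id, total_amount
    FROM orders
    WHERE order_date >= '2024-01-01'
)
FILE_FORMAT = (TYPE = PARQUET)
SINGLE = TRUE
MAX_FILE_SIZE = 5368709120;
```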

2. Use Snowflake + ETL Tools

If you don’t want to manage the COPY INTO jobs yourself, you can use an ETL or orchestration tool to move Snowflake data to S3:

  • Fivetran (Snowflake → S3)
  • Matillion
  • Airflow (with Snowflake and S3 operators)
  • dbt + custom S3 hooks

These tools can schedule, transform, and export data to S3 more robustly.

3. Programmatically using Python/SDK

You can also write a Python script that:

  • Queries Snowflake (using the Snowflake Python Connector)
  • Saves results as CSV/Parquet
  • Uploads the file to S3 (using boto3)

Here’s a mini example:

```python
import snowflake.connector
import pandas as pd
import boto3

# query Snowflake (fill in your account, user, and authentication details)
conn = snowflake.connector.connect(...)
df = pd.read_sql("SELECT * FROM orders", conn)
conn.close()

# save the result locally as Parquet (requires pyarrow or fastparquet)
df.to_parquet("orders.parquet")

# upload the file to S3 (uses whatever AWS credentials boto3 is configured with)
s3 = boto3.client('s3')
s3.upload_file("orders.parquet", "my-bucket-name", "orders/orders.parquet")
```

Security & Permissions

  • In Snowflake, set up a storage integration and IAM role for secure S3 access, and grant usage on them only to the roles that need it (see the sketch after this list).
  • Make sure S3 buckets have proper permissions, encryption, and policies in place.
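
On the Snowflake side, that grant step might look like this minimal sketch, where export_role is an illustrative role name:

```sql
-- limit which Snowflake roles can unload through the integration and stage
-- (export_role is an illustrative name)
GRANT USAGE ON INTEGRATION my_s3_integration TO ROLE export_role;
GRANT USAGE ON STAGE my_s3_stage TO ROLE export_role;
```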

Automating the flow

Once you decide which method to use, you can schedule it with:

  • Snowflake Tasks & Streams (for COPY INTO - see the sketch after this list)
  • Airflow
  • Lambda + EventBridge
  • Cron jobs (for scripts)
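
For example, a Snowflake Task can re-run the COPY INTO unload on a schedule without any external orchestrator. A minimal sketch, assuming a warehouse named my_wh (illustrative) and the stage from option 1:

```sql
-- run the unload every night at 02:00 UTC
CREATE TASK nightly_orders_unload
  WAREHOUSE = my_wh
  SCHEDULE = 'USING CRON 0 2 * * * UTC'
AS
  COPY INTO @my_s3_stage/orders/
  FROM orders
  FILE_FORMAT = (TYPE = CSV COMPRESSION = GZIP)
  OVERWRITE = TRUE;

-- tasks are created suspended, so resume it to start the schedule
ALTER TASK nightly_orders_unload RESUME;
```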

TL;DR

If you love full control and don’t mind the extra complexity, stick with Snowflake’s native COPY INTO + storage integration path. But if you want a fast, secure, and maintenance‑free way to deliver curated Snowflake data to S3 (or other destinations), check out Amplify Data for a much friendlier experience.
