Categories:

DML Commands - Data Loading

PUT¶

Uploads (i.e. stages) data files from a local directory/folder on a client machine to one of the following Snowflake stages:

  • Named internal stage.

  • Internal stage for a specified table.

  • Internal stage for the current user.

Once files are staged, the data in the files can be loaded into a table using the COPY INTO <table> command.
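For example, a typical sequence stages a local file and then loads it. The stage, table, and file names below are illustrative, and the COPY options assume a simple CSV file; because AUTO_COMPRESS defaults to TRUE, the staged file gets a .gz suffix:

    -- Stage the local file, then load it into a table (illustrative names).
    PUT file:///tmp/data/mydata.csv @my_int_stage;

    COPY INTO mytable
      FROM @my_int_stage/mydata.csv.gz
      FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);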

Note

  • PUT does not support uploading files to external stages. To upload files to external stages, use the utilities provided by the cloud service.

  • The following Snowflake clients do not support PUT:

    • .NET Driver

  • The ODBC driver supports PUT with Snowflake accounts hosted on the following platforms:

    • Amazon Web Services (using ODBC Driver Version 2.17.5 and higher).

    • Google Cloud Platform (using ODBC Driver Version 2.21.5 and higher).

    • Microsoft Azure (using ODBC Driver Version 2.20.2 and higher).

See also:

GET , LIST , REMOVE

Syntax¶

PUT file://<path_to_file>/<filename> internalStage
    [ PARALLEL = <integer> ]
    [ AUTO_COMPRESS = TRUE | FALSE ]
    [ SOURCE_COMPRESSION = AUTO_DETECT | GZIP | BZ2 | BROTLI | ZSTD | DEFLATE | RAW_DEFLATE | NONE ]
    [ OVERWRITE = TRUE | FALSE ]

Where:

internalStage ::=
    @[<namespace>.]<int_stage_name>[/<path>]
  | @[<namespace>.]%<table_name>[/<path>]
  | @~[/<path>]

Required Parameters¶

file:// path_to_file / filename

Specifies the URI for the data file(s) on the client machine, where:

  • path_to_file is the local directory path to the file(s) to upload. If the files are located in the root directory (or sub-directory) on the client machine:

    Linux/Mac

    You must include the initial forward slash in the path (e.g. file:///tmp/load ).

    Windows

    You must include the drive and backslash in the path (e.g. file://C:\temp\load ).

  • filename is the name of the file(s) to upload. Wildcard characters ( * , ? ) are supported to enable uploading multiple files in a directory.

The URI can be enclosed in single quotes, which allows special characters, including spaces, in directory and file names; however, the drive and path separator is a forward slash ( / ) for all supported operating systems (e.g. 'file://C:/temp/load data' for a path in Windows containing a directory named load data ).
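For instance, a quoted URI lets you reference a Windows directory whose name contains a space (the path and stage name here are hypothetical):

    PUT 'file://C:/temp/load data/mydata.csv' @my_int_stage;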

internalStage

Specifies the location in Snowflake where to upload the files:

@[ namespace .] int_stage_name [/ path ]

Files are uploaded to the specified named internal stage.

@[ namespace .]% table_name [/ path ]

Files are uploaded to the stage for the specified table.

@~[/ path ]

Files are uploaded to the stage for the current user.

Where:

  • namespace is the database and/or schema in which the named internal stage or table resides. It is optional if a database and schema are currently in use within the session; otherwise, it is required.

  • path is an optional case-sensitive path for files in the cloud storage location (i.e. files have names that begin with a common string) that limits access to a set of files. Paths are alternatively called prefixes or folders by different cloud storage services.

Note

If the stage name or path includes spaces or special characters, it must be enclosed in single quotes (e.g. '@"my stage"' for a stage named "my stage" ).
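For example (the database, schema, stage, and path names below are hypothetical):

    -- Fully qualified named stage with a path prefix.
    PUT file:///tmp/data/mydata.csv @mydb.myschema.my_int_stage/sales/2021;

    -- Stage name containing a space, enclosed in single quotes.
    PUT file:///tmp/data/mydata.csv '@"my stage"';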

Optional Parameters¶

PARALLEL = integer

Specifies the number of threads to use for uploading files. The upload process separates batches of data files by size:

  • Small files (< 64 MB compressed or uncompressed) are staged in parallel as individual files.

  • Larger files are automatically split into chunks, staged concurrently, and reassembled in the target stage. A single thread can upload multiple chunks.

Increasing the number of threads can improve performance when uploading large files.

Supported values: Any integer value from 1 (no parallelism) to 99 (use 99 threads for uploading files).

Default: 4

Note

A 16 MB (rather than 64 MB) limit applies to older versions of Snowflake drivers, including:

  • JDBC Driver versions prior to 3.12.1.

  • ODBC Driver versions prior to 2.20.5.

  • Python Connector versions prior to 2.2.0.
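For example, the following statement uploads a large file using ten threads (the file and stage names are illustrative):

    PUT file:///tmp/data/large_data_file.csv @my_int_stage PARALLEL = 10;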

AUTO_COMPRESS = TRUE | FALSE

Specifies whether Snowflake uses gzip to compress files during upload:

  • TRUE : Files are compressed (if they are not already compressed).

  • FALSE : Files are not compressed (i.e. files are uploaded as-is).

This option does not support other compression types. To use a different compression type, compress the file separately before executing the PUT command. Then, identify the compression type using the SOURCE_COMPRESSION option.

Ensure your local folder has sufficient space for Snowflake to compress the data files before staging them. If necessary, set the TEMP, TMPDIR or TMP environment variable in your operating system to point to a local folder that contains additional free space.

Default: TRUE

SOURCE_COMPRESSION = AUTO_DETECT | GZIP | BZ2 | BROTLI | ZSTD | DEFLATE | RAW_DEFLATE | NONE

Specifies the method of compression used on already-compressed files that are being staged:

Supported values (with notes):

  • AUTO_DETECT : Compression algorithm detected automatically, except for Brotli-compressed files, which cannot currently be detected automatically. If loading Brotli-compressed files, explicitly use BROTLI instead of AUTO_DETECT .

  • GZIP

  • BZ2

  • BROTLI : Must be used if loading Brotli-compressed files.

  • ZSTD : Zstandard v0.8 (and higher) supported.

  • DEFLATE : Deflate-compressed files (with zlib header, RFC1950).

  • RAW_DEFLATE : Raw Deflate-compressed files (without header, RFC1951).

  • NONE : Data files to load have not been compressed.

Default: AUTO_DETECT

Note

Snowflake uses this option to detect how the data files were compressed so that they can be uncompressed and the data extracted for loading; it does not use this option to compress the files.

Uploading files that were compressed with other utilities (e.g. lzip, lzma, lzop, and xz) is not currently supported.
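For example, to stage a file that was compressed with Brotli before the upload (the file and stage names are illustrative; AUTO_COMPRESS is disabled because the file is already compressed):

    PUT file:///tmp/data/mydata.csv.br @my_int_stage SOURCE_COMPRESSION = BROTLI AUTO_COMPRESS = FALSE;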

OVERWRITE = TRUE | FALSE

Specifies whether Snowflake overwrites an existing file with the same name during upload:

  • TRUE : An existing file with the same name is overwritten.

  • FALSE : An existing file with the same name is not overwritten.

    Note that a LIST operation on the stage is performed in the background, which can affect the performance of the PUT operation.

    If attempts to PUT a file fail because a file with the same name exists in the target stage, the following options are available:

    • Load the data from the existing file into one or more tables, and remove the file from the stage. Then PUT a file with new or updated data to the stage.

    • Rename the local file, and then attempt the PUT operation again.

    • Set OVERWRITE = TRUE in the PUT statement. Do this only if it is actually safe to overwrite a file with data that might not yet have been loaded into Snowflake.

Note that if your Snowflake account is hosted on Google Cloud Platform, PUT statements do not recognize when the OVERWRITE parameter is set to FALSE. A PUT operation always overwrites any existing files in the target stage with the local files you are uploading.

The following clients support the OVERWRITE option for Snowflake accounts hosted on Amazon Web Services or Microsoft Azure:

  • SnowSQL

  • Snowflake ODBC Driver

  • Snowflake JDBC Driver

  • Snowflake Connector for Python

Supported values: TRUE, FALSE.

Default: FALSE .
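For example, to replace a previously staged file that has the same name (the file and stage names are illustrative):

    PUT file:///tmp/data/mydata.csv @my_int_stage OVERWRITE = TRUE;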

Usage Notes¶

  • The command cannot be executed from the Worksheets tab page in the Snowflake web interface; instead, use the SnowSQL client to upload data files, or check the documentation for a specific Snowflake client to verify support for this command.

  • File-globbing patterns (i.e. wildcards) are supported.

  • The command does not create or rename files.

  • Uploaded files are automatically encrypted with 128-bit or 256-bit keys. The CLIENT_ENCRYPTION_KEY_SIZE account parameter specifies the key size used to encrypt the files (see the example after this list).

  • The command ignores any duplicate files you attempt to upload to the same stage. A duplicate file is an unmodified file with the same name as an already-staged file.

    To overwrite an already-staged file, you must modify the file you are uploading so that its contents are different from the staged file, which results in a new checksum for the newly-staged file.
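As referenced in the encryption note above, the key size is set at the account level. A minimal sketch, assuming a role with sufficient privileges (typically ACCOUNTADMIN):

    -- Use 256-bit keys to encrypt files staged with PUT.
    ALTER ACCOUNT SET CLIENT_ENCRYPTION_KEY_SIZE = 256;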

Tip

For security reasons, the command times out after a set period of time. This can occur when loading large, uncompressed data files. To avoid timeout issues, we recommend compressing large data files using one of the supported compression types before uploading the files. Then, specify the compression type for the files using the SOURCE_COMPRESSION option.

You can also consider increasing the value of the PARALLEL option, which can help with performance when uploading large data files.

Furthermore, to take advantage of parallel operations when loading data into tables (using the COPY INTO <table> command), we recommend using data files ranging in size from roughly 100 to 250 MB compressed. If your data files are larger, consider using a third-party tool to split them into smaller files before compressing and uploading them.

Examples¶

Upload a file named mydata.csv in the /tmp/data directory (in a Linux or macOS environment) to an internal stage named my_int_stage :

PUT file:///tmp/data/mydata.csv @my_int_stage;

Upload a file named orders_001.csv in the /tmp/data directory (in a Linux or macOS environment) to the stage for the orderstiny_ext table, with automatic data compression disabled:

PUT file:///tmp/data/orders_001.csv @%orderstiny_ext AUTO_COMPRESS=FALSE;

Same example as above, but using wildcard characters in the filename to upload multiple files:

PUT file:///tmp/data/orders_*01.csv @%orderstiny_ext AUTO_COMPRESS=FALSE;

Upload a file named mydata.csv in the C:\temp\data directory (in a Windows environment) to the stage for the current user, with automatic data compression enabled:

PUT file://C:\temp\data\mydata.csv @~ AUTO_COMPRESS=TRUE;