Snowflake
Connect Snowflake to your preprocessing pipeline, and use the Unstructured Ingest CLI or the Unstructured Ingest Python library to batch process all your documents and store structured outputs locally on your filesystem.
The requirements are as follows.
- A Snowflake account and its identifier.
- The Snowflake username and its password in the account.
- The Snowflake hostname and its port number in the account.
- The name of the Snowflake database in the account.
- The name of the schema in the database.
- The name of the table in the schema.
Snowflake requires the target table to have a defined schema before Unstructured can write to the table. Unstructured's documentation provides a recommended table schema for this purpose.
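The original SQL for the recommended schema is not reproduced here; the following sketch shows the general shape only. The table and column names and the column types are illustrative assumptions, not the exact schema from the Unstructured documentation:

```sql
-- Illustrative sketch only: the table name, column names, and types
-- are assumptions, not the exact schema recommended by Unstructured.
CREATE TABLE elements (
    id VARCHAR(36) NOT NULL PRIMARY KEY, -- unique row ID
    record_id VARCHAR,                   -- ID of the source document
    element_id VARCHAR,                  -- ID of the partitioned element
    text VARCHAR,                        -- the element's text content
    embeddings ARRAY,                    -- optional embedding vector
    type VARCHAR,                        -- element type, e.g. NarrativeText
    metadata VARIANT                     -- element metadata as semi-structured JSON
);
```

VARIANT and ARRAY are Snowflake's semi-structured column types, which suit the JSON-shaped metadata and embedding output that Unstructured produces.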
Install the Snowflake connector dependencies.
You might also need to install additional dependencies, depending on your needs. Learn more.
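In a typical setup, the dependencies are installed with pip using a package extra. The extra name snowflake is an assumption here; verify it against the package documentation:

```shell
# Install the Unstructured Ingest library with the Snowflake extra.
# The extra name "snowflake" is an assumption - check the package docs.
pip install "unstructured-ingest[snowflake]"
```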
These environment variables:
- SNOWFLAKE_ACCOUNT - The ID of the Snowflake account, represented by --account (CLI) or account (Python).
- SNOWFLAKE_USER - The name of the Snowflake user, represented by --user (CLI) or user (Python).
- SNOWFLAKE_PASSWORD - The user's password, represented by --password (CLI) or password (Python).
- SNOWFLAKE_HOST - The hostname for the Snowflake account, represented by --host (CLI) or host (Python).
- SNOWFLAKE_PORT - The host's port number, represented by --port (CLI) or port (Python).
- SNOWFLAKE_DATABASE - The name of the Snowflake database, represented by --database (CLI) or database (Python).
These environment variables:
- UNSTRUCTURED_API_KEY - Your Unstructured API key value.
- UNSTRUCTURED_API_URL - Your Unstructured API URL.
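In a POSIX shell, these environment variables could be set as follows. All of the values below are placeholders; substitute your own account details:

```shell
# Placeholder values - substitute your own Snowflake and Unstructured details.
export SNOWFLAKE_ACCOUNT="myorg-myaccount"
export SNOWFLAKE_USER="my_user"
export SNOWFLAKE_PASSWORD="my_password"
export SNOWFLAKE_HOST="myorg-myaccount.snowflakecomputing.com"
export SNOWFLAKE_PORT="443"
export SNOWFLAKE_DATABASE="my_database"
export UNSTRUCTURED_API_KEY="my_api_key"
export UNSTRUCTURED_API_URL="my_api_url"
```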
Now call the Unstructured Ingest CLI or the Unstructured Ingest Python library. The destination connector can be any of the ones supported. This example uses the local destination connector:
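A CLI invocation might look like the sketch below. The Snowflake flags mirror the environment variables listed above; the local subcommand, the --output-dir flag, and the --api-key and --partition-endpoint flags are assumptions to verify against the CLI's built-in help:

```shell
# Sketch of an Ingest CLI run: Snowflake source, local destination.
# The "local" subcommand and the --output-dir, --api-key, and
# --partition-endpoint flags are assumptions; check the CLI help.
unstructured-ingest \
  snowflake \
  --account "$SNOWFLAKE_ACCOUNT" \
  --user "$SNOWFLAKE_USER" \
  --password "$SNOWFLAKE_PASSWORD" \
  --host "$SNOWFLAKE_HOST" \
  --port "$SNOWFLAKE_PORT" \
  --database "$SNOWFLAKE_DATABASE" \
  --api-key "$UNSTRUCTURED_API_KEY" \
  --partition-endpoint "$UNSTRUCTURED_API_URL" \
  local \
  --output-dir local-output
```

Passing credentials via environment variables, as shown, keeps them out of shell history and scripts.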