The Amazon S3 Storage Connector enables Salesforce Data Cloud to read comma-separated values (CSV) and Parquet files from your Amazon S3 buckets. The data is retrieved in a batch job that you can schedule to run as often as hourly or as infrequently as monthly …
Here are highlights from the article "How to Use the Amazon S3 Storage Connector in Data Cloud":
1. Introduction to Amazon S3 Storage Connector:
– Enables Salesforce Data Cloud to read CSV and Parquet files from Amazon S3 buckets.
– Data is retrieved in a batch job that can be scheduled anywhere from hourly to monthly.
2. Overview of Amazon S3:
– Object storage that allows data retrieval from anywhere on the web.
– Stores data as objects in buckets, each identified by a globally unique name.
– Common use cases include backup and storage, media hosting, software delivery, data lakes, static websites, and file storage.
3. Configuring the Amazon S3 Storage Connector:
– Connector is available by default in Salesforce Data Cloud orgs.
– Configuration includes specifying the bucket name, access key, secret key, file type, directory, file name, and file source (a quick credential pre-check is sketched below).
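Before entering these values in Data Cloud, it can help to confirm that the key pair can actually read the bucket. Here is a minimal pre-check sketch in Python with boto3; the bucket name, directory, and credentials are placeholders for your own values, not anything defined in the article:

```python
import boto3

BUCKET = "my-data-cloud-bucket"  # hypothetical bucket name
DIRECTORY = "customer-data/"     # hypothetical folder (the connector's "directory")

# Use the same access key and secret key you plan to enter in Data Cloud.
s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...",          # placeholder access key
    aws_secret_access_key="your-secret",  # placeholder secret key
)

# List the directory the connector will read from; an AccessDenied or
# NoSuchBucket error here means the connector configuration would fail too.
response = s3.list_objects_v2(Bucket=BUCKET, Prefix=DIRECTORY)
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```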
4. Creating a Bucket in Amazon S3:
– Access Amazon S3 via the global search functionality in AWS.
– Create a new bucket by providing a name and choosing a region.
– Configure security and other settings as needed.
– Create a folder within the bucket to serve as the directory in Data Cloud (see the scripted sketch after this list).
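The article performs these steps in the AWS console; an equivalent scripted sketch with boto3, using hypothetical names, might look like this:

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Create the bucket. Outside us-east-1 you must also pass a region, e.g.
# CreateBucketConfiguration={"LocationConstraint": "eu-west-1"}.
s3.create_bucket(Bucket="my-data-cloud-bucket")

# S3 has no real folders; a zero-byte object whose key ends in "/" is the
# convention the console uses, and that prefix becomes the "directory"
# value you enter in Data Cloud.
s3.put_object(Bucket="my-data-cloud-bucket", Key="customer-data/")
```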
5. Uploading Files to Data Cloud:
– Upload files to the created folder in the Amazon S3 bucket.
– These files can then be ingested into Data Cloud for further processing (a minimal upload sketch follows).
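A minimal upload sketch with boto3, assuming the hypothetical bucket and folder from the earlier steps and a hypothetical local file:

```python
import boto3

s3 = boto3.client("s3")

# Put the CSV under the folder the connector points at, so the next
# scheduled batch run can pick it up.
s3.upload_file(
    Filename="contacts.csv",           # local file (hypothetical)
    Bucket="my-data-cloud-bucket",     # bucket created earlier
    Key="customer-data/contacts.csv",  # directory + file name seen by Data Cloud
)
```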
You can read it here: https://sfdc.blog/orsPG
Source: developer.salesforce.com