The CSV Import pulls CSV files from an SFTP server you control and turns each row into a record on a Data Feed. Reach for it when your source system is a nightly export, a partner-supplied feed, or any process that drops a file on a server but doesn’t expose an API.
- Auth: SFTP credentials
- Update methods: Trigger · Polling
- Resources: Content (one record per row)
Install and setup
There’s no plugin to install. You provide an SFTP endpoint and Frontic does the rest.
Decide where files will live
Pick a directory on an SFTP server Frontic can reach. Best practice: a dedicated user with read+delete on a single base path, not your whole filesystem.
Add the integration in Frontic
In Frontic admin, Integrations → Add → CSV Import. Enter the SFTP host, port, username, password, and base path.
Test the connection
Frontic opens a connection, lists the base path, and confirms it reached the server. Fix credentials or firewall rules until this test passes.
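If the test keeps failing, it can help to reproduce the same steps from another machine to rule out credentials or firewall rules independently of Frontic. A minimal sketch, assuming Python with the paramiko library; host, credentials, and base path are placeholders for your own values:

```python
# Minimal SFTP connectivity check mirroring what the connection test does:
# connect, list the base path, print the result.
import paramiko

HOST = "sftp.example.com"   # placeholder values; use your own server details
PORT = 22
USER = "frontic-import"
PASSWORD = "..."
BASE_PATH = "/imports"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, port=PORT, username=USER, password=PASSWORD)
sftp = client.open_sftp()
print(sftp.listdir(BASE_PATH))  # should list the files Frontic would see
sftp.close()
client.close()
```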
Connection settings
- Protocol: SFTP is currently the only supported transport; the setting is reserved for future protocols.
- Host: SFTP server hostname or IP.
- Port: SFTP port.
- Username: SFTP user. Use a dedicated account scoped to the import directory.
- Password: SFTP password. Stored encrypted; redacted in logs.
- Base path: The root directory the connector lists from. Feed-level configurations are evaluated relative to this path.
Channels
Unlike connectors with source-side segmentation (Shopware sales channels, Storyblok spaces), CSV files aren’t natively segmented. The CSV Import’s channels exist purely to declare which project translations each CSV row should be resolved into. Each row gets the same payload across every translation on the channel — see “Same row across all locales” under Good to know for the implication. Per channel, the connector stores:
- A label for the channel in Frontic.
- The project locale keys the CSV’s rows resolve into. Every row gets the same payload across all of these.
- The translation used as fallback for missing values.
Data Feeds
The CSV Import exposes a single resource type — Content (one record per row). Each Data Feed binds a filename pattern under your base path; you can have many feeds per integration if you have many file types (products, customers, categories, …). The standard Settings → Updates → Schema setup wizard applies — see Data Feeds in the overview. For CSV Import specifically:
- Updates step — Polling and manual trigger are supported. The Ingest API isn’t used.
- No feed Refresh. Each polling pass reprocesses every matching file. Use Cleanup files so processed files aren’t picked up next time, or version filenames so the regex matches only new ones.
Feed configuration
- Start directory: Subdirectory under the base path where this feed’s files live. Empty means the base path itself.
- Delimiter: Field delimiter. Allowed values are comma (,), semicolon (;), and tab (\t); anything else is rejected.
- Filename pattern: Regex matched against filenames in the start directory. For example, ^products_\d{8}\.csv$ matches products_20260426.csv. (See the sketch after this list for how the pattern, lockfile, sorting, and cleanup options interact.)
- Use lockfile: When on, before reading any file the connector checks for a lock file in the same folder. If a lock file exists, the run aborts and no files are processed. Useful for partner uploads that drop a .lock while writing.
- Lockfile name (sub-config of Use lockfile): The exact filename to look for as the lock file. Defaults to empty (no check).
- Sort files and folders: When on, files are processed in filename order. Useful when filenames carry timestamps and order matters.
- Sort direction (sub-config of Sort files and folders): Either ASC or DESC.
- Use custom identifier: When on, the record’s Source ID is built from one or more named columns instead of an id column. Multi-column identifiers are joined with -.
- Custom identifiers (sub-config of Use custom identifier): Column names to use as the Source ID.
- Cleanup files: Delete each file after a successful run.
- Remove empty directories (sub-config of Cleanup files): After cleaning files, also remove directories that ended up empty.
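To make the interplay of these options concrete, here is a rough sketch of how a single polling pass could select files under the rules above. It is an illustration only, not Frontic’s implementation; the directory, pattern, lockfile name, and sort setting are assumed values.

```python
# Illustration of one polling pass under the feed settings described above.
# Not Frontic's code; directory, pattern, and lockfile name are hypothetical.
import re
from pathlib import Path

START_DIR = Path("/imports/products")            # base path + start directory
PATTERN = re.compile(r"^products_\d{8}\.csv$")   # filename pattern
LOCKFILE = "upload.lock"                         # lockfile name ("" = no check)
SORT_DESCENDING = False                          # sort direction: ASC

def select_files():
    # Use lockfile: if the lock file exists, abort the run entirely.
    if LOCKFILE and (START_DIR / LOCKFILE).exists():
        return []
    # Filename pattern: only matching files are considered.
    files = [p for p in START_DIR.iterdir() if p.is_file() and PATTERN.match(p.name)]
    # Sort files and folders: process in filename order.
    return sorted(files, key=lambda p: p.name, reverse=SORT_DESCENDING)

for path in select_files():
    print("would process", path.name)
    # Cleanup files would delete the file after a successful run, e.g. path.unlink().
```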
What the data looks like
One row, one record
The header row of the CSV becomes the field keys; each subsequent row becomes a record. Whitespace in headers is trimmed. The Source ID for a record is taken from the id column unless you’ve set Custom identifiers. A file whose rows carry id values 1001 and 1002, for example, yields two records with Source IDs 1001 and 1002. As CSV files come in, the feed’s schema fills in automatically from the column headers — no need to declare fields up front in the wizard’s Schema step.
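A small illustration of the mapping, assuming a hypothetical products file; only the id column has special meaning to the connector:

```python
# Each CSV row becomes one record; the id column supplies the Source ID
# unless custom identifiers are configured. Column names besides id are made up.
import csv, io

sample = """id,name,price
1001,Basic T-Shirt,19.90
1002,Hoodie,49.90
"""

for row in csv.DictReader(io.StringIO(sample)):
    print(row["id"], dict(row))
# 1001 {'id': '1001', 'name': 'Basic T-Shirt', 'price': '19.90'}
# 1002 {'id': '1002', 'name': 'Hoodie', 'price': '49.90'}
```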
Good to know
- CSV only. No JSON, no XML, no Excel. If you need other formats, convert upstream or use the Custom Integration and the Ingest API.
- Same row across all locales. A CSV row produces the same payload for every locale on the channel. The CSV Import doesn’t read column-name suffixes like name_de/name_en natively. To split by locale: ingest the fields raw and resolve translations in your Value Composer, or split your data into per-locale files and feeds and map them via the Data Sync (see the sketch after this list).
- No real-time. The connector polls. Webhook-style “the moment a file arrives” delivery isn’t supported — set the polling cadence for your acceptable lag.
- Every run re-reads every matching file. There’s no file-level “I’ve seen this before” check — if a file stays in the directory, it’s parsed again on the next run. Rows that haven’t materially changed still short-circuit at the storage layer (change detection), so downstream impact is contained — but each row counts as an API Update at intake. Use Cleanup files to delete processed files, or version filenames so the regex picks up only the new ones.
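If you take the per-locale-files route, the split can happen upstream before the upload. A sketch assuming locale-suffixed columns such as name_de/name_en in a products.csv export; the filenames and suffix convention are assumptions, not something the CSV Import prescribes:

```python
# Split a combined export with locale-suffixed columns (name_de, name_en, ...)
# into one file per locale, so each file can drive its own feed and locale.
import csv

LOCALES = ["de", "en"]

with open("products.csv", newline="") as src:
    rows = list(csv.DictReader(src))

for locale in LOCALES:
    out_rows = []
    for row in rows:
        out = {}
        for key, value in row.items():
            if key.endswith(f"_{locale}"):
                out[key[: -len(locale) - 1]] = value   # name_de -> name
            elif not any(key.endswith(f"_{loc}") for loc in LOCALES):
                out[key] = value                       # shared columns (id, price, ...)
        out_rows.append(out)
    with open(f"products_{locale}.csv", "w", newline="") as dst:
        writer = csv.DictWriter(dst, fieldnames=list(out_rows[0].keys()))
        writer.writeheader()
        writer.writerows(out_rows)
```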
Related
Custom Integration
For source systems with their own API — push records directly via the Ingest API.
Ingest API
The HTTP push path when SFTP polling isn’t the right fit.