There is currently no way to import data directly from a CSV file into Sanity. Still, achieving what you want is pretty straightforward. Briefly, this is what you'll want to do:

- Parse the CSV file
- Structure the incoming data to match your schema
- Write the new documents to a newline-delimited JSON (NDJSON) file
- Import that file into Sanity
Say your CSV file, named `studios.csv`, looks something like this:

```csv
NAME,WEBPAGE,MOVIES
Paramount,paramountstudios.com,Ghost in the Shell;Arrival
DreamWorks,dreamworksstudios.com,Ghost in the Shell;Minority Report;Transformers
```
The code below uses `csv-parser`, but it should still serve as an example if you want to use some other package for gobbling up CSV.
```js
const csv = require('csv-parser')
const fs = require('fs')
const sanityClient = require('@sanity/client')

const client = sanityClient({
  projectId: 'my-project-id',
  dataset: 'my-dataset',
  useCdn: false
})

// Append a document to the NDJSON file, one JSON object per line
function appendToFile(document) {
  const docAsNewLineJson = `${JSON.stringify(document)}\n`
  fs.appendFileSync('ready-for-import.ndjson', docAsNewLineJson, {flag: 'a+'})
}

// Fetch existing movie documents whose titles match the given list
function moviesByTitles(titles) {
  return client.fetch('*[_type == "movie" && title in $titles]', {titles: titles})
}

fs.createReadStream('studios.csv')
  .pipe(csv())
  .on('data', data => {
    // Assuming movie titles are semicolon-separated
    const titles = data.MOVIES.split(';')
    // Fetch movies with these titles
    moviesByTitles(titles).then(movies => {
      // Build a Sanity document which matches your studio type
      const document = {
        _type: 'studio',
        name: data.NAME,
        webPage: data.WEBPAGE,
        movies: movies.map(movie => {
          return {
            _ref: movie._id,
            _type: 'reference'
          }
        })
      }
      // Append the document to a file for later import
      appendToFile(document)
    })
  })
```
You'll end up with the file `ready-for-import.ndjson` containing Sanity documents ready for import, so now you can simply:

```sh
sanity dataset import ready-for-import.ndjson <my-dataset>
```
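For reference, each line of `ready-for-import.ndjson` holds one complete JSON document. With the sample CSV above, a line would look something like this (the `_ref` values here are placeholders; yours will be the `_id`s of your actual movie documents):

```json
{"_type":"studio","name":"Paramount","webPage":"paramountstudios.com","movies":[{"_ref":"movie_ghost-in-the-shell","_type":"reference"},{"_ref":"movie_arrival","_type":"reference"}]}
```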
It might prove useful to include an `_id` field with a unique, non-random value on each studio, e.g. ``studio_${data.NAME.toLowerCase().replace(' ', '-')}``. This will allow you to import your documents multiple times (using the `--replace` flag) without getting duplicates.