Prisma does not read CSV files directly. The practical workflow is to parse the CSV in your app, validate and normalize each row, and then write the cleaned records into your database with Prisma Client.
If you came here searching for “Prisma read from CSV,” this is the implementation most teams actually need. We use a CSV parser for input, map the columns to a Prisma model, batch the inserts, and log any rows that fail validation so the import stays predictable.
TL;DR
- Prisma is an ORM for Node.js and TypeScript, not a CSV parser.
- Use a CSV parser such as `csv-parse` or `fast-csv` to read the file first.
- Validate and normalize rows before calling Prisma.
- Insert data in batches with `createMany()` for better performance.
- Log rejected rows instead of silently dropping them.
When This Pattern Makes Sense
This approach is a good fit when you already use Prisma in a backend application and need to import data from spreadsheets, exports, vendor files, or internal CSV dumps. It is especially useful for one-time migrations, recurring ETL tasks, seed data, and admin-side bulk uploads.
If your file is extremely large, your transformation rules are heavy, or the import belongs in a dedicated data pipeline, Prisma should usually be only one step in a broader ingestion workflow.
What We Are Building
In this example, we will import a CSV file of users into a relational database. The flow looks like this:
- Read the CSV file from disk
- Trim and normalize each field
- Reject invalid rows
- Batch the valid rows
- Insert them with Prisma `createMany()`
Prerequisites
- A Node.js or TypeScript project with Prisma already configured
- A database connection working through Prisma
- A CSV file with stable headers
- Installed dependencies: `@prisma/client`, `prisma`, and `csv-parse`
If you are new to Prisma itself, the official Prisma ORM introduction and the CRUD guide for Prisma Client are the best starting references.
Example Prisma Schema
Let us assume the CSV will populate a simple User table.
```prisma
model User {
  id        Int      @id @default(autoincrement())
  email     String   @unique
  fullName  String
  role      String?
  city      String?
  createdAt DateTime @default(now())
}
```
Example CSV Format
| email | full_name | role | city |
|---|---|---|---|
| ana@example.com | Ana Gomez | analyst | Bengaluru |
| li@example.com | Li Wei | engineer | Singapore |
| sam@example.com | Sam Patel | | Mumbai |
Good imports start with predictable headers. Before you touch Prisma, make sure the CSV column names and value formats are consistent.
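One way to enforce that is a quick header assertion before any real work happens. This is a minimal sketch, not part of the import proper; it assumes a comma-delimited file and the four column names used in this article:

```ts
import { readFile } from "node:fs/promises";

// Fail fast if the CSV's first line is missing an expected column.
const expectedColumns = ["email", "full_name", "role", "city"];

const firstLine = (await readFile("./data/users.csv", "utf8")).split("\n", 1)[0];
const actualColumns = firstLine.split(",").map((name) => name.trim());

const missing = expectedColumns.filter((name) => !actualColumns.includes(name));
if (missing.length > 0) {
  throw new Error(`CSV is missing expected columns: ${missing.join(", ")}`);
}
```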
Step 1: Parse the CSV File
We read the file, parse the header row, and keep the raw records in memory. For medium-sized files this is fine. For very large files, use a streaming approach instead of loading everything at once; a sketch of that variant follows the basic version below.
```ts
import { readFile } from "node:fs/promises";
import { parse } from "csv-parse/sync";

const csvText = await readFile("./data/users.csv", "utf8");

const rows = parse(csvText, {
  columns: true, // use the header row as object keys
  skip_empty_lines: true,
  trim: true,
});

console.log(`Loaded ${rows.length} rows from CSV`);
```
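For files too large to hold in memory, `csv-parse` also ships a streaming API. A minimal sketch of the same read, processing one record at a time:

```ts
import { createReadStream } from "node:fs";
import { parse } from "csv-parse";

// Stream records one at a time instead of loading the whole file.
const parser = createReadStream("./data/users.csv").pipe(
  parse({ columns: true, skip_empty_lines: true, trim: true })
);

let count = 0;
for await (const record of parser) {
  // validate, normalize, and buffer records into batches here
  count += 1;
}

console.log(`Streamed ${count} rows from CSV`);
```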
Step 2: Validate and Normalize Each Row
This is the part that makes the import trustworthy. Instead of inserting raw strings straight into the database, clean each value and reject rows that do not meet the minimum rules.
```ts
type CsvRow = {
  email?: string;
  full_name?: string;
  role?: string;
  city?: string;
};

type UserInsert = {
  email: string;
  fullName: string;
  role?: string;
  city?: string;
};

function normalizeRow(row: CsvRow): UserInsert {
  const email = row.email?.trim().toLowerCase();
  const fullName = row.full_name?.trim();
  // Empty optional strings become undefined so Prisma stores NULL, not "".
  const role = row.role?.trim() || undefined;
  const city = row.city?.trim() || undefined;

  if (!email || !email.includes("@")) {
    throw new Error("Missing or invalid email");
  }
  if (!fullName) {
    throw new Error("Missing full_name");
  }

  return { email, fullName, role, city };
}
```
Notice what this function does for us:
- lowercases emails so duplicates are easier to detect
- trims accidental whitespace
- turns empty optional fields into `undefined`
- stops invalid rows from silently entering the database
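To make the behavior concrete, here is what the function returns for a deliberately messy (hypothetical) input row:

```ts
const cleaned = normalizeRow({
  email: "  ANA@Example.com ",
  full_name: " Ana Gomez ",
  role: "",
  city: "Bengaluru",
});
// => { email: "ana@example.com", fullName: "Ana Gomez", role: undefined, city: "Bengaluru" }
```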
Step 3: Collect Valid Rows and Log Bad Ones
```ts
const validRows: UserInsert[] = [];
const rejectedRows: Array<{ rowNumber: number; reason: string; row: CsvRow }> = [];

rows.forEach((row: CsvRow, index: number) => {
  try {
    validRows.push(normalizeRow(row));
  } catch (error) {
    rejectedRows.push({
      // +2 accounts for the header row and the zero-based index.
      rowNumber: index + 2,
      reason: error instanceof Error ? error.message : "Unknown error",
      row,
    });
  }
});

console.log(`Valid rows: ${validRows.length}`);
console.log(`Rejected rows: ${rejectedRows.length}`);
```
Adding the row number makes debugging much easier, because you can go back to the original CSV and fix the exact broken line.
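For example, a row with a blank email on line 4 of the file would end up in the log looking roughly like this (hypothetical values):

```ts
// Illustrative shape of one logged entry:
const exampleRejected = {
  rowNumber: 4, // line 4 of users.csv, counting the header as line 1
  reason: "Missing or invalid email",
  row: { email: "", full_name: "Sam Patel", role: "", city: "Mumbai" },
};
```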
Step 4: Insert in Batches with Prisma
Prisma's `createMany()` is the right tool here because it inserts multiple records in one call. According to the official Prisma CRUD docs, `createMany()` is designed for bulk inserts and can be much more efficient than row-by-row writes.
```ts
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();
const batchSize = 500;

async function importUsers(data: UserInsert[]) {
  for (let index = 0; index < data.length; index += batchSize) {
    const batch = data.slice(index, index + batchSize);
    await prisma.user.createMany({
      data: batch,
      skipDuplicates: true, // skip rows that would collide on the unique email
    });
    console.log(`Inserted batch ${index / batchSize + 1}`);
  }
}

try {
  await importUsers(validRows);
} finally {
  await prisma.$disconnect();
}
```
Important: `skipDuplicates` is useful when your target table has a unique field such as `email`. It helps prevent the import from failing just because the CSV contains repeated records. Check the Prisma documentation for provider-specific support before relying on it in production.
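If your provider does not support `skipDuplicates`, or you want existing rows updated instead of skipped, one alternative is a per-row `upsert()`. It is slower than `createMany()`, so treat this sketch as a fallback rather than the default path:

```ts
// Fallback sketch: upsert row by row using the unique email column.
// Works on any provider Prisma supports, at the cost of one query per row.
async function upsertUsers(data: UserInsert[]) {
  for (const user of data) {
    await prisma.user.upsert({
      where: { email: user.email },
      update: { fullName: user.fullName, role: user.role, city: user.city },
      create: user,
    });
  }
}
```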
Step 5: Save Rejected Rows for Review
Do not treat rejected rows as disposable. Save them to a JSON file or admin log so the import can be corrected and rerun later.
```ts
import { writeFile } from "node:fs/promises";

await writeFile(
  "./data/rejected-users.json",
  JSON.stringify(rejectedRows, null, 2),
  "utf8"
);
```
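If the people fixing the data prefer spreadsheets, you can also write the rejects back out as CSV. A minimal sketch, reusing the `writeFile` import above and assuming the four columns used in this article plus a `reason` column:

```ts
// Re-export rejected rows as CSV so they can be fixed and re-imported.
const header = "email,full_name,role,city,reason";
const escape = (value: string) => `"${value.replace(/"/g, '""')}"`;

const lines = rejectedRows.map(({ row, reason }) =>
  [row.email ?? "", row.full_name ?? "", row.role ?? "", row.city ?? "", reason]
    .map(escape)
    .join(",")
);

await writeFile("./data/rejected-users.csv", [header, ...lines].join("\n"), "utf8");
```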
A Practical Use Case
Imagine you are migrating sign-up records from a legacy CRM into a new SaaS application. The export arrives as CSV, but the destination application already uses Prisma and a PostgreSQL database. In that setup, Prisma should not be asked to “understand CSV.” Instead, Prisma should do what it is best at: writing validated application data into the database once your parser and normalization layer have done their job.
Common Mistakes to Avoid
- Inserting raw CSV rows directly: this usually creates inconsistent casing, null handling issues, and dirty text values.
- Using placeholder field mappings: tutorials that stop at `field1` and `field2` are not enough for real projects.
- Skipping validation: one malformed email or date can break a batch unexpectedly.
- Importing huge files in a single call: batching is safer and easier to monitor.
- Ignoring rejected rows: failed records should be reviewed, not forgotten.
When Not to Use Prisma for CSV Imports
If you are importing hundreds of millions of rows, doing warehouse-scale transforms, or loading data into analytics infrastructure, a dedicated ETL stack is usually a better fit. Prisma is strongest when the import belongs close to application logic, business rules, and relational models already defined in your codebase.
Related Reading
If your CSV data needs cleanup before import, you may also find these articles useful: How to Replace NaN Values with 0 in pandas DataFrame and Best 5 SQL Books for Data Analysts in 2026.
Conclusion
The right way to handle a Prisma CSV workflow is straightforward: parse first, validate second, and insert third. Once you stop thinking of Prisma as a CSV reader and start using it as the database layer it is meant to be, the implementation becomes much cleaner and more reliable.
For application-side imports, this pattern is usually enough: stable headers, explicit field mapping, batch inserts, and a clear rejected-row log. That combination solves most real CSV import problems without unnecessary complexity.