Import Overview


The Import wizard pulls CSV or Excel files into your project. It walks you through five steps: upload, choose a mode, pick a target, map columns, run the import.

This page covers the parts every import has in common — modes, file prep, column mapping, error handling, and duplicates. For mode-specific walkthroughs, see the dedicated page for each mode.

Open the import wizard

From the Dashboard, click Import Data.

Choose the right mode

There are four modes. The right one depends on what’s in your file and what you want to end up with.

  • Single Domain — one CSV or one Excel sheet with one drillhole’s data for one table (e.g., assays for DDH-001)
  • Bulk Domain — one CSV or one Excel sheet with many drillholes’ data for one table (rows are tagged with a Hole ID column)
  • Collar — a collar/header file; creates a new drillhole from each row
  • Workbook — a multi-sheet .xlsx; each sheet maps to a different data table or holds collar data

If you’re not sure, pick the mode that matches your file’s shape — the wizard auto-suggests one based on what it detects.

Step 1 — Upload the file

Drag a file onto the upload zone or click to browse.

Supported formats:

  • CSV (.csv) — comma-separated values, UTF-8 expected
  • Excel (.xlsx) — single or multi-sheet workbooks

There’s no hard file-size limit enforced by the app, but very large files (tens of thousands of rows) may take a while to parse in the browser. Splitting into smaller files often imports faster than one giant file.
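To give a rough sense of what parsing involves, here’s a minimal sketch of turning a small UTF-8 CSV string into headers and rows. It’s illustrative only — it naively splits on commas, so it won’t handle quoted fields or embedded commas; a real importer would use a proper CSV parser.

```typescript
// Minimal sketch: parse a small UTF-8 CSV string into headers + rows.
// Naive comma split — for illustration only, not production parsing.
function parseCsv(text: string): { headers: string[]; rows: string[][] } {
  const lines = text.trim().split(/\r?\n/);
  const headers = lines[0].split(",").map((h) => h.trim());
  const rows = lines
    .slice(1)
    .map((line) => line.split(",").map((cell) => cell.trim()));
  return { headers, rows };
}
```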

Step 2 — Choose the import mode

Pick from the four modes above. The wizard’s suggested mode is highlighted.

Step 3 — Pick a target

What appears here depends on the mode:

  • Single Domain — pick the target drillhole and the target data table. You can create a new table on the fly if the right one doesn’t exist yet.
  • Bulk Domain — pick the target data table. Drillholes are matched per-row from the Hole ID column.
  • Collar — pick the field mapping (there’s no target hole, since the import creates new ones).
  • Workbook — assign each sheet to a target table, or set the sheet to be skipped.

Auto-detection

To reduce manual mapping, the wizard scans your file’s headers and tries to detect:

  • Hole ID column — common aliases: HoleID, hole_id, DHID, DrillholeID, Hole_ID
  • Depth columns — common aliases: From, To, Depth_From, Depth_To, from_m, to_m, depth_from, depth_to

Detected mappings are pre-filled in step 4. You can change them manually if the detection is wrong.
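The detection above amounts to normalizing each header and checking it against an alias list. Here’s a hypothetical sketch (the function names and alias sets are illustrative, not the app’s actual code), using the aliases documented above:

```typescript
// Alias sets in normalized form (lowercase, separators stripped).
const HOLE_ID_ALIASES = new Set(["holeid", "dhid", "drillholeid"]);
const DEPTH_ALIASES = new Set(["from", "to", "depthfrom", "depthto", "fromm", "tom"]);

// Lowercase and strip spaces/underscores/hyphens so "Hole_ID",
// "hole-id", and "HoleID" all compare equal.
function normalize(header: string): string {
  return header.toLowerCase().replace(/[\s_-]/g, "");
}

// Index of the Hole ID column, or -1 if none matches.
function detectHoleIdColumn(headers: string[]): number {
  return headers.findIndex((h) => HOLE_ID_ALIASES.has(normalize(h)));
}

// Indices of all depth columns found in the header row.
function detectDepthColumns(headers: string[]): number[] {
  return headers
    .map((h, i) => (DEPTH_ALIASES.has(normalize(h)) ? i : -1))
    .filter((i) => i !== -1);
}
```

Case-insensitive, separator-insensitive matching is why both `Hole_ID` and `hole_id` land on the same detection.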

Step 4 — Map columns

A side-by-side mapping interface shows your source columns on the left and your destination columns on the right. Drag or click to link them.

  • Mapped — value flows into the destination column
  • Unmapped — column is ignored on import
  • Create new column — if the destination table doesn’t have a column you need, create it from this screen and map to the new one

The auto-detected matches from step 3 are pre-filled. You can override any of them.
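Conceptually, a mapping is just a source-header-to-destination-column lookup applied per row. A minimal sketch (names are illustrative, not the app’s API) where unmapped source columns are simply dropped, matching the behavior listed above:

```typescript
// Source header -> destination column name.
type ColumnMapping = Record<string, string>;

// Apply a mapping to one parsed row; unmapped columns are ignored.
function applyMapping(
  headers: string[],
  row: string[],
  mapping: ColumnMapping,
): Record<string, string> {
  const out: Record<string, string> = {};
  headers.forEach((header, i) => {
    const dest = mapping[header];
    if (dest !== undefined) out[dest] = row[i];
  });
  return out;
}
```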

Step 5 — Import

Click Import to run the bulk insert. A progress bar tracks completion. After the import finishes, a summary shows:

  • Imported — rows successfully inserted
  • Skipped — rows that failed validation or were malformed
  • Errors — specific issues per row (e.g., “row 47: missing required column depth_from”)

You can scroll through the error list and copy the offending rows for review.
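The summary boils down to validating each row and bucketing it. Here’s a hypothetical sketch — the required-field check stands in for the app’s actual validation rules, and all names are illustrative:

```typescript
interface ImportSummary {
  imported: number;       // rows successfully inserted
  skipped: number;        // rows that failed validation
  errors: string[];       // per-row messages
}

// Validate rows against a list of required columns and tally results.
function summarize(
  rows: Record<string, string>[],
  required: string[],
): ImportSummary {
  const summary: ImportSummary = { imported: 0, skipped: 0, errors: [] };
  rows.forEach((row, i) => {
    const missing = required.filter((c) => !row[c] || row[c].trim() === "");
    if (missing.length > 0) {
      summary.skipped++;
      summary.errors.push(`row ${i + 1}: missing required column ${missing[0]}`);
    } else {
      summary.imported++;
    }
  });
  return summary;
}
```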

How duplicates are handled

Imports are bulk inserts, not upserts. The app does not detect or update existing rows.

  • If you import a file with a row that already exists in your table, a duplicate row is created.
  • This is by design — Blue Butterfly cannot reliably know which combination of columns identifies a row in your data.

To avoid duplicates when re-importing:

  • Delete first. Empty the table for the affected drillhole (or the whole table) before re-importing.
  • Filter the source. Trim the source file to only the new rows.
  • Use mapping. Skip the columns that would create duplicates by leaving them unmapped.
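The “filter the source” approach can be done outside the app before uploading. A minimal sketch, assuming you know which columns identify a row in your data (here hole + from + to, which is an assumption — only you know your data’s key):

```typescript
// Keep only source rows whose key is not already present in the table.
// keyCols is the column combination that identifies a row in YOUR data.
function filterNewRows<T extends Record<string, string>>(
  source: T[],
  existing: T[],
  keyCols: string[],
): T[] {
  const key = (r: T) => keyCols.map((c) => r[c]).join("|");
  const seen = new Set(existing.map(key));
  return source.filter((r) => !seen.has(key(r)));
}
```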

Imports run on local data

Imported rows are written to local IndexedDB first, just like data you type in. They sync to the cloud on the next sync cycle (every 30 seconds by default). See Offline & Sync. For very large imports, the sync engine pages rows in batches of 5,000.
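The batching described above can be sketched as follows (a hypothetical helper, not the app’s actual sync code), splitting rows into pages of at most 5,000:

```typescript
// Split rows into pages of batchSize (5,000 per the sync docs) so large
// imports are synced in chunks rather than one giant payload.
function paginate<T>(rows: T[], batchSize = 5000): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < rows.length; i += batchSize) {
    batches.push(rows.slice(i, i + batchSize));
  }
  return batches;
}
```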

Tips

  • Sort by depth before importing interval data. Rows display in creation order, and creation order matches import order. A pre-sorted file gives you a tidy spreadsheet.
  • Use the mapping step’s “create new column” feature. Faster than going to Templates first, then coming back.
  • Test on a small subset. Try the import on the first 20 rows before committing the full file. Catches mapping or unit mistakes early.
  • Check your validation rules. Strict rules can reject many rows. If you’re seeing high error counts, look for unit mismatches (g/t vs ppm), timezone issues on dates, or capitalization mismatches in dropdown values.
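The sort-by-depth tip is easy to apply to parsed interval rows before export. A sketch, assuming a `depth_from` column holding numeric strings (the column name is illustrative):

```typescript
// Order interval rows by numeric depth_from so that import order —
// and therefore creation/display order — matches downhole order.
function sortByDepth(
  rows: { depth_from: string }[],
): { depth_from: string }[] {
  return [...rows].sort(
    (a, b) => parseFloat(a.depth_from) - parseFloat(b.depth_from),
  );
}
```

Note the numeric comparison: a plain string sort would put "10" before "2".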