Storage estimates

Flag: ⭐️ Modified: February 14, 2026 7:58 PM Created: September 17, 2025 3:27 PM Master Type: Notes Hide: No Starred: Yes Status: Unassigned

Yes, that helps a lot. Using your file sizes, here’s what the storage footprint looks like per item and at collection scale. I split it between what typically lives in the DAMS (TIFFs + JPGs) and what usually sits in your backup/originals storage (RAWs + PSDs), since you keep those outside the DAMS.

Per-item storage (using your sizes)

Per capture/view inputs

Scenarios

| Item type | Views captured | Positives generated | DAMS (TIFF+JPG) per item | Backups (RAW+PSD) per item | Total per item |
|---|---|---|---|---|---|
| Print, front only | 1 | 0 | 135–157 MB | 750–950 MB | 0.86–1.08 GB |
| Print, front + back | 2 | 0 | 270–314 MB | 1.50–1.90 GB | 1.73–2.16 GB |
| Negative, front+back with positives | 2 | 2 (front/back positives) | 540–628 MB | 1.50–1.90 GB | 2.0–2.47 GB |
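The per-item figures follow from per-view size ranges: the single-view row implies roughly 135–157 MB of TIFF+JPG and 750–950 MB of RAW+PSD per view, and each positive generated from a negative adds another TIFF+JPG set but no extra RAW/PSD. A minimal sketch of that arithmetic (the per-view ranges are inferred from the table, not measured):

```python
# Per-view size ranges in MB (min, max), inferred from the table above.
DAMS_PER_VIEW = (135, 157)     # TIFF + JPG delivered to the DAMS
BACKUP_PER_VIEW = (750, 950)   # RAW + PSD kept in backup/originals storage

def per_item_mb(views, positives=0):
    """Return ((dams_min, dams_max), (backup_min, backup_max)) in MB.

    Positives (inverted renditions of negatives) each add a TIFF+JPG set
    to the DAMS but no extra RAW/PSD, matching the table rows above.
    """
    dams = tuple(s * (views + positives) for s in DAMS_PER_VIEW)
    backup = tuple(s * views for s in BACKUP_PER_VIEW)
    return dams, backup

print(per_item_mb(1))      # print, front only          → ((135, 157), (750, 950))
print(per_item_mb(2))      # print, front + back        → ((270, 314), (1500, 1900))
print(per_item_mb(2, 2))   # negative with 2 positives  → ((540, 628), (1500, 1900))
```

Swapping in different per-view sizes (e.g. a larger PSD assumption) only means changing the two constants at the top.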

Notes:

What that means at collection scale

I ran two mix scenarios (you can swap in your own percentages if you prefer):

Totals below are ranges (min–max), in terabytes.

75,000 items

100,000 items

200,000 items
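The collection-scale totals are just the per-item ranges weighted by an item-type mix and multiplied by the item count. A sketch of that scaling, with an illustrative mix (the percentages here are placeholders, not the exact scenario mixes used above), using 1 TB = 10^6 MB:

```python
# Per-item totals in MB (min, max): DAMS + backups summed from the table
# (e.g. 135+750 .. 157+950 for a single-view print).
PER_ITEM_MB = {
    "print_front_only": (885, 1107),
    "print_front_back": (1770, 2214),
    "negative_with_positives": (2040, 2528),
}

def collection_tb(counts):
    """Scale per-item MB ranges to a collection total in TB (1 TB = 1e6 MB)."""
    lo = sum(PER_ITEM_MB[k][0] * n for k, n in counts.items())
    hi = sum(PER_ITEM_MB[k][1] * n for k, n in counts.items())
    return lo / 1e6, hi / 1e6

# Example mix (an assumption for illustration): 70% single-view prints,
# 20% two-view prints, 10% negatives with positives.
for total in (75_000, 100_000, 200_000):
    mix = {
        "print_front_only": int(total * 0.70),
        "print_front_back": int(total * 0.20),
        "negative_with_positives": int(total * 0.10),
    }
    lo, hi = collection_tb(mix)
    print(f"{total:>7,} items: {lo:.1f}-{hi:.1f} TB")
```

Replacing the mix dictionary with your real counts reproduces the scenario tables directly.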

Read on your 200 TB Google Workspace pool

Translation:

What to adjust (if you want more headroom)

  1. Keep JPGs out of cloud backups if the DAMS can generate them on demand. That only saves ~20–30 MB per negative item, so it’s small compared to RAW/PSD, but every bit helps.
  2. Revisit PSD retention after QA sign-off (controversial). If you trust the TIFFs as preservation masters and can reconstruct a workflow, retiring PSDs yields big savings (300–500 MB per view).
  3. Move RAW/PSD backups to cheaper object storage (Wasabi/Backblaze B2 with immutability) while keeping TIFFs in the DAMS. Your DAMS storage remains ~40–90 TB over time; your cheaper bucket carries the heavy RAW/PSD volume.
  4. Dedicate Google Workspace to “living” collections work and collaboration, not long-term cold backup. Keep it responsive; push the bulk to object storage that scales cleanly.


Quick recommendations based on your setup

If you want, I can plug your exact item counts and mix into a small calculator and hand you a one-pager that says: “At X items you’ll hit 200 TB on Google; here’s how much sits in DAMS vs backups; here’s what to move where.”


Here are three storage projections 📊 based on your file sizes:

  1. DAMS only (TIFF + JPG) → Even at 200,000 items, you’re well under the 200 TB cap.
  2. Backups only (RAW + PSD) → This grows much faster, getting close to the 200 TB cap at around 150,000–200,000 items.
  3. Combined total (DAMS + Backups) → At the higher estimates, you’ll exceed 200 TB before hitting 200,000 items.

👉 This means Google Workspace’s pooled 200 TB could work for DAMS-only storage, but not if you also expect to keep all RAWs + PSDs there long-term. Would you like me to model what happens if you offload backups to Wasabi/Glacier and keep only DAMS derivatives (TIFF/JPG) in Google?
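A quick way to sanity-check where backups alone cross the 200 TB pool is to divide the cap by the weighted per-item backup footprint. A sketch using the max of the RAW+PSD ranges above (the mixes passed in are assumptions, chosen only to bracket the "150,000–200,000 items" estimate):

```python
# At what item count do RAW+PSD backups alone fill the 200 TB pool?
# Per-item backup maxima in MB, from the per-item table; 1 TB = 1e6 MB.
CAP_TB = 200
BACKUP_MAX_MB = {"print_front_only": 950, "print_front_back": 1900}

def crossover_items(mix):
    """Item count at which the mix-weighted max backup footprint hits CAP_TB."""
    avg_mb = sum(BACKUP_MAX_MB[k] * share for k, share in mix.items())
    return int(CAP_TB * 1e6 // avg_mb)

print(crossover_items({"print_front_only": 1.0}))   # all single-view → 210526
print(crossover_items({"print_front_only": 0.5,
                       "print_front_back": 0.5}))   # 50/50 mix      → 140350
```

So the more two-view items in the mix, the earlier the backup pool fills, which is why the crossover lands well before 200,000 items at the higher estimates.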

So I should consider backup storage separately from the DAMS


I built the models you asked for and displayed the results as a downloadable table. Summary and interpretation below.

What I modelled

Table (key columns shown)

You can download the full table from the Python output if you want. Columns include:

Quick takeaways (from the table)

Next steps I can do right away

Pick one and I’ll produce it:

  1. Export the full table to CSV or Excel for your records and to share with your boss.
  2. Regenerate the table with your exact PSD size (300 vs 500) or different RAW counts (if sometimes fewer than 3).
  3. Produce charts (TB vs items, and annual cost vs items) for one or more mixes so you can include visuals in a presentation.
  4. Draft the lifecycle policy text to match the hybrid option you prefer (e.g., 12-month hot window, then cold archive with manifest and SLA).
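Step 1 above (exporting the table for your records) needs nothing beyond the standard library. A minimal sketch with the per-item scenario rows hard-coded and a placeholder filename:

```python
import csv

# Sketch of next-step 1: write the per-item scenario table to CSV so it can
# be shared. Rows mirror the per-item table; the filename is a placeholder.
HEADER = ["Item type", "Views captured", "Positives generated",
          "DAMS (TIFF+JPG) per item", "Backups (RAW+PSD) per item",
          "Total per item"]
ROWS = [
    ("Print, front only", 1, 0, "135-157 MB", "750-950 MB", "0.86-1.08 GB"),
    ("Print, front + back", 2, 0, "270-314 MB", "1.50-1.90 GB", "1.73-2.16 GB"),
    ("Negative, front+back with positives", 2, 2,
     "540-628 MB", "1.50-1.90 GB", "2.0-2.47 GB"),
]

with open("storage_estimates.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(HEADER)
    writer.writerows(ROWS)
```

The same `csv.writer` pattern extends to the full collection-scale table; an Excel export would just swap in a library such as openpyxl.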

Which of those would you like next?