Container fields are FileMaker's mechanism for storing binary data: images, PDFs, documents, audio, video, signatures, and any other file type a solution might need to manage. On the surface, they're one of FileMaker's simpler features: drag a file in and it's stored; click the field and the file opens. What could go wrong?
Quite a lot, as it turns out. Container fields sit at the intersection of FileMaker's database engine, the server's file system, client network connections, and increasingly complex business requirements around file access, backup, and performance. Decisions made early in a solution's life compound over time into some of the most operationally painful problems in FileMaker administration: a 200GB database file that should be 2GB, gigabytes of orphaned external files with no database references, container data that's inaccessible after a server migration, mobile clients that can't retrieve files over cellular, and the particular nightmare of migrating container storage strategy on a live hosted solution without data loss.
I've seen all of these play out in real solutions. This post covers exactly how container field strategy goes wrong, why those mistakes are so expensive to fix, and how to design container storage correctly from the start.
How Container Fields Work: The Storage Model
Before getting into what goes wrong, you need a clear mental model of how FileMaker stores container data. There are three storage modes, and the distinction between them is the foundation of every container-related architectural decision.
Embedded Storage. The file's binary data is stored directly inside the FileMaker database file (.fmp12). The file bytes live in the database alongside your text fields, number fields, and every other record in the table. Embedded storage requires no additional configuration and works immediately. It's appropriate for small files (thumbnails, small icons, signature images) or for solutions where the total container data volume will remain small, a few hundred megabytes at most.
Here's the critical consequence: every byte of container data in embedded mode inflates the .fmp12 file size. A solution with 10,000 records each carrying an embedded 1MB PDF has a database file that's 10GB larger than it would be if those PDFs were stored externally. FileMaker Server hosts this entire file in memory for caching purposes. Backup times, restore times, and network transfer times all scale with file size. A 50GB database file that should be 500MB because everything is embedded is a performance and operations liability.
External Storage (Managed). The file's binary data is stored outside the .fmp12 file, in a directory on the server's file system managed by FileMaker Server. FileMaker maintains the directory structure and the association between database records and their external files. The .fmp12 file stores a reference to the external file, not the file's binary content.
External managed storage is configured in Manage > Containers in the data file. You define a base directory, the root path under which FileMaker will organize all externally stored container data. FileMaker creates a subdirectory structure beneath that, organized by table and field, and manages file placement automatically. This is the correct storage mode for the vast majority of production solutions. The database file stays small, file system backups can handle container data separately from the database backup, and FileMaker's internal reference tracking ensures that container references remain valid as long as the base directory structure isn't externally modified.
External Storage (Open). Like managed external storage, the data lives outside the .fmp12 file. Unlike managed storage, the files are stored at paths that are visible and accessible to the file system. You control where files go. Open external storage is useful for integration scenarios where external systems need to access the same files that FileMaker stores, like a web server serving the same files that FileMaker manages. But it introduces file system management complexity and risks that managed storage avoids, and it's rarely the right choice for general document management.
Insert vs. Reference: A Critical Distinction
When a file enters a container field, it can be inserted in two ways.
Insert File (copy) means FileMaker copies the file data into the container. The original file is irrelevant after the insert. The container field holds its own copy. This is the standard behavior.
Insert File (reference only) means FileMaker stores only the file path, not the file data. The container field shows the file's content by reading from the original path at display time. If the original file moves or is deleted, the container shows an empty or broken reference.
The reference-only approach is a trap in almost every scenario. It creates a hidden dependency on the file system path being stable and accessible from every client, which is rarely guaranteed in practice. References work when everyone is on the same local network with access to the same file paths, and break the moment a file is accessed from a different machine, a mobile device, or after the original file is moved or deleted.
The rule: Always copy file data into the container. Never use reference-only storage for production solutions unless you have a specific, well-understood reason and have designed around the access limitations.
The Thumbnail Cache
FileMaker generates and caches thumbnails for images and PDFs stored in container fields. These thumbnails are used for display in list views and other contexts where showing the full file would be prohibitively slow. The thumbnail cache lives separately from the container data itself and is rebuilt as needed.
Understanding the thumbnail cache matters because it affects performance diagnostics. A layout that appears to load container data slowly may actually be rebuilding the thumbnail cache, not loading the full files. These are different problems with different solutions.
The Embedded Storage Bloat Crisis
This is the most common container field failure mode, and the one with the most severe operational consequences. A solution is built with embedded container storage (the default) because it works immediately with no setup. Files go in. Everything works. The solution ships.
Months or years later, the database file is enormous. On FileMaker Server, the file takes minutes to open. Backups take hours. Progressive backup can't keep up with the rate of change because every container insert forces a large database write. Restoring from backup to a test environment takes so long that nobody ever does it. The solution has become operationally fragile not because of any logic error but because 50,000 embedded PDFs have turned a database file into a multi-gigabyte monolith.
Why it happens. The mistake is almost never deliberate. Developers don't choose embedded storage because they've evaluated the alternatives and decided embedded is best. They choose it because they didn't configure anything else, and embedded is what you get by default. The volume problem is also predictable in retrospect but invisible during development. In a development environment with 50 test records and a few small test images, a database file of a few megabytes doesn't feel like a problem. The math that produces a 100GB production file is only obvious when you multiply test-time data volumes by production usage patterns, which developers often don't do.
The migration nightmare. Switching an existing solution from embedded to external storage is not a simple configuration change. You can't simply check a box and have FileMaker move the existing embedded data to the file system. The migration requires:
- Enabling external storage for the container field
- Writing a script that iterates through every record, reads the existing embedded container data, and re-inserts it into the field using the new storage mode
- Running that script on tens of thousands (or hundreds of thousands) of records in a hosted environment without disrupting users or exceeding server resources
- Verifying that every record's container data was migrated correctly and that the external files are present
- Confirming that the database file has actually shrunk (it doesn't immediately, because FileMaker's space reclamation requires a Save a Compact Copy operation, which requires taking the file offline)
That last point is worth emphasizing. After migrating embedded data to external storage, the .fmp12 file doesn't automatically shrink. The space that held the embedded data is marked as free within the file's internal structure but not released to the file system until you perform a compact operation. For a 100GB database with 90GB of embedded container data, the compact operation (which requires copying the entire file to a new location while the file is offline) can take an hour or more.
This migration is not impossible, but it is disruptive, time-consuming, and risky. Doing it right on a live production solution with no data loss requires careful planning, a maintenance window, thorough testing in staging, and robust verification scripts. It's a significant project, not an afternoon task.
Prevention. Configure external managed storage before any container data enters the solution. In Manage > Containers, set the base directory and enable external storage for every container field the solution uses. Do this during initial development, before testing with real data. The five minutes this takes at setup prevents a multi-day migration project later.
Orphaned File Accumulation
External managed storage solves the database file bloat problem but introduces a different one: orphaned files. An orphaned file is an external container file that exists on the server's file system but has no corresponding database record referencing it. Orphaned files accumulate over time and can consume significant storage.
How orphans are created. There are several common causes:
- Record deletion without container cleanup. When a record containing an externally stored container is deleted, FileMaker should remove the associated external file. In most cases it does. But under certain conditions (script-based deletions that bypass FileMaker's normal cleanup, interrupted delete operations, database corruption followed by recovery), the external file is not cleaned up. The record is gone; the file remains.
- Interrupted container inserts. A file is partially inserted into a container field and the network connection drops, the script is aborted, or the server crashes mid-write. FileMaker may have created the external file before the failure, but the database record was never committed with the container reference.
- Development and testing debris. During development, test files are inserted and records are deleted repeatedly. If the solution was using managed external storage during this phase, test-phase orphans accumulate in the base directory.
- Import operations. Bulk import scripts that insert files into containers and then partially fail leave files created for records that weren't committed.
- Separation model deployment. When a UI file is replaced, any container data written during the old UI file's session through external storage is correctly stored in the data file's base directory. But if the base directory path changes between data file deployments or server migrations, references in old records may point to files that are now at a different path.
Why orphans are hard to detect. FileMaker provides no native tool for identifying orphaned external container files. The base directory is a standard file system directory, and FileMaker doesn't maintain a manifest of which files within it are currently referenced by database records. The only way to audit for orphans is to build the audit yourself: a script that iterates through all records with external container data, extracts the external file path for each container (using GetContainerAttribute(field; "filename") and related functions), builds a list of referenced paths, and then compares that list against the actual files present in the base directory. This is a non-trivial script to write and an expensive one to run on large tables.
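FileMaker handles the record-looping half of that audit; the comparison half is ordinary file-system work. Here's a minimal Python sketch, assuming a hypothetical one-column CSV of referenced paths (relative to the base directory) exported by a FileMaker looping script — the export format and file names are illustrative, not a FileMaker convention:

```python
import csv
from pathlib import Path

def find_orphans(base_dir: str, referenced_csv: str) -> list:
    """Files present under base_dir that no database record references.

    referenced_csv is assumed to be a one-column export produced by a
    FileMaker script that loops over records and writes each container's
    external path relative to the base directory (hypothetical format --
    adapt to whatever your export script actually emits).
    """
    base = Path(base_dir)
    with open(referenced_csv, newline="") as f:
        referenced = {base / row[0] for row in csv.reader(f) if row}
    # Everything actually on disk under the base directory.
    on_disk = {p for p in base.rglob("*") if p.is_file()}
    return sorted(on_disk - referenced)
```

Following the advice above, a script like this should only log its findings to a maintenance table or file for manual review, never delete automatically.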
Prevention and mitigation. A few practices go a long way here:
- Use FileMaker's native delete to remove records. Scripts that delete records by direct database manipulation (rather than FileMaker's Delete Record/Request script step) may bypass the container cleanup mechanism. Always use the standard delete mechanism.
- Implement a soft-delete pattern for records with valuable container data. Rather than hard-deleting records, set a Deleted flag and filter them from views. Hard delete only during scheduled maintenance operations where you can verify cleanup.
- Schedule periodic orphan audits. A server-side scheduled script that runs monthly, builds the list of referenced external paths, compares it against the file system, and logs discrepancies to a maintenance table. The log doesn't automatically clean up orphans (automated deletion of files is risky), but it surfaces them for manual review.
- Don't reorganize the base directory. Every time someone manually reorganizes the external storage base directory (moves files, renames subdirectories, deletes what looked like test debris), they create orphaned references or dangerously broken references. The base directory is managed by FileMaker. External tools should not touch it.
Server Migration Data Loss
A server migration is one of the highest-risk operations for container field data. The combination of a new server, a new file system structure, and the path sensitivity of external container storage creates multiple opportunities for data loss or inaccessibility.
The path problem. External managed storage stores each container file at a path relative to the base directory configured in Manage > Containers. When the data file moves to a new server, two things must be true for container data to remain accessible: the external files must be copied to the new server along with the database file, and the base directory configuration in the database must point to the correct location on the new server.
If only the database file is copied and the external files are left behind, every container field in the database shows an empty reference. If both the database file and the external files are copied but the base directory configuration points to the old server's path, FileMaker Server on the new server can't find the files. If the files are copied to a different path than the original base directory structure, FileMaker can't reconstruct the relative paths between the database records' references and the actual file locations.
The silent failure mode. The worst aspect of server migration container data loss is that it's often not immediately obvious. The database opens. Users can log in. Records exist. Most features work. The containers appear empty, with users seeing blank container fields where they previously saw images or document previews. In a solution where container fields aren't prominently displayed on the primary workflow layouts, this might not be noticed for days or weeks. By then, confirming whether the files were ever copied (and if not, whether they're still on the old server) requires forensic investigation.
Migration protocol for container data. I've found the following protocol to be reliable:
- Document the current configuration before migration begins. In Manage > Containers on the source server, record the exact base directory path. This is the root of everything you need to copy.
- Copy the external files along with the database file. The external container directory must be treated as part of the database. It's not a secondary or optional component. Use a method that preserves directory structure exactly (rsync, robocopy, or FileMaker Server's own backup mechanism, which includes container files in the backup set).
- Verify the copy. Before opening the database on the new server, confirm that the container file count and directory structure on the new server matches the source. A simple recursive file count comparison catches missing files.
- Configure the base directory on the new server. Open the database in FileMaker Pro on the new server, go to Manage > Containers, and update the base directory path to the new server's path. If the directory structure beneath the base directory is preserved identically, this single path change is sufficient.
- Test container access after opening on the new server. Before declaring migration complete, open several records with container data, verify that files display correctly, and test both read (display) and write (new insert) operations.
- Keep the old server accessible for a validation period. Don't decommission the source server immediately. If container access problems are discovered after migration, the source server's files are the recovery option.
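The file-count verification in the protocol above is easy to script outside FileMaker. A rough sketch in Python — matching count plus total size is a necessary-but-not-sufficient check, so spot-check individual files (or compare checksums) for higher confidence:

```python
from pathlib import Path

def dir_summary(root: str) -> tuple:
    """Recursive file count and total byte size under root."""
    files = [p for p in Path(root).rglob("*") if p.is_file()]
    return len(files), sum(p.stat().st_size for p in files)

def verify_copy(src: str, dst: str) -> bool:
    """Coarse check that a container directory copy is complete:
    same number of files, same total bytes, on both sides."""
    return dir_summary(src) == dir_summary(dst)
```

Run it against the source base directory and the destination copy before the database is opened on the new server; any mismatch means the copy is incomplete.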
Mobile Access and Network Performance
Container fields create a specific performance problem for FileMaker Go clients on cellular or slow Wi-Fi connections. Every container field in the current layout's context is potentially transmitted over the network connection when the layout renders.
When a layout containing container fields is displayed, FileMaker transmits the container data from the server to the client. For small images this is fast. For large PDFs or documents, even in a list view where only a thumbnail is displayed, FileMaker may transmit more data than the visible thumbnail would suggest, depending on how the container data is stored and how the layout is configured. In a list view with 50 records visible, each with a 2MB embedded PDF, FileMaker potentially needs to transfer 100MB of data to render the layout. On a cellular connection, this is a painful experience. On a slow hotel Wi-Fi, it can appear to hang.
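The arithmetic is worth making explicit. A back-of-envelope sketch — all numbers here are illustrative assumptions, not FileMaker measurements:

```python
# Rough transfer-time estimate for rendering a list view when every
# visible record ships its full container file over the network.
visible_records = 50
file_mb = 2.0                 # assumed per-record container size
cellular_mbps = 10.0          # assumed usable throughput, megabits/sec
total_mb = visible_records * file_mb
seconds = total_mb * 8 / cellular_mbps
print(f"~{total_mb:.0f} MB transferred, ~{seconds:.0f} seconds to render")
```

At these assumed numbers, that's roughly 100 MB and over a minute of transfer just to show a list — which is why the optimization settings below matter.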
Progressive loading and container optimization. FileMaker's container fields support progressive downloading, the ability to fetch container data on demand rather than as part of the initial layout render. This is configured per-field in the Inspector. The key settings:
- Optimize for Interactive Content. Full file data is loaded for the field, enabling interaction (PDF scrolling, video playback). Appropriate for detail views where the user will actively work with the file. Expensive on slow connections.
- Optimize for Images. FileMaker transmits an optimized version of image data for display. Faster than interactive, still transfers file content.
- Thumbnail Only. FileMaker transmits only the cached thumbnail. Fastest. Appropriate for list views where full file interaction isn't needed.
A common mistake is using interactive content optimization in list views, transmitting full file data for every visible record. The correct approach is thumbnail-only in list views, with interactive content available when the user explicitly opens a file for viewing on a detail layout or card window.
Designing for mobile container performance. For solutions used on FileMaker Go over cellular, here's what I recommend:
- List views: Container fields set to thumbnail only. Never full interactive content in a list view.
- Detail views: Container fields set to interactive only when the user intends to view the file. Use a button or action to trigger the interactive view rather than loading it automatically.
- Large file storage: For solutions that need to store and retrieve large files (engineering drawings, video files, multi-page documents), consider whether FileMaker container fields are the right storage mechanism for mobile clients. A hybrid approach (container fields for metadata and thumbnails, cloud storage like S3, SharePoint, or Google Drive for the large files, accessed via URL from a web viewer) may provide better mobile performance than storing everything in containers.
- Offline access: FileMaker Go supports local container data for offline-enabled solutions. Synchronizing large container data to a local file has significant implications for device storage and sync time. Design offline container sync with explicit control over which files are downloaded locally and which are fetched on demand.
Backup Strategy Misalignment
External container storage is not automatically included in all backup approaches. This is a silent risk: a backup strategy that appears complete may be leaving all container data unprotected.
What FileMaker Server's backup includes. FileMaker Server's built-in backup mechanism (configured in the Admin Console) backs up hosted .fmp12 files. In FileMaker 19+, FileMaker Server's backup also includes the external container data directories associated with hosted databases. This is the correct behavior and the main reason to use FileMaker Server's built-in backup for solutions with external container data. However, this automatic inclusion requires that the container base directory is configured correctly and that the backup destination has sufficient space for both the database files and the external container files. A backup configuration that was sized for a 2GB database file may run out of space after external container data grows to 50GB.
Progressive backup and container data. FileMaker Server's progressive backup captures changes to the database file continuously (by default, every minute). Progressive backup captures changes to external container data as well, but the behavior depends on the container storage configuration and the FileMaker Server version. In configurations where progressive backup is the primary recovery mechanism, verify explicitly that external container data is included in the progressive backup set. An incomplete progressive backup that captures database changes but not container changes creates a recovery scenario where the database and its container files are out of sync.
File system backup gaps. Many organizations layer FileMaker Server's backup with external file system backups (Veeam, Time Machine, Snapshot, cloud backup agents). If the file system backup is configured to exclude certain directories (for example, to reduce backup volume), the container base directory may be excluded. Verify explicitly that the external container base directory is included in every backup mechanism the organization uses. Don't assume. Check the backup job configuration and confirm that a recent backup includes the container files.
Testing backup restorability. The only way to know whether your backup strategy actually protects container data is to test a restore. At least once per year, restore both the database file and the container data files to a test environment and verify that container fields display correctly. A backup strategy that has never been tested is an assumption, not a guarantee.
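One way to make that restore test concrete is to checksum both directory trees and diff the results. A sketch, assuming a machine that can read both the live base directory and the restored copy:

```python
import hashlib
from pathlib import Path

def checksum_manifest(root: str) -> dict:
    """Map each file's path (relative to root) to its SHA-256 digest.

    Build one manifest for the live container base directory and one
    for the restored copy; equal manifests mean the restore reproduced
    every container file byte-for-byte.
    """
    base = Path(root)
    return {
        str(p.relative_to(base)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(base.rglob("*"))
        if p.is_file()
    }
```

For very large container sets, reading every byte is slow; a count-and-size comparison is a cheaper first pass, with checksums reserved for the periodic full test.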
The Reference-Only Trap in Production
I mentioned the reference-only insert earlier, but it deserves deeper attention because its failure mode is particularly insidious.
Why reference-only looks attractive. Reference-only storage is attractive when files already exist on a shared network drive and duplicating them into FileMaker seems wasteful. "The files are already there. Why copy them? Just reference them." This seems pragmatic and storage-efficient.
How reference-only fails. Reference-only container fields store a file system path. That path must be accessible from every client that needs to view or interact with the container field. In a single-office environment on a reliable LAN with stable file paths, this can work. Outside that narrow scenario, it breaks:
- FileMaker Go on cellular: Mobile devices cannot access UNC paths (\\server\share\file.pdf). The container shows empty.
- Remote/VPN users: VPN connections may not map file server shares the same way local connections do. The path resolves for some users, not others.
- WebDirect users: Browser sessions cannot access file system paths. All reference-only containers show empty in WebDirect.
- File server path changes: When the file server is reorganized, renamed, or migrated, every reference-only container reference breaks simultaneously. You now have a database full of broken references and a file system that's been reorganized. Reconstructing the mapping is a forensic exercise.
- FileMaker Server backups: A backup of the .fmp12 file that uses reference-only storage does not back up the referenced files. The backup contains references to paths that may not exist or may be different when restoring to a different environment.
Converting reference-only to copied. If an existing solution uses reference-only container storage and you need to convert it to copied storage, the process is:
- Enable external managed storage for the container field (to avoid embedded bloat)
- Write a script that iterates through every record, reads the reference-only container value, and re-inserts the file as a copy using Insert File with the file path extracted from the reference
- Verify that the file was successfully copied (check that the container no longer shows a reference icon, check that the file path is now within the managed external storage directory)
- Handle records where the referenced file no longer exists (path is broken) and log these for manual remediation
The script is straightforward in concept but operationally demanding for large record sets. It requires that the referenced file paths are still accessible from the machine running the script. If the original file server has been decommissioned, conversion is impossible without the original files.
Developer Container Debris During Testing
Here's a specific failure mode in solutions under active development: developers insert real or large files during testing without thinking about the storage implications. By the time the solution reaches production, the development or staging database contains gigabytes of container data (test PDFs, sample images, dummy documents) stored in the base directory that was never designed to hold real data volume.
When the solution is promoted to production, this test data either comes with it (bloating the production database from day one) or is left behind (but the migration process now needs to handle the difference between a development base directory and a production base directory).
Prevention. Use small, synthetic test files during development. A 1KB placeholder image tests the container field behavior just as well as a 5MB photograph. A three-page PDF tests PDF display just as well as a 200-page document. Save large file testing for staging, where the environment is production-like and the data is expected to be cleaned up before going live. When promoting from development to production, always use a clean data file (or a production backup) rather than the development data file. Development data, including development container files, should never enter the production environment.
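Generating a batch of small placeholders takes a few lines of scripting. A sketch — the directory layout, file names, and sizes are arbitrary choices, and random bytes stand in for real documents:

```python
import os
from pathlib import Path

def make_test_files(directory: str, count: int, size_bytes: int = 1024) -> None:
    """Generate small synthetic placeholder files for container testing.

    A 1 KB file of random bytes exercises insert scripts and external
    storage behavior without bloating the development base directory
    the way real PDFs and photographs do.
    """
    out = Path(directory)
    out.mkdir(parents=True, exist_ok=True)
    for i in range(count):
        (out / f"placeholder_{i:04d}.bin").write_bytes(os.urandom(size_bytes))
```

A hundred of these is still only ~100 KB of container data, versus hundreds of megabytes if developers test with real documents.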
Designing Container Storage Correctly from the Start
All of the failure modes above are substantially easier to avoid than to fix. Here's the complete container storage design approach for a new solution.
Estimate total data volume. Before writing a line of FileMaker code, estimate how many records will contain container data, what the average file size is, what the growth rate looks like (records per month), and what the retention policy is. Multiply records by average file size to get expected total container data volume. If this is more than a few hundred megabytes, embedded storage is not appropriate. Configure external managed storage from the start.
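That multiplication is worth writing down so it can be rerun whenever the assumptions change. A hypothetical sketch — every input is a planning assumption, not a measurement:

```python
def projected_container_gb(existing_records: int, monthly_new: int,
                           months: int, avg_file_mb: float) -> float:
    """Projected total container data volume after `months` of growth.

    (existing records + new records over the horizon) x average file
    size, converted from MB to GB. Ignores retention-driven deletion,
    so treat the result as an upper-bound planning figure.
    """
    total_records = existing_records + monthly_new * months
    return total_records * avg_file_mb / 1024
```

For example, 5,000 records today plus 400 new per month with a 1.5 MB average file comes to roughly 28 GB over a three-year horizon — far past the threshold where embedded storage is appropriate.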
Choose the base directory location. For hosted solutions, the base directory must be on the FileMaker Server machine (or on network storage accessible to the server with consistent permissions). The typical location is within FileMaker Server's RC_Data_FMS directory or a subdirectory of the server's documents folder, depending on OS and server configuration. Choose a location that's included in all backup mechanisms, has sufficient capacity for expected data volume plus growth headroom, is on a volume with good I/O performance (not a slow NAS over high-latency connections), and has a stable path that won't change with server OS updates or reorganizations.
Configure external managed storage before any data enters. In Manage > Containers, configure the base directory and enable external storage for every container field before inserting any production data. Verify the configuration by inserting a test file and confirming that it appears in the base directory on the file system, not embedded in the .fmp12 file.
Design for mobile from the start. If FileMaker Go clients will access container data, set list view container fields to thumbnail-only in the Inspector, reserve interactive content for detail views where file interaction is the user's intent, estimate the data transfer cost of your layouts at the expected record count, and for very large files on mobile, design a separate access mechanism (web viewer + cloud storage URL) rather than transmitting large binaries over cellular.
Align backup strategy. Before the solution goes live, verify that FileMaker Server's backup is configured to include external container data, that file system backup includes the container base directory, that a restore test has been performed that includes container files, and that backup storage capacity accounts for container data volume.
Plan the orphan management strategy. Before the solution goes live, decide how orphaned files will be detected (periodic audit script), what the cleanup process looks like for confirmed orphans (manual review, then deletion), how long deleted records are retained in soft-delete before hard deletion, and who is responsible for monitoring storage volume and triggering audits.
The Compact Operation
If a solution already has significant embedded container data and you've successfully migrated it to external storage, the database file will still be large. The freed space is internal to the file, not released to the operating system until a compact operation is performed.
The compact operation in FileMaker is Save a Compact Copy, accessible from File > Save a Copy As > Compacted Copy in FileMaker Pro. This creates a new copy of the database file with all internal free space eliminated, producing a smaller file. For a hosted file, this requires closing the file on the server, opening it locally in FileMaker Pro (or using fmsadmin to run the compact via command line), saving the compacted copy to a new location, and replacing the hosted file with the compacted copy.
The compact operation on a large file can be time-consuming. A 100GB file takes significant time to compact even on fast hardware. Plan a maintenance window accordingly. And keep in mind that compacting the database file does not affect external container data. Only the .fmp12 file is compacted. The container files remain in the base directory, unchanged. The compact only recovers space that was used by now-removed embedded container data (or other freed internal structures).
A Practical Checklist
In practice, I've found it helpful to run through this checklist when building or auditing a FileMaker solution that uses container fields.
Initial Design
- Total container data volume estimated (records x average file size x retention period)
- Storage mode chosen based on volume estimate (external managed for anything over a few hundred MB)
- Base directory location selected with adequate capacity, backup coverage, and stable path
- Mobile access requirements documented (FileMaker Go, WebDirect, cellular)
- Reference-only storage explicitly ruled out (or documented with full understanding of access limitations)
Configuration
- External managed storage configured in Manage > Containers before any production data is inserted
- Base directory verified by inserting test file and confirming external file appears on file system
- List view container fields set to thumbnail-only in Inspector
- Detail view container fields set to appropriate optimization level for use case
- No reference-only inserts in any production workflow
Backup and Recovery
- FileMaker Server backup includes external container directory
- File system backup includes container base directory
- Backup storage capacity accounts for container data volume plus growth headroom
- Restore test performed including container data files
- Restore procedure documented including container file restoration steps
Operations
- Orphan audit script exists and scheduled to run periodically
- Soft-delete pattern implemented for records with container data (or delete cleanup verified)
- Storage volume monitoring in place with alert threshold before capacity is exhausted
- Container base directory path documented and change-controlled (not reorganized without update to Manage > Containers)
Server Migration
- Container base directory path documented before migration begins
- Container files copied to new server with directory structure preserved
- File count verified on new server before opening database
- Base directory reconfigured in Manage > Containers after move
- Container access tested (display and insert) before declaring migration complete
- Old server retained for validation period before decommission
Separation Model Deployments
- Container base directory lives in data file configuration (not UI file)
- UI file replacements do not affect container file access
- Post-deployment container access tested as part of deployment verification
Final Thoughts
Container fields aren't complex in themselves, but they sit at the junction of several complex systems: the database engine, the file system, the network stack, the backup infrastructure, and the deployment workflow. Decisions about how container data is stored, where it lives, how it's backed up, and how clients access it have consequences that compound over the lifetime of a solution.
The failure modes covered here (embedded storage bloat, orphaned file accumulation, server migration data loss, mobile performance degradation, backup strategy gaps, reference-only brittleness, and test data contamination) share a common characteristic. They're almost entirely preventable by making the right decisions at the beginning, and they're disproportionately expensive to fix after the fact.
The right decisions aren't complicated. Configure external managed storage before data enters the solution. Estimate data volumes before choosing a storage approach. Never use reference-only storage for anything that needs to be accessible from all clients. Include container files in every backup mechanism. Test restorability. Keep the container base directory stable and change-controlled. Audit for orphans periodically.
None of these is technically difficult. They're discipline failures, not knowledge failures: skipped steps and deferred decisions that accumulate into operational crises. The antidote is treating container storage configuration as a first-class architectural decision rather than a detail to sort out later, and building the operational infrastructure that keeps it healthy over the solution's lifetime.
The five minutes spent configuring external managed storage before the first container insert is one of the highest-return-on-investment decisions in FileMaker development. The alternative is a migration project that takes weeks.