The separation model is the gold standard architecture for serious FileMaker solutions. Split your solution into two hosted files — a UI file containing layouts, scripts, and interface logic, and a data file containing tables, fields, and records — and you gain the ability to push interface updates to production without touching the data, without requiring users to disconnect, and without the risk of schema changes accidentally breaking live data. It's a powerful architecture. In theory.
In practice, the separation model is where some of the most operationally painful FileMaker problems live. Not because the architecture is flawed, but because the deployment workflow it requires is significantly more complex than single-file development, and most teams underestimate that complexity until they've already shipped a broken update to production.
The problems are specific and recurring: the file reference that breaks when the data file moves or is renamed, the UI update that locks out users who are mid-session, the developer who tests against the development data file and deploys a UI file that still points at dev, the global field that silently loses its value when the UI file is replaced, the script that navigated correctly in the old UI file and silently fails in the new one because a layout was renumbered. None of these are exotic edge cases. They're the routine consequences of operating a two-file architecture without the deployment discipline the architecture demands.
This post covers the separation model in depth: why it exists, how it works under the hood, where deployment goes wrong and why, and how to build the operational infrastructure and development discipline that makes a separated solution genuinely maintainable over its lifetime.
What the Separation Model Is and Why It Exists
In a single-file FileMaker solution, everything lives together: tables, fields, relationships, layouts, scripts, value lists, custom functions, and all the data records. This is convenient during development. Everything's in one place, there's no inter-file complexity, and testing is straightforward.
The problem emerges at the intersection of multi-user deployment and ongoing development. When you need to update a layout or modify a script in a single-file solution hosted on FileMaker Server, users must disconnect before you can close the file, make changes, and reopen it. In a business environment where the solution runs continuously, coordinating downtime for every UI change is disruptive. And because the file contains both the schema and the data, every time you open the file for development, you're working with a file that is structurally identical to production, which creates risk.
The separation model addresses this by dividing the solution into two files with clearly defined responsibilities.
The Data File contains all base tables and fields, all relationships between tables (in the data file's own relationship graph), the actual data records, field-level validation rules and auto-enter calculations, and value lists based on data where appropriate.
The UI File contains all layouts, all scripts, all custom functions, themes and styles, value lists (static and those used for UI purposes), external table occurrences that reference the data file's tables via file reference, and custom menus.
The UI file connects to the data file through a file reference — a named connection that resolves to a hosted file path. Every table occurrence in the UI file's relationship graph is an External Data Source occurrence pointing to a table in the data file.
The reason this architecture exists is that you can close, replace, and reopen the UI file on the server without touching the data file. Users who are connected to the data file through the UI file will be disconnected when the UI file closes, but their data is untouched. When the new UI file opens, they reconnect and continue working with the updated interface, updated scripts, and updated logic.
Schema changes (adding tables, fields, relationships) still require touching the data file, which requires a data file maintenance window. But interface changes, script updates, layout redesigns, and logic fixes can be deployed through the UI file alone — faster, safer, and with less user impact.
Beyond deployment flexibility, the separation model provides several additional benefits. The data file can be backed up on its own schedule without the UI file, and backup restoration touches only data, not the interface. You can run multiple UI files against one data file — a desktop UI file and a FileMaker Go mobile UI file can both connect to the same data file, each optimized for its platform, sharing a single data layer. Developers get a cleaner workflow: they can work on the UI file while the data file remains hosted and accessible to other users, with no need to take the entire solution offline for interface work.
How the File Reference Works
Understanding file references is essential to understanding why separation model deployments fail. A file reference in FileMaker is a named connection definition stored in the UI file. It contains a display name (used internally and in the relationship graph) and one or more file paths, listed in priority order, that FileMaker tries when resolving the reference.
When the UI file opens and needs to access data through an external table occurrence, FileMaker works through the file reference's path list until it finds a path that resolves to an open, hosted file. If none of the paths resolve, the external table occurrences show no data and scripts that navigate to external tables fail.
FileMaker file references support multiple path formats. filewin:/ or filemac:/ paths are absolute file system paths for local files. fmnet:/ paths are network paths for hosted files (for example, fmnet:/ServerHostname/FileName). And relative paths resolve relative to the current file's location.
For hosted solutions, the relevant path format is fmnet:/. A typical file reference path looks like:
fmnet:/192.168.1.100/CompanyApp_Data
fmnet:/filemaker.company.com/CompanyApp_Data
FileMaker tries each path in the list in order. If the first path fails (wrong IP, wrong hostname), it tries the second. This fallback behavior is both a feature (resilience to environment changes) and a source of subtle bugs (a path that should fail succeeds against a wrong data file).
The most common file reference configuration for separation model solutions includes multiple paths for different environments:
Path 1: fmnet:/dev-server.company.com/CompanyApp_Data
Path 2: fmnet:/staging-server.company.com/CompanyApp_Data
Path 3: fmnet:/prod-server.company.com/CompanyApp_Data
FileMaker resolves to whichever path it can reach first. In theory, this means one UI file works across environments. In practice, this creates a specific class of deployment error: a developer tests the UI file and it connects to dev-server. The developer deploys that UI file to production. On the production server, dev-server is also reachable. FileMaker resolves Path 1 — the development data file — rather than Path 3 (the production data file). Production users are now reading and writing to the development data. The developer doesn't notice because the solution "works."
This scenario is not hypothetical. It has happened in real production environments.
The file reference is the architectural joint between the UI file and the data file. When it works, it's invisible. When it breaks, the entire solution stops functioning. I've found that the most common causes of breakage include: the data file being renamed on the server, the server hostname or IP changing, the data file being moved to a different folder on the server, the data file being closed on the server (scheduled maintenance, crash recovery), SSL certificate hostname mismatches, and the UI file being deployed to the wrong environment.
Where Deployment Goes Wrong: Seven Failure Modes
The Wrong Environment Connection. As described above, the UI file's file reference resolves to the development or staging data file in production because the path list order doesn't enforce environment separation. The solution appears to work in production, but data changes don't appear in the real production data file. It's often discovered when production reports show no recent activity, or when a developer notices that "production" records are in the development database.
Prevention: environment-specific UI files with environment-specific file reference configurations. The production UI file's path list should contain only the production server path — not dev or staging paths. Use a build/deployment process that configures the file reference correctly for each environment rather than relying on path fallback ordering.
The Broken File Reference After Data File Rename or Move. A data file is renamed for organizational reasons (say, AppData becomes CompanyApp_Data_v2) or moved to a different server directory. The UI file's file reference still points to the old path. Every external table occurrence in the UI file shows no data. Scripts that navigate to external tables fail silently or with context errors.
Prevention: treat the data file name and server path as permanent infrastructure decisions. Change them only when absolutely necessary, and when you do, update the file reference in the UI file and test before deploying. Keep the data file name stable across versions — use a version number scheme that doesn't require renaming the file (internal version fields, not file name version suffixes).
Mid-Session User Disconnection Without Warning. When the UI file is closed on the server to push an update, all connected users are immediately disconnected. FileMaker Server lets you send a disconnect message with a delay, but those features only work if the deployment process uses them. A developer who deploys the new UI file by directly replacing it via Admin Console or FTP without sending a disconnect message first will abruptly disconnect every active user. Any unsaved work (records in edit mode, unsaved forms) is lost.
Prevention: a deployment protocol that includes a pre-deployment notification, a server-side disconnect message with delay (fmsadmin send, or via Admin Console), and a defined maintenance window communicated to users before any UI file replacement.
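That protocol can be sketched with the fmsadmin command-line tool. The file name and timings here are examples — adjust them for your server, and note that fmsadmin will prompt for admin credentials unless you supply them:

# Warn connected users, then close the UI file after a grace period
fmsadmin send -m "UI update in 5 minutes. Please save your work." CompanyApp_UI
fmsadmin close CompanyApp_UI -m "Closing for a UI update" -t 300
fmsadmin list files     # confirm CompanyApp_UI is no longer open
fmsadmin list clients   # confirm no lingering connections
# Archive the old UI file and copy the new one into place, then:
fmsadmin open CompanyApp_UI

The -t flag on close gives connected clients that many seconds to save and disconnect before the server forces the close, which is exactly the behavior an abrupt file replacement skips.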
Global Field Values Lost on UI File Replacement. Global fields hold per-session values in a hosted file: each session starts with the value the field held when the file was last closed in single-user mode, and changes made during a hosted session are never written back to the file. Some developers nonetheless store configuration values in global fields in the UI file under the assumption that "the global will persist." It won't survive deployment: a freshly deployed UI file carries whatever default values were set at development time, not the values that were operational in production — and even between deployments, values set during hosted sessions evaporate when the session ends.
Prevention: store persistent configuration values in the data file, not the UI file. A UIPreferences table in the data file, keyed by account name or device, survives UI file replacements. Global fields in the UI file should be treated as session-only values — populated at session open from the data file, cleared when the session closes, never relied upon for persistence across deployments.
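The session-open half of that pattern might look like the following sketch, assuming a UIPreferences table in the data file keyed by account name (table and field names are illustrative):

// In OnFirstWindowOpen — populate session-only UI globals from the data file
Set Error Capture [On]
Go to Layout ["UIPreferences" (UIPreferences)]
Enter Find Mode [Pause: Off]
Set Field [UIPreferences::AccountName; Get(AccountName)]
Perform Find []
If [Get(FoundCount) = 1]
  // Copy persistent settings into globals that live only for this session
  Set Field [Globals::gDefaultLayout; UIPreferences::DefaultLayout]
  Set Field [Globals::gSortPreference; UIPreferences::SortPreference]
End If
Go to Layout [original layout]

Because the persistent copy lives in the data file, a UI file replacement changes nothing the user notices — the next session repopulates the globals from the same records.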
Layout Number Drift Breaking Script Navigation. FileMaker scripts that use Go to Layout [by number] (as opposed to Go to Layout [by name]) reference layouts by their internal ordinal position in the file. When layouts are added, deleted, or reordered in a new version of the UI file, layout numbers shift. A script that navigated to layout number 14 in the old UI file now navigates to a completely different layout in the new UI file.
Prevention: never use Go to Layout [by number]. Always use Go to Layout [by name] with an explicit layout name. FileMaker resolves the name at runtime against the current file's layout list, which is always correct regardless of layout ordering or additions. Audit all scripts in the UI file before deployment for Go to Layout [by number] usage — this can be found in the DDR.
The Stale Development File Reference. A developer builds a new feature in a UI file connected to a local development data file. They test thoroughly. The feature works perfectly. They deploy the UI file to production. The file reference in the deployed UI file still has the local development data file path as Path 1. On the server, Path 1 doesn't resolve (the developer's local machine isn't a FileMaker Server), so FileMaker falls back to Path 2 — the production data file. The deployment appears to work.
But Path 1 is still there. If the developer's machine is ever reachable from the production environment (VPN, same network, cloud deployment with open access), Path 1 resolves. Production connects to the developer's local file. Development data enters the production environment.
Prevention: a deployment checklist that includes verifying and updating the file reference before deployment. A build process that strips local paths from the file reference and sets only the production path. Never deploy a UI file whose file reference hasn't been audited.
Script Compatibility Breaks Between UI Versions. The UI file is replaced, but certain scripts that call subscripts or reference layouts by name fail because layout or script names changed between the old and new versions. This is less common if the developer controls the entire UI file, but in environments where customizations are layered on top of a vendor-provided UI file, the vendor's new version may rename or restructure scripts that the customization layer calls.
Prevention: a version compatibility audit before deployment. If scripts call other scripts by name (via Perform Script), verify that all called script names exist in the new UI file. If the solution uses external script calls between files, verify both files' script name contracts are consistent.
Building the Deployment Infrastructure
The separation model demands more operational discipline than single-file deployments. The right response is to build infrastructure that makes discipline easy — not to rely on individual developers remembering the right steps every time.
A mature separation model deployment needs at least two environments, ideally three.
Development is a local or hosted environment where developers work. Data is synthetic or anonymized. Schema changes, new features, and layout work happen here. No production data ever touches this environment.
Staging (Pre-production) is a hosted environment that mirrors production infrastructure. The UI file and data file versions here match what's about to go to production. UAT and integration testing happen here. The staging data file may be a sanitized copy of production data for realistic testing.
Production is the live environment. Users work here. Data is real. Changes are deployed through a controlled process, not by individual developers making ad-hoc updates.
Each environment has its own FileMaker Server instance (or at minimum, separate hosted file sets on the same server with distinct names). The UI file deployed to each environment has a file reference configured specifically for that environment — not a fallback path list that could accidentally resolve to the wrong environment.
The cleanest approach to environment-specific file references is to maintain separate UI file variants per environment, or to use a deployment script that updates the file reference before deployment. A utility script in the UI file — accessible only to admin accounts — can display and update the file reference path, allowing a developer to configure the correct production path before uploading the file to the server. Alternatively, maintain a config table in the data file that stores the expected data file name and server hostname. An OnFirstWindowOpen script in the UI file reads this config and validates that it's connected to the correct data file, alerting the administrator if the connection is to an unexpected environment.
The Deployment Checklist
Every UI file deployment should follow a written checklist. Not a mental checklist — a document that a developer works through before and during every deployment. I've found that deployment errors are almost always the result of skipping a known step under time pressure, not the result of encountering a truly novel problem.
Pre-Deployment:
- New UI file tested against staging data file with all affected features verified
- File reference audited — contains only production server path, no dev/staging paths
- No Go to Layout [by number] in any script (DDR audit)
- All persistent configuration values confirmed to live in data file, not UI file globals
- Deployment maintenance window scheduled and communicated to users
- Data file backup completed and verified before UI file change
During Deployment:
- Admin Console disconnect message sent with minimum 5-minute warning
- Confirmation that all users have disconnected (Admin Console connection list shows zero)
- Old UI file archived (renamed with date stamp, not deleted) before replacement
- New UI file uploaded to server
- New UI file opened on server
- File reference verified to resolve to correct data file (test login, confirm data is visible and correct)
- Key workflows tested by administrator before opening to users
Post-Deployment:
- Deployment logged (date, version, deploying developer, changes included)
- Users notified that system is available
- Old UI file archived for rollback if needed (minimum 30-day retention)
Version Numbering and Change Logging
Every UI file version should carry a version number — stored in a global field, displayed on the login layout or a system info layout, and logged in a SystemVersions table in the data file on first open.
A SystemVersions table with fields for version number, deployment date, deployed by, and change summary gives you a complete deployment history without relying on external documentation. The UI file's OnFirstWindowOpen script writes a record to this table on each new version's first open, creating an automatic deployment log.
// In OnFirstWindowOpen:
// Check if this version has been logged before
Set Variable [$version; Value: $$AppVersion]
// $$AppVersion set at top of OnFirstWindowOpen from a constant
Set Variable [$alreadyLogged; Value:
  ExecuteSQL(
    "SELECT COUNT(*) FROM \"SystemVersions\" WHERE \"Version\" = ?" ;
    "" ; "" ; $version
  )
]
If [GetAsNumber($alreadyLogged) = 0]
  // First open for this version — log it
  Go to Layout ["SystemVersions" (SystemVersions)]
  New Record/Request
  Set Field [SystemVersions::Version; $version]
  Set Field [SystemVersions::DeployedAt; Get(CurrentTimestamp)]
  Set Field [SystemVersions::DeployedBy; Get(AccountName)]
  Commit Records/Requests [No dialog]
End If
The Re-Link Utility
Every separated solution should have a re-link utility — a script or mechanism that allows an administrator to update the file reference path without requiring a developer to open the file in FileMaker Pro and manually edit the file reference. In practice, this is essential for two scenarios.
First, server migration: the solution moves from one server to another. The data file is now at a different hostname or IP. The UI file needs its file reference updated to point to the new server.
Second, environment promotion: a UI file is being promoted from staging to production. Before opening it on the production server, the file reference needs to be updated from the staging data file path to the production data file path.
FileMaker doesn't provide a scripted way to modify file references (they're configured in Manage > External Data Sources, accessible only in FileMaker Pro). The practical re-link approach is a deployment-time script that validates the current connection and provides clear instructions for manual re-linking if the connection is wrong.
A more sophisticated approach uses a startup validation script that reads the connected data file's name through a calculation field defined in the data file — a field whose calculation is simply Get(FileName). Get functions in a calculation field evaluate in the file where the field is defined, so reading that field through an external table occurrence returns the actual name of whichever data file the UI file is connected to. The script compares that name to the expected data file name stored in a constant or config record, and if they don't match, displays a clear administrative alert. Optionally, it locks the solution for non-admin accounts until the re-link is confirmed.
// Startup validation — runs in OnFirstWindowOpen
// ExternalTable::FileNameCalc is a calculation field defined in the data file
// as Get(FileName), so it returns the connected data file's actual name
Set Variable [$connectedFile; Value: ExternalTable::FileNameCalc]
Set Variable [$expectedFile; Value: "CompanyApp_Data"] // constant
If [$connectedFile ≠ $expectedFile]
  Show Custom Dialog [
    "Environment Mismatch" ;
    "This UI file is connected to '" & $connectedFile &
    "' but should be connected to '" & $expectedFile &
    "'. Update the file reference before proceeding."
  ]
  // Lock to admin only until resolved
  If [Get(AccountPrivilegeSetName) ≠ "[Full Access]"]
    Show Custom Dialog ["System unavailable. Contact your administrator."]
    Close Window [Current Window]
  End If
End If
This doesn't automate the re-link, but it prevents a misconfigured UI file from going unnoticed — which is the more important goal.
Schema Changes: The Data File Maintenance Window
UI file deployments are relatively low-risk once you have the deployment infrastructure in place. Data file changes are a different matter. Any change to the data file's schema — adding tables, adding or modifying fields, changing relationships — requires opening the data file in FileMaker Pro, which requires closing it on the server, which disconnects all users.
Things that require a data file maintenance window include adding new tables, adding or modifying or deleting fields, changing field types or validation rules or auto-enter options, modifying relationships in the data file's graph, changing indexing strategy on existing fields, and running data migration scripts.
Things that don't require a data file maintenance window include layout changes, script changes, custom function changes, value list changes (for non-data value lists), and theme and style changes. All of those live in the UI file only.
The separation model's core value proposition is in that second list. The majority of ongoing development work involves the UI file, not the data file. Schema changes are less frequent than interface changes in a mature solution. The maintenance windows are reserved for genuinely structural changes, not routine feature development.
When a data file schema change requires populating new fields or restructuring existing data, the migration runs as part of the data file maintenance window. I've found a few best practices here:
- Write the migration script before the maintenance window. Test it against a copy of the production data file in staging, and know exactly how long it will take before you start the production window.
- Use Perform Script on Server (PSOS) for migration scripts that process large record sets — it's faster and doesn't tie up a client connection.
- Log migration progress to a log table, because if the migration is interrupted, you need to know how far it got.
- Make migrations idempotent where possible — a migration that can be run twice without causing errors is safer than one that requires a single clean run.
- Always back up the data file before running any migration.
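An idempotent migration with a completion log can be sketched as follows. The MigrationLog table and the Customers::Region field are illustrative — the point is the guard at the top, which makes a re-run after an interruption a safe no-op:

// Run via Perform Script on Server during the maintenance window
Set Variable [$migrationID; Value: "AddRegionField"]
Set Variable [$alreadyRun; Value:
  ExecuteSQL(
    "SELECT COUNT(*) FROM \"MigrationLog\" WHERE \"MigrationID\" = ?" ;
    "" ; "" ; $migrationID
  )
]
If [GetAsNumber($alreadyRun) > 0]
  Exit Script [Text Result: "Already applied — nothing to do"]
End If
Go to Layout ["Customers" (Customers)]
Show All Records
Replace Field Contents [Customers::Region; "Unassigned"; No dialog]
// Record completion so a second run exits at the guard above
Go to Layout ["MigrationLog" (MigrationLog)]
New Record/Request
Set Field [MigrationLog::MigrationID; $migrationID]
Set Field [MigrationLog::CompletedAt; Get(CurrentTimestamp)]
Commit Records/Requests [No dialog]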
Multiple UI Files Against One Data File
One of the separation model's most powerful features is running multiple UI files against a single data file — for example, a desktop UI file optimized for FileMaker Pro, and a mobile UI file optimized for FileMaker Go, both sharing the same data.
Both UI files need file references pointing to the same data file. Both share the same table structure, but can have entirely different layout sets, script libraries, and navigation patterns. Changes to the data file affect both UI files simultaneously — a field added to the data file is available to both UIs. A field deleted from the data file breaks references in both UIs if they used that field. Schema changes require coordinated updates to both UI files.
When deploying a change that touches both the data file and one or more UI files, sequence matters. Deploy the data file change first (maintenance window, all users disconnected). Deploy the UI file updates immediately after the data file maintenance window closes. Verify both UI files connect correctly to the updated data file before opening to users. Never deploy a UI file update that depends on a new data file field before the data file is updated — the external table occurrence will show an empty/broken field reference until the data file has the field.
In a multi-UI-file architecture, it's possible for the desktop UI file and mobile UI file to be at different versions — the desktop was just updated but the mobile wasn't. The data file needs to support both. This creates a compatibility constraint: the data file schema must be backward-compatible with the oldest UI file version currently in use. Track this explicitly. A UIFileVersions table in the data file records the minimum version required for each UI file. Note that a data file opened purely as an external source never opens a window, so its own OnFirstWindowOpen trigger won't fire; the version check belongs in each UI file's startup script, which reads the minimum-version record and alerts the administrator if the running UI file is out of date.
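One workable variant of that check runs from the UI file's startup script. It reads the minimum supported version for this UI file from a UIFileVersions table in the data file (table and field names are illustrative) and refuses to proceed if the running copy is older:

// In OnFirstWindowOpen — $$AppVersion already set from a constant
Set Variable [$minVersion; Value:
  ExecuteSQL(
    "SELECT \"MinUIVersion\" FROM \"UIFileVersions\" WHERE \"UIFileName\" = ?" ;
    "" ; "" ; Get(FileName)
  )
]
If [GetAsNumber($$AppVersion) < GetAsNumber($minVersion)]
  Show Custom Dialog ["UI File Out of Date" ;
    "This UI file (v" & $$AppVersion & ") is below the minimum supported version (v" &
    $minVersion & "). Contact your administrator before continuing."]
  Close Window [Current Window]
End If

Because the minimum-version record lives in the data file, updating it during a schema deployment automatically locks out any stale UI file the next time it opens.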
Development Workflow: Working Safely in a Separated Solution
The deployment infrastructure matters, but it's only half of the discipline equation. The development workflow — how developers work day-to-day against the separated files — is equally important.
Never develop against the production data file. This sounds obvious but happens constantly. A developer needs to test a new script quickly. The staging data file isn't set up yet. The production data file is right there. They connect to it "just to check something." Every time a developer connects their development UI file to the production data file, they risk accidentally creating test records in production, running a buggy script against real data, locking production records during testing, and submitting partial changes to production records during debugging. The rule is absolute: development UI files connect only to development data files. Staging UI files connect only to staging data files. Production UI files connect only to production data files. The file reference configuration enforces this — it's not a matter of developer judgment in the moment.
Use clones for development data. Development should use either synthetic test data or an anonymized clone of production data. Take a backup of the production data file, restore it to the development server, run a data anonymization script that scrambles PII (names, emails, phone numbers, SSNs) while preserving data structure and relationships, and use this anonymized clone as the development data file. The anonymized clone gives developers realistic data volumes and structures for testing without exposing real customer or business data.
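The anonymization pass itself can be a simple looping script run against the clone. Table and field names here are illustrative — adapt the substitutions to your schema:

// Run against the development clone only — never against production
Go to Layout ["Customers" (Customers)]
Show All Records
Go to Record/Request/Page [First]
Loop
  Set Field [Customers::Name; "Customer " & Get(RecordNumber)]
  Set Field [Customers::Email; "user" & Get(RecordNumber) & "@example.com"]
  Set Field [Customers::Phone; "555-0100"]
  Go to Record/Request/Page [Next; Exit after last: On]
End Loop
Commit Records/Requests [No dialog]

The scrambled values keep every field populated, so layouts, validations, and relationships still exercise realistic data while carrying no real PII.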
Branching and feature development. In a separation model solution, the UI file is the primary development artifact. A development workflow that mirrors software branching looks like this: a main branch (the current production UI file, where hotfixes branch from), a development branch (a UI file copy where new features are built, connected to the development data file), and optionally feature branches (for large features developed by multiple developers, separate UI file copies per feature, merged into the development branch when complete).
"Merging" FileMaker UI file changes isn't like merging code — there's no diff tool, no automatic merge. It's a manual process of identifying what changed in the feature branch and replicating those changes in the development branch. This is painful for large divergences, which is an argument for keeping feature branches short and merging frequently. Tools like FileMaker's built-in DDR and third-party comparison tools (Base Elements, Inspector Pro) can help identify differences between two versions of a UI file, making the manual merge process more systematic.
The development data file version contract. When a developer adds a new table or field to the data file during feature development, they need to document this as a schema change that must be applied to the production data file before the UI feature can be deployed. A simple schema change log, maintained alongside the UI file, tracks what schema changes are required, which UI file version requires them, and whether the production data file has been updated to support the feature. A UI file version should never be deployed to production if the data file hasn't been updated to match the schema requirements of that UI file version.
Container Field Considerations
Container fields deserve special attention in separated solutions because they introduce a storage dimension that doesn't exist for other field types.
Container fields in FileMaker can store data embedded in the database file or externally in the file system. External storage is almost always preferable for production solutions — it keeps the database file size manageable and allows file system backup strategies to handle container data. External container storage is configured in the data file (in Manage > Containers). The external storage base directory is a path relative to the data file's location on the server. When the data file moves (server migration, file reorganization), this path may need to be reconfigured.
In a separated solution, the UI file has no direct configuration of container storage — that all lives in the data file. But scripts in the UI file that insert files into containers or export container data depend on the container storage working correctly. A broken external storage path (data file moved, base directory changed) will cause container inserts and exports to fail in ways that aren't immediately obvious. After any server migration or data file relocation, verify container field behavior explicitly — insert a test file, verify it stores correctly, export it, verify it retrieves correctly.
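That post-migration verification can be an administrator-run smoke test along these lines (layout, field, and file names are illustrative):

// Run by an administrator after any server migration or data file move
Set Error Capture [On]
Go to Layout ["Documents" (Documents)]
New Record/Request
Insert File [Documents::Attachment; "ContainerTest.pdf"]
Commit Records/Requests [No dialog]
// Export the stored file to confirm external storage round-trips
Export Field Contents [Documents::Attachment; Get(DesktopPath) & "ContainerTest.pdf"]
If [Get(LastError) ≠ 0]
  Show Custom Dialog ["Container Check Failed" ;
    "External container storage is not resolving correctly. Error: " & Get(LastError)]
End If
// Clean up the test record
Delete Record/Request [No dialog]

A failing export here points at the external storage base directory before users discover the problem through broken attachments.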
When creating a development clone from production data, external container data may not be included if the cloning process only copies the database file and not the external container file system directory. A clone that has container field references but no external files will show empty containers in development — which may cause test failures in scripts that process container data. Document whether the development clone process includes container file system data, and if not, ensure that any testing of container-dependent features happens in staging against a full data clone.
An Operations Checklist for Separation Model Solutions
Solution Architecture:
- Data file contains all base tables, fields, and relationships
- UI file contains all layouts, scripts, custom functions, and themes
- No tables in the UI file (except a UI-only globals table, if needed, with no persistent data)
- Persistent configuration values stored in data file, not UI file globals
- External container storage configured in data file with a documented base directory path
File Reference Configuration:
- Each environment's UI file has a file reference pointing only to that environment's data file
- No dev or staging paths in the production UI file's file reference
- File reference display name is descriptive and environment-specific
- Startup validation script checks connected data file name against expected value
Development Workflow:
- Developers never connect development UI files to production data file
- Development data file uses anonymized or synthetic data
- Schema changes documented in a change log alongside the UI file
- No schema changes deployed to production data file without corresponding UI file update staged and ready
UI File Deployment:
- Deployment checklist exists in writing and is followed for every deployment
- Pre-deployment testing completed against staging environment
- File reference verified to point to correct production data file
- No Go to Layout [by number] in any script (DDR audit)
- Maintenance window scheduled and communicated
- Disconnect message sent with adequate warning time
- Old UI file archived before replacement (not deleted)
- Post-deployment testing completed before opening to users
- Deployment logged in SystemVersions table
Data File Changes:
- Schema changes only made during a scheduled maintenance window
- Production data file backed up before any schema change or migration
- Migration scripts tested in staging against production data clone before production run
- Both UI files (if multiple) updated to match new schema requirements
- UI file deployment staged and ready to deploy immediately after data file change
Rollback Readiness:
- Previous UI file version archived and accessible for rollback
- Previous data file backup available and tested for restorability
- Rollback procedure documented and tested (not just assumed to work)
- Rollback decision criteria defined — what conditions trigger a rollback vs. a hotfix forward
Wrapping Up
The separation model is the right architecture for serious FileMaker solutions. It enables independent deployment of the UI layer, supports multiple device targets from a single data layer, simplifies backup and recovery, and reduces the risk of interface changes corrupting data. These are real benefits that make a material difference in the operability of a mature solution.
But the separation model is not a set-it-and-forget-it architecture. It requires deliberate operational discipline at every stage: file reference configuration that enforces environment separation, a deployment process that protects users during transitions, a development workflow that keeps environments cleanly separated, and infrastructure — version logging, startup validation, re-link utilities, deployment checklists — that makes correct behavior easy and incorrect behavior visible.
The failure modes in separated solutions aren't mysterious. They're the predictable consequences of specific operational shortcuts: a file reference with dev and prod paths in the same list, a deployment that skips the disconnect notification, a developer who connects to production "just to check something," a global field in the UI file that was expected to persist across deployments. Each of these has a corresponding discipline: environment-specific file references, a written deployment checklist, absolute separation of development from production data, and persistent configuration in the data file.
The teams that run separated solutions well don't succeed because they're more careful than other teams. They succeed because they've built infrastructure and process that makes the right approach automatic and the wrong approach obvious. That infrastructure is the real discipline of the separation model — not a skill you apply once, but a system you build and maintain alongside the solution itself.