ExecuteSQL is one of the most useful functions in FileMaker. Introduced in FileMaker 12, it brought the ability to query data across tables without relationship graph dependencies, calculate aggregates that summary fields couldn't easily reach, and build reports that would otherwise require elaborate virtual list setups. For developers who came to FileMaker from SQL backgrounds, it felt like a long-overdue addition. For developers who hadn't worked with SQL before, it felt like a superpower.

And then schemas evolved. Tables were renamed. Fields were renamed. And ExecuteSQL queries that had worked perfectly for months started silently returning ?. That's FileMaker's way of indicating that a calculation produced an error, without telling you what the error was or where it occurred.

This is ExecuteSQL fragility. Unlike most FileMaker functions that reference fields and tables through the relationship graph, where a renamed field triggers a broken reference warning and highlights every affected script step and calculation, ExecuteSQL queries are string literals. FileMaker has no way to know that "SELECT SUM(Amount) FROM Invoices WHERE Status = 'Open'" refers to a field called Amount in a table called Invoices. When you rename that field to InvoiceAmount, the query keeps running. It just returns ? instead of a number. No warning. No broken reference indicator. No error dialog. Just silence.

This post covers exactly why ExecuteSQL is fragile, where the fragility shows up in practice, how to diagnose it when it fails, and how to architect your use of ExecuteSQL so that it remains maintainable, debuggable, and reasonably resilient to the schema evolution that every live solution inevitably undergoes.

What ExecuteSQL Is and What It Isn't

Before addressing fragility, it's worth establishing a clear mental model of what ExecuteSQL actually is in the FileMaker context. Developers frequently misapply it because they assume it behaves the way it would in a traditional SQL environment.

ExecuteSQL( sqlQuery ; fieldSeparator ; rowSeparator { ; arguments... } ) is a FileMaker calculation function that accepts a SQL SELECT statement as a string and returns the results as a delimited string. The field separator and row separator parameters control how the results are formatted.

ExecuteSQL(
    "SELECT FirstName, LastName FROM Customers WHERE Status = 'Active'" ;
    "," ;    // field separator
    "¶"      // row separator
)

This returns something like:

Alice,Smith
Bob,Jones
Carol,Williams

Which can then be parsed with FileMaker's text functions, or more cleanly with a virtual list technique or a Let block.
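As a minimal sketch of that parsing step with native text functions (using the illustrative table and field names from above):

```
Let([
    ~rows = ExecuteSQL(
        "SELECT FirstName, LastName FROM Customers WHERE Status = 'Active'" ;
        "," ; "¶"
    );
    // Each value in ~rows is one row; split the first row on the field separator
    ~firstRow  = GetValue(~rows; 1);
    ~firstName = GetValue(Substitute(~firstRow; ","; ¶); 1);
    ~lastName  = GetValue(Substitute(~firstRow; ","; ¶); 2)
];
    ~firstName & " " & ~lastName
)
```

Note that a comma field separator breaks as soon as the data itself contains commas; a character unlikely to appear in the data, such as Char(9) (tab), is a safer choice when the result will be parsed.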

What it can do: SELECT queries with WHERE, ORDER BY, GROUP BY, and HAVING. JOINs (INNER, LEFT OUTER) across FileMaker tables. Aggregate functions like SUM, COUNT, AVG, MIN, and MAX. Subqueries. CASE expressions. A subset of string and date functions specific to FileMaker's SQL dialect. And parameterized queries using ? placeholders with the arguments parameter.

What it can't do: INSERT, UPDATE, or DELETE. ExecuteSQL is read-only. You can't modify data with it. It can't access external SQL sources directly. It doesn't use FileMaker-specific relationship graph logic. It queries base tables, not table occurrences. It can't access global fields in queries, and it can't use FileMaker portal filters or relationship predicates.

Here's the critical architectural point that most developers don't fully internalize: ExecuteSQL does not use the relationship graph. It queries base tables directly by their actual table names, and fields by their actual field names, as defined in Manage Database. Table occurrences, relationship predicates, portal filters: none of these exist in the ExecuteSQL universe.

This is both its power (you can query any combination of tables without building graph relationships) and its primary source of fragility (there's no graph mechanism to track references and alert you to broken ones).

The Anatomy of ExecuteSQL Fragility

ExecuteSQL fragility has several distinct failure modes, each with a different cause and a different diagnostic signature.

Table Name Changes

This is the most common cause of ExecuteSQL failures in mature solutions. Your base table is named Invoices. Your query reads:

SELECT ID, CustomerID, TotalAmount FROM Invoices WHERE Status = 'Open'

At some point, the solution is refactored. The Invoices table is renamed to SalesInvoices to distinguish it from a new PurchaseInvoices table. The developer renames the table in Manage Database, runs FileMaker's broken reference check, and everything looks clean.

The ExecuteSQL query still says FROM Invoices. The table no longer exists by that name. The query returns ?. No warning was given at rename time. No broken reference was flagged.

Why doesn't FileMaker catch this? The rename operation in Manage Database updates all relationship graph references, all field references in calculations that use standard graph syntax (TableOccurrence::FieldName), and all layout field references. It doesn't parse string literals in calculation fields. It has no way to know that the string "SELECT ... FROM Invoices ..." contains a reference to the table being renamed.

Field Name Changes

The same problem at the field level. A field called Amount is renamed to InvoiceAmount for clarity. Every standard field reference (INVOICES::Amount) is updated automatically by FileMaker. Every ExecuteSQL query that contains the string Amount is not.

-- Before the rename: works
SELECT SUM(Amount) FROM Invoices

-- After Amount is renamed to InvoiceAmount: same query text, now returns ?
SELECT SUM(Amount) FROM Invoices

This is particularly painful because field renames are common refactoring operations, especially in solutions where the naming convention evolves over time.

Case Sensitivity Variations

FileMaker's SQL dialect is case-insensitive for SQL keywords (SELECT, FROM, WHERE), but the behavior regarding table and field name case varies by platform and version. On some configurations, FROM Invoices and FROM invoices are treated identically. On others, they're not.

This creates a specific failure mode: a query written and tested on one platform works, gets deployed to another platform or a different OS, and silently fails because of case sensitivity differences. The safe practice is to always match the case of table and field names exactly as they appear in Manage Database.

Special Characters and Reserved Words in Names

FileMaker's SQL dialect requires that table and field names containing spaces, special characters, or SQL reserved words be quoted with double quotes:

-- Field named "First Name" (with space): must be quoted
SELECT "First Name", "Last Name" FROM Customers

-- Table named "Order" (SQL reserved word): must be quoted
SELECT * FROM "Order"

If a field is renamed from FirstName (no space) to First Name (with space), any ExecuteSQL query referencing that field now needs double quotes around the name. Without them, the query fails. FileMaker gives no warning at rename time.

Best practice: always double-quote table and field names in ExecuteSQL queries, regardless of whether the current name requires it. This makes the query resilient to future name changes that introduce spaces or reserved words.

-- Consistently quoted — resilient to naming evolution
SELECT "CustomerID", "FirstName", "LastName", "Status"
FROM "Customers"
WHERE "Status" = 'Active'

The Silent ? Return Value

All of the above failure modes share one characteristic: they return ? instead of an error message. This is FileMaker's behavior when a calculation function returns an error. It substitutes ? for the result rather than surfacing an error code.

The ? return is particularly damaging because it propagates silently through calculations that use the ExecuteSQL result. Layouts display ? or show empty fields depending on context. No error dialog appears. The developer who changed the field name may not even know the query exists. And the failure may not be noticed until a report is run or an audit catches an incorrect value, potentially weeks or months after the schema change.

Version-Specific SQL Dialect Differences

FileMaker's SQL dialect is a subset of SQL-92 with FileMaker-specific extensions and limitations. The supported function set, behavior of date/time literals, and handling of NULL values have changed across FileMaker versions.

A query that uses a date literal format that works in FileMaker 18 may behave differently in FileMaker 21. A query written against a Claris Cloud deployment may behave differently than the same query against on-premise FileMaker Server. This isn't strictly a fragility in the schema-evolution sense, but it manifests the same way: a query that worked starts returning ? or incorrect results after a platform or version change.

Why ? Is So Hard to Debug

The ? return value is FileMaker's generic calculation error indicator. When an ExecuteSQL call fails, you get ?. When a calculation divides by zero, you also get ?. When a function receives an argument of the wrong type, you get ?. The ? itself tells you nothing about which function failed, why it failed, or what the error code was.

This is compounded by how ExecuteSQL queries are typically stored: embedded as string literals inside calculation fields, often deeply nested inside Let() blocks alongside other calculation logic. When the outer calculation returns ?, it's not immediately obvious whether the failure is in the ExecuteSQL call itself, in the parsing of its output, or in some other part of the calculation.

The Silent Coercion Trap

I've found this to be one of the most dangerous aspects of ExecuteSQL fragility. A common pattern for using ExecuteSQL results involves downstream type conversion:

Let([
    ~result = ExecuteSQL("SELECT SUM(Amount) FROM Invoices WHERE Status = 'Open'"; ""; "");
    ~total = GetAsNumber(~result)
];
    ~total
)

When ExecuteSQL fails and returns ?, GetAsNumber("?") finds no digits and yields an empty value, which downstream arithmetic and formatting treat as 0. The calculation doesn't return ?. It returns an apparent zero. A zero total looks plausible. It may not be noticed. The bug is invisible at the output level.

This pattern, where the error value from ExecuteSQL is silently coerced into a plausible-looking result by downstream functions, is among the most quietly damaging failures you'll encounter. The failure is invisible not just at the error level, but at the result level.

Detecting ? Explicitly

The fix is to check for the error value explicitly before using the result:

Let([
    ~result = ExecuteSQL("SELECT SUM(Amount) FROM Invoices WHERE Status = 'Open'"; ""; "");
    ~isError = (~result = "?");
    ~total = If(~isError; -1; GetAsNumber(~result))
    // -1 as sentinel value indicates query failure, distinguishable from 0
];
    ~total
)

Or more explicitly with an error string:

Let([
    ~sql = "SELECT SUM(\"Amount\") FROM \"Invoices\" WHERE \"Status\" = 'Open'";
    ~result = ExecuteSQL(~sql; ""; "");
    ~error = Case(
        ~result = "?" or IsEmpty(~result); "ExecuteSQL failed — check table/field names";
        ""
    )
];
    If(IsEmpty(~error); GetAsNumber(~result); ~error)
)

This pattern makes the failure visible as a string error message rather than a silent numeric coercion.

The Maintenance Trap: Where Queries Live and Why It Matters

ExecuteSQL fragility is fundamentally a maintenance problem. Understanding where queries live in your solution is the first step to managing them.

Queries in Calculation Fields

The most common location for ExecuteSQL queries is inside unstored calculation field definitions. These fields recalculate on demand, which means the query runs frequently and fails silently whenever the schema it references has changed.

The problem with this approach is discoverability. There's no FileMaker feature that shows you a list of all ExecuteSQL queries in your solution. To find them, you need to search the DDR for the string "ExecuteSQL" or open every calculation field definition manually. In a solution with hundreds of calculation fields, this isn't a practical manual process.

Queries in Script Variables

ExecuteSQL queries also commonly appear inside scripts, stored in Set Variable steps:

Set Variable [$sql; Value: "SELECT CustomerID, SUM(TotalAmount) FROM Invoices GROUP BY CustomerID"]
Set Variable [$result; Value: ExecuteSQL($sql; ","; "¶")]

These are somewhat more discoverable (the script editor's search function can find ExecuteSQL within scripts) but still reference table and field names as string literals with no dependency tracking.
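A hedged sketch of the same script with an explicit failure check added (the dialog text and script structure are illustrative, not a prescribed pattern):

```
Set Variable [$sql; Value: "SELECT CustomerID, SUM(TotalAmount) FROM Invoices GROUP BY CustomerID"]
Set Variable [$result; Value: ExecuteSQL($sql; ","; "¶")]
If [$result = "?"]
    # Query failed: surface it instead of passing ? downstream
    Show Custom Dialog ["SQL Error"; "Query failed: " & $sql]
    Exit Script [Text Result: ""]
End If
```

Even this small check converts a silent ? into an immediate, attributable failure during testing.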

The DDR as Your Only Inventory Tool

The Database Design Report (DDR) is the only native FileMaker tool that can help you inventory ExecuteSQL usage. Export the DDR as XML, then search the output for the string ExecuteSQL. This gives you a list of every calculation field, script variable, and custom function that contains an ExecuteSQL call. That's your complete query inventory.

This inventory is the foundation of safe schema refactoring. Before renaming any table or field, run this search. If your query inventory shows that Amount appears in seven ExecuteSQL queries across four calculation fields and two scripts, you know exactly what needs to be updated before you rename the field.

Architectural Strategies for Managing ExecuteSQL Fragility

The goal isn't to avoid ExecuteSQL. It's genuinely useful and sometimes the right tool. The goal is to use it in ways that minimize fragility, make failures visible, and make the query inventory maintainable.

Centralize Queries in Custom Functions

The single most effective structural improvement is to move all ExecuteSQL query strings out of individual calculation fields and scripts, and into a library of custom functions. Each custom function wraps one query (or one parameterizable query pattern), gives it a meaningful name, and provides a single place to update it when the schema changes.

// Custom Function: SQL_GetOpenInvoiceTotal( customerID )
// Returns total amount of open invoices for a given customer
// Tables: Invoices | Fields: TotalAmount, CustomerID, Status

Let([
    ~sql = "
        SELECT SUM(\"TotalAmount\")
        FROM \"Invoices\"
        WHERE \"CustomerID\" = ?
        AND \"Status\" = 'Open'
    ";
    ~result = ExecuteSQL(~sql; ""; ""; customerID);
    ~isError = (~result = "?")
];
    If(~isError; ""; GetAsNumber(~result))
)

Now every calculation field or script that needs this value calls SQL_GetOpenInvoiceTotal(CUSTOMERS::ID). It's a named, documented function. When TotalAmount is renamed to InvoiceTotal, there's exactly one place to update: the custom function definition.

I've found it helpful to use a consistent prefix that immediately identifies these as SQL wrappers: SQL_GetOpenInvoiceTotal, SQL_CountActiveCustomers, SQL_GetRecentTransactions. The prefix groups SQL functions together in the custom function list, making the inventory visible at a glance. It also signals to any developer reviewing a calculation that this function contains a SQL query with schema dependencies that require care.

Use FileMaker Field Name References as Anchors

One technique for reducing fragility is to use FileMaker's GetFieldName() function to inject field names into queries programmatically rather than hardcoding them as strings.

GetFieldName(TableOccurrence::FieldName) returns the field's fully qualified name, such as "Invoices::TotalAmount", as a string. If the field is renamed, GetFieldName() returns the new name, and your query automatically uses the new name.

Let([
    ~tableField = GetFieldName(Invoices::TotalAmount);
    // Returns "Invoices::TotalAmount" — updates automatically if field is renamed
    ~fieldName = Middle(~tableField; Position(~tableField; "::"; 1; 1) + 2; 100);
    ~sql = "SELECT SUM(\"" & ~fieldName & "\") FROM \"Invoices\" WHERE \"Status\" = 'Open'";
    ~result = ExecuteSQL(~sql; ""; "")
];
    GetAsNumber(~result)
)

This creates a live dependency: because GetFieldName(Invoices::TotalAmount) is a real field reference, FileMaker updates it automatically when the field is renamed, and the query picks up the new name with no edit required. And if the field is ever deleted, the calculation shows a visible <Field Missing> reference you can find and fix, rather than a silent ? at runtime.

The pattern extends to table names as well:

Let([
    ~tableField = GetFieldName(Invoices::TotalAmount);
    ~parts = Substitute(~tableField; "::"; ¶);
    ~tableName = GetValue(~parts; 1);
    ~fieldName = GetValue(~parts; 2);
    ~sql = "SELECT SUM(\"" & ~fieldName & "\") FROM \"" & ~tableName & "\" WHERE \"Status\" = 'Open'"
];
    ExecuteSQL(~sql; ""; "")
)

Now both the table name and field name are derived from a live field reference. Renames propagate into the query automatically, and a deletion surfaces as a visible broken reference in the calculation editor.

The trade-off: this technique makes queries harder to read. The SQL string is assembled dynamically, which makes it more difficult for a developer to understand the query at a glance. For high-churn schema areas or fields that are likely to be renamed, it's worth it. For stable, well-established field names, the simpler hardcoded approach may be preferable.

Parameterized Queries

ExecuteSQL supports parameterized queries using ? placeholders, with actual values passed as additional arguments. This is both a security practice (preventing SQL injection from user-entered data) and a maintainability practice (separating the query structure from its runtime values).

// Non-parameterized — fragile for user input
ExecuteSQL(
    "SELECT ID, Name FROM Customers WHERE Status = '" & Table::UserEnteredStatus & "'" ;
    "," ; "¶"
)

// Parameterized — safe and clean
ExecuteSQL(
    "SELECT \"ID\", \"Name\" FROM \"Customers\" WHERE \"Status\" = ?" ;
    "," ; "¶" ;
    Table::UserEnteredStatus
)

Parameterized queries also handle data type conversion correctly. Dates, numbers, and text are passed in their native FileMaker types without manual conversion.

For multi-parameter queries:

ExecuteSQL(
    "SELECT \"ID\", \"InvoiceDate\", \"TotalAmount\"
     FROM \"Invoices\"
     WHERE \"CustomerID\" = ?
     AND \"InvoiceDate\" >= ?
     AND \"Status\" = ?" ;
    "," ; "¶" ;
    Customer::ID ;    // First ?
    startDate ;       // Second ?
    "Open"            // Third ?
)

A Diagnostic Wrapper Function

In practice, every solution that uses ExecuteSQL should have a diagnostic wrapper custom function that makes query failures visible during development and testing:

// Custom Function: SQL_Debug( query ; fieldSep ; rowSep )
// Wraps ExecuteSQL and returns descriptive error info on failure
// REPLACE with SQL_Exec() before production deployment

Let([
    ~result = ExecuteSQL(query; fieldSep; rowSep);
    ~isError = (~result = "?") or IsEmpty(~result)
];
    If(
        ~isError;
        "SQL ERROR | Query: " & query & " | Result: " & ~result;
        ~result
    )
)

During development, use SQL_Debug() instead of ExecuteSQL() directly. The function returns the full query string alongside the error indicator when something goes wrong, making it immediately clear which query failed and what it was trying to do. Before deploying to production, replace SQL_Debug() calls with production-appropriate error handling.

Performance: The Other Dimension of Fragility

Fragility isn't only about schema evolution. ExecuteSQL queries can also become "fragile" in a performance sense, working correctly at low data volumes during development and then degrading severely as the solution scales.

Stored vs. Unstored Calculation Fields

An ExecuteSQL query stored in an unstored calculation field runs every time that field is displayed, printed, or referenced by another calculation. In a list view showing 100 records, if each record has an unstored calculation field containing an ExecuteSQL query, that query executes 100 times as the layout renders.

Each ExecuteSQL call has overhead: the query is parsed, planned, and executed against the base table. Even a fast query adds latency when multiplied by the number of visible records. A query that takes 50ms in isolation adds 5 seconds to the rendering of a 100-row list view.

The mitigation: never place ExecuteSQL queries in unstored calculation fields displayed in list views or portals. Instead, run ExecuteSQL in scripts that execute once and store the result in a global variable or stored field. Use OnLayoutEnter triggers to pre-populate global fields before the layout renders. Cache results in stored fields where the value doesn't need to be recalculated per-record.
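As a sketch of that caching pattern (OnLayoutEnter is the real trigger name; the script contents and the $$OpenTotals variable name are illustrative), a trigger script can run the query once and park the result:

```
# OnLayoutEnter script: runs once per layout visit, not once per record
Set Variable [$$OpenTotals; Value:
    ExecuteSQL(
        "SELECT \"CustomerID\", SUM(\"TotalAmount\")
         FROM \"Invoices\"
         WHERE \"Status\" = 'Open'
         GROUP BY \"CustomerID\"" ;
        "," ; "¶"
    )
]
```

Per-record display calculations then parse $$OpenTotals for their row instead of each issuing its own query, so the layout renders with one query execution rather than one per visible record.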

The Full Table Scan Problem

FileMaker's SQL engine doesn't always use FileMaker's value indexes when executing ExecuteSQL queries. With complex WHERE clauses, JOINs, or non-equality predicates (LIKE, >, <), ExecuteSQL often performs a full table scan, reading every record in the table to evaluate the condition.

For tables with thousands of records, this is acceptable. For tables with hundreds of thousands of records, an unindexed full table scan can take seconds per query. In a solution where this runs on every layout view, the cumulative cost is significant.

What helps: filter early in the WHERE clause on indexed fields. Use equality predicates on key fields (which are typically indexed) as the first filter condition. Avoid LIKE predicates, especially with leading wildcards, against large tables; these generally force full table scans. For aggregate queries against large tables, consider maintaining pre-aggregated summary records updated on write rather than recalculated at read time.
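An illustrative contrast, using the table and field names that appear throughout this post:

```
-- Likely full scan on a large table: leading-wildcard LIKE
SELECT "ID" FROM "Invoices" WHERE "Notes" LIKE '%urgent%'

-- Narrows on an indexed key first, then applies a cheap equality test
SELECT "ID" FROM "Invoices" WHERE "CustomerID" = ? AND "Status" = 'Open'
```

The two queries answer different questions, of course; the point is the shape of the predicate, not the semantics.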

The Unstored Calculation Cascade

An ExecuteSQL query in an unstored calculation field becomes even more expensive when other unstored calculations reference it. Each reference causes the query to re-execute. A calculation dependency chain where five unstored fields each reference an ExecuteSQL result causes that query to run five times per record evaluation.

Map your calculation dependency chains when using ExecuteSQL. If multiple calculations need the same query result, have one unstored calculation field execute the query and the others reference that field.
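A hedged sketch of that consolidation (the cOpenTotal field name and its dependents are illustrative):

```
// Invoices::cOpenTotal (unstored): the ONLY field that runs the query
ExecuteSQL(
    "SELECT SUM(\"TotalAmount\") FROM \"Invoices\" WHERE \"Status\" = 'Open'" ;
    "" ; ""
)

// Other calculations reference the field, not the query:
// Invoices::cOpenTotalDisplay = "Open: " & Invoices::cOpenTotal
// Invoices::cHasOpenInvoices  = GetAsNumber(Invoices::cOpenTotal) > 0
```

This also concentrates the schema dependency: when the field or table is renamed, only cOpenTotal's definition contains a SQL string to update.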

Schema Refactoring: A Safe Process

Given all of the above, here's a safe process for any schema refactoring operation in a solution that uses ExecuteSQL.

Step 1: Build Your ExecuteSQL Inventory

Before any refactoring, export the DDR and search for ExecuteSQL. Document every occurrence: which file, which object (field, script, custom function), and which table and field names appear in the query string.

Use a spreadsheet with columns for object name, object type, the full query string, table names referenced, field names referenced, and whether the proposed change affects this query. This inventory is your change impact assessment. It should be created and reviewed before any rename operation begins.

Step 2: Check Every Query Against the Proposed Change

For each proposed rename (e.g., table Invoices to SalesInvoices), scan the table names column of your inventory for the old name. Every row that shows Invoices needs to be updated. Do the same for field renames.

This step often reveals that a "simple" rename has more impact than expected, particularly for common field names like Status, Amount, Date, or Notes that appear across many queries.

Step 3: Update Queries Before Renaming

Update every affected ExecuteSQL query to use the new name before performing the rename in Manage Database. This is counter-intuitive. The query will temporarily reference a name that doesn't exist yet. But it ensures that the moment the rename completes, all queries are immediately correct.

If you rename first, the queries break immediately and the solution is degraded until you finish updating them. If you update queries first, the rename can be completed immediately after, minimizing the broken window.

In a hosted multi-user solution, schedule schema refactoring during off-hours and complete the entire rename-plus-query-update cycle in a single maintenance window.

Step 4: Test Every Affected Query After the Rename

After the rename and query updates are complete, systematically test each affected query by viewing the layout or running the script that uses it. Confirm that the result is non-? and numerically correct. Don't assume that a syntactically updated query is correct. Verify it against known data.
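One way to spot-check a query after a rename is to compare the SQL result against an engine-level value for the same table. This minimal sketch assumes it is evaluated in the context of a layout based on the Invoices table, where Get(TotalRecordCount) reports that table's record count:

```
// Compares the SQL COUNT with FileMaker's own record count to confirm
// the query still targets the intended table after refactoring
Let([
    ~sqlCount = GetAsNumber(ExecuteSQL("SELECT COUNT(*) FROM \"Invoices\""; ""; ""));
    ~fmCount  = Get(TotalRecordCount)
];
    If(~sqlCount = ~fmCount; "OK"; "MISMATCH: SQL=" & ~sqlCount & " FM=" & ~fmCount)
)
```

If the query silently fails, ~sqlCount evaluates as empty and the mismatch is reported rather than hidden.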

Step 5: Leverage Centralization

If you're using the custom function centralization strategy, Step 3 may be as simple as updating one custom function definition. A solution with 40 ExecuteSQL queries distributed across 40 calculation fields requires 40 individual edits for a single field rename. The same solution with all queries centralized in custom functions requires at most a handful of edits, often just one.

ExecuteSQL vs. The Relationship Graph: Choosing the Right Tool

Part of managing ExecuteSQL fragility is knowing when not to use it.

Use ExecuteSQL when:

  • You need cross-table aggregates that would require an awkward multi-level relationship chain in the graph
  • You need a COUNT or SUM without a relationship, counting all records in a table that match a condition unrelated to the current context
  • You need GROUP BY. FileMaker's summary fields require specific layout contexts; ExecuteSQL GROUP BY works independently
  • You need to avoid adding TOs to the graph for a one-off lookup that doesn't warrant a permanent relationship
  • You're building a virtual list and need to query multiple tables in a single call
  • You need a set of values that can't be efficiently retrieved through portal filtering

Don't use ExecuteSQL when:

  • A simple relationship and portal would serve the purpose. The graph approach is dependency-tracked and refactoring-safe.
  • You need live, per-record display in a list view. Unstored calculation fields with ExecuteSQL in list views are a performance liability.
  • The query would be trivially expressed as a filtered portal. Portals are visible in the UI, easy to understand, and maintain themselves through field references.
  • You need write access. ExecuteSQL is read-only.
  • The schema is volatile. In early development where tables and fields change frequently, ExecuteSQL adds churn to every rename. Add it after the schema stabilizes.

A Practical ExecuteSQL Style Guide

These conventions, applied consistently, reduce fragility and improve maintainability across a solution's lifetime.

Always double-quote table and field names:

SELECT "CustomerID", "TotalAmount" FROM "Invoices" WHERE "Status" = 'Open'

Format multi-line queries for readability using Let blocks:

Let([
    ~sql = "
        SELECT
            \"CustomerID\",
            SUM(\"TotalAmount\") AS \"Total\"
        FROM
            \"Invoices\"
        WHERE
            \"Status\" = 'Open'
            AND \"InvoiceDate\" >= ?
        GROUP BY
            \"CustomerID\"
        ORDER BY
            \"Total\" DESC
    ";
    ~result = ExecuteSQL(~sql; ","; "¶"; startDate)
];
    ~result
)

Always check for ? before using the result:

Let([
    ~result = ExecuteSQL(~sql; ""; "");
    ~isError = ~result = "?"
];
    If(~isError; ""; ~result)
)

Use parameterized queries for any runtime values. Never use string concatenation:

WHERE "CustomerID" = ? AND "Status" = ?

Document the query's purpose, tables used, and fields referenced in a comment above the field or function:

// ExecuteSQL: Returns total open invoice amount for a customer
// Tables: Invoices
// Fields: CustomerID, TotalAmount, Status
// Last verified against schema: 2025-03
// Update required if any of the above table/field names change

Centralize in custom functions, not scattered across calculation fields.

Never place in an unstored calculation displayed in a list view without a caching strategy.

A Practical Checklist

Before Writing a Query

  • Is ExecuteSQL genuinely the right tool, or would a relationship and portal be simpler and safer?
  • Will this query go into a centralized custom function rather than a one-off calculation field?
  • Is the schema in this area stable, or likely to evolve?

Writing the Query

  • All table and field names are double-quoted
  • All runtime values use ? parameterization, not string concatenation
  • The query is formatted for readability in a Let block
  • The result is checked for ? before use
  • Error handling returns a meaningful sentinel value or error string, not a silently coerced zero

After Writing the Query

  • The custom function or containing field/script is documented with tables and fields referenced
  • The query is in the solution's ExecuteSQL inventory
  • The query has been tested with known data to verify correct results

During Schema Refactoring

  • DDR has been searched for ExecuteSQL before any rename operation
  • All affected queries have been identified in the inventory
  • Query updates are completed before or immediately after the rename
  • All affected queries verified as non-? after refactoring

Performance

  • No ExecuteSQL queries in unstored calculation fields displayed in list views without caching
  • Aggregate queries against large tables have been tested at production data volumes
  • Calculation dependency chains don't cause repeated ExecuteSQL execution for the same query

Wrapping Up

ExecuteSQL is a powerful function that operates outside FileMaker's normal dependency-tracking infrastructure. Because query strings are string literals, FileMaker can't parse them for field and table references, can't update them when schemas change, and can't warn you when a rename breaks them. The result is a class of silent failures that can persist undetected in production, return plausible-looking wrong values when coerced by downstream functions, and accumulate technical debt with every schema evolution that isn't carefully cross-referenced against the query inventory.

The fragility is real but manageable. The strategies that work:

Centralize all queries in named custom functions with a consistent prefix, creating a single location per query that must be updated when schemas change. This turns a 40-edit refactoring operation into a 1-edit operation.

Anchor field and table names to live FileMaker field references using GetFieldName() where the maintenance benefit justifies the verbosity, creating broken-reference warnings instead of silent ? failures.

Inventory all ExecuteSQL usage in the DDR before any schema refactoring, and treat the inventory as a mandatory step in the change process, not an optional audit.

Check for errors explicitly in every calculation that uses ExecuteSQL, making failures visible as error strings rather than silently coerced plausible values.

Choose the right tool. ExecuteSQL earns its place in aggregation, cross-table queries, and reporting scenarios where the relationship graph approach would be genuinely more complex. For per-record display, simple related field access, and volatile schemas, the graph is safer and more maintainable.

Used with these disciplines, ExecuteSQL becomes a reliable part of a solution's architecture. Used without them, it becomes the source of the most quietly damaging class of failures in FileMaker development: the ones you don't know exist until a user notices the numbers are wrong.