One of the most common questions I get about Map/Reduce scripts is what happens when you skip the map stage entirely. The answer is straightforward, but the details matter if you want to avoid subtle bugs.
When there is no map function defined, NetSuite passes the raw input data from getInputData directly to reduce. But the structure of what arrives isn't what most developers expect on the first pass.
What Happens Internally
Without a map function, NetSuite uses the key/value pairs from getInputData directly as input to reduce. Specifically:
- Each key passed to `reduce` is the string representation of the input index: `"0"`, `"1"`, `"2"`, and so on.
- Each value is an array containing a single element: the JSON-serialized string of the original object.
NetSuite essentially does an implicit passthrough. It's the same contract as if map had run and emitted each result as-is with the index as the key.
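For intuition, that passthrough can be simulated in plain Node. The `simulateImplicitMap` helper below is hypothetical, written for illustration only; it is not a NetSuite API, but it produces the same key/values shape reduce receives:

```javascript
// Sketch: the shape reduce receives when map is omitted.
// simulateImplicitMap is a hypothetical helper, not part of SuiteScript.
const simulateImplicitMap = (inputArray) =>
    inputArray.map((element, index) => ({
        key: String(index),               // positional index, as a string
        values: [JSON.stringify(element)] // one serialized value per key
    }));

const reduceInputs = simulateImplicitMap([
    { id: 1, name: 'Alpha' },
    { id: 2, name: 'Beta' }
]);
// reduceInputs[0].key is '0'; reduceInputs[0].values[0] is a JSON string
```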
A Working Example
```javascript
/**
 * @NApiVersion 2.1
 * @NScriptType MapReduceScript
 */
define(['N/log'], (log) => {
    const getInputData = (inputContext) => {
        return [
            { id: 1, name: 'Alpha' },
            { id: 2, name: 'Beta' },
            { id: 3, name: 'Gamma' }
        ];
    };

    // No map function defined

    const reduce = (context) => {
        try {
            // context.key is the string index: "0", "1", "2"
            const key = context.key;

            // context.values is an array of strings — always an array,
            // even though each element was a single object from getInputData
            const values = context.values;

            // Each element in values is a JSON-serialized string — must be parsed
            const record = JSON.parse(values[0]);

            log.debug({
                title: `Reduce key: ${key}`,
                details: `ID: ${record.id}, Name: ${record.name}`
            });
        } catch (e) {
            log.error({ title: e.name, details: `${e.message} — ${e.stack}` });
        }
    };

    const summarize = (summary) => {
        summary.reduceSummary.errors.iterator().each((key, error) => {
            log.error({ title: `Reduce error for key: ${key}`, details: error });
            return true;
        });
    };

    return { getInputData, reduce, summarize };
});
```
Key Behaviors
Five things to internalize:
- `context.key` is always a string. Without a map stage, it's the positional index of the result (`"0"`, `"1"`, `"2"`), not a meaningful business key.
- `context.values` is always an array of strings, even when only one value maps to a key. Each string is a JSON-serialized representation of the original object.
- Parsing is required. You must call `JSON.parse(context.values[0])` to get the original object back. It arrives as a string, not an object.
- One value per key. Without a map stage, each key has exactly one value in the array, since no grouping has occurred.
- Grouping doesn't happen. Grouping multiple values under a single key only occurs when `map` explicitly emits the same key for multiple inputs. Without map, every input gets its own unique index key.
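The string keys are worth a second look, because strict comparisons against numbers fail silently. A quick illustration:

```javascript
// context.key arrives as a string when map is skipped.
const key = '0'; // what NetSuite passes for the first input

// A strict comparison against a number is always false:
const looseMatch = key === 0;        // false
const safeMatch = Number(key) === 0; // true after explicit conversion
```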
With vs. Without a Map Stage
The difference is worth seeing side by side.
With a map stage:
- `context.key` is whatever key `map` emitted via `context.write(key, value)`.
- `context.values` contains all values emitted under that key; it can hold multiple entries.
- Multiple inputs can share a key, arriving together in one `reduce` call.
- Primary use case: aggregating related records (all lines for an order, all transactions for a customer).
Without a map stage:
- `context.key` is the positional string index (`"0"`, `"1"`, `"2"`).
- `context.values` contains a single serialized element.
- Each input gets its own `reduce` call; no grouping.
- Primary use case: processing each input independently in parallel.
When Is Skipping Map Appropriate?
Omitting map is valid and common when:
- Each input record needs to be processed independently with no grouping required.
- You want to use Map/Reduce purely for its parallel processing and governance management benefits.
- Your `getInputData` already returns data in the exact shape `reduce` needs.
It's not appropriate when you need to group multiple inputs under a shared key before processing. That grouping logic must live in map.
A Note on Governance
Without a map stage, each item from getInputData triggers one reduce invocation. At high volumes, make sure your input result set size is intentional. Ten thousand input items means ten thousand separate reduce calls, each consuming its own governance budget.
This is by design and is one of Map/Reduce's strengths. But size the input accordingly. If you're pulling more data than you need, you're burning governance on records that don't need processing.
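As a sketch of that sizing advice: filtering inside getInputData is usually the cheapest place to cut volume, since every element you return becomes a scheduled reduce call. The data and the `needsProcessing` flag below are hypothetical stand-ins for whatever criteria your script uses:

```javascript
// Hypothetical source data; in practice this would come from a search
// or query inside getInputData.
const allRecords = [
    { id: 1, needsProcessing: true },
    { id: 2, needsProcessing: false },
    { id: 3, needsProcessing: true }
];

// Filtering here costs one pass over the data; skipping it costs one
// full reduce invocation (and its governance usage) per unneeded record.
const getInputData = () => allRecords.filter((r) => r.needsProcessing);
// getInputData() returns 2 items, so only 2 reduce calls are scheduled
```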