‘mapping file error’ - Lookup Stage

Join versus Lookup

The Lookup stage memory-maps its reference file. This means that you must have enough system memory available to hold the entire contents of the file, AND you must have enough disk space on the resource disk (defined in the APT_CONFIG_FILE) to shadow the memory-mapped file on disk.
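For context, here is a minimal sketch of a one-node APT configuration file; the host name and paths are illustrative, not taken from any real job. The resource disk and resource scratchdisk entries point at the directories that must have enough free space to back the lookup's memory-mapped file:

    {
      node "node1"
      {
        fastname "etlserver"
        pools ""
        resource disk "/data/datasets" {pools ""}
        resource scratchdisk "/data/scratch" {pools ""}
      }
    }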

‘Each physical process can address only up to 4 GB of memory because it is a 32-bit application. The Windows version of the InfoSphere DataStage Parallel Engine is available only with 32-bit pointers.’

It is not that the system lacks memory, but that loading the entire reference file into a single process's address space hits that 4 GB limit. One workaround is to use a Join stage instead of the Lookup, because the Join stage processes inputs that are sorted and partitioned on the join keys as streams rather than holding the reference data in memory.

Another option is to change the job so that both the reference input and the primary input to the lookup are re-partitioned to match on the keys (for example, hash partitioning on the lookup keys). Because the lookup runs four-way parallel and the data has been explicitly partitioned, the lookup disables memory sharing, and the per-process memory requirement on the reference input is reduced because each process holds only its own share of the data. This enables the job to complete; a rough illustration of the arithmetic follows.
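As a purely illustrative example (the figures are hypothetical): with a 3 GB reference file and the default behaviour, the whole 3 GB is built into a single memory-mapped lookup table, which together with transport buffers and the rest of the process footprint can exceed the 4 GB address space of a 32-bit process. With both inputs explicitly partitioned on the lookup keys, memory sharing is disabled and each of the four processes maps only roughly 3 GB / 4, or about 750 MB, of reference data.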

 

Fatal Error: APT_Communicator::shmemInitBuffers: createMapFile (/tmp) failed: Not enough space on node.

This message can also be caused by a system-wide limit on the number of mmap’ed shared memory segments. The issue often occurs when the value of APT_DEFAULT_TRANSPORT_BLOCK_SIZE is set too high. Check the environment variable APT_DEFAULT_TRANSPORT_BLOCK_SIZE: its default value is 131072 (128 KB), and the maximum value is 1048576 (1 MB). See page 79 of the Parallel Job Advanced Developer Guide for additional information about this environment variable.

Resolving the problem
To resolve this issue, ensure that there is adequate free space in the assigned TMPDIR (temporary directory), or verify the setting of the environment variable APT_DEFAULT_TRANSPORT_BLOCK_SIZE.
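As a quick sketch of what that check might look like from the engine user's shell (the directory path is illustrative, and both variables can also be set at the project or job level through the DataStage Administrator):

    # How much free space does the temporary directory have?
    df -k /tmp

    # What transport block size is the engine currently picking up?
    echo $APT_DEFAULT_TRANSPORT_BLOCK_SIZE

    # Point TMPDIR at a file system with more free space (illustrative path)
    export TMPDIR=/bigdisk/tmp

    # Keep the transport block size within the documented range (default 131072, max 1048576)
    export APT_DEFAULT_TRANSPORT_BLOCK_SIZE=131072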
