dsjob return code 269 in DataStage v8.5

In V8.5 the dsjob command can fail with return code 269 when waiting on long-running jobs. To restore the old dsjob behaviour:

Apply APAR JR43437 and set the following environment variable in $DSHOME/dsenv:

DSE_SLAVE_CLOSE_SOCKET_ON_EXEC=1
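A rough sketch of applying the dsenv change on the engine host once the APAR is installed; the restart commands assume a standalone engine and should only be run when no jobs or client sessions are active:

$ echo "DSE_SLAVE_CLOSE_SOCKET_ON_EXEC=1; export DSE_SLAVE_CLOSE_SOCKET_ON_EXEC" >> $DSHOME/dsenv
$ cd $DSHOME && . ./dsenv
$ bin/uv -admin -stop      # stop the DataStage engine
$ bin/uv -admin -start     # start it again so new processes pick up the variable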

Workaround – Set the DataStage Administrator client inactivity timeout to a number of seconds greater than the duration of your longest-running job.

Source: http://www-01.ibm.com/support/docview.wss?uid=swg21621459

Failed to invoke GenRuntime using phantom process helper

When
When compiling a job

Causes and resolutions

A) The server's /tmp space is full. Clean up space in /tmp.
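For example, these standard OS commands (not DataStage-specific) show how full /tmp is and what is consuming it:

$ df -h /tmp        # free space on the /tmp filesystem
$ du -sh /tmp/*     # size of each item in /tmp, to identify cleanup candidates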
B) The job's status is incorrect. In DataStage Director: Job -> Clear Status File.
C) Format problem with the project's uvodbc.config file. Confirm that uvodbc.config has the following entry/format:
[ODBC DATA SOURCES]
<localuv>
DBMSTYPE = UNIVERSE
network = TCP/IP
service = uvserver
host = 127.0.0.1

D) Corrupted DS_STAGETYPES file. Connect to the DataStage server, change directory to DSEngine, and source dsenv ( . ./dsenv):
$ bin/uvsh
>LOGTO projectname (case sensitive)
Set a file pointer RTEMP to the template DS_STAGETYPES file
>SETFILE /Template/DS_STAGETYPES RTEMP
Check that all of the entries in the template DS_STAGETYPES file are present in the project’s DS_STAGETYPES file
>SELECT RTEMP
* this will return a count of records found in the template DS_STAGETYPES file
>COUNT DS_STAGETYPES
* this will return a count of records found in the project’s DS_STAGETYPES file
* These numbers should be the same
If the numbers differ and records are missing from the project's DS_STAGETYPES file, copy them across:
>COPY FROM RTEMP TO DS_STAGETYPES ALL OVERWRITING
Exit the UniVerse shell:
>Q
E) Internal locks. Connect to the DataStage server, change directory to DSEngine, and source dsenv ( . ./dsenv).
Change directory to the project directory that contains the job generating the error.
Execute the following, replacing <jobname> with the actual job name:
$ $DSHOME/bin/uvsh "DS.PLADMIN.CMD NOPROMPT CLEAR LOCKS <jobname>"
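Put together, the sequence looks roughly like this; MyProject and MyJob are placeholder names, and the Projects path assumes a default install layout:

$ cd $DSHOME && . ./dsenv
$ cd ../Projects/MyProject                                        # the project that owns the failing job
$ $DSHOME/bin/uvsh "DS.PLADMIN.CMD NOPROMPT CLEAR LOCKS MyJob"    # substitute the real job name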

Still unresolved? Try the following:

Turn on server side tracing, attempt to compile the problem job, turn off server side tracing, and gather the tracing information.

  1. Turn on server side tracing by connecting to the server with the DataStage Administrator client.
  2. Highlight the project that has the problem job.
  3. Click on the Properties button.
  4. In the Properties window, click the Tracing tab
  5. Click on the Enabled check box
  6. Click the OK button
  7. With a new DataStage Designer connection, attempt to compile the job.
  8. With the DataStage Administrator client, go back into the project's properties.
  9. Select the Tracing tab
  10. Uncheck the Enabled check box
  11. For each DSRTRACE entry do the following:
    a) Highlight the entry and click View
    b) Highlight the contents of the display and click Copy
    c) Paste the copied information into Notepad
  12. Open a PMR with support and supply the Notepad information.

Host name invalid

Error code 81011 means that the host name is not valid or the server is not responding.
Do an exact search on 81011 and you will find many discussions about it.

  • Verify that DataStage is up and running. You can either do a command-line ping or go to the Control Panel and check whether the services are running (see the sketch after this list).
  • Try specifying the IP address rather than the host name.
  • Check whether any firewalls are blocking the connection.
  • Try pinging localhost from your command window: ping localhost
  • If the server and client are on the same machine, try 127.0.0.1 or 'localhost' as the host name.
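A few quick connectivity checks, shown with UNIX/Linux syntax; 31538 is the default dsrpc port and may differ on your install, and 192.0.2.10 is a placeholder for your DataStage server's IP address:

$ ps -ef | grep dsrpcd         # on the server: the DataStage RPC daemon should be running
$ netstat -an | grep 31538     # on the server: confirm the dsrpc port is listening
$ ping -c 4 192.0.2.10         # from the client: basic reachability test
$ telnet 192.0.2.10 31538      # from the client: confirm the port is not blocked by a firewall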

Failed job – **** Parallel startup failed ****

Problem (Abstract)

A parallel DataStage job with configuration file setup to run multiple nodes on a single server fails with error:
Message: main_program: **** Parallel startup failed ****

Resolving the problem

The full text for this “parallel startup failed” error provides some additional information about possible causes:
    This is usually due to a configuration error, such as not having the Orchestrate install directory properly mounted on all nodes, rsh permissions not correctly set (via /etc/hosts.equiv or .rhosts), or running from a directory that is not mounted on all nodes. Look for error messages in the preceding output.

For the situation where a site is attempting to run multiple nodes on multiple server machines, the above statement is correct. More information on setting up ssh/rsh and parallel processing can be found in the following topics: 
Configuring remote and secure shells
Configuring a parallel processing environment

However, in the case where all nodes are running on a single server machine, the “Parallel startup failed” message is usually an indication that the fastname defined in the configuration file does not match the name output by the server’s “hostname” command. 

In a typical node configuration file, the server where each node runs is indicated by the fastname, e.g. /opt/IBM/InformationServer/Server/Configurations/default.apt:

{
  node "node1"
  {
    fastname "server1"
    pools ""
    resource disk "/opt/resource/node1/Datasets" {pools ""}
    resource scratchdisk "/opt/resource/node1/Scratch" {pools ""}
  }
  node "node2"
  {
    fastname "server1"
    pools ""
    resource disk "/opt/resource/node2/Datasets" {pools ""}
    resource scratchdisk "/opt/resource/node2/Scratch" {pools ""}
  }
}

Log in to the DataStage server machine and, at the operating system command prompt, enter the command:
hostname

If the hostname output EXACTLY matches the fastname defined for local nodes, then the job will run correctly on that server. However, if the “hostname” command outputs the hostname in a different format (such as with domain name appended) then the names defined for fastname will be considered remote nodes and a failed attempt will be made to access the node via rsh/ssh.
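A quick way to compare the two values on the server; the configuration file path below is the default shown above and may differ on your system (or be overridden by APT_CONFIG_FILE):

$ hostname                                                                      # the system host name as the engine sees it
$ grep fastname /opt/IBM/InformationServer/Server/Configurations/default.apt   # the fastname entries to compare against it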

Using the above example, if the hostname output was server1.mydomain.com then prior to the “Parallel startup failed” error in job log you will likely see the following error:

    Message: main_program: Accept timed out retries = 4
    server1: Connection refused

This problem occurs even if your /etc/hosts file maps server1 and server1.mydomain.com to the same address: the cause is not an inability to resolve either name, but that the fastname in the node configuration file does not exactly match the system hostname (or the value of APT_PM_CONDUCTOR_NODE, if defined).

You have several options to deal with this situation:

  • Change the fastname for the nodes in the configuration file to exactly match the output of the hostname command.
  • Set APT_PM_CONDUCTOR_NODE to the same value as fastname. This would need to be defined either in every project or in every job (see the sketch after this list).
  • Do NOT change the hostname of the server to match the fastname. Information Server / DataStage stores some information based on the current hostname; if you change the hostname after installing Information Server / DataStage, you will need to contact the support team for additional instructions to allow DataStage to work correctly with the new hostname.
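For the second option, one way to define the variable at project level from the engine host is the dsadmin command line. This is only a sketch, so run dsadmin with no arguments to confirm the exact usage on your release, and substitute your own credentials, project name, and fastname value:

$ cd $DSHOME && . ./dsenv
$ bin/dsadmin -domain services_host:9080 -user dsadm -password secret -server engine_host \
    -envadd APT_PM_CONDUCTOR_NODE -type STRING -prompt "Conductor node" -value server1 MyProject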