
Monday, June 17, 2013

HTTPS Client using X.509 Certificate

Hypertext Transfer Protocol Secure (HTTPS) is a protocol for secure communication, widely used on the internet to secure traffic to public websites. Technically it is not a protocol in itself; rather, it is the result of layering HTTP over the SSL/TLS protocol to add the security capabilities of SSL/TLS to standard HTTP. More formally, it is HTTP over TLS as defined in the IETF's RFC 2818.

All communication between the client and the server is encrypted using public key certificates. On a connection request from the client, the HTTPS server presents a certificate containing its public key to the browser, and the subsequent communication between the server and the client is encrypted using the public key in that certificate.

Public-key infrastructure (PKI) is especially suitable for the web because it still provides some protection even if only one side of the communication link is authenticated. As long as the server is authenticated, any client can download the public key certificate from the server and use the public key to encrypt data before sending it over. That, however, means it is the responsibility of the client to examine the public key certificate and make sure it is valid for that server, i.e., that the server actually is who it claims to be.

Normally every certificate is signed by some authority to mark that it is trusted and can be used for secure communication. Most websites have their certificates signed by well-known certificate authorities (e.g. VeriSign, Microsoft, etc.), and most browsers ship with a list of these well-known certificate authorities. When a server presents a certificate signed by one of the certificate authorities in the browser's list, the browser uses it to establish a secure connection with the server. When the signatory is not in the browser's list, the browser warns the user that the certificate is not signed by a trusted authority and lets the user decide whether or not to trust the server.

Most browsers come equipped with this functionality. When you connect to a website using an HTTPS URL, the browser takes care of the details for you. If the certificate presented by the server is signed by one of the well-known Certificate Authorities, the browser simply uses that certificate; in the case of a self-signed certificate, the browser asks the user to validate and accept the certificate before establishing the data communication.

The bottom line is that the certificate must be marked as trusted before it can be used to establish a connection with the server.

  • Using Certificate to open HTTPS Connection

Java uses a keystore to achieve this functionality. Any certificate that exists in the keystore is considered trusted and can be used to establish a connection with the server. To create an HTTPS connection programmatically, we first need to add the certificate to the keystore. A very simple way to get the certificate is to point your browser to the https:// URL, view the certificate, and then download it.

I used Firefox to get the certificate.

  • Go to the https URL. Because the certificate is self-signed, Firefox will display a warning that the connection is untrusted.
  • Go to I Understand the Risk and click Add Exception.
  • That will open the Add Exception dialog; click the View button to see the certificate.
  • This opens the certificate viewer. Click the Details tab to see the details of the certificate.
  • On this tab you have an option to Export the certificate.

When you click Export, a save file dialogue will open. Save the certificate to an appropriate location on your disk. The certificate will be saved with either a .crt or a .pem extension.

Once you have the certificate, the next step is to add it to your keystore. You can do this using Java's keytool command as follows.

keytool -import -keystore yourKeyStore.jks -file YourCert.crt
This command imports the certificate into your keystore. If the keystore does not exist, it will be created for you. For an existing keystore, keytool will ask for the keystore password; for a new keystore, it will ask for a password, ask again to confirm it, and on confirmation create the new keystore with the provided password.

At this point it will show you the details of the certificate and ask whether you trust it.


C:\temp\certs>keytool -import -keystore store.jks -file cert.crt
Enter keystore password:
Re-enter new password:

Owner: CN=www.certificateserverurl.com, OU=Sun GlassFish Enterprise Server, 
 O=Sun Microsystems, L=Santa Clara, ST=California, C=US
Issuer: CN=www.certificateserverurl.com, OU=Sun GlassFish Enterprise Server, 
 O=Sun Microsystems, L=Santa Clara, ST=California, C=US
Serial number: 5013bd9b
Valid from: Sat Jul 28 11:23:23 BST 2012 until: Tue Jul 26 11:23:23 BST 2022
Certificate fingerprints:
         MD5:  90:03:8C:BA:32:1F:AD:96:40:CE:49:1D:A3:A3:F6:72
         SHA1: 8D:AE:25:8F:9C:3A:70:81:55:03:5E:B7:92:D2:0A:E6:CB:99:A3:59
         Signature algorithm name: SHA1withRSA
         Version: 3

Extensions:

#1: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
0000: F9 A7 36 D2 9B 4D 10 68   0F 22 2F 31 16 39 59 1A  ..6..M.h."/1.9Y.
0010: 65 70 F6 56                                        ep.V
]
]

Trust this certificate? [no]:  y
Certificate was added to keystore

C:\temp\certs>

Enter "y" on this prompt to add the certificate to your keystore.

Once you have the certificate added to the truststore, all you need to do is tell your Java program to use that truststore before opening an HTTPS connection. This is how you do it:


 System.setProperty("javax.net.ssl.trustStore", keystore);
 System.setProperty("javax.net.ssl.keyStorePassword", "changeit");
 
 //con = openHttpsConnection(queryString);
 URL url = new URL(queryString);
 con = (HttpsURLConnection)url.openConnection(); 
 
Java keeps all trusted certificates in a keystore. Here you are specifying the keystore and its password so that Java can search for an appropriate certificate in this store. When a connection is made, the server presents its certificate to the Java client, and the client looks in this keystore to see whether that certificate has already been added as trusted. If the certificate is found, an HTTPS connection can be established.

Suppose you have a user search service running on an HTTPS server. Below is the code you would use to call that secure service to search for registered users on your server.


package user;

import java.net.URL;

import javax.net.ssl.HttpsURLConnection;

import user.xml.UserXMLHandler;
import user.xml.generated.Users;

public class TestUserServlet {
 
 private static final String strUrl = "https://www.yourserver.co.uk/users/";
 private static final String keystore = "C:\\temp\\certs\\store.jks";
 
 private static final String find = "profile?";
 
 private static final String FNAM = "firstName";
 private static final String LNAM = "lastName";
 
 public static void main(String s[]) throws Exception{
  
  Users users = null;
  
  System.out.println(strUrl);
  
  TestUserServlet test = new TestUserServlet();
  
  users = test.doFindUser("raza", "abidi");
  System.out.println(users.toString());
  
 }
 
 private Users doFindUser(String firstName, String lastName) throws Exception{
  
  String queryString = strUrl + find + FNAM + "=" + firstName + "&" + LNAM + "=" + lastName;
  
  HttpsURLConnection con = null;
  
  try{
   
   // Set the truststore to use
   System.setProperty("javax.net.ssl.trustStore", keystore);
   System.setProperty("javax.net.ssl.trustStorePassword", "changeit");
   
   URL url = new URL(queryString);
   con = (HttpsURLConnection)url.openConnection();
  }catch(Exception e){
   e.printStackTrace();
   if(con!=null)
    con.disconnect();
   
   throw e;
  }
  
  if(con==null){
   throw new Exception ("Failed to open HTTPS Conenction");
  }
  
  Users users = null;
  try{
   users = new UserXMLHandler().getUsersXML(con.getInputStream());
  }finally{
   con.disconnect();
  }
  return users;
 }
}
All we are doing here is telling Java which keystore to use to find a certificate before trying to open a connection to the secure server. Java will look in the keystore and use the most appropriate certificate from the list it contains. The certificate already carries information about the issuing server, and Java uses that information to work out which certificate to use to establish a secure connection.
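As a side note, if you would rather not change JVM-wide system properties, the same keystore can be loaded into an SSLContext and applied to a single connection. The sketch below is a minimal, hedged example of that approach; the class and method names are my own and not part of the original code.


import java.io.FileInputStream;
import java.net.URL;
import java.security.KeyStore;

import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class TrustStoreConnector {

 public static HttpsURLConnection open(String queryString, String trustStorePath, char[] password) throws Exception {

  // Load the keystore that holds the imported server certificate
  KeyStore trustStore = KeyStore.getInstance("JKS");
  FileInputStream in = new FileInputStream(trustStorePath);
  try {
   trustStore.load(in, password);
  } finally {
   in.close();
  }

  // Build trust managers from that keystore only
  TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
  tmf.init(trustStore);

  // Initialise an SSLContext with those trust managers
  SSLContext ctx = SSLContext.getInstance("TLS");
  ctx.init(null, tmf.getTrustManagers(), null);

  // Apply the socket factory to this connection only, leaving the JVM defaults untouched
  HttpsURLConnection con = (HttpsURLConnection) new URL(queryString).openConnection();
  con.setSSLSocketFactory(ctx.getSocketFactory());
  return con;
 }
}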

  • HTTPS Connection without Certificate

I would strongly advise against using this approach when connecting to a secure server. Getting a certificate from the secure server and installing it in the JVM keystore is a pretty trivial task, and there is no need to bypass that process; otherwise, what is the point of implementing a secure server connection? However, sometimes, most probably in your test environments, you may need to connect to the HTTPS server without using the certificate. On those occasions the workaround is to provide your own implementation of the TrustManager and override the security methods to trust any given certificate.

Even in this scenario, all communication between server and client will still be encrypted; the only problem is that you blindly trust every certificate as it is, which leaves the system vulnerable to security breaches. That is why you should avoid bypassing the keystore approach. However, for the odd occasions when it is needed, here is how you do it.


package user;

import java.net.URL;
import java.security.cert.CertificateException;

import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSession;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

import user.xml.UserXMLHandler;
import user.xml.generated.Users;

public class TestUserServlet {

 private static final String strUrl = "https://www.yourserver.co.uk/users/";

 private static final String find = "profile?";

 private static final String FNAM = "firstName";
 private static final String LNAM = "lastName";


 public static void main(String s[]) throws Exception{

  Users users = null;

  System.out.println(strUrl);

  TestUserServlet test = new TestUserServlet();

  users = test.doFindUser("raza", "abidi");
  System.out.println(users.toString());

 }

 private Users doFindUser(String firstName, String lastName) throws Exception{

  String queryString = strUrl + find + FNAM + "=" + firstName + "&" + LNAM + "=" + lastName;

  HttpsURLConnection con = null;

  try{
   con = openHttpsConnection(queryString);
  }catch(Exception e){
   e.printStackTrace();
   if(con!=null)
    con.disconnect();

   throw e;
  }

  if(con==null){
   throw new Exception ("Failed to open HTTPS Conenction");
  }

  Users users = null;
  try{
   users = new UserXMLHandler().getUsersXML(con.getInputStream());
  }finally{
   con.disconnect();
  }
  return users;
 }

 // Creating our own implementation of an all trusting trust manager
 private HttpsURLConnection openHttpsConnection(String queryString) throws Exception{

  // Create a trust manager that does not validate certificate chains
  TrustManager[] trustAllCerts = new TrustManager[] { 
    new X509TrustManager() {

     @Override
     public void checkClientTrusted(java.security.cert.X509Certificate[] arg0,
       String arg1) throws CertificateException {
      // Accept every client certificate without validation

     }

     @Override
     public void checkServerTrusted(java.security.cert.X509Certificate[] arg0,
       String arg1) throws CertificateException {
      // Accept every server certificate without validation

     }

     @Override
     public java.security.cert.X509Certificate[] getAcceptedIssuers() {
      // Returning null means no specific accepted issuers are declared
      return null;
     }

    } 
  };

  // Install the all-trusting trust manager
  final SSLContext sc = SSLContext.getInstance("SSL");
  sc.init(null, trustAllCerts, new java.security.SecureRandom());
  HttpsURLConnection.setDefaultSSLSocketFactory(sc.getSocketFactory());

  // Create all-trusting host name verifier
  HostnameVerifier allHostsValid = new HostnameVerifier() {
   public boolean verify(String hostname, SSLSession session) {
    return true;
   }
  };

  // Install the all-trusting host verifier
  HttpsURLConnection.setDefaultHostnameVerifier(allHostsValid);

  URL url = new URL(queryString);
  HttpsURLConnection con = (HttpsURLConnection)url.openConnection();

  return con;
 }
}

Here we are providing our own implementation of the TrustManager instead of using the trusted certificates from our keystore. All the magic happens in the openHttpsConnection() method, which takes the URL to connect to and returns an HTTPS connection.

Notice the lines where we call the static methods setDefaultSSLSocketFactory and setDefaultHostnameVerifier of the HttpsURLConnection class. Here we are installing our own HostnameVerifier and an SSLSocketFactory built from our own SSLContext.

Notice the verify() method of the HostnameVerifier implementation. This method simply returns true for every host, so any host becomes valid for the HTTPS connection.

Also notice how the SSLContext is initialised. Here we are passing our custom implementation of the X509TrustManager interface, in which all the security methods are overridden with empty implementations; essentially doing nothing to prevent a security breach.

That means that any certificate from the HTTPS server is accepted as trusted and used for encrypting communication between the client and the server. This allows you to connect to an HTTPS server without installing the certificate in the JVM before making any connections.

Thursday, May 16, 2013

Implementing Observer Pattern

The Observer Pattern is a design pattern in which an object (called the Subject) is designed in such a way that other objects can be added as Observers of the Subject, and any change in the state of the Subject is notified to the Observers. This is very useful when you want to implement some event-driven logic, such as setting a stock threshold and notifying the purchasing application if stock goes below that threshold.

In this way we completely separate the concerns of different areas of the application and can implement functions that are fired based on the events happening in the application. This programming model is called Event Driven Programming, where you implement the logic based on the events happening in the application. Without going into the details of Event Driven Programming, let's see how this is implemented in Java.

Java provides a class Observable and an interface Observer in the java.util package to implement the Observer pattern. This is what you do:

  • To Create Observable (the Subject), extend the Observable class
  • To create Observers, implement the Observer interface and override the update(Observable o, Object arg) method
After that, all you need to do is register Observer instances on your Observable using addObserver(). Let's see how it is done; we start by creating our Subject.

package observertest;

import java.util.Observable;

public class MyObservable extends Observable {
 
 public int sum(int a, int b){
  
  // If not set then observers will think nothing is changed
  // hence no action required. 
  setChanged();
  
  // Perform the business logic
  int c = a+b;
  
  System.out.println("Notifying Observers");
  // A call to notifyObservers() also clears the changed flag
  notifyObservers(new Integer(c)); 
  
  return c;
 }

}

This is a very simple Observable with only one business method, sum(int a, int b). The things to note here are the calls to setChanged() and notifyObservers(). The setChanged() method sets a marker that means something has changed in your Observable object. If the marker is not set, a call to notifyObservers() will have no effect. This is where you control whether or not to notify the observers.

The other interesting bit is the notifyObservers() method. A call to this method fires a notification to ALL the observers observing this object. Note that it takes an Object as a parameter, so we have to convert any primitive types to their object wrappers. This is Java's mechanism for passing parameters to the Observers if we want to. The same parameter will be passed to all the observers listening for changes to this object.

Now let’s see the Observer. To make the point clear, I created two observers on the same Subject.

package observertest;

import java.util.Observable;
import java.util.Observer;

public class MyFirstObserver implements Observer {

 @Override
 public void update(Observable o, Object arg) {
  
  System.out.println("Second Observer Notified:" + o + "  :  " + arg);

 }

}

And the second Observer for our example

package observertest;

import java.util.Observable;
import java.util.Observer;

public class MySecondObserver implements Observer {

 @Override
 public void update(Observable o, Object arg) {
  
  System.out.println("Second Observer Notified:" + o + "  :  " + arg);

 }

}

As you can see, all we are doing here is implementing the Observer interface and overriding the update() method. A call to notifyObservers() from the Observable eventually results in a call to the update() method. The first parameter is the Observable under observation, and the second parameter is the value you passed to the notifyObservers() method from your Observable.

Let's see this in action. I am creating a main class where I will create an instance of the Observable and add these Observers to it. Here we go.

package observertest;

public class MainClass {

 public static void main(String[] args) {
  
  int a = 3;
  int b = 4;
  
  System.out.println("Starting");
  MyObservable ob = new MyObservable();
  
  // Add observers
  System.out.println("Adding observers");
  ob.addObserver(new MyFirstObserver());
  ob.addObserver(new MySecondObserver());
  
  System.out.println("Executing Sum :  " + a + " + " + b);
  ob.sum(a, b);
  System.out.println("Finished");
  
 }
}

Here we are simply creating an instance of the Observable class and adding the two Observers to it. After that we call our business method, and this is where the Observers are notified. The Observable neither knows nor cares how many Observers are listening for changes to its state; all it does is notify everyone that something has changed and send some information about the change.

Similarly, the Observers are completely independent of the Observable. They are interested only in the Subject's changes of state, and what action they take when they are notified of a change is completely up to the Observers.

This is the output when we run the MainClass

Starting
Adding observers
Executing Sum :  3 + 4
Notifying Observers
Second Observer Notified:observertest.MyObservable@1b499616  :  7
First Observer Notified: observertest.MyObservable@1b499616  :  7
Finished

Note that your output may be different as we have no control over which observer will be notified first. However, all of the Observers are going to receive the same data, the Subject, and some details passed by the Subject, an Integer in our case.

For more details, look at the JavaDocs for Observer and Observable. Some study of the Event Driven Programming paradigm will also give you better insight into the pros and cons of this programming model.

Monday, May 13, 2013

SSH - SFTP Communication

FTP is the most popular protocol for transferring files over a network. The protocol has been around since the very early days of computer networks and is still widely used. FTP provides functions to upload, download and delete files, create and delete directories, and read the contents of a directory.

There are libraries for almost every programming language that provide a set of APIs for working with FTP commands programmatically. In Java there are several open source libraries that can be used; the most popular is the Apache Commons Net library, which provides easy-to-use APIs for FTP file transfer. You can go to the Commons Net site for more details, including several working sample applications that you can use.
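For comparison, a plain FTP upload with Commons Net might look something like the following sketch. This is not from the original post; the host, credentials and file names are placeholders, and it assumes the commons-net jar is on the classpath.


import java.io.FileInputStream;

import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;

public class SimpleFtpUpload {

 public static void main(String[] args) throws Exception {

  FTPClient ftp = new FTPClient();
  try {
   // Connect and log in to the FTP server (placeholder host and credentials)
   ftp.connect("ftp.example.com");
   if (!ftp.login("user_name", "password")) {
    throw new Exception("FTP login failed");
   }

   // Binary mode avoids corrupting non-text files
   ftp.setFileType(FTP.BINARY_FILE_TYPE);

   // Upload a local file to the current remote directory
   FileInputStream in = new FileInputStream("C:\\temp\\report.txt");
   try {
    ftp.storeFile("report.txt", in);
   } finally {
    in.close();
   }

   ftp.logout();
  } finally {
   if (ftp.isConnected()) {
    ftp.disconnect();
   }
  }
 }
}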

SFTP, however, is a completely different story. To begin with, it has nothing to do with the FTP protocol, and contrary to common perception, architecturally they are completely different. The SFTP abbreviation is often mistaken for "Secure FTP", which is not correct. Another misconception is that SFTP is some kind of FTP over SSL or SSH. In fact SFTP is an abbreviation of "SSH File Transfer Protocol". It is not FTP over SSL and not FTP over SSH; SFTP is an extension of the Secure Shell (SSH) protocol that provides file transfer capabilities. See the SFTP Wiki page for more details.

SFTP works over a secure channel, i.e. SSH. First you connect to the secure channel; as soon as the connection is established, the server presents a public key to the client, and any subsequent communication between the server and the client is encrypted using the public key presented by the server. After establishing the connection, you then need to authenticate using any supported authentication mechanism, e.g. Public Key or Username-Password. Successful authentication gives you a secure channel on which you create the SFTP connection for secure file transfer to and from the SFTP server.

That all sounds very nice and interesting, but the story starts getting muddy after that. Unfortunately there are not many complete open source implementations of SFTP in Java. What I could find so far are two implementations in the open source world: one is JSch and the other is SSHTools. All other implementations are either a fork of one of these two or are at a very early stage.

I will be using SSHTools for this example. I found it comparatively easy to use, and it does what it says on the tin. You, however, are welcome to try both and see which one you like most. Both of these libraries provide very similar interfaces and are not very difficult to use. To use the SSHTools libraries, all you need to do is download the j2ssh-core-0.2.9.jar from the SSHTools website and place it on your classpath. For this example I will be creating an SFTP client class that connects to an SFTP server using Username-Password authentication.

Any SFTP communication starts by creating the secure channel. The first thing is to create an SSH connection to the SFTP server using SshClient, then build an instance of PasswordAuthenticationClient with your credentials and pass it to the SshClient for authentication.


 // Create SSH Connection. 
 SshClient ssh = new SshClient();
 ssh.connect("sftp_server", new ConsoleKnownHostsKeyVerification());
 
 // Authenticate the user
 PasswordAuthenticationClient passwordAuthenticationClient = new PasswordAuthenticationClient();
 passwordAuthenticationClient.setUsername("user_name");
 passwordAuthenticationClient.setPassword("password");
 try{
  int result = ssh.authenticate(passwordAuthenticationClient);
  if(result != AuthenticationProtocolState.COMPLETE){
   throw new Exception("Login failed !");
  }
 }catch(Exception e){
  throw new Exception("Authentication Failure: " + e.getMessage()); 
 }
 
 //Open the SFTP channel
 try{
  sftp = ssh.openSftpClient();
 }catch(Exception e){
  throw new Exception("Failed to open SFTP channel: " + e.getMessage());
 }
 

The interesting bits to observe in this piece of code are the connection and authentication related lines. First of all, the ConsoleKnownHostsKeyVerification class that we pass as a parameter to the connect() method. This is there because when you connect to any SSH server, it supplies its public key to the client, and the client uses this public key for any further communication with the server. That means the login and password that we pass to the SSH server for authentication are encrypted using this public key before they are sent over to the server.

When we pass only the host name to the connect method, it will by default try to find the known_hosts file in $HOME/.ssh/known_hosts; failing to find this file, or the host in this file, it will prompt the user to verify the server's public key signature, and the following prompt will come up.

The host your.sftp.server is currently unknown to the system
The host key fingerprint is: 1028: 69 54 9c 49 e5 92 59 40 5 66 c5 2e 9d 86 af ed
Do you want to allow this host key? [Yes|No|Always]:

From this prompt, you have to manually enter one of these options to continue

  • Yes will use this host for the current session
  • No will not continue with the communication
  • Always will add this host to the known host file in your system
If you select the Always option then the host will be added to the known_hosts file and any subsequent communication will not ask for verification of the public key signature.

When using the ConsoleKnownHostsKeyVerification class in the connect method, the SshClient uses the instance of this class to negotiate the protocol and exchange the key with the SSH server on your behalf; when it returns, the connection is ready for communication. This avoids the need for user interaction to verify the server signature and manually negotiate the SSH connection. Now the credentials can be encrypted using the public key of the server and sent over for authentication.

Once your login is authenticated, you can open an SftpClient over this SSH connection. That is the hard work done; once you have the SftpClient, you have all the standard FTP operations at your disposal, as sketched below. When you have finished with your FTP operations (get, put, ls, mkdir, etc.), make sure you disconnect from both the SFTP channel and the SSH connection.
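For reference, a typical sequence of calls on the SftpClient might look like the sketch below. The cd() and put() calls mirror the wrapper further down; the get() call is my assumption about the SSHTools API, and the paths are placeholders.


 // Assuming "sftp" is the SftpClient obtained from ssh.openSftpClient() above
 sftp.cd("/remote/upload");        // change the remote working directory
 sftp.put("C:\\temp\\data.txt");   // upload a local file to the remote directory
 sftp.get("result.txt");           // download a remote file (method name assumed from the SSHTools API)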


 public void disconnect() throws Exception{
  
  if(sftp == null)
   throw new Exception("SFTP channel is not initialized.");
  
  if(ssh == null)
   throw new Exception("SSH session is not initialized.");
  
  try{
   sftp.quit();
  }catch(Exception e){
   throw new Exception("Failed to disconnect from the server: " + e.getMessage());
  }
  
  try{
   ssh.disconnect();
  }catch(Exception e){
   throw new Exception("Failed to disconnect from the server: " + e.getMessage());
  }
 }
 

There are a few lines of code that will be repeated for every SFTP operation, i.e. the connection, authentication, and disconnect code. Let's create a wrapper for our SFTP communication; we can then use this wrapper in our application to perform our SFTP operations. Encapsulating these methods behind a few easy-to-use methods will make our life a lot easier.


package ftp;

import com.sshtools.j2ssh.SftpClient;
import com.sshtools.j2ssh.SshClient;
import com.sshtools.j2ssh.authentication.AuthenticationProtocolState;
import com.sshtools.j2ssh.authentication.PasswordAuthenticationClient;
import com.sshtools.j2ssh.transport.ConsoleKnownHostsKeyVerification;

public class SFtp {
 
 private String host;            // Remote SFTP hostname

 private SshClient ssh;
 private SftpClient sftp;
 
 public SFtp(String host) {
  
  this.host = host;
  this.ssh = null;
  this.sftp = null;
 }
 
 public void connect(String user, String password) throws Exception{
  
  // Connect to SSH. 
  ssh = new SshClient();
  try{
   ssh.connect("sftp_server", new ConsoleKnownHostsKeyVerification());
  }catch(Exception e){
   throw new Exception("SSH connection failure: " + e.getMessage());
  }
  
  // Authenticate the user
  PasswordAuthenticationClient passwordAuthenticationClient = new PasswordAuthenticationClient();
  passwordAuthenticationClient.setUsername(user);
  passwordAuthenticationClient.setPassword(password);
  try{
   int result = ssh.authenticate(passwordAuthenticationClient);
   if(result != AuthenticationProtocolState.COMPLETE){
    throw new Exception("Login failed !");
   }
  }catch(Exception e){
   throw new Exception("Authenticvation Failure: " + e.getMessage()); 
  }
  
  //Open the SFTP channel
  try{
   sftp = ssh.openSftpClient();
  }catch(Exception e){
   throw new Exception("Failed to open SFTP channel: " + e.getMessage());
  }
 }
 
 public void cd(String remoteDir) throws Exception{
  
  if(sftp == null)
   throw new Exception("SFTP channel is not initialized.");
  
  if(remoteDir==null || remoteDir.trim().length()==0)
   throw new Exception("Remote directory name is not provided.");
  
  try{
   sftp.cd(remoteDir);
  }catch(Exception e){
   throw new Exception("Failed to change remote directory: " + e.getMessage());
  }
 }
 
 public void put(String fileName) throws Exception{
  
  if(sftp == null)
   throw new Exception("SFTP channel is not initialized.");
  
  if(fileName==null || fileName.trim().length()==0)
   throw new Exception("File name is not provided.");
  
  //Send the file
  try{
   sftp.put(fileName);
  }catch(Exception e){
   throw new Exception("Failed to upload file: " + e.getMessage());
  }
 }
 
 public void disconnect()throws Exception{
  
  if(sftp == null)
   throw new Exception("SFTP channel is not initialized.");
  
  if(ssh == null)
   throw new Exception("SSH session is not initialized.");
  
  try{
   sftp.quit();
  }catch(Exception e){
   throw new Exception("Failed to disconnect from the server: " + e.getMessage());
  }
  
  try{
   ssh.disconnect();
  }catch(Exception e){
   throw new Exception("Failed to disconnect from the server: " + e.getMessage());
  }
 }
 
}

Since we never use the SSH connection directly in our application, there is no need to expose any of its details in our wrapper. All we are interested in is the SFTP connection. That is why the connect method takes the user and password as parameters and does all the work of authenticating the user on the SSH channel and creating the SFTP connection.

In the other implemented methods, cd, put, and disconnect, I am checking for a valid SFTP connection before any operation. The interesting bit here is the disconnect() method, where we make sure that both the SFTP channel and the SSH connection are disconnected.

And here is a sample client application that is using our wrapper to upload a file to an SFTP server.


package ftp;

public class SFTPTester {

 // Set these variables for your testing environment:
 private static String host = "your.sftp.server";  // Remote SFTP hostname
 private static String userName = "your_user";     // Remote system login name
 private static String password = "your_pswd";     // Remote system password
 private static String remoteDir = "remote_dir";   // Directory on SFTP Server
 private static String filePath = "local_file";    // Local file to upload
 
 public static void main(String argv[]) throws Exception {
 
  SFtp sftp = new SFtp(host);
  sftp.connect(userName, password);
  sftp.cd(remoteDir);
  sftp.put(filePath);
  sftp.disconnect();
  
 }
}

As you can see, creating a wrapper hides most of the plumbing code from your application, and you end up with a very simple, easy-to-use interface for connecting to your SFTP server and uploading files from your local directory. This is a very simple wrapper implementing only a few SFTP methods; you can now add your own implementations of the other FTP methods that you require.

Thursday, May 9, 2013

Queue Map Hybrid -- Creating Data Structures in Java

Recently I came across a problem where I was looking for a Queue implementation that can store Key-Value pairs. The benefits I was looking for were twofold: first, it must behave in a FIFO fashion, and secondly, I should be able to look up an item by its Key without removing it from the structure. An ideal implementation for me would be a hybrid of the Queue and Map data structure implementations already available in the Collection Framework.

Like any modern programmer :-) my first attempt was to search for available implementations, and to my surprise I could not find anything that fit my criteria. Perhaps no one ever considered such a data structure, or perhaps it is so specialised that no one ever bothered to publish one; whatever the reason, I did not find any clean implementation that I could use for my requirements.

That gave me the motivation to create my own and publish it for the community; maybe someone else is looking for a similar solution and can benefit from the work I have already done. However, instead of simply posting my solution here, I am also taking this as an opportunity to provide some guidelines for students and junior programmers on how to design a new data structure. In this post I will try to explain what data structures are and how we design them.

Data Structures

Data structures are a special way of storing and organizing data in a computer's memory. In addition to storing the data, they also provide some functionality to manipulate the data stored in the structure. What functionality is provided depends on the data structure; typical operations are Add, Remove, Find, First, Last, etc. Different kinds of data structures are suited to different applications: some are very basic, like arrays, and some are highly specialised, like the B+ Tree. The bottom line is that you store some related data in memory and provide appropriate operations on that data.

Functional Requirements

The first step in defining a data structure is to gather the requirements: what exactly are you looking to store in the structure, and what behaviour is expected from it? For the purpose of this exercise, I have created this list of requirements that I am looking to achieve. The new structure must be able to:

  • Store Key-Value pairs
  • Have a fixed size of structure
  • Remove last entry when adding a new pair
  • Find a Value by Key
  • Store any Object as Key or Value
  • Update Value of a Key
  • Remove any item using Key
Now that we have our requirements laid down, we can start looking at existing solutions and how much of the required functionality they provide. I had in mind the Queue and the Map from the Java Collection Framework; combining these two will give me all of the above functions.

Designing the Interface

An interface is the set of functions that will be available to the users of the structure. In our case, all the public methods of the data structure are going to be its interface. Now that we have the functional requirements, we can define the public methods of the new structure that we are going to create. This is what I came up with:

 public synchronized void addItem(K key, V value);
 public synchronized V getItem(K key);
 public synchronized void remove(K key);
 public int size();
 public void clear();

These methods will cover the functional requirements that we set out in our Gathering Functional Requirements phase. Now we can worry about the actual implementation of these methods.

Before we start implementing these methods, we must first look at the underlying structures that we are going to use. Remember, we set out two basic requirements at the beginning: it must behave like a Queue, and it must be able to store Key-Value pairs. For that we already have two very nice interfaces in the Java Collection Framework: Queue and Map. Both of these interfaces provide functions to add, get, and remove elements. However, they are interfaces, and we need to choose the right implementations of them for our new structure; we don't want to re-invent the wheel, do we?

For the purpose of this exercise we are going to use the LinkedList implementation of Queue and HashMap implementation of Map; simply because they are the most basic ones. Now let’s start implementing the Structure. We begin by declaring the class and instance variables.


public class QueuedMap<K, V> {

 private static final int MAX_SIZE=1024;

 private int size;
 private Map<K, V> values;
 private Queue<K> keys;

 public QueuedMap() {
  this(64);
 }

 public QueuedMap(int size) throws IllegalArgumentException{

  if(size<=0){
   throw new IllegalArgumentException("Size can only be a +ive Integer");
  }

  if(size > QueuedMap.MAX_SIZE)
   throw new IllegalArgumentException("Size cannot be more than " + QueuedMap.MAX_SIZE);

  this.size = size;
  this.values = new HashMap<K, V>(this.size);
  this.keys = new LinkedList<K>();
 }

}

There are a few interesting things to note here. First of all, the use of Generics (if you are new to Generics then follow the Oracle Tutorial or the Wikipedia page for more information). This class declaration covers our "Store Key-Value pairs" requirement by using the Map, and also the "Store any Object as Key or Value" requirement by allowing a class instance to be created for any type. There is also an element of type safety introduced by the Generics; refer to the links above for details of how Generics achieve that.

The other important bit is the constructor of the structure. The default constructor initializes the structure with a default size of 64, while the overloaded constructor takes the size as a parameter and initializes the structure with the given size. This fulfils the "Have a fixed size of structure" requirement. The structure can have virtually any size, but once initialized, it cannot be changed. The MAX_SIZE constant that restricts the size of our structure is there only as a reference to let you define a maximum size for your structure if you want to impose that restriction. Also notice the Map and Queue initialized as a HashMap and a LinkedList in the constructor.

Now let's look at the implemented methods of our data structure. I will start with the easiest ones, namely the size() and clear() methods. These are standard methods that should be implemented by any data structure.


 public int size(){
  return this.keys.size();
 }

 public void clear(){
  this.values.clear();
  this.keys.clear();
 }
 

As you can see, we are simply wrapping the methods provided by our underlying data structures. In the size() method we return the size of our Queue, and in the clear() method we call the clear() method of both our Queue and our Map. Since our underlying data structures already provide these functions, we don't have to reinvent the wheel here, and simple encapsulation is more than adequate.

Now let's look at the other methods in our Data Structure. Notice the synchronized keyword on all of our operations; this is because both the underlying Data Structures that we are planning to use are not synchronized, and we have to provide our own thread safety mechanisms.


 public synchronized void addItem(K key, V value){

  if(key == null || value == null)
   throw new NullPointerException("Cannot insert a null for either key or value");

  // First see if we already have this key in our queue
  if(this.keys.contains(key)){
   // Key found. 
   // Simply replace the value in Map
   this.values.put(key, value);
  }else{
   // Key not found
   // Add value to both Queue and Map
   this.enqueue(key, value);
  }
 } 
 

This is a very simple method exploiting the actual implementation of the underlying structures. The first thing to check before we add this Key-Value pair is that neither the Key nor the Value is null. If we have values for both, we check whether we already have this Key in our structure. If so, we simply replace the Value in our Map with the new object received; if not, we add this Key-Value pair to our data structure. This is how we tackle the "Store any Object as Key or Value" and "Update Value of a Key" requirements. The actual work of storing a new Key-Value pair is a little more involved, and we use a private method, enqueue(K, V), for that, which is not visible to users of this structure.


 private void enqueue(K key, V value){

  if(this.keys.size() < this.size){
   // We still have space in the queue
    // Add the entry to both the queue and the Map
   if(this.keys.add(key)){
    this.values.put(key, value);
   }
  }else{
   // Queue is full. Need to remove the Head 
   // before we can add a new item.
   K old = this.keys.poll();
   if(old!=null)
    this.values.remove(old);

   // Now add the new item to both queue and the map
   this.keys.add(key);
   this.values.put(key, value);
  }
 }

Here in the enqueue(K, V) method, the first thing we check is whether we still have space in the structure. For that we use the instance variable size that we initialized in the constructor, and we do not allow the structure to grow any larger than this size. If the size of our structure is still less than the maximum size with which it was initialized, we simply add the Key to the Queue and the Key-Value pair to the Map. If we have already reached the maximum size defined for this structure, we first remove the oldest Key from the Queue, then remove the pair with this Key from the Map, before adding the new entries to both the Queue and the Map.

This is where you can see the Queue and Map in action. Adding Keys to the Queue ensures that, when it comes to adding a new element to a full structure, the oldest one is removed, and we can then use that Key to manipulate the data stored in the Map. This takes care of our "Remove last entry when adding a new pair" requirement.

The remaining two methods, getItem(K key) and remove(K key) are also fairly simple. All we are doing here is wrapping the functionality already provided by the underlying Queue and Map to control the behaviour.


 public synchronized V getItem(K key){

  if(key==null)
   return null;

  V val = this.values.get(key);
  return val;
 }

 public synchronized void remove(K key){

  if(key == null)
   throw new NullPointerException("Cannot remove a null key");

  this.keys.remove(key);
  this.values.remove(key);
 }

Here in the getItem(K key) we are simply returning the Value for that Key from our Map if the Key is not null. This fulfils our "Find a Value by Key" requirement.

The remove(K key) method is slightly more involved; here we take the Key and remove it from both the Queue and the Map. This takes care of the "Remove any item using Key" requirement we set out for the structure.

That completes our data structure with all of the functional requirements we set out at the beginning of this post. Below is the full source to give you the full picture of how it all fits together.


package com.raza.collection;

import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;
import java.util.Queue;

/**
 * <p>
 * QueuedMap is a specialised implementation of the Queue which can 
 * store Key Value pairs instead of just the Objects. This flexibility 
 * comes handy when you want to retrieve a specific Object from the Queue 
 * then instead of trying to find the object by iterating the whole 
 * Queue you can simply get the Object using the Key.
 * </P><p>
 * In order for the structure to work properly, it is vital to override 
 * the hashCode() and equals(Object obj) methods in your Key class. These are 
 * the methods that the underlying Map will use to compare the Keys 
 * to retrieve/remove the correct Object from the QueuedStore.
 * </P><p>
 * The structure will always have a fixed size. If the structure is not 
 * initialised with a given value then it will use default value of 64 
 * to initialise. Once the Structure is initialised then the size cannot 
 * be amended. Once the overall structure reaches the maximum size then 
 * any new Key Value pairs added to the structure will result in removing 
 * the oldest entry from the structure.
 * </P><p>
 * There is virtually no size limit to the size of the structure. The structure 
 * can be initialised with any arbitrary value, however, at the time of initialisation 
 * one should always consider keeping up with the best practices used to initialise 
 * Map data structures as the underlying implementation uses the HashMap to store 
 * the Key Value pairs.
 * </P>
 * 
 * @author Baqir Raza Abidi
 * @date 26 Mar 2013 16:03:08
 */
public class QueuedMap<K, V> {

 /**
  * Final variable indicates the Maximum size of this 
  * structure. 
  */
 private static final int MAX_SIZE=1024;

 private int size;
 private Map<K, V> values;
 private Queue<K> keys;

 /**
  * Default constructor for the class. Creates a class with the default 
  * structure size of {@code 64}. Once the structure is created then the 
  * size of the structure will remain the same.   
  */
 public QueuedMap() {
  this(64);
 }

 /**
  * <p>
  * Creates the structure with the given size. The constructor throws Exception if
  * the size given is less than 1. The structure cannot be created with a 0 or -ive 
  * size. 
  * </p><p>
  * The maximum size of the structure is also limited to the {@code QueuedStore.MAX_SIZE}
  * </p>
  *  
  * @param size Size of the Structure. 
  * @throws IllegalArgumentException If an invalid size is provided. 
  */
 public QueuedMap(int size) throws IllegalArgumentException{

  if(size<=0){
   throw new IllegalArgumentException("Size can only be a +ive Integer");
  }

  if(size > QueuedMap.MAX_SIZE)
   throw new IllegalArgumentException("Size cannot be more than " + QueuedMap.MAX_SIZE);

  this.size = size;
  this.values = new HashMap<K, V>(this.size);
  this.keys = new LinkedList<K>();
 }

 /**
  * <p>
  * Add a new {@code (Key, Value)} pair to the structure. Both the Key and Value can 
  * be any {@code Objects}. The method throws a {@code NullPointerException} in case any of
  * the Key and Value are {@code null}. 
  * </p><p>
  * If both the Key and Value are non null objects then it will try to store the
  * pair to the structure. If the key already exists in the Store then it will
  * simply replace the Value of that Key in the Store with the new Value. If the 
  * Key is a new one then it will try to store a new entry in the Structure. 
  * </p><p>
  * When storing a new entry in the structure, it first checks the size of the 
  * Structure and if it is still less than the size with which it was initialised then 
  * it will add the Key Value pair to the Structure. In case the size is now reached 
  * the limit then the method will first remove the oldest entry from the Structure 
  * and then will add the new Key Value pair to the Store. 
  * </p>
  * 
  * @param key  Object represents the Key.
  * @param value Object represents the Value. 
  * @throws Exception 
  */
 public synchronized void addItem(K key, V value){

  if(key == null || value == null)
   throw new NullPointerException("Cannot insert a null for either key or value");

  // First see if we already have this key in our queue
  if(this.keys.contains(key)){
   // Key found. 
   // Simply replace the value in Map
   this.values.put(key, value);
  }else{
   // Key not found
   // Add value to both Queue and Map
   this.enqueue(key, value);
  }
 }

 /**
  * Returns the value to which the specified key is associated,
  * or {@code null} if this Structure contains no association for the key.
  * <p>
  * More formally, if this map contains a mapping from a key
  * {@code k} to a value {@code v} such that {@code (key==null ? k==null :
  * key.equals(k))}, then this method returns {@code v}; otherwise
  * it returns {@code null}.  (There can be at most one such mapping.)
  * </p>
  *
  * @param key the key whose associated value is to be returned
  * @return the value to which the specified key is mapped, or
  *         {@code null} if this map contains no mapping for the key
  */
 public synchronized V getItem(K key){

  if(key==null)
   return null;

  V val = this.values.get(key);
  return val;
 }

 /**
  * Removes the mapping for a key from this Structure if it is present
  * (optional operation).   More formally, if this Structure contains a 
  * mapping from key <tt>k</tt> to value <tt>v</tt> such that
  * <code>(key==null ?  k==null : key.equals(k))</code>, that mapping
  * is removed.
  *
  * @param key key whose mapping is to be removed from the map
  */
 public synchronized void remove(K key){

  if(key == null)
   throw new NullPointerException("Cannot remove a null key");

  this.keys.remove(key);
  this.values.remove(key);
 }

 /**
  * Returns the number of elements in this collection.  
  * @return size of the structure.
  */
 public int size(){
  return this.keys.size();
 }

 /**
  * Removes all of the elements from this collection (optional operation). 
  * The collection will be empty after this method returns.
  */
 public void clear(){
  this.values.clear();
  this.keys.clear();
 }

 /*
  * Method implementing the actual logic to add 
  * the Key Value pair to the structure. 
  */
 private void enqueue(K key, V value){

  if(this.keys.size() < this.size){
   // We still have space in the queue
    // Add the entry to both the queue and the Map
   if(this.keys.add(key)){
    this.values.put(key, value);
   }
  }else{
   // Queue is full. Need to remove the Head 
   // before we can add a new item.
   K old = this.keys.poll();
   if(old!=null)
    this.values.remove(old);

   // Now add the new item to both queue and the map
   this.keys.add(key);
   this.values.put(key, value);
  }
 }
}

Note that the QueuedMap data structure we created uses HashMap and LinkedList as its building blocks. Both of these data structures from the Java Collection Framework allow null values to be stored, so this is something we have to handle ourselves. Also, neither of these data structures is synchronized, i.e., they are not suitable for multi-threaded applications as-is; hence all the operational methods in QueuedMap are explicitly marked as synchronized for thread safety.
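To see the structure in action, here is a small test class, not part of the original implementation, that exercises the FIFO eviction behaviour:


package com.raza.collection;

public class QueuedMapTest {

 public static void main(String[] args) {

  // Create a structure that holds at most 2 entries
  QueuedMap<String, Integer> map = new QueuedMap<String, Integer>(2);

  map.addItem("one", 1);
  map.addItem("two", 2);

  // Adding a third pair evicts the oldest key ("one")
  map.addItem("three", 3);

  System.out.println(map.getItem("one"));   // null - evicted
  System.out.println(map.getItem("two"));   // 2
  System.out.println(map.getItem("three")); // 3
  System.out.println(map.size());           // 2
 }
}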

I have tried to provide some guidelines on how we can create a new data structure by combining the existing functionality already provided by the Collection Framework. These are the same principles that apply to any software development: wherever possible, reuse existing functions, classes and methods. However, you still need to consider the pros and cons of the underlying building blocks. For example, since HashMap and LinkedList are not synchronized, we have to take care of that ourselves, or alternatively use other implementations of Queue and Map that provide thread safety.

This gives you the basic building blocks to come up with your own ideas and create more complex data structures according to your requirements. One good change to this data structure might be to implement it as a priority queue where, instead of removing the oldest entry, you remove the least accessed entry. The possibilities are endless.

Friday, May 3, 2013

JMS on Glassfish with Standalone Client

Most of today's businesses have a whole range of systems supporting their day-to-day business needs, and these systems may be very different in terms of architecture and the technologies they use. Effective communication between these heterogeneous systems has become an integral part of today's business. Messaging standards like JMS make it easier to create effective communication solutions across distributed systems to exchange business data or events. In this post I will try to explain what messaging is in general and how we can build asynchronous messaging systems using JMS and Message-Driven Beans on the Glassfish Application Server.

Messaging

Messaging in simple terms is communication between two parties, a sender and a receiver. That can be as simple as an email sent from one party to another. Contents of that email can be some instructions or simply information and the receiver can take action according to the information received in the message.

Enterprise messaging is pretty similar: one system (the sender) sends a message to another system (the consumer), and the receiver can take appropriate action on that message. It is not mandatory for the receiver to be available at the time the message is sent, nor do the two need to know each other in order to exchange messages. All they need is an agreed format for the message so that the receiver can understand what the sender has sent. This loose coupling makes messaging solutions completely different from other communication solutions like CORBA, RMI, etc.

JMS Messaging

Java Message Service (JMS) is a messaging API for the Java platform that defines a common set of interfaces to create, send and receive messages. There are two models in which a JMS messaging solution can be implemented: Point-to-Point and Publish-Subscribe. We will look at both of these models in this post.

Point-to-point
The point-to-point model relies on the concept of a message Queue. In this model there is one sender and one consumer at the two ends of the Queue. The sender sends a message to a Queue, and the Consumer receives it from the same Queue. The Consumer then processes the message and acknowledges its receipt. There is no timing dependency in this model: if the Consumer is not available at the time the message is sent, the message remains in the Queue, and the Consumer can receive and acknowledge it when it becomes available again. A message can remain in the Queue until it is acknowledged.

Publish-Subscribe
This model allows a message to be sent to multiple Consumers. The message is not sent to any particular consumer; it is broadcast to a channel called a TOPIC, and any Consumers that are SUBSCRIBED to this TOPIC can receive the message. This model does have a timing dependency, and messages are not retained in the JMS provider for long; if a subscriber is not active at the time of the broadcast, that subscriber will miss the message. However, the JMS API allows creating a DURABLE SUBSCRIPTION to receive messages even while a Subscriber is not active. The JMS provider will retain a message for a DURABLE SUBSCRIBER until it is received or the message expires.
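The rest of this post walks through the Queue example only, but for reference a durable topic subscription is created through the standard JMS API roughly as in the sketch below. The resource and subscription names are placeholders, not resources created elsewhere in this post, and a standalone client would also need the usual JNDI setup for Glassfish.


 // Sketch of a durable topic subscriber; names are placeholders
 Context ctx = new InitialContext();
 TopicConnectionFactory factory = (TopicConnectionFactory) ctx.lookup("TestTopicConnectionFactory");
 Topic topic = (Topic) ctx.lookup("TestTopic");

 Connection connection = factory.createConnection();
 connection.setClientID("myDurableClient");    // a client ID is required for durable subscriptions
 Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

 // The subscription name identifies this durable subscriber to the JMS provider
 MessageConsumer consumer = session.createDurableSubscriber(topic, "mySubscription");
 connection.start();
 Message message = consumer.receive();         // messages are retained while the subscriber is offline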

JMS on Glassfish

The Glassfish application server has an integrated JMS provider, Open MQ, providing full messaging support to any application deployed on Glassfish. Open MQ is a complete message-oriented middleware and JMS implementation.

In Glassfish we create two kinds of JMS resources to communicate using JMS messages: Connection Factories and Destination Resources. A Connection Factory is the object used by senders to create connections to the JMS provider, Open MQ in our case. We can have three types of Connection Factories to establish a connection:

  • Queue Connection Factory: This is used to create a Queue Connection to the JMS Provider.
  • Topic Connection Factory: This is used to create a Topic Connection to the JMS Provider.
  • Connection Factory: This is a generic factory object that can be used to create Queue as well as Topic connections.
A Destination Resource is the actual channel used by the Sender to send messages and by the Consumer to receive messages. We have two types of resources:
  • Queue: This is for point-to-point communication.
  • Topic: This is for publish-subscribe communication.
All of these Connections and Resources must be created in the JMS provider. Once the infrastructure is in place we can start communicating on these channels.

Message-Driven Bean

A Message-Driven Bean (MDB) is a special Enterprise Bean that helps process JMS messages asynchronously. An MDB acts as a listener for JMS messages. A JMS client has no direct access to the MDB; instead, the JMS client sends a message to a JMS resource (Queue or Topic), and at the other end of that resource an MDB listening on it processes the message.

In this post I will create the JMS resources in Glassfish, along with the MDBs that will listen to these resources, and deploy those MDBs in Glassfish. After that we will create a standalone JMS client and send messages to these MDBs using JMS connections. We will do that for both a Queue and a Topic.

  • JMS Queue Messaging.

In the first part we will create a Queue in Glassfish and then an MDB listening to that Queue. After that we will create a stand-alone client that sends a JMS message to that Queue.

Creating JMS Resources

The first thing we need to do is create the JMS resources in Glassfish. I find it easiest to use the asadmin CLI to create the server resources. For that, go to the Glassfish installation directory and type asadmin; this will open the asadmin prompt. Type the following commands to create the JMS resources. Below is the output from my computer when I created these resources, which also shows how to open the asadmin prompt.


Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation.  All rights reserved.

C:\Users\raza.abidi>cd \glassfish\glassfish\bin

C:\glassfish\glassfish\bin>asadmin
Use "exit" to exit and "help" for online help.
asadmin> create-jms-resource --restype javax.jms.Queue TestQueue
Administered object TestQueue created.
Command create-jms-resource executed successfully.
asadmin> create-jms-resource --restype javax.jms.QueueConnectionFactory TestQueueConnectionFactory
Connector resource TestQueueConnectionFactory created.
Command create-jms-resource executed successfully.
asadmin>

The create-jms-resource command creates the resource for you. Once the resources are created, you can execute the list-jms-resources command to see the existing resources on your server. Below is the output of the list-jms-resources command on my system.


asadmin> list-jms-resources
TestQueue
TestQueueConnectionFactory
Command list-jms-resources executed successfully.
asadmin>

You have just created the JMS Queue and a QueueConnectionFactory in Glassfish. Now we need to create an MDB that will listen to this Queue for incoming messages.

All you need to do is create a class that implements MessageListener and override its onMessage method with your own implementation. You also need the @MessageDriven annotation, which provides the details of the resource the MDB listens to.


package com.test.ejb.mdb;

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

/**
 * Message-Driven Bean implementation class for: TestMdb
 */
@MessageDriven(
  activationConfig = { 
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"), 
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "TestQueue")}, 
  mappedName = "TestQueue")
public class TestQueueMdb implements MessageListener {

 /**
  * @see MessageListener#onMessage(Message)
  */
 @Override
 public void onMessage(Message message) {

  try {
   // Acknowledge receipt of the message
   message.acknowledge();

   // This example expects a TextMessage; simply print its text
   TextMessage txtMessage = (TextMessage) message;
   System.out.println(txtMessage.getText());
  } catch (Exception e) {
   e.printStackTrace();
  }
 }
}

In the @MessageDriven annotation you have to provide two activation properties, destinationType and destination. The mappedName attribute takes the name of the resource the MDB listens to.

When this MDB is deployed in the Glassfish server, it starts listening to the TestQueue. As soon as a message arrives in the TestQueue, the container executes the onMessage method of this bean, where you can process the message according to your requirements. For simplicity I am using a TextMessage, but you can use more complex message types in exactly the same way, as sketched below. Here I am simply extracting the text from the TextMessage object and printing it to the console.
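
For example, the onMessage method of the MDB above could be written to handle other message types as well. This is only a sketch: the map key orderId and the idea that the producer sends a MapMessage or an ObjectMessage are assumptions, and the extra imports needed are javax.jms.JMSException, javax.jms.MapMessage and javax.jms.ObjectMessage.


 public void onMessage(Message message) {
  try {
   if (message instanceof TextMessage) {
    // Plain text payload
    System.out.println(((TextMessage) message).getText());
   } else if (message instanceof MapMessage) {
    // Named fields; "orderId" is a hypothetical key set by the producer
    MapMessage map = (MapMessage) message;
    System.out.println("Order id: " + map.getLong("orderId"));
   } else if (message instanceof ObjectMessage) {
    // Any serializable object placed on the queue by the producer
    Object payload = ((ObjectMessage) message).getObject();
    System.out.println("Received object: " + payload);
   }
   message.acknowledge();
  } catch (JMSException e) {
   e.printStackTrace();
  }
 }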

Now we need to create a JMS Client that will send a message to this Queue to see this in action.


package jms;

import java.util.Properties;

import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.QueueConnectionFactory;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.Context;
import javax.naming.InitialContext;

public class TestJMSQueue {
 
 public static void main(String a[]) throws Exception {
  
  // Commands to create Queue 
  // asadmin --port 4848 create-jms-resource --restype javax.jms.Queue TestQueue
  // asadmin --port 4848 create-jms-resource --restype javax.jms.QueueConnectionFactory TestQueueConnectionFactory
  
  String msg = "Hello from remote JMS Client";
  
  TestJMSQueue test = new TestJMSQueue();
  
  System.out.println("==============================");
  System.out.println("Sending message to Queue");
  System.out.println("==============================");
  System.out.println();
  test.sendMessage2Queue(msg);
  System.out.println();
  System.out.println("==============================");
  System.exit(0);
 }
 
 private void sendMessage2Queue(String msg) throws Exception{
  
  // Provide the details of remote JMS Client
  Properties props = new Properties();
  props.put(Context.PROVIDER_URL, "mq://localhost:7676");
  
  // Create the initial context for remote JMS server
  InitialContext cntxt = new InitialContext(props);
  System.out.println("Context Created");
  
  // JNDI Lookup for QueueConnectionFactory in remote JMS Provider
  QueueConnectionFactory qFactory = (QueueConnectionFactory)cntxt.lookup("TestQueueConnectionFactory");
  
  // Create a Connection from QueueConnectionFactory
  Connection connection = qFactory.createConnection();
   System.out.println("Connection established with JMS Provider");
  
  // Initialise the communication session 
  Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
  
  // Create the message
  TextMessage message = session.createTextMessage();
  message.setJMSDeliveryMode(DeliveryMode.NON_PERSISTENT);
  message.setText(msg);
  
  // JNDI Lookup for the Queue in remote JMS Provider
  Queue queue = (Queue)cntxt.lookup("TestQueue");
  
  // Create the MessageProducer for this communication 
  // Session on the Queue we have
  MessageProducer mp = session.createProducer(queue);
  
  // Send the message to Queue
  mp.send(message);
  System.out.println("Message Sent: " + msg);
  
   // Make sure all the resources are released 
   mp.close();
   session.close();
   connection.close();
   cntxt.close();
  
 }
 
}

JMS Clients use JNDI to look up the JMS Resources in the JMS Provider. Notice the properties passed to the InitialContext: we are providing the JMS provider URL with the server name and port on which the JMS Provider is listening for connections. If the JMS Client is running in the same JVM as the JMS Provider, there is no need to pass any additional properties to the InitialContext and the lookups should work seamlessly.
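
For example, a component deployed on the same Glassfish instance could simply have the resources injected instead of building an InitialContext by hand. The following session bean is a hypothetical sketch (the class and package names are mine), assuming the TestQueue and TestQueueConnectionFactory resources created above:


package com.test.ejb;

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

@Stateless
public class QueueSenderBean {

 // The container resolves these names; no PROVIDER_URL is needed in-container
 @Resource(mappedName = "TestQueueConnectionFactory")
 private ConnectionFactory factory;

 @Resource(mappedName = "TestQueue")
 private Queue queue;

 public void send(String text) throws Exception {
  Connection connection = factory.createConnection();
  try {
   Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
   MessageProducer producer = session.createProducer(queue);
   producer.send(session.createTextMessage(text));
  } finally {
   connection.close();
  }
 }
}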

The first thing we need to do is get the QueueConnectionFactory via its JNDI name and create a Connection from it. Then we initialise a Session on the connection; this is where you specify whether the session is transacted and which acknowledge mode to use. I am not using a transaction, so the first argument is false, and the acknowledge mode is AUTO_ACKNOWLEDGE. Now you can create a message with this session.
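
If you do want a transacted session, the middle of sendMessage2Queue might look roughly like this instead. It is only a sketch that reuses the connection and queue variables from the client above, with the enclosing method still declaring throws Exception:


  // Transacted session: the first argument is true and the acknowledge
  // mode argument is ignored; sends only take effect on commit()
  Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
  MessageProducer mp = session.createProducer(queue);
  try {
   mp.send(session.createTextMessage("first message"));
   mp.send(session.createTextMessage("second message"));
   session.commit();   // both messages become visible to consumers together
  } catch (Exception e) {
   session.rollback(); // neither message is delivered
  }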

Once you have created the JMS Message, you need to send it to a resource. Again you use a JNDI lookup to find the Queue, create a MessageProducer for that Queue from the session, and then the MessageProducer can send the message to the Queue.

After sending the message, it is now time to release the resources.

Running the JMS Client Example

Now let’s run this example. First of all make sure that the Glassfish server is running and the ConnectionFactory and Resource are created. For that you can open the asadmin console and type in the list-jms-resources command to see the JMS resources on your Glassfish installation. This is already described above.

In order to run the client successfully you need a few jar files on your classpath.

From your Glassfish lib folder:

  • gf-client.jar
  • javaee.jar
From your Glassfish modules folder:
  • javax.jms.jar
The following files are inside the imqjmsra.rar archive, which you can find in your glassfish\mq\lib directory. You need to extract these jar files from imqjmsra.rar manually and place them on the classpath of your JMS Client (a sample launch command is shown after the list):
  • fscontext.jar
  • imqbroker.jar
  • imqjmsbridge.jar
  • imqjmsra.jar
  • imqjmx.jar
  • imqstomp.jar
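
For reference, the command to launch the client might then look something like the following. The jar locations are assumptions based on a default Glassfish 3.x installation with the rar contents extracted to C:\work\lib; adjust the paths to match your own setup.


C:\work>java -cp .;C:\glassfish\glassfish\lib\gf-client.jar;C:\glassfish\glassfish\lib\javaee.jar;C:\glassfish\glassfish\modules\javax.jms.jar;C:\work\lib\fscontext.jar;C:\work\lib\imqbroker.jar;C:\work\lib\imqjmsbridge.jar;C:\work\lib\imqjmsra.jar;C:\work\lib\imqjmx.jar;C:\work\lib\imqstomp.jar jms.TestJMSQueue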

Once your classpath is set up and Glassfish is up and running, you can run the client to see the JMS communication in action. Here is the output when I run the client on my machine.


==============================
Sending message to Queue
==============================

Context Created
Connection established with JMS Provider
Message Sent: Hello from remote JMS Client

==============================

And here is the output on the Glassfish console. By default this is written to the server.log file.


[INFO|glassfish3.1.2|Hello from remote JMS Client]

As you can see from the output, the message sent by the Client is consumed by the MDB.

Now let’s see how to broadcast the JMS messages to multiple consumers.

  • JMS Topic Broadcasting.

Creating a Topic is not much different from creating a Queue in Glassfish. Repeating the procedure from earlier, we now create a Topic and a TopicConnectionFactory.

Creating JMS Resources

Again, the first thing we need to do is create the JMS resources in Glassfish. Open the asadmin prompt as described earlier and type the following commands to create the JMS Resources. Below is the output from my computer when I created these resources.


asadmin> create-jms-resource --restype javax.jms.Topic TestTopic
Administered object TestTopic created.
Command create-jms-resource executed successfully.
asadmin> create-jms-resource --restype javax.jms.TopicConnectionFactory TestTopicConnectionFactory
Connector resource TestTopicConnectionFactory created.
Command create-jms-resource executed successfully.
asadmin>

The create-jms-resource command creates the resource for you. Once the resources are created, you can execute the list-jms-resources command to see the existing resources on your server. Below is the output of the list-jms-resources command on my system.


asadmin> list-jms-resources
TestQueue
TestTopic
TestQueueConnectionFactory
TestTopicConnectionFactory
Command list-jms-resources executed successfully.
asadmin>

You have just created the JMS Topic and a TopicConnectionFactory in Glassfish. Now we need to create a couple of MDBs that will subscribe to this Topic for broadcast messages.

Just like before, all you need to do is create a class that implements MessageListener, override its onMessage method with your own implementation, and annotate it with @MessageDriven to specify the resource the MDB listens to.

Since we are experimenting with broadcasting, where there can be multiple listeners, we create two MDBs to properly illustrate the publish-subscribe paradigm. Below is the code for both classes; apart from the log prefix they are identical.


package com.test.ejb.mdb;

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

/**
 * Message-Driven Bean implementation class for: TestTopicMdb1
 */
@MessageDriven(
  activationConfig = { 
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Topic"), 
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "TestTopic")}, 
  mappedName = "TestTopic")
public class TestTopicMdb1 implements MessageListener {

 /**
  * @see MessageListener#onMessage(Message)
  */
 @Override
 public void onMessage(Message message) {

  try {
   // Acknowledge receipt of the message
   message.acknowledge();

   // Print the message text with this subscriber's prefix
   TextMessage txtMessage = (TextMessage) message;
   System.out.println("First Listener: " + txtMessage.getText());
  } catch (Exception e) {
   e.printStackTrace();
  }
 }

}

And the second one is:


package com.test.ejb.mdb;

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

/**
 * Message-Driven Bean implementation class for: TestTopicMdb2
 */
@MessageDriven(
  activationConfig = { 
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Topic"), 
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "TestTopic")}, 
  mappedName = "TestTopic")
public class TestTopicMdb2 implements MessageListener {

 /**
  * @see MessageListener#onMessage(Message)
  */
 @Override
 public void onMessage(Message message) {

  try {
   // Acknowledge receipt of the message
   message.acknowledge();

   // Print the message text with this subscriber's prefix
   TextMessage txtMessage = (TextMessage) message;
   System.out.println("Second Listener: " + txtMessage.getText());
  } catch (Exception e) {
   e.printStackTrace();
  }
 }

}

In the @MessageDriven annotation you have to provide two activation properties, destinationType and destination. The mappedName attribute takes the name of the resource the MDB listens to.

When these MDBs are deployed in the Glassfish server, they subscribe to the TestTopic. As soon as a message arrives in the TestTopic, the container executes the onMessage method of each of them. Here I am simply extracting the text from the TextMessage object and printing it to the console.
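
If a subscriber must not miss messages while it is undeployed or down, the subscription can be made durable through activation config properties. The class below is only a sketch: subscriptionDurability is the standard property, while the subscriptionName and clientId property names and values shown here are assumptions that can vary between containers and resource adapters.


package com.test.ejb.mdb;

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

@MessageDriven(
  activationConfig = { 
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Topic"), 
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "TestTopic"), 
    // Durable subscription: the provider retains messages while this MDB is unavailable
    @ActivationConfigProperty(propertyName = "subscriptionDurability", propertyValue = "Durable"), 
    // The two property names below are assumptions; check your container's documentation
    @ActivationConfigProperty(propertyName = "subscriptionName", propertyValue = "TestTopicDurableSub"), 
    @ActivationConfigProperty(propertyName = "clientId", propertyValue = "TestTopicDurableClient")}, 
  mappedName = "TestTopic")
public class TestTopicDurableMdb implements MessageListener {

 public void onMessage(Message message) {
  try {
   // Print the text of the retained or live message
   System.out.println("Durable Listener: " + ((TextMessage) message).getText());
  } catch (Exception e) {
   e.printStackTrace();
  }
 }
}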

Now we need to create a JMS Client that will send a message to this Topic to see this in action.


package jms;

import java.util.Properties;

import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import javax.jms.TopicConnectionFactory;
import javax.naming.Context;
import javax.naming.InitialContext;

public class TestJMSTopic {
 
 public static void main(String a[]) throws Exception {
  
  // Commands to create Topic
  // asadmin --port 4848 create-jms-resource --restype javax.jms.Topic TestTopic
  // asadmin --port 4848 create-jms-resource --restype javax.jms.TopicConnectionFactory TestTopicConnectionFactory
  
  String msg = "Hello from remote JMS Client";
  
  TestJMSTopic test = new TestJMSTopic();
  
  System.out.println("==============================");
   System.out.println("Publishing message to Topic");
  System.out.println("==============================");
  System.out.println();
  test.sendMessage2Topic(msg);
  System.out.println();
  System.out.println("==============================");
  System.exit(0);
 }
 
 
 private void sendMessage2Topic(String msg) throws Exception{
  
  // Provide the details of remote JMS Client
  Properties props = new Properties();
  props.put(Context.PROVIDER_URL, "mq://localhost:7676");
  
  // Create the initial context for remote JMS server
  InitialContext cntxt = new InitialContext(props);
  System.out.println("Context Created");
  
  // JNDI Lookup for TopicConnectionFactory in remote JMS Provider
  TopicConnectionFactory tFactory = (TopicConnectionFactory)cntxt.lookup("TestTopicConnectionFactory");
  
  // Create a Connection from TopicConnectionFactory
  Connection connection = tFactory.createConnection();
  System.out.println("Connection established with JMS Provider");
  
  // Initialise the communication session 
  Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
  
  // Create the message
  TextMessage message = session.createTextMessage();
  message.setJMSDeliveryMode(DeliveryMode.NON_PERSISTENT);
  message.setText(msg);
  
  // JNDI Lookup for the Topic in remote JMS Provider
  Topic topic = (Topic)cntxt.lookup("TestTopic");
  
  // Create the MessageProducer for this communication 
  // Session on the Topic we have
  MessageProducer mp = session.createProducer(topic);
  
  // Broadcast the message to Topic
  mp.send(message);
  System.out.println("Message Sent: " + msg);
  
   // Make sure all the resources are released 
   mp.close();
   session.close();
   connection.close();
   cntxt.close();
 }
}

This client is pretty much the same as the client we wrote for communication over the JMS Message Queue. The only difference is that we now look up a TopicConnectionFactory by its JNDI name and send the message to a Topic, which is how it gets broadcast to every subscriber of that Topic.

Running the JMS Client Example

This client has the same prerequisites as the Queue client. Follow the instructions from the Queue example to set up the classpath and make sure Glassfish is running. Now let’s run this example.

Once your classpath is set up and Glassfish is up and running, you can run the client to see the JMS communication in action. Here is the output when I run the client on my machine.


==============================
Publishing message to Topic
==============================

Context Created
Connection established with JMS Provider
Message Sent: Hello from remote JMS Client

==============================

And here is the output on the Glassfish console. By default this is written to the server.log file.


[INFO|glassfish3.1.2|First Listener: Hello from remote JMS Client]

[INFO|glassfish3.1.2|Second Listener: Hello from remote JMS Client]

As you can see from the output in the Glassfish server.log, both of the MDBs we created for this exercise were executed and processed the same message in their own way.

This has been a deliberately simplified example of how a Message Queue and a Topic work in Glassfish. I have skipped over the finer details in order to show the concept, but it should give you enough information to get started.

Monday, April 15, 2013

Executing DB2 Stored Procedure on AS/400

Effective SOA solutions take advantage of all the services available across the systems, which include the legacy systems in your organization. There are several ways you can integrate the existing services running on your legacy systems. This series of posts is about using the AS/400 services directly in your Java applications. I will be covering how to call RPG programs, AS/400 Commands and AS/400 DataQueues in your Java applications.

Fortunately IBM provides a very nice, easy-to-use library for communicating with the AS/400 server from Java. The IBM Toolbox for Java is a library of Java classes that gives Java programs easy access to IBM iSeries data and resources. JT Open is the open source version of the Toolbox for Java. You can go to the JT Open site to download the full set of Java libraries and find more details of how they can be used to communicate with the AS/400 server. There are several ways to access the services on an AS/400 server; the most common are listed below.

All of these different methods of accessing the AS/400 services have their own pros and cons. JT Open is a very powerful library and provides easy-to-use APIs for communicating with the AS/400 services. These posts are about using the JT Open libraries to access those services. I am splitting the different approaches into separate posts; click on a topic to see the relevant post.

  • Executing DB2 Stored Procedures.

Calling Stored Procedures created in DB2 running on an iSeries AS/400 is no different from calling a Stored Procedure created in SQL Server or Oracle running on Windows or Unix. The same JDBC interfaces that you use for any other database are used to call DB2 Stored Procedures. For this exercise we created a very simple SQL Stored Procedure in DB2 that takes a Customer Code as a parameter and returns all Open Orders for that customer from our DB2 database. The Stored Procedure is named CUSTORDOP and is created in the MYLIB library.


package as400;

import com.ibm.as400.access.AS400;
import com.ibm.as400.access.AS400JDBCCallableStatement;
import com.ibm.as400.access.AS400JDBCConnection;
import com.ibm.as400.access.AS400JDBCDriver;
import com.ibm.as400.access.AS400JDBCResultSet;

public class AS400DBTest {

 public static void main(String av[]){

  String server="yourserver.company.com";
  String user = "AS400USER";
  String pass = "AS400PWRD";

  AS400 as400 = null;
  AS400JDBCDriver driver = null;
  AS400JDBCConnection con = null;
  AS400JDBCCallableStatement stm = null;
  AS400JDBCResultSet rs = null;
  
  String sp = "CALL MYLIB.CUSTORDOP(?)";

  try{
   driver = new AS400JDBCDriver();
   as400 = new AS400(server, user, pass);

   // Connect to the Database
   con = AS400JDBCConnection.class.cast(driver.connect(as400));

   // Prepare the call
   stm = AS400JDBCCallableStatement.class.cast(con.prepareCall(sp));
   stm.setString(1, "ABC123");

   // Execute the Stored Procedure
   rs = AS400JDBCResultSet.class.cast(stm.executeQuery());

   while(rs.next()){
    System.out.println(rs.getString(5) + " : " + rs.getString(6));
   }

  }catch(Exception e){
   e.printStackTrace();
  }finally{
   // Make sure to disconnect   
   if (as400 != null) {
    try{
     as400.disconnectAllServices();  
    }catch(Exception e){
     e.printStackTrace();
    }
   }
  }
 }
}

In DB2 we use the keyword CALL to execute a Stored Procedure. To call one, we create a CallableStatement from the connection and set the parameter values using its setter methods. Once all the values are set, you simply call the executeQuery method of the stm object if the procedure returns a ResultSet, or the executeUpdate method if the procedure updates records.

I am using the wrappers provided by the JT Open library for the Connection, Statement, ResultSet, etc. in this example. This is not required, however; you can use the standard interfaces from the java.sql package. In fact I have to cast the returned objects to the AS400 types explicitly before using them, because, for example, stm.executeQuery() is declared to return a java.sql.ResultSet and I have to cast it to com.ibm.as400.access.AS400JDBCResultSet before I can use it as such. Here this is done merely to show that the JT Open library has AS/400-specific classes; you may need them if you want to access AS/400-specific data types and functions.
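
A minimal sketch of the same call using only the standard java.sql interfaces might look like this. The jdbc:as400 URL style is the one used by the JT Open driver, while the server name, credentials and column positions are the same assumptions as in the example above:


package as400;

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class AS400DBStandardJdbcTest {

 public static void main(String[] args) throws Exception {

  // Register the JT Open JDBC driver and connect with a jdbc:as400 URL
  Class.forName("com.ibm.as400.access.AS400JDBCDriver");
  String url = "jdbc:as400://yourserver.company.com";

  try (Connection con = DriverManager.getConnection(url, "AS400USER", "AS400PWRD");
       CallableStatement stm = con.prepareCall("CALL MYLIB.CUSTORDOP(?)")) {

   stm.setString(1, "ABC123");

   try (ResultSet rs = stm.executeQuery()) {
    while (rs.next()) {
     // Column positions depend on the result set returned by the procedure
     System.out.println(rs.getString(5) + " : " + rs.getString(6));
    }
   }
  }
 }
}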

These are the very basics of how to use the IBM Toolbox for Java to communicate with programs on the AS/400. For more details and advanced topics you can consult the IBM programmer's guide. To view or download the PDF version of this document, select IBM® Toolbox for Java™ (about 3100 KB).

Using AS/400 DataQueues in Java

Effective SOA solutions take advantage of all the services available across the systems, which include the legacy systems in your organization. There are several ways you can integrate the existing services running on your legacy systems. This series of posts is about using the AS/400 services directly in your Java applications. I will be covering how to call RPG programs, AS/400 Commands and AS/400 DataQueues in your Java applications.

Fortunately IBM provides a very nice, easy-to-use library for communicating with the AS/400 server from Java. The IBM Toolbox for Java is a library of Java classes that gives Java programs easy access to IBM iSeries data and resources. JT Open is the open source version of the Toolbox for Java. You can go to the JT Open site to download the full set of Java libraries and find more details of how they can be used to communicate with the AS/400 server. There are several ways to access the services on an AS/400 server; the most common are listed below.

All of these different methods of accessing the AS/400 services have their own pros and cons. JT Open is a very powerful library and provides easy-to-use APIs for communicating with the AS/400 services. These posts are about using the JT Open libraries to access those services. I am splitting the different approaches into separate posts; click on a topic to see the relevant post.

  • Using AS/400 DataQueue.

DataQueues are an integral part of the AS/400 system. For external communication they are a very good tool for building event-based applications that take appropriate actions on either side when a message is received. The best thing about DataQueues is that they are bidirectional: not only can you send a message to the AS/400, you can also receive a message from it.

Data queues give the programmer considerable flexibility. The DataQueue interfaces require no communications programming and can be used for either connected or disconnected communication. Java programs can communicate with AS/400 programs via a common AS/400 DataQueue. Data queue messages are described only at the record level, leaving the application programmer to define the field-level structure as required.

JT Open provides a small set of classes that hide most of the communication complexity and present a simple, easy-to-use API to the Java programmer. I am going to demonstrate these classes with the help of two very simple programs that read from and write to a DataQueue. First, the program that reads data from the DataQueue.

Reading from DataQueue

package as400;

import com.ibm.as400.access.AS400;
import com.ibm.as400.access.DataQueue;
import com.ibm.as400.access.DataQueueEntry;

public class DataQueueTest implements Runnable{

 String server="yourserver.company.com";
 String user = "AS400USER";
 String pass = "AS400PWRD";

 String queueName = "MYDTQ";
 String libraryName = "MYLIB";
 
 private AS400 system = null;
 private DataQueueEntry dqData = null;
 
 @Override
 public void run() {

  String queue = "/QSYS.LIB/" + libraryName +".LIB/" + queueName +".DTAQ";
  
  try{
   int cntr=1;
   system = new AS400(server, user, pass);
   DataQueue dq = new DataQueue(system, queue);
   
   while(true){
    
    String data = null;
    try{
     
     System.out.println("Listening to DataQueue ......");
     
     // Start waiting for a message to arrive
     // read() returns null after 5 seconds if nothing is received
     dqData = dq.read(5);
     if (dqData != null) {
      // get the data out of the DataQueueEntry object.
         byte[] bytes = dqData.getData();
      data = new String(bytes, "IBM285").trim();
        }
     
     if (data == null || data.trim().length() <= 0) {
      
      // Break after 5 tries
      if(cntr < 5){
       System.out.println("DataQueue Re-Started: " + cntr);
       cntr ++;
       continue;
      }else{
       System.out.println("Giving up on DataQueue after " + cntr + " tries");
       break;
      }
     }
     System.out.println("--|" + data + "|--");
    }catch(Exception e){
     e.printStackTrace();
    }
   }
  }catch(Exception e){
   e.printStackTrace();
  }finally{
   // Make sure to disconnect
   if(system != null){
    try{
     system.disconnectAllServices();  
    }catch(Exception e){}
   }
  }
 }
 
 // Main method to start the Thread
 public static void main(String a[]){
  new Thread(new DataQueueTest()).start();
 }
}

This is a very simple program that listens to a DataQueue and waits for a message to arrive. The timeout passed to dq.read(5) is in seconds; it is deliberately low here just to show the output as the program repeatedly goes back to read from the queue. Here is the output when I run this program on my system.

Listening to DataQueue ......
DataQueue Re-Started: 1
Listening to DataQueue ......
--|00000001ADS#D000000000000000000000100000000000000#JK9000000000000001000|--
Listening to DataQueue ......
DataQueue Re-Started: 2
Listening to DataQueue ......
--|00000001SDFJKLO0000000000000000000100000000000000#OD8000000000000001000|--
Listening to DataQueue ......
DataQueue Re-Started: 3
Listening to DataQueue ......
DataQueue Re-Started: 4

As you can see from the output, the data received is one long String; it is actually a list of fixed-length parameters received from the AS/400. We can extract the individual fields from this String and use them to kick off some processing in our Java program, for example as sketched below.
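
For instance, right after the data string is printed in the reader above, the fixed-length fields could be cut out by position. The offsets and field names below are purely hypothetical and have to match the record layout agreed with the AS/400 program:


     // Hypothetical fixed-length layout; the positions are examples only
     String sequence  = data.substring(0, 8).trim();   // e.g. a sequence number
     String orderCode = data.substring(8, 16).trim();  // e.g. an order code
     String remainder = data.substring(16).trim();     // rest of the parameter data

     System.out.println("Sequence : " + sequence);
     System.out.println("Order    : " + orderCode);
     System.out.println("Payload  : " + remainder);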

Writing to a DataQueue

Now let's see how we can write data to a DataQueue.


package as400;

import com.ibm.as400.access.AS400;
import com.ibm.as400.access.DataQueue;

public class DataQueueWriteTest {

 // Static fields so they can be used from the static main method
 static String server="yourserver.company.com";
 static String user = "AS400USER";
 static String pass = "AS400PWRD";

 static String queueName = "MYDTQ";
 static String libraryName = "MYLIB";
 
 private static AS400 system = null;
 
 public static void main(String a[]){

  String queue = "/QSYS.LIB/" + libraryName +".LIB/" + queueName +".DTAQ";
  
  String dataStr = "Message from Java";
  
  try{
   system = new AS400(server, user, pass);
   DataQueue dq = new DataQueue(system, queue);
   
   // Convert the Data Strings to IBM format
   byte[] byteData = dataStr.getBytes("IBM285");
   
   dq.write(byteData);
   
  }catch(Exception e){
   e.printStackTrace();
  }finally{
   // Make sure to disconnect
   if(system != null){
    try{
     system.disconnectAllServices();  
    }catch(Exception e){}
   }
  }
 }
}

These are the very basics of how to use the IBM Toolbox for Java to communicate with programs on the AS/400. For more details and advanced topics you can consult the IBM programmer's guide. To view or download the PDF version of this document, select IBM® Toolbox for Java™ (about 3100 KB).

Friday, April 12, 2013

Calling AS/400 CL Commands from Java

Effective SOA solutions take advantage of all the services available across the systems, which include the legacy systems in your organization. There are several ways you can integrate the existing services running on your legacy systems. This series of posts is about using the AS/400 services directly in your Java applications. I will be covering how to call RPG programs, AS/400 Commands and AS/400 DataQueues in your Java applications.

Fortunately IBM provides a very nice, easy-to-use library for communicating with the AS/400 server from Java. The IBM Toolbox for Java is a library of Java classes that gives Java programs easy access to IBM iSeries data and resources. JT Open is the open source version of the Toolbox for Java. You can go to the JT Open site to download the full set of Java libraries and find more details of how they can be used to communicate with the AS/400 server. There are several ways to access the services on an AS/400 server; the most common are listed below.

All of these different methods of accessing the AS/400 services have their own pros and cons. JT Open is a very powerful library and provides easy-to-use APIs for communicating with the AS/400 services. These posts are about using the JT Open libraries to access those services. I am splitting the different approaches into separate posts; click on a topic to see the relevant post.

  • Calling AS/400 Commands.

In this article we shall see how the CommandCall class works to execute AS/400 Commands from Java. The first thing we need is the details of the AS/400 Command that we will execute from our Java client. For the purpose of this exercise we created a test AS/400 Command that executes a batch job on the AS/400 system. Here are the details of the program:


AS/400 Command:  MYLIB/RUNMYJOB SOME(PARAM)

NOTE: You can call any iSeries server CL command.


package as400;

import java.util.Date;

import com.ibm.as400.access.AS400;
import com.ibm.as400.access.AS400Message;
import com.ibm.as400.access.CommandCall;


/**
 * Test program to test the AS/400 Command from Java.
 */
public class AS400CommandCallTest {

 public static void main(String[] args) {  

  String server="yourserver.company.com";
  String user = "AS400USER";
  String pass = "AS400PWRD";

  String commandStr = "MYLIB/RUNMYJOB SOME(PARAM)";

  AS400 as400 = null;
  try  {
   // Create an AS400 object  
   as400 = new AS400(server, user, pass);  
   
   // Create a Command object
   CommandCall command = new CommandCall(as400);

   // Run the command.
   System.out.println("Executing: " + commandStr);
   boolean success = command.run(commandStr);
   
   if (success) {  
    System.out.println("Command Executed Successfully.");
   }else{
    System.out.println("Command Failed!");
   }
   
   // Get the command results
   AS400Message[] messageList = command.getMessageList();
   for (AS400Message message : messageList){
    System.out.println(message.getText());
   }
  } catch (Exception e) {  
   e.printStackTrace();  
  }finally{
   // Make sure to disconnect
   if (as400 != null) {
    try{
     as400.disconnectAllServices();  
    }catch(Exception e){}
   }
  }  
  System.exit(0);  
 }  
}

And here is the output of this program when you run it.


Executing: MYLIB/RUNMYJOB SOME(PARAM)
Command Executed Successfully.

All the command parameters are passed as a space-separated list along with the command. We check the value returned by command.run(commandStr) to see whether the command was successful and display the appropriate message. Calling command.getMessageList() returns any messages generated by the AS/400 Command; they can be failure or success messages depending on whether the command succeeded or failed.
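
Where more detail is needed than the plain text, each AS400Message also carries an identifier and a severity that can be inspected. Here is a small sketch, continuing with the command object from the program above; the severity threshold of 30 is an arbitrary example:


   // Inspect the returned messages in more detail
   for (AS400Message message : command.getMessageList()) {
    // Message ID (e.g. CPF9898), numeric severity and text
    System.out.println(message.getID() + " [severity " + message.getSeverity() + "]: "
      + message.getText());
    if (message.getSeverity() >= 30) {
     System.out.println("Treating " + message.getID() + " as an error");
    }
   }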

Note that command.getMessageList() will only contain messages if the command you are executing generated them; some commands do not generate a completion message when they run successfully. If you want the CommandCall to return a message regardless of success or failure, you can do the following:

Create a CL program that will run the necessary command(s).

Include the SNDPGMMSG command as follows:
SNDPGMMSG MSGID(CPF9898) MSGF(QCPFMSG) MSGDTA('put msg text here') MSGTYPE(*COMP)
Note: The MSGTYPE parameter must be set to *COMP.

Now create a CL command that will call your CL program.

What you have done here is wrap the AS/400 Command inside your own custom command. The new CL command allows you to receive a meaningful completion message in your Java program.

These are the very basics of how to use the IBM Toolbox for Java to communicate with programs on the AS/400. For more details and advanced topics you can consult the IBM programmer's guide. To view or download the PDF version of this document, select IBM® Toolbox for Java™ (about 3100 KB).