
Thursday, May 16, 2013

Implementing Observer Pattern

The Observer Pattern is a design pattern in which an object (called the Subject) is designed in such a way that other objects can be added as Observers of the Subject, and any change in the state of the Subject is notified to the Observers. This is very useful when you want to implement some event-driven logic, such as setting a threshold on stock levels and notifying the purchasing application if stock goes below that threshold.

In this way we completely separate the concerns of different areas of the application and can implement functions that fire based on the events happening in the application. This programming model is called Event-Driven Programming, where you implement the logic based on the events occurring in the application. Without going into the details of Event-Driven Programming, let’s see how this is implemented in Java.

Java provides a class Observable and an interface Observer in the java.util package to implement the Observable paradigm. This is what you do:

  • To create the Observable (the Subject), extend the Observable class
  • To create Observers, implement the Observer interface and override the update(Observable o, Object arg) method
After that, all you need to do is register your Observer instances with the Observable using addObserver(). Let’s see how it is done; we start by creating our Subject.

package observertest;

import java.util.Observable;

public class MyObservable extends Observable {
 
 public int sum(int a, int b){
  
  // If not set then observers will think nothing is changed
  // hence no action required. 
  setChanged();
  
  // Perform the business logic
  int c = a+b;
  
  System.out.println("Notifying Observers");
  // A call to notifyObservers() also clears the changed flag
  notifyObservers(new Integer(c)); 
  
  return c;
 }

}

This is a very simple Observable with only one business method, sum(int a, int b). Things to note here are the calls to setChanged() and notifyObservers(). The setChanged() method is a marker indicating that something has changed in your Observable object. If the marker is not set, a call to notifyObservers() will have no effect. This is where you control whether or not to notify the observers.

The other interesting bit is the notifyObservers() method. A call to this method fires a notification to ALL the observers observing this object. Note that it takes an Object as a parameter, so we have to convert any primitive types to their object wrappers. This is Java’s mechanism for passing parameters to the Observers if we want to. The same parameter will be passed to all the observers listening for changes to this object.
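To tie this back to the stock-threshold example from the introduction, here is a minimal sketch of an Observable that only notifies its Observers when the business rule fires; the StockLevel class, its field names and the threshold value are made up purely for illustration.

package observertest;

import java.util.Observable;

public class StockLevel extends Observable {

 private static final int THRESHOLD = 10;
 private int quantity = 100;

 public void remove(int amount) {

  // Perform the business logic
  quantity -= amount;

  // Only mark the object as changed when the stock drops below the threshold
  if (quantity < THRESHOLD) {
   setChanged();
  }

  // Observers (e.g. a purchasing module) are only notified if setChanged() was called
  notifyObservers(Integer.valueOf(quantity));
 }
}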

Now let’s see the Observer. To make the point clear, I created two observers on the same Subject.

package observertest;

import java.util.Observable;
import java.util.Observer;

public class MyFirstObserver implements Observer {

 @Override
 public void update(Observable o, Object arg) {
  
  System.out.println("First Observer Notified: " + o + "  :  " + arg);

 }

}

And here is the second Observer for our example:

package observertest;

import java.util.Observable;
import java.util.Observer;

public class MySecondObserver implements Observer {

 @Override
 public void update(Observable o, Object arg) {
  
  System.out.println("Second Observer Notified:" + o + "  :  " + arg);

 }

}

As you can see, all we are doing here is implementing the Observer interface and overriding the update() method. A call to notifyObservers() from the Observable will eventually result in a call to the update() method. The first parameter is the Object under observation, and the second parameter is the value you passed to the notifyObservers() method from your Observable.

Let’s see this in action. I am creating a main class where I will create an instance of the Observable and add these Observers to it. Here we go…

package observertest;

public class MainClass {

 public static void main(String[] args) {
  
  int a = 3;
  int b = 4;
  
  System.out.println("Starting");
  MyObservable ob = new MyObservable();
  
  // Add observers
  System.out.println("Adding observers");
  ob.addObserver(new MyFirstObserver());
  ob.addObserver(new MySecondObserver());
  
  System.out.println("Executing Sum :  " + a + " + " + b);
  ob.sum(a, b);
  System.out.println("Finished");
  
 }
}

Here we are simply creating an instance of the Observable class and adding the two Observers to it. After that we call our business method, and this is where the Observers are notified. The Observable neither knows nor cares how many Observers are listening for changes to its state; all it does is notify everyone that something has changed and send some information about the change.

Similarly, the Observers are completely independent of the Observable. They are only interested in the Subject's changes of state, and what action to perform when notified of a change is completely up to the Observers.

This is the output when we run the MainClass

Starting
Adding observers
Executing Sum :  3 + 4
Notifying Observers
Second Observer Notified:observertest.MyObservable@1b499616  :  7
First Observer Notified: observertest.MyObservable@1b499616  :  7
Finished

Note that your output may be different, as we have no control over which observer is notified first. However, all of the Observers receive the same data: the Subject itself, and the details passed by the Subject, an Integer in our case.

For more details, look at the JavaDocs for Observer and Observable. Some study of the Event-Driven Programming paradigm would also give you better insight into the pros and cons of this programming model.

Monday, May 13, 2013

SSH - SFTP Communication

FTP is one of the most popular protocols for transferring files over a network. The protocol has been around since the very early days of computer networks and is still widely used. The FTP protocol provides functions to upload, download and delete files, create and delete directories, and read the contents of a directory.

There are libraries for almost every programming language that provide sets of APIs for working with FTP commands programmatically. In Java there are several open source libraries to choose from; the most popular is the Apache Commons Net library, which provides easy-to-use APIs for FTP file transfer. See the Commons Net site for more details, including several working sample applications that you can use.

SFTP however is a completely different story. To begin with, it has nothing to do with the FTP protocol and, contrary to common perception, the two are architecturally completely different. The abbreviation SFTP is often expanded as Secure FTP, which is not really correct; another common perception is that SFTP is some kind of FTP over SSL or SSH. In fact, SFTP stands for "SSH File Transfer Protocol". It is not FTP over SSL and not FTP over SSH; SFTP is an extension of the Secure Shell (SSH) protocol that provides file transfer capabilities. See this SFTP Wiki page for more details.

SFTP works over a secure channel, i.e. SSH. First you connect to the server; as soon as the connection is established, the server presents its public key to the client, and subsequent communication between the client and the server is encrypted based on that key. After establishing the connection, you then need to authenticate using one of the supported authentication mechanisms, e.g. Public Key or Username-Password. Successful authentication gives you a secure channel on which you open the SFTP connection for secure file transfer to and from the SFTP server.

That all sounds very nice and interesting, but the story starts getting muddy after that. Unfortunately there are not many complete open source implementations of SFTP in Java. What I have found so far are two open source implementations: one is JSch and the other is SSHTools. All other implementations are either forks of one of these two or are at a very early stage.

I will be using SSHTools for this example. I found it comparatively easy to use and it does what it says on the tin. You are, however, welcome to try both and see which one you like most. Both libraries provide very similar interfaces and are not difficult to use. To use SSHTools, all you need to do is download j2ssh-core-0.2.9.jar from the SSHTools website and place it on your classpath. For this example I will be creating an SFTP client class that connects to an SFTP server using Username-Password authentication.

Any SFTP communication starts by creating the secure channel. The first thing to do is create an SSH connection to the SFTP server using SshClient, then set your credentials on an instance of PasswordAuthenticationClient and pass it to the SshClient for authentication.


 // Create SSH Connection. 
 SshClient ssh = new SshClient();
 ssh.connect("sftp_server", new ConsoleKnownHostsKeyVerification());
 
 // Authenticate the user
 PasswordAuthenticationClient passwordAuthenticationClient = new PasswordAuthenticationClient();
 passwordAuthenticationClient.setUsername("user_name");
 passwordAuthenticationClient.setPassword("password");
 try{
  int result = ssh.authenticate(passwordAuthenticationClient);
  if(result != AuthenticationProtocolState.COMPLETE){
   throw new Exception("Login failed !");
  }
 }catch(Exception e){
  throw new Exception("Authentication Failure: " + e.getMessage()); 
 }
 
 //Open the SFTP channel
 try{
  sftp = ssh.openSftpClient();
 }catch(Exception e){
  throw new Exception("Failed to open SFTP channel: " + e.getMessage());
 }
 

The interesting bits to observe in this piece of code are the connection and authentication related lines. First of all, note the ConsoleKnownHostsKeyVerification class that we pass as a parameter to the connect() method. This is needed because when you connect to any SSH server, it supplies its public key to the client, and the client uses this key for further communication with the server. That means the login and password we pass to the SSH server for authentication are encrypted using this public key before being sent over to the server.

When we pass only the host name to the connect method, it will by default look for the known_hosts file in $HOME/.ssh/known_hosts; if it cannot find the file, or the host is not listed in it, the user is prompted to verify the server's public key signature and the following prompt comes up.

The host your.sftp.server is currently unknown to the system
The host key fingerprint is: 1028: 69 54 9c 49 e5 92 59 40 5 66 c5 2e 9d 86 af ed
Do you want to allow this host key? [Yes|No|Always]:

From this prompt, you have to manually enter one of these options to continue

  • Yes will use this host for the current session
  • No will not continue with the communication
  • Always will add this host to the known host file in your system
If you select the Always option then the host will be added to the known_hosts file and any subsequent communication will not ask for verification of the public key signature.

When the ConsoleKnownHostsKeyVerification class is passed to the connect method, the SshClient uses this instance to negotiate the protocol and exchange keys with the SSH server on your behalf; when it returns, the connection is ready for communication. This avoids the need to manually negotiate the SSH connection and verify the server signature yourself. The credentials can now be encrypted using the server's public key and sent over for authentication.

Once your login is authenticated, you can open an SftpClient over this SSH connection. That is the hard work done; once you have the SftpClient, all the standard FTP operations such as get, put, ls and mkdir are at your disposal (a minimal sketch of a few of them follows). When you have finished with your FTP operations, make sure you disconnect from both the SFTP channel and the SSH connection.
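Here is a small sketch of a few of these operations once the SftpClient is open; the directory and file names are illustrative only, and the exact method overloads are worth checking against the SSHTools Javadocs.

 // Assumes 'sftp' is the SftpClient obtained from ssh.openSftpClient() above
 sftp.mkdir("uploads");          // create a remote directory
 sftp.cd("uploads");             // change into the remote directory
 sftp.put("report.csv");         // upload a local file
 sftp.get("confirmation.txt");   // download a remote file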


 public void disconnect() throws Exception{
  
  if(sftp == null)
   throw new Exception("SFTP channel is not initialized.");
  
  if(ssh == null)
   throw new Exception("SSH session is not initialized.");
  
  try{
   sftp.quit();
  }catch(Exception e){
   throw new Exception("Failed to disconnect from the server: " + e.getMessage());
  }
  
  try{
   ssh.disconnect();
  }catch(Exception e){
   throw new Exception("Failed to disconnect from the server: " + e.getMessage());
  }
 }
 

There are a few lines of code that will be repeated for every SFTP operation, i.e. the connection, authentication and disconnect code. Let’s create a wrapper for our SFTP communication, which we can then use in our application for all SFTP operations. Encapsulating these steps behind a few easy-to-use methods will make our life a lot easier.


package ftp;

import com.sshtools.j2ssh.SftpClient;
import com.sshtools.j2ssh.SshClient;
import com.sshtools.j2ssh.authentication.AuthenticationProtocolState;
import com.sshtools.j2ssh.authentication.PasswordAuthenticationClient;
import com.sshtools.j2ssh.transport.ConsoleKnownHostsKeyVerification;

public class SFtp {
 
 private String host;            // Remote SFTP hostname

 private SshClient ssh;
 private SftpClient sftp;
 
 public SFtp(String host) {
  
  this.host = host;
  this.ssh = null;
  this.sftp = null;
 }
 
 public void connect(String user, String password) throws Exception{
  
  // Connect to SSH. 
  ssh = new SshClient();
  try{
   ssh.connect("sftp_server", new ConsoleKnownHostsKeyVerification());
  }catch(Exception e){
   throw new Exception("SSH connection failure: " + e.getMessage());
  }
  
  // Authenticate the user
  PasswordAuthenticationClient passwordAuthenticationClient = new PasswordAuthenticationClient();
  passwordAuthenticationClient.setUsername(user);
  passwordAuthenticationClient.setPassword(password);
  try{
   int result = ssh.authenticate(passwordAuthenticationClient);
   if(result != AuthenticationProtocolState.COMPLETE){
    throw new Exception("Login failed !");
   }
  }catch(Exception e){
   throw new Exception("Authenticvation Failure: " + e.getMessage()); 
  }
  
  //Open the SFTP channel
  try{
   sftp = ssh.openSftpClient();
  }catch(Exception e){
   throw new Exception("Failed to open SFTP channel: " + e.getMessage());
  }
 }
 
 public void cd(String remoteDir) throws Exception{
  
  if(sftp == null)
   throw new Exception("SFTP channel is not initialized.");
  
  if(remoteDir==null || remoteDir.trim().length()==0)
   throw new Exception("Remote directory name is not provided.");
  
  try{
   sftp.cd(remoteDir);
  }catch(Exception e){
   throw new Exception("Failed to change remote directory: " + e.getMessage());
  }
 }
 
 public void put(String fileName) throws Exception{
  
  if(sftp == null)
   throw new Exception("SFTP channel is not initialized.");
  
  if(fileName==null || fileName.trim().length()==0)
   throw new Exception("File name is not provided.");
  
  //Send the file
  try{
   sftp.put(fileName);
  }catch(Exception e){
   throw new Exception("Failed to upload file: " + e.getMessage());
  }
 }
 
 public void disconnect()throws Exception{
  
  if(sftp == null)
   throw new Exception("SFTP channel is not initialized.");
  
  if(ssh == null)
   throw new Exception("SSH session is not initialized.");
  
  try{
   sftp.quit();
  }catch(Exception e){
   throw new Exception("Failed to disconnect from the server: " + e.getMessage());
  }
  
  try{
   ssh.disconnect();
  }catch(Exception e){
   throw new Exception("Failed to disconnect from the server: " + e.getMessage());
  }
 }
 
}

Since we never use the SSH connection directly in our application, there is no need to expose any of its details in our wrapper. All we are interested in is the SFTP connection. That is why the connect method takes the user and password as parameters and does all the work of authenticating the user on the SSH channel and creating the SFTP connection.

In the other implemented methods, cd, put and disconnect, I am checking for a valid SFTP connection before any operation. The interesting bit here is the disconnect() method, where we make sure that both the SFTP channel and the SSH connection are closed.

And here is a sample client application that is using our wrapper to upload a file to an SFTP server.


package ftp;

public class SFTPTester {

 // Set these variables for your testing environment:
 private static String host = "your.sftp.server";  // Remote SFTP hostname
 private static String userName = "your_user";     // Remote system login name
 private static String password = "your_pswd";     // Remote system password
 private static String remoteDir = "remote_dir";   // Directory on SFTP Server
 private static String filePath = "local_file";    // Local file to upload
 
 public static void main(String argv[]) throws Exception {
 
  SFtp sftp = new SFtp(host);
  sftp.connect(userName, password);
  sftp.cd(remoteDir);
  sftp.put(filePath);
  sftp.disconnect();
  
 }
}

As you can see, the wrapper hides most of the plumbing code from your application and you end up with a very simple, easy-to-use interface for connecting to your SFTP server and uploading files from your local directory. This is a very simple wrapper implementing only a few SFTP methods; you can now add your own implementations of any other FTP methods you require, for example a download method like the sketch below.
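This is only a sketch; it follows the same pattern as the put() method above and assumes the single-argument SftpClient.get(String) overload mentioned earlier, which downloads into the current local directory.

 public void get(String fileName) throws Exception{
  
  if(sftp == null)
   throw new Exception("SFTP channel is not initialized.");
  
  if(fileName==null || fileName.trim().length()==0)
   throw new Exception("File name is not provided.");
  
  // Download the file to the current local directory
  try{
   sftp.get(fileName);
  }catch(Exception e){
   throw new Exception("Failed to download file: " + e.getMessage());
  }
 }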

Thursday, May 9, 2013

Queue Map Hybrid -- Creating Data Structures in Java

Recently I came across a problem where I needed a Queue implementation that could store Key-Value pairs. The requirements were twofold: first, it must behave in a FIFO fashion, and second, I should be able to look up an item by its Key without removing it from the structure. An ideal implementation for me would be a hybrid of the Queue and Map data structures already available in the Collections Framework.

Like any modern programmer :-) my first attempt was to search for existing implementations, and to my surprise I could not find anything that fit my criteria. Either no one has ever considered such a data structure, or it is so specialized that no one bothered to publish one; whatever the reason, I did not find any clean implementation that I could use for my requirements.

That gave me the motivation to create my own and publish it for the community; maybe someone else is looking for a similar solution and can benefit from the work I have already done. However, instead of simply posting my solution here, I am also taking this as an opportunity to provide some guidelines for students and junior programmers on how to design a new data structure. In this post I will try to explain what data structures are and how we design them.

Data Structures

Data structures are a special way of storing and organizing data in a computer’s memory. In addition to storing the data, they also provide functionality to manipulate the data stored in the structure. What functionality is provided depends on the data structure; typical operations are Add, Remove, Find, First, Last, etc. Different kinds of data structures are suited to different applications: some are very basic, like arrays, and some are highly specialized, like the B+ Tree. The bottom line is that you store some related data in memory and provide appropriate operations on that data.

Functional Requirements

The first step in defining a data structure is to gather the requirements: what exactly are you looking to store in the structure, and what behaviour is expected from it? For the purpose of this exercise, I have created this list of requirements. The new structure must be able to:

  • Store Key-Value pairs
  • Have a fixed size of structure
  • Remove the oldest entry when adding a new pair to a full structure
  • Find a Value by Key
  • Store any Object as Key or Value
  • Update Value of a Key
  • Remove any item using Key
Now that we have our requirements laid out, we can look at existing solutions and how much of this functionality they already provide. I had in mind the Queue and the Map from the Java Collections Framework; combining these two will give me all of the above functions.

Designing the Interface

An Interface is a set of functions that will be available for the users of that structure. In our case, all the public methods of the Data Structure are going to be the interface for that data structure. Now that we have the functional requirements, we can define the public methods of the new structure that we are going to create. This is what I came up with:

 public synchronized void addItem(K key, V value);
 public synchronized V getItem(K key);
 public synchronized void remove(K key);
 public int size();
 public void clear();

These methods cover the functional requirements we set out in the requirements-gathering phase. Now we can worry about the actual implementation of these methods.

Before we start implementing these methods, we must first look at the underlying structures we are going to use. Remember the two basic requirements we set out at the beginning: it must behave like a Queue, and it must be able to store Key-Value pairs. For that we already have two very nice interfaces in the Java Collections Framework: Queue and Map. Both of these interfaces provide functions to add, get and remove elements. However, they are interfaces, and we need to choose suitable implementations to use in our new structure; we don’t want to re-invent the wheel, do we?

For the purpose of this exercise we are going to use the LinkedList implementation of Queue and HashMap implementation of Map; simply because they are the most basic ones. Now let’s start implementing the Structure. We begin by declaring the class and instance variables.


public class QueuedMap<K, V> {

 private static final int MAX_SIZE=1024;

 private int size;
 private Map<K, V> values;
 private Queue<K> keys;

 public QueuedMap() {
  this(64);
 }

 public QueuedMap(int size) throws IllegalArgumentException{

  if(size<=0){
   throw new IllegalArgumentException("Size can only be a +ive Integer");
  }

  if(size > QueuedMap.MAX_SIZE)
   throw new IllegalArgumentException("Size cannot be more than " + QueuedMap.MAX_SIZE);

  this.size = size;
  this.values = new HashMap<K, V>(this.size);
  this.keys = new LinkedList<K>();
 }

}

There are a few interesting things to note here. First of all, the use of Generics (if you are new to Generics, follow this nice Oracle Tutorial or this Wikipedia Page for more information). This class declaration covers our "Store Key-Value pairs" requirement by using the Map, and also the "Store any Object as Key or Value" requirement by allowing the class to be instantiated with any Type. Introducing Generics also adds an element of type safety; refer to the links above for details of how Generics achieve that.

The other important bit is the constructor. The default constructor initializes the structure with a default size of 64, while the overloaded constructor takes the size as a parameter and initializes the structure with the given size. This fulfils the "Have a fixed size of structure" requirement: the structure can have virtually any size, but once initialized it cannot be changed. The MAX_SIZE constant that restricts the size of the structure is there only as a reference, to let you impose a maximum size if you want such a restriction. Also notice the Map and Queue being initialized as a HashMap and a LinkedList in the constructor.

Now let’s look at the implemented methods of our data structure. I will start with the easiest ones, the size() and clear() methods. These are standard methods that should be implemented by any data structure.


 public int size(){
  return this.keys.size();
 }

 public void clear(){
  this.values.clear();
  this.keys.clear();
 }
 

As you can see, we are simply wrapping the methods provided by our underlying data structures. In size() we return the size of our Queue, and in clear() we call the clear() method of both the Queue and the Map. Since the underlying data structures already provide these functions, we don’t have to reinvent the wheel here; simple encapsulation is more than adequate.

Now let's look at the other methods in our data structure. Notice the synchronized keyword on the operations that read or modify the structure; this is because neither of the underlying data structures we are using is synchronized, and we have to provide our own thread-safety mechanism.


 public synchronized void addItem(K key, V value){

  if(key == null || value == null)
   throw new NullPointerException("Cannot insert a null for either key or value");

  // First see if we already have this key in our queue
  if(this.keys.contains(key)){
   // Key found. 
   // Simply replace the value in Map
   this.values.put(key, value);
  }else{
   // Key not found
   // Add value to both Queue and Map
   this.enqueue(key, value);
  }
 } 
 

This is a very simple method exploiting the actual implementations of the underlying structures. The first thing to check before adding this Key-Value pair is that neither the Key nor the Value is null. If we have a value for both objects, we check whether we already have this Key in our structure. If so, we simply replace the Value in our Map with the new Object received. If not, we add this Key-Value pair to our data structure. This is how we tackle the "Store any Object as Key or Value" and "Update Value of a Key" requirements. The actual work of storing a new Key-Value pair is slightly more involved, so we use a private method, enqueue(K, V), which is not visible to the users of the structure.


 private void enqueue(K key, V value){

  if(this.keys.size() < this.size){
   // We still have space in the queue
    // Add the entry in both the queue and the Map
   if(this.keys.add(key)){
    this.values.put(key, value);
   }
  }else{
   // Queue is full. Need to remove the Head 
   // before we can add a new item.
   K old = this.keys.poll();
   if(old!=null)
    this.values.remove(old);

   // Now add the new item to both queue and the map
   this.keys.add(key);
   this.values.put(key, value);
  }
 }

In the enqueue(K, V) method, the first thing we check is whether we still have space in the structure. For that we use the instance variable size that we initialized in the constructor, and we do not allow the structure to grow beyond this size. If the size of our structure is still less than the maximum size with which it was initialized, we simply add the Key to the Queue and the Key-Value pair to the Map. If we have already reached the maximum size, we first remove the oldest Key from the Queue, then remove the pair with that Key from the Map, before adding the new entries to both the Queue and the Map.

This is where you can see the Queue and Map in action. Keeping the Keys in the Queue ensures that, when a new element is added to a full structure, the oldest one is removed; we can then use the Key to manipulate the data stored in the Map. This takes care of our "Remove the oldest entry when adding a new pair" requirement.

The remaining two methods, getItem(K key) and remove(K key) are also fairly simple. All we are doing here is wrapping the functionality already provided by the underlying Queue and Map to control the behaviour.


 public synchronized V getItem(K key){

  if(key==null)
   return null;

  V val = this.values.get(key);
  return val;
 }

 public synchronized void remove(K key){

  if(key == null)
   throw new NullPointerException("Cannot remove a null key");

  this.keys.remove(key);
  this.values.remove(key);
 }

Here in the getItem(K key) we are simply returning the Value for that Key from our Map if the Key is not null. This fulfils our "Find a Value by Key" requirement.

The remove(K key) method is slightly more involved; here we take the Key and remove it from both the Queue and the Map. This takes care of the "Remove any item using Key" requirement we set out for the structure.

That completes our Data Structure with all of the Functional Requirements we set out at the beginning of this post. Below is the full source for you to give you the full picture of how this all fits together.


package com.raza.collection;

import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;
import java.util.Queue;

/**
 * <p>
 * QueuedMap is a specialised implementation of the Queue which can 
 * store Key Value pairs instead of just the Objects. This flexibility 
 * comes handy when you want to retrieve a specific Object from the Queue 
 * then instead of trying to find the object by iterating the whole 
 * Queue you can simply get the Object using the Key.
 * </P><p>
 * In order for the structure to work properly, it is vital to override 
 * the hashCode() and equals(Object obj) methods in your Key class. These are 
 * the methods that the underlying Map will use to compare the Keys 
 * to retrieve/remove the correct Object from the QueuedStore.
 * </P><p>
 * The structure will always have a fixed size. If the structure is not 
 * initialised with a given value then it will use default value of 64 
 * to initialise. Once the Structure is initialised then the size cannot 
 * be amended. Once the overall structure reaches the maximum size then 
 * any new Key Value pairs added to the structure will result in removing 
 * the oldest entry from the structure.
 * </P><p>
 * There is virtually no size limit to the size of the structure. The structure 
 * can be initialised with any arbitrary value, however, at the time of initialisation 
 * one should always consider keeping up with the best practices used to initialise 
 * Map data structures as the underlying implementation uses the HashMap to store 
 * the Key Value pairs.
 * </P>
 * 
 * @author Baqir Raza Abidi
 * @date 26 Mar 2013 16:03:08
 */
public class QueuedMap<K, V> {

 /**
  * Final variable indicates the Maximum size of this 
  * structure. 
  */
 private static final int MAX_SIZE=1024;

 private int size;
 private Map<K, V> values;
 private Queue<K> keys;

 /**
  * Default constructor for the class. Creates a class with the default 
  * structure size of {@code 64}. Once the structure is created then the 
  * size of the structure will remain the same.   
  */
 public QueuedMap() {
  this(64);
 }

 /**
  * <p>
  * Creates the structure with the given size. The constructor throws Exception if
  * the size given is less than 1. The structure cannot be created with a 0 or -ive 
  * size. 
  * </p><p>
  * The maximum size of the structure is also limited to the {@code QueuedStore.MAX_SIZE}
  * </p>
  *  
  * @param size Size of the Structure. 
  * @throws IllegalArgumentException If an invalid size is provided. 
  */
 public QueuedMap(int size) throws IllegalArgumentException{

  if(size<=0){
   throw new IllegalArgumentException("Size can only be a +ive Integer");
  }

  if(size > QueuedMap.MAX_SIZE)
   throw new IllegalArgumentException("Size cannot be more than " + QueuedMap.MAX_SIZE);

  this.size = size;
  this.values = new HashMap<K, V>(this.size);
  this.keys = new LinkedList<K>();
 }

 /**
  * <p>
  * Add a new {@code (Key, Value)} pair to the structure. Both the Key and Value can 
  * be any {@code Objects}. The method throws a {@code NullPointerException} in case any of
  * the Key and Value are {@code null}. 
  * </p><p>
  * If both the Key and Value are non null objects then it will try to store the
  * pair to the structure. If the key already exists in the Store then it will
  * simply replace the Value of that Key in the Store with the new Value. If the 
  * Key is a new one then it will try to store a new entry in the Structure. 
  * </p><p>
  * When storing a new entry in the structure, it first checks the size of the 
  * Structure and if it is still less than the size with which it was initialised then 
  * it will add the Key Value pair to the Structure. In case the size is now reached 
  * the limit then the method will first remove the oldest entry from the Structure 
  * and then will add the new Key Value pair to the Store. 
  * </p>
  * 
  * @param key  Object represents the Key.
  * @param value Object represents the Value. 
  * @throws Exception 
  */
 public synchronized void addItem(K key, V value){

  if(key == null || value == null)
   throw new NullPointerException("Cannot insert a null for either key or value");

  // First see if we already have this key in our queue
  if(this.keys.contains(key)){
   // Key found. 
   // Simply replace the value in Map
   this.values.put(key, value);
  }else{
   // Key not found
   // Add value to both Queue and Map
   this.enqueue(key, value);
  }
 }

 /**
  * Returns the value to which the specified key is associated,
  * or {@code null} if this Structure contains no association for the key.
  * <p>
  * More formally, if this map contains a mapping from a key
  * {@code k} to a value {@code v} such that {@code (key==null ? k==null :
  * key.equals(k))}, then this method returns {@code v}; otherwise
  * it returns {@code null}.  (There can be at most one such mapping.)
  * </p>
  *
  * @param key the key whose associated value is to be returned
  * @return the value to which the specified key is mapped, or
  *         {@code null} if this map contains no mapping for the key
  */
 public synchronized V getItem(K key){

  if(key==null)
   return null;

  V val = this.values.get(key);
  return val;
 }

 /**
  * Removes the mapping for a key from this Structure if it is present
  * (optional operation).   More formally, if this Structure contains a 
  * mapping from key <tt>k</tt> to value <tt>v</tt> such that
  * <code>(key==null ?  k==null : key.equals(k))</code>, that mapping
  * is removed.
  *
  * @param key key whose mapping is to be removed from the map
  */
 public synchronized void remove(K key){

  if(key == null)
   throw new NullPointerException("Cannot remove a null key");

  this.keys.remove(key);
  this.values.remove(key);
 }

 /**
  * Returns the number of elements in this collection.  
  * @return size of the structure.
  */
 public int size(){
  return this.keys.size();
 }

 /**
  * Removes all of the elements from this collection (optional operation). 
  * The collection will be empty after this method returns.
  */
 public void clear(){
  this.values.clear();
  this.keys.clear();
 }

 /*
  * Method implementing the actual logic to add 
  * the Key Value pair to the structure. 
  */
 private void enqueue(K key, V value){

  if(this.keys.size() < this.size){
   // We still have space in the queue
    // Add the entry in both the queue and the Map
   if(this.keys.add(key)){
    this.values.put(key, value);
   }
  }else{
   // Queue is full. Need to remove the Head 
   // before we can add a new item.
   K old = this.keys.poll();
   if(old!=null)
    this.values.remove(old);

   // Now add the new item to both queue and the map
   this.keys.add(key);
   this.values.put(key, value);
  }
 }
}
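
To see the eviction behaviour in action, here is a small hypothetical usage example; a structure of size 2 drops its oldest Key as soon as a third pair is added.

QueuedMap<String, Integer> recent = new QueuedMap<String, Integer>(2);

recent.addItem("a", 1);
recent.addItem("b", 2);
recent.addItem("c", 3);                    // "a" is the oldest entry, so it is evicted

System.out.println(recent.getItem("a"));   // null - evicted
System.out.println(recent.getItem("c"));   // 3
System.out.println(recent.size());         // 2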

Note here that the QueuedMap data structure we created uses HashMap and LinkedList as its building blocks. Both of these structures from the Java Collections Framework allow null values to be stored, which is something we have to guard against ourselves. Also, neither of them is synchronized, i.e. they are not safe for multi-threaded applications as-is; hence the operational methods in this QueuedMap are explicitly marked as synchronized for thread safety.

I have tried to provide some guidelines on how to create a new data structure by combining existing functionality already provided by the Collections Framework. These are the same principles that apply to any software development: wherever possible, reuse existing functions, classes and methods. However, you still need to consider the pros and cons of the underlying building blocks. For example, since HashMap and LinkedList are not synchronized, we have to take care of thread safety ourselves, or alternatively use other implementations of Queue and Map that provide it (a sketch of that alternative follows).
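As a sketch of that alternative, the two fields could be declared using implementations from java.util.concurrent (this would require importing ConcurrentHashMap and ConcurrentLinkedQueue); note, however, that compound operations such as the contains()-then-put sequence in addItem() are still not atomic, so the synchronized methods would remain necessary unless the logic is redesigned.

 // Alternative field declarations using java.util.concurrent implementations.
 // Individual calls are then safe without locking, but the check-then-act logic
 // in addItem() and enqueue() still needs external synchronization.
 private Map<K, V> values = new ConcurrentHashMap<K, V>();
 private Queue<K> keys = new ConcurrentLinkedQueue<K>();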

This gives you the basic building blocks to come up with your own ideas and create more complex data structures according to your requirements. One good variation on this data structure would be to implement it as a priority queue where, instead of removing the oldest entry, you remove the least recently accessed entry. The possibilities are endless.

Friday, May 3, 2013

JMS on Glassfish with Standalone Client

Most of today’s businesses have a whole range of systems supporting their day-to-day needs, and these systems may differ greatly in architecture and technology. Effective communication between these heterogeneous systems has become an integral part of modern business. Messaging standards like JMS make it easier to create effective communication solutions across distributed systems for exchanging business data or events. In this post I will explain what messaging is in general and how we can build asynchronous messaging systems using JMS and Message-Driven Beans on the Glassfish Application Server.

Messaging

Messaging in simple terms is communication between two parties, a sender and a receiver. That can be as simple as an email sent from one party to another. Contents of that email can be some instructions or simply information and the receiver can take action according to the information received in the message.

Enterprise messaging is pretty similar: one system (the sender) sends a message to another system (the consumer), and the receiver can take appropriate action on that message. It is not mandatory for the receiver to be available at the time the message is sent, nor do the two need to know each other in order to exchange messages. All they need is an agreed message format so that the receiver can understand what the sender has sent. This loose coupling makes messaging solutions quite different from other communication solutions such as CORBA or RMI.

JMS Messaging

The Java Message Service (JMS) is a messaging API for the Java platform that defines a common set of interfaces to create, send and receive messages. There are two messaging models in JMS: point-to-point and publish-subscribe. We will look at both of these models in this post.

Point-to-point
The point-to-point model relies on the concept of a message Queue. In this model there is one sender and one consumer at either end of the Queue. The sender sends a message to a Queue, which is then received by the Consumer from the same Queue. The Consumer then processes the message and acknowledges its receipt. There is no timing dependency in this model: if the Consumer is not available at the time the message is sent, the message remains in the Queue, and the Consumer can receive and acknowledge it when it becomes available again. A message remains in the Queue until it is acknowledged.

Publish-Subscribe
This model allows a message to be sent to multiple Consumers. The message is not sent to any particular consumer; it is broadcast to a channel called a Topic, and any Consumers subscribed to that Topic can receive the message. This model does have a timing dependency, and messages are not retained by the JMS provider for long: if a subscriber is not active at the time of the broadcast, that subscriber will miss the message. However, the JMS API allows creating a durable subscription to receive messages published while a Subscriber is not active; the JMS Provider will retain a message for a durable subscriber until it is received or expires (sketched below).
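As an illustration, a durable subscription is created with the standard JMS API roughly like this; the JNDI names, client ID and subscription name are placeholders rather than resources created in this post, and the usual javax.jms and javax.naming imports are assumed.

// Look up the Topic resources (placeholder JNDI names)
InitialContext ctx = new InitialContext();
TopicConnectionFactory factory = (TopicConnectionFactory) ctx.lookup("MyTopicConnectionFactory");
Topic topic = (Topic) ctx.lookup("MyTopic");

// The client ID plus the subscription name identify this durable subscriber,
// so the provider can retain messages published while it is offline
TopicConnection connection = factory.createTopicConnection();
connection.setClientID("reporting-client");
TopicSession session = connection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
TopicSubscriber subscriber = session.createDurableSubscriber(topic, "reporting-subscription");

connection.start();
Message message = subscriber.receive();   // also delivers messages sent while this subscriber was inactive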

JMS on Glassfish

The Glassfish application server has an integrated JMS provider Open MQ providing full messaging support to any applications deployed on Glassfish. Open MQ is a complete message-oriented middleware and JMS implementation.

In Glassfish we create two kinds of JMS resources to communicate using JMS messages: Connection Factories and Destination Resources. A Connection Factory is the object used by senders to create connections to the JMS Provider, Open MQ in our case. We can have three types of Connection Factories:

  • Queue Connection Factory: This is used to create a Queue Connection to the JMS Provider.
  • Topic Connection Factory: This is used to create a Topic Connection to the JMS Provider.
  • Connection Factory: This is a generic factory object that can be used to create Queue as well as Topic connections.
A Destination Resource is the actual channel used by the Sender to send messages and by the Consumer to receive them. We have two types of resources:
  • Queue: This is for point-to-point communication.
  • Topic: This is for publish-subscribe communication.
All of these Connections and Resources must be created in the JMS provider. Once the infrastructure is in place we can start communicating on these channels.

Message-Driven Bean

A Message-Driven Bean (MDB) is a special Enterprise Bean that processes JMS messages asynchronously. An MDB acts as a listener for JMS messages. A JMS client has no direct access to the MDB; instead, the JMS client sends a message to a JMS resource (a Queue or a Topic), and at the other end of that resource an MDB listening on it processes the message.

In this post I will create the JMS resources in Glassfish, write the MDBs that listen to these resources and deploy them in Glassfish. After that we will create standalone JMS clients and send messages to these MDBs using JMS connections. We will do that for both a Queue and a Topic.

  • JMS Queue Messaging.

In the first part we will create a Queue in Glassfish and an MDB listening to that Queue. After that we will create a standalone client that sends a JMS message to the Queue.

Creating JMS Resources

The first thing we need to do is create the JMS resources in Glassfish. I find it easiest to use the asadmin CLI to create server resources. Go to the Glassfish installation's bin directory and type asadmin; this will open the asadmin prompt. Type in the following commands to create the JMS resources. Below is the output from my computer when I created these resources, which also shows how to open the asadmin prompt.


Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation.  All rights reserved.

C:\Users\raza.abidi>cd \glassfish\glassfish\bin

C:\glassfish\glassfish\bin>asadmin
Use "exit" to exit and "help" for online help.
asadmin> create-jms-resource --restype javax.jms.Queue TestQueue
Administered object TestQueue created.
Command create-jms-resource executed successfully.
asadmin> create-jms-resource --restype javax.jms.QueueConnectionFactory TestQueueConnectionFactory
Connector resource TestQueueConnectionFactory created.
Command create-jms-resource executed successfully.
asadmin>

The create-jms-resource command will create the resource for you and once the resources are created then you can execute the list-jms-resources command to see the existing resources in your server. Below is the output from list-jms-resources command in my system.


asadmin> list-jms-resources
TestQueue
TestQueueConnectionFactory
Command list-jms-resources executed successfully.
asadmin>

You have just created a JMS Queue and a QueueConnectionFactory in Glassfish. Now we need to create an MDB that will listen to this Queue for any incoming messages.

All you need to do is create a class that implements MessageListener and override its onMessage method to provide your own implementation. You also need to use the @MessageDriven annotation, which provides the details of which resource the MDB listens to.


package com.test.ejb.mdb;

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

/**
 * Message-Driven Bean implementation class for: TestMdb
 */
@MessageDriven(
  activationConfig = { 
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"), 
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "TestQueue")}, 
  mappedName = "TestQueue")
public class TestQueueMdb implements MessageListener {

 /**
  * @see MessageListener#onMessage(Message)
  */
 public void onMessage(Message message) {
  
  try {
   message.acknowledge();
  } catch (Exception e) {
   e.printStackTrace();
  }
      
  TextMessage txtMessage = (TextMessage) message;

  try {
   System.out.println(txtMessage.getText());
  } catch (Exception e) {
   e.printStackTrace();
  }
 }
}

In the @MessageDriven annotation you have to provide two activation properties, destinationType and destination. The mappedName property is where you put the name of the resource to which the MDB is listening.

When this MDB is deployed on the Glassfish server, it starts listening to TestQueue. As soon as a message arrives in TestQueue, the container executes the onMessage method of this bean. In this method you can receive the message and process it according to your requirements. For simplicity I am using a TextMessage, but you can handle more complex message types in exactly the same way, as sketched below. Here I am simply extracting the text from the TextMessage object and printing it to the console.
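A minimal sketch of an onMessage implementation that distinguishes message types might look like this; the ObjectMessage and MapMessage branches and the "orderId" key are purely illustrative, and the additional javax.jms imports (ObjectMessage, MapMessage, JMSException) are assumed.

 public void onMessage(Message message) {
  try {
   if (message instanceof TextMessage) {
    // Plain text payload
    System.out.println(((TextMessage) message).getText());
   } else if (message instanceof ObjectMessage) {
    // A serializable Java object as the payload
    Object payload = ((ObjectMessage) message).getObject();
    System.out.println("Received object: " + payload);
   } else if (message instanceof MapMessage) {
    // Name-value pairs; "orderId" is an illustrative key
    System.out.println(((MapMessage) message).getString("orderId"));
   }
  } catch (JMSException e) {
   e.printStackTrace();
  }
 }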

Now we need to create a JMS Client that will send a message to this Queue to see this in action.


package jms;

import java.util.Properties;

import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.QueueConnectionFactory;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.Context;
import javax.naming.InitialContext;

public class TestJMSQueue {
 
 public static void main(String a[]) throws Exception {
  
  // Commands to create Queue 
  // asadmin --port 4848 create-jms-resource --restype javax.jms.Queue TestQueue
  // asadmin --port 4848 create-jms-resource --restype javax.jms.QueueConnectionFactory TestQueueConnectionFactory
  
  String msg = "Hello from remote JMS Client";
  
  TestJMSQueue test = new TestJMSQueue();
  
  System.out.println("==============================");
  System.out.println("Sending message to Queue");
  System.out.println("==============================");
  System.out.println();
  test.sendMessage2Queue(msg);
  System.out.println();
  System.out.println("==============================");
  System.exit(0);
 }
 
 private void sendMessage2Queue(String msg) throws Exception{
  
  // Provide the details of remote JMS Client
  Properties props = new Properties();
  props.put(Context.PROVIDER_URL, "mq://localhost:7676");
  
  // Create the initial context for remote JMS server
  InitialContext cntxt = new InitialContext(props);
  System.out.println("Context Created");
  
  // JNDI Lookup for QueueConnectionFactory in remote JMS Provider
  QueueConnectionFactory qFactory = (QueueConnectionFactory)cntxt.lookup("TestQueueConnectionFactory");
  
  // Create a Connection from QueueConnectionFactory
  Connection connection = qFactory.createConnection();
  System.out.println("Connection established with JMS Provide ");
  
  // Initialise the communication session 
  Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
  
  // Create the message
  TextMessage message = session.createTextMessage();
  message.setJMSDeliveryMode(DeliveryMode.NON_PERSISTENT);
  message.setText(msg);
  
  // JNDI Lookup for the Queue in remote JMS Provider
  Queue queue = (Queue)cntxt.lookup("TestQueue");
  
  // Create the MessageProducer for this communication 
  // Session on the Queue we have
  MessageProducer mp = session.createProducer(queue);
  
  // Send the message to Queue
  mp.send(message);
  System.out.println("Message Sent: " + msg);
  
  // Make sure all the resources are released 
  mp.close();
  session.close();
  cntxt.close();
  
 }
 
}

JMS clients use JNDI to look up the JMS resources in the JMS Provider. Notice the properties passed to the InitialContext: here we provide the JMS provider URL with the server name and the port on which the JMS Provider is listening for connections. If the JMS client is running in the same JVM as the JMS Provider, there is no need to provide any additional properties to the InitialContext and it should work seamlessly.

The first thing we need to do is get a QueueConnectionFactory using its JNDI name and create a Connection from that factory. Then we initialise a Session on the connection; this is where you specify whether the session is transacted and what the acknowledge mode is. I am not using transactions, so the first argument is false, and the acknowledge mode is AUTO_ACKNOWLEDGE. Now you can create a message on this session.

Once you have successfully created the JMS message, you need to send it to a resource. Again you use a JNDI lookup to find the Queue. You then create a MessageProducer from the Session for that Queue, and the MessageProducer sends the message to the Queue.

After sending the message, it is now time to release the resources.

Running the JMS Client Example

Now let’s run this example. First of all make sure that the Glassfish server is running and the ConnectionFactory and Resource are created. For that you can open the asadmin console and type in the list-jms-resources command to see the JMS resources on your Glassfish installation. This is already described above.

In order to run the client successfully you need a few jar files on your classpath.

From your Glassfish lib folder:

  • gf-client.jar
  • javaee.jar
From your Glassfish modules folder:
  • javax.jms.jar
And these files are in imqjmsra.rar archive that you can find in your glassfish\mq\lib directory. You need to manually extract all of these jar files from imqjmsra.rar and place them in the classpath of your JMS Client.
  • fscontext.jar
  • imqbroker.jar
  • imqjmsbridge.jar
  • imqjmsra.jar
  • imqjmx.jar
  • imqstomp.jar

Once you have your classpath setup and the Glassfish is up and running then you can run the client to see the JMS communication in action. Here is the output when I run the client on my machine.


==============================
Sending message to Queue
==============================

Context Created
Connection established with JMS Provider
Message Sent: Hello from remote JMS Client

==============================

And here is the output on Glassfish console. This will be available on server.log file by default.


[INFO|glassfish3.1.2|Hello from remote JMS Client]

As you can see from the output, the message sent by the Client is consumed by the MDB.

Now let’s see how to broadcast the JMS messages to multiple consumers.

  • JMS Topic Broadcasting.

Creating a Topic is not much different from creating a Queue in Glassfish. Repeating the procedure we followed earlier, we now create a Topic and a TopicConnectionFactory.

Creating JMS Resources

First thing that we need to do is to create the JMS resources in Glassfish. Open the asadmin prompt as described earlier and type in the following commands to create the JMS Resources, below is the output from my computer when I created these resources.


asadmin> create-jms-resource --restype javax.jms.Topic TestTopic
Administered object TestTopic created.
Command create-jms-resource executed successfully.
asadmin> create-jms-resource --restype javax.jms.TopicConnectionFactory TestTopicConnectionFactory
Connector resource TestTopicConnectionFactory created.
Command create-jms-resource executed successfully.
asadmin>

The create-jms-resource command will create the resource for you and once the resources are created then you can execute the list-jms-resources command to see the existing resources in your server. Below is the output from list-jms-resources command in my system.


asadmin> list-jms-resources
TestQueue
TestTopic
TestQueueConnectionFactory
TestTopicConnectionFactory
Command list-jms-resources executed successfully.
asadmin>

You have just created a JMS Topic and a TopicConnectionFactory in Glassfish. Now we need to create a few MDBs that will subscribe to this Topic for any broadcast messages.

Just like before, all you need to do is create a class that implements MessageListener and override its onMessage method to provide your own implementation. You also need the @MessageDriven annotation, which provides the details of which resource the MDB listens to.

Since we are experimenting with broadcasting, where there can be multiple listeners, it is better to create two MDBs to properly illustrate the publish-subscribe paradigm. Below is the code for both classes; they are essentially identical.


package com.test.ejb.mdb;

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

/**
 * Message-Driven Bean implementation class for: TestTopicMdb1
 */
@MessageDriven(
  activationConfig = { 
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Topic"), 
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "TestTopic")}, 
  mappedName = "TestTopic")
public class TestTopicMdb1 implements MessageListener {

 /**
     * @see MessageListener#onMessage(Message)
     */
    public void onMessage(Message message) {
     
     try {
   message.acknowledge();
  } catch (Exception e) {
   e.printStackTrace();
  }
     
     TextMessage txtMessage = (TextMessage) message;
     
     try {
   System.out.println("First Listener: " + txtMessage.getText());
  } catch (Exception e) {
   e.printStackTrace();
  }
    }

}

And the second one is:


package com.test.ejb.mdb;

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

/**
 * Message-Driven Bean implementation class for: TestTopicMdb2
 */
@MessageDriven(
  activationConfig = { 
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Topic"), 
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "TestTopic")}, 
  mappedName = "TestTopic")
public class TestTopicMdb2 implements MessageListener {

 /**
     * @see MessageListener#onMessage(Message)
     */
    public void onMessage(Message message) {
     
     try {
   message.acknowledge();
  } catch (Exception e) {
   e.printStackTrace();
  }
     
     TextMessage txtMessage = (TextMessage) message;
     
     try {
   System.out.println("Second Listener: " + txtMessage.getText());
  } catch (Exception e) {
   e.printStackTrace();
  }
    }

}

In the @MessageDriven annotation you have to provide two activation properties, destinationType and destination. The mappedName property is where you put the name of the resource to which the MDB is listening.

When these MDBs are deployed on the Glassfish server, they subscribe to TestTopic. As soon as a message arrives on TestTopic, each of them executes its onMessage method. Here I am simply extracting the text from the TextMessage object and printing it to the console.

Now we need to create a JMS Client that will send a message to this Topic to see this in action.


package jms;

import java.util.Properties;

import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import javax.jms.TopicConnectionFactory;
import javax.naming.Context;
import javax.naming.InitialContext;

public class TestJMSTopic {
 
 public static void main(String a[]) throws Exception {
  
  // Commands to create Topic
  // asadmin --port 4848 create-jms-resource --restype javax.jms.Topic TestTopic
  // asadmin --port 4848 create-jms-resource --restype javax.jms.TopicConnectionFactory TestTopicConnectionFactory
  
  String msg = "Hello from remote JMS Client";
  
  TestJMSTopic test = new TestJMSTopic();
  
  System.out.println("==============================");
  System.out.println("Publishig message to Topic");
  System.out.println("==============================");
  System.out.println();
  test.sendMessage2Topic(msg);
  System.out.println();
  System.out.println("==============================");
  System.exit(0);
 }
 
 
 private void sendMessage2Topic(String msg) throws Exception{
  
  // Provide the details of remote JMS Client
  Properties props = new Properties();
  props.put(Context.PROVIDER_URL, "mq://localhost:7676");
  
  // Create the initial context for remote JMS server
  InitialContext cntxt = new InitialContext(props);
  System.out.println("Context Created");
  
  // JNDI Lookup for TopicConnectionFactory in remote JMS Provider
  TopicConnectionFactory qFactory = (TopicConnectionFactory)cntxt.lookup("TestTopicConnectionFactory");
  
  
  // Create a Connection from TopicConnectionFactory
  Connection connection = qFactory.createConnection();
  System.out.println("Connection established with JMS Provide ");
  
  // Initialise the communication session 
  Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
  
  // Create the message
  TextMessage message = session.createTextMessage();
  message.setJMSDeliveryMode(DeliveryMode.NON_PERSISTENT);
  message.setText(msg);
  
  // JNDI Lookup for the Topic in remote JMS Provider
  Topic topic = (Topic)cntxt.lookup("TestTopic");
  
  // Create the MessageProducer for this communication 
  // Session on the Topic we have
  MessageProducer mp = session.createProducer(topic);
  
  // Broadcast the message to Topic
  mp.send(message);
  System.out.println("Message Sent: " + msg);
  
  // Make sure all the resources are released 
  mp.close();
  session.close();
  cntxt.close();
 }
}

This client is pretty much the same as the one we wrote for communication over the JMS Queue. The only difference is that we now obtain a TopicConnectionFactory via its JNDI name and send the message to a Topic. This is how the message is broadcast to that Topic.

Running the JMS Client Example

We have the same prerequisites to run this client as we have for the Queue client. Follow the instructions from the Queue client to setup the classpath and make sure the Glassfish is running. Now let’s run this example.

Once you have your classpath setup and the Glassfish is up and running then you can run the client to see the JMS communication in action. Here is the output when I run the client on my machine.


==============================
Publishing message to Topic
==============================

Context Created
Connection established with JMS Provider
Message Sent: Hello from remote JMS Client

==============================

And here is the output on Glassfish console. This will be available on server.log file by default.


[INFO|glassfish3.1.2|First Listener: Hello from remote JMS Client]

[INFO|glassfish3.1.2|Second Listener: Hello from remote JMS Client]

As you can see from the output in the Glassfish server.log, both of the MDBs we created for this exercise were executed and processed the same message in their own way.

This has been a deliberately simplified example of how a Queue and a Topic work in Glassfish; I have glossed over minor details in order to show the concept. I am sure it gives you enough information to get started.