
Last 12 months of Android Development

I have been actively doing Android development for years, and the following are some of the apps I have developed or worked on in the last 12 months. There are a few others that are not published.

https://market.android.com/details?id=com.pixelgarde.free

https://market.android.com/details?id=com.ingress.android.deals.activities

https://market.android.com/details?id=net.cash4books.android.cash4books

https://market.android.com/details?id=com.app30a

https://market.android.com/details?id=com.ingress.android.forex.itrader

https://market.android.com/details?id=com.csipsimple

I am also a Sun Microsystems Certified Mobile Application Developer. I cleared that exam in 2006 and then built some apps using the J2ME APIs. My experience with the J2ME APIs gave me a natural affinity for the Android platform, and since I started working on it I have enjoyed it more and more. It is improving day by day, and with the help of the new 4.x OS we will be able to write one app that runs on all types of Android devices.

There are many good things in Android; following is my list of Android favorites:

  • Well-defined architecture
  • Open and innovative!
  • Support for multiple devices and resolutions
  • Well-defined APIs
  • Support for multiple layouts
  • Access to native phone controls
  • One API for both phone and tablet
  • Publishing apps is very straightforward!

One major thing that I think is still missing in Android is a good interface builder that can be used to quickly build Android interfaces. There are a few tools out there, but I have not found a good one yet. I hope the Android Eclipse plugin will improve in the future so that we can use it to quickly create Android user interfaces.

Following are some of the Android APIs and components I have used most:

  • Activity, Intent, and Service
  • Linear, Relative, Table, and Tab layouts
  • Label, Text, Image, Grid, Dialog, Menu, List, Spinner, and Notifications
  • Style and Themes
  • SQLite and Preferences
  • Camera and Audio
  • In-App Billing
  • Security and Permissions

Following are the social and third-party services I have integrated into Android apps:

  • Facebook
  • Twitter
  • Google Maps
  • Flickr
  • Photobucket
  • Picasa
  • Foursquare

There are still plenty of things to learn and do on the Android platform; following is my near-future to-do list:

  • Audio/Video Streaming
  • Bluetooth APIs
  • NFC APIs
  • USB APIs
  • Animations
  • Drag and Drop
  • Games Development
  • Custom Components

Java Concurrency Notes

Thread Locks

There are several rules that we must keep in mind when using locks:

1. Every mutable field shared between multiple threads must be guarded by a lock, or made volatile if you only need visibility.
2. Synchronize only the operations that must be synchronized; this improves performance. But don't synchronize too few operations either. Try to hold the lock only for short operations.
3. Always know which locks are acquired, when they are acquired, and by which thread.
4. An immutable object is always thread-safe.
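The first rule can be sketched in a few lines of Java. This is a minimal illustration, not production code: a counter whose read-modify-write is guarded by a lock, next to a flag that only needs visibility and so can simply be volatile.

```java
// Rule 1 sketch: guard shared mutable state with one lock,
// or use volatile when only visibility (not atomicity) is needed.
public class Counter {
    private final Object lock = new Object();
    private int count = 0;            // mutable and shared -> guarded by lock
    private volatile boolean done;    // visibility only -> volatile is enough

    public void increment() {
        synchronized (lock) {         // read-modify-write needs the lock
            count++;
        }
    }

    public int get() {
        synchronized (lock) {         // reads must use the same lock
            return count;
        }
    }

    public void finish()    { done = true; }
    public boolean isDone() { return done; }

    public static void main(String[] args) throws InterruptedException {
        Counter c = new Counter();
        Thread t1 = new Thread(() -> { for (int i = 0; i < 100_000; i++) c.increment(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 100_000; i++) c.increment(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get()); // prints 200000
    }
}
```

Without the lock, the two threads' `count++` operations would interleave and the final value would usually be less than 200000.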

Thread Monitors

There are several advantages to using monitors instead of lower-level mechanisms:

• All the synchronization code is centralized in one location, and the users of this code don't need to know how it is implemented.
• The code doesn't depend on the number of processes; it works for as many processes as you want.
• You don't need to release anything like a mutex, so you cannot forget to do it.

When writing monitors, you normally have the choice between several philosophies for the signaling operation:
1. Signal & Continue (SC): The signaling process keeps the mutual exclusion, and the signaled thread is awakened but must reacquire the mutual exclusion before proceeding.
2. Signal & Wait (SW): The signaler is blocked and must wait for mutual exclusion to continue, while the signaled thread is awakened directly and can continue its operations.
3. Signal & Urgent Wait (SU): Like SW, but the signaler thread is guaranteed to run immediately after the signaled thread.
4. Signal & Exit (SX): The signaler exits the method directly after the signal, and the signaled thread can start directly. This philosophy is rarely used.

The available policies depend on the programming language; in Java, only one policy is available: SC.
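The SC policy has a practical consequence in Java: a thread woken by `notify` only reacquires the monitor later, by which time the condition it waited for may no longer hold, so the condition must be rechecked in a `while` loop. A minimal bounded-buffer sketch:

```java
// Signal & Continue in Java: an awakened thread must recheck its
// condition in a while loop, because other threads may run between
// the notifyAll() and its reacquisition of the monitor.
public class BoundedBuffer {
    private final int[] items;
    private int count, putIdx, takeIdx;

    public BoundedBuffer(int capacity) { items = new int[capacity]; }

    public synchronized void put(int x) throws InterruptedException {
        while (count == items.length)   // while, not if: SC gives no guarantee
            wait();
        items[putIdx] = x;
        putIdx = (putIdx + 1) % items.length;
        count++;
        notifyAll();                    // signaler keeps the monitor until it exits
    }

    public synchronized int take() throws InterruptedException {
        while (count == 0)
            wait();
        int x = items[takeIdx];
        takeIdx = (takeIdx + 1) % items.length;
        count--;
        notifyAll();
        return x;
    }
}
```

An `if` instead of the `while` would be correct under Signal & Wait, but under Java's SC semantics it can let a thread proceed on a stale condition.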

JSF2, JPA, and PostgreSQL

Let's have a quick tutorial on JSF2 and JPA with a PostgreSQL database. I am using GlassFish 3, so you will not need to configure JSF.

Introduction to JSF

JSF is an MVC-based web application development framework in which you can create HTML forms, validate their values, invoke business logic, and display results. JSF provides many prebuilt HTML-based GUI controls to which you can attach server-side action handlers. JSF can also generate output in formats other than HTML and over protocols other than HTTP; this means JSF controls are device independent and can be rendered in a browser, on a mobile device, etc. JSF2 also brings very good AJAX support.

How Does a JSF Application Work?

Generally, JSF applications are made up of the following components:

  • Managed Bean / Controllers
  • Model
  • Facelet Pages

Let's briefly talk about each one:

Managed Beans

Managed beans are like other Java beans you might have written many times in your development. We call them managed beans because JSF manages their life cycle: the JSF runtime initializes the bean references in your facelets, controls their life cycle, calls their setter and getter methods, and invokes their action handlers.

You can define a managed bean with the @ManagedBean annotation. Every managed bean has a scope within which that bean instance stays alive. The bean scopes in JSF2 are request, view, session, and application, and each has an associated annotation. The default scope of a managed bean is request. JSF initializes a request-scoped bean twice for each form: 1) when the form is displayed, and 2) when the form is submitted.

You can use your managed beans to define your form properties and action handlers, and as controllers to manage the flow of your application. However, in the example given below I have used managed beans as controllers only.

Model Classes

You can define business logic in your model classes and delegate the action calls from the controller to the model, as in the CRUD book example in this article. JSF doesn't force you to use model classes, but it is good design to use them to encapsulate business logic and data.

Facelet Pages

Facelets are the primary view technology in JSF 2, replacing JSPs. All facelet pages are defined in *.xhtml files and referred to as *.jsf from the browser. You define a FacesServlet in web.xml which routes every call made for *.jsf (if you have defined the FacesServlet mapping as *.jsf) to the corresponding *.xhtml page.
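For reference, the web.xml mapping described above typically looks like the following fragment (the servlet name is conventional; the class name is the standard JSF 2 one):

```xml
<servlet>
  <servlet-name>Faces Servlet</servlet-name>
  <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
  <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
  <servlet-name>Faces Servlet</servlet-name>
  <url-pattern>*.jsf</url-pattern>
</servlet-mapping>
```

With this mapping, a request for /books.jsf is served by rendering /books.xhtml.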

Book CRUD Example

It's easy to understand any technology by doing some practical work, so let's start with a CRUD example using JSF2 and JPA. We will create a simple Book object with a few properties to play with in our CRUD example. Following are the primary classes used in this example:

  • Book: a POJO class that holds the book information and is persisted into the database using JPA.
  • BookController: the managed bean of the book example; it defines event handlers for list, create, update, and delete.
  • BookModel: defines the business logic and talks to the DAO layer to perform CRUD operations.
  • BookDAO: the DAO layer class that performs all JPA operations.

Let's create our jsftest database. You can use the following statements to create the role and database:

CREATE ROLE jsfuser LOGIN
  PASSWORD 'jsfuser'
  SUPERUSER INHERIT CREATEDB CREATEROLE;

CREATE DATABASE jsftest
  WITH OWNER = jsfuser
       ENCODING = 'UTF8'
       LC_COLLATE = 'C'
       LC_CTYPE = 'C'
       CONNECTION LIMIT = -1;

Now create the sequence and the book table that will be used by the Book entity. The sequence must be created first, because the table's default value references it:

CREATE SEQUENCE book_seq
  INCREMENT 1
  MINVALUE 1
  MAXVALUE 9223372036854775807
  START 4
  CACHE 1;
ALTER SEQUENCE book_seq OWNER TO jsfuser;

CREATE TABLE book
(
  id integer NOT NULL DEFAULT nextval('book_seq'::regclass),
  title character varying(100),
  author character varying(50),
  publisher character varying(50),
  CONSTRAINT book_pk PRIMARY KEY (id)
)
WITH (
  OIDS=FALSE
);
ALTER TABLE book OWNER TO jsfuser;

Following is the JPA entity that we will use to save our Book data:

import java.io.Serializable;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.SequenceGenerator;

@Entity
public class Book implements Serializable {

	private static final long serialVersionUID = -3397263129495350023L;

	@Id
	@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "book_seq_gen")
	@SequenceGenerator(name = "book_seq_gen", sequenceName = "book_seq", allocationSize = 1)
	private Integer id;

	@Column
	private String title;

	@Column
	private String author;

	@Column
	private String publisher;

	
	/**
	 * @return the id
	 */
	public Integer getId() {
		return this.id;
	}

	/**
	 * @param id
	 *            the id to set
	 */
	public void setId(final Integer id) {
		this.id = id;
	}

	public String getTitle() {
		return title;
	}

	public void setTitle(String title) {
		this.title = title;
	}


	public String getPublisher() {
		return publisher;
	}

	public void setPublisher(String publisher) {
		this.publisher = publisher;
	}

	public String getAuthor() {
		return author;
	}

	public void setAuthor(String author) {
		this.author = author;
	}
}

The Book class above is a very straightforward Java class. The only point worth discussing is the use of a sequence: since I am using PostgreSQL, I've used a sequence to get unique ids for book instances.

	@Id
	@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "book_seq_gen")
	@SequenceGenerator(name = "book_seq_gen", sequenceName = "book_seq", allocationSize = 1)
	private Integer id;

The @Id annotation marks the unique id column. When you use @Id you need to specify how the unique keys will be generated for your entity; you use the @GeneratedValue annotation to specify the generation strategy. In the case of a sequence you also need the @SequenceGenerator annotation. @SequenceGenerator has different attributes to control your sequence, and I've used three of them: 1) name: the name of the sequence generator. The value you give this attribute is also used in the generator attribute of @GeneratedValue; in the above example the generator name is "book_seq_gen". 2) sequenceName: the name of the sequence you created in your database. 3) allocationSize: the number of sequence values the persistence provider preallocates at a time (not the initial value of the sequence); set it to 1 to match a database sequence that increments by 1.

Let's take a look at our BookController, which is the managed bean for Book.

import java.io.Serializable;
import java.util.List;

import javax.faces.bean.ManagedBean;
import javax.faces.bean.ManagedProperty;
import javax.faces.bean.SessionScoped;

/**
 */
@ManagedBean
@SessionScoped
public class BookController implements Serializable {

	private static final long serialVersionUID = -6557818468188090120L;

	@ManagedProperty("#{bookModel}")
    private BookModel bookModel;

    private final static String EDIT_BOOK = "editBook.xhtml";
    private final static String LIST_BOOKS = "books.xhtml";
    
    private Book currentBook;
    private int bookId;

    /**
     * @return the bookModel
     */
    public BookModel getBookModel() {
        return this.bookModel;
    }

    /**
     * @param bookModel the bookModel to set
     */
    public void setBookModel(final BookModel bookModel) {
        this.bookModel = bookModel;
    }

    public List<Book> getBooks() {
        return this.bookModel.getBooks();
    }

    public String create() {
    	currentBook = new Book();
        return EDIT_BOOK;
    }
    
    public String edit() {
        currentBook = this.bookModel.find(getBookId());
        return EDIT_BOOK;
    }

    public String save() {
    	bookModel.save(currentBook);
        return LIST_BOOKS;
    }
    
    public String delete() {
        bookModel.delete(getBookId());
        return LIST_BOOKS;
    }

	public Book getCurrentBook() {
		return currentBook;
	}

	public void setCurrentBook(Book currentBook) {
		this.currentBook = currentBook;
	}

	public int getBookId() {
		return bookId;
	}

	public void setBookId(int bookId) {
		this.bookId = bookId;
	}

}

BookController has three parts: 1) properties, 2) action handlers, and 3) the placeholder property "currentBook". BookController is a session-scoped bean, and it uses @ManagedProperty to inject the BookModel class, delegating all actions to the model class for business operations. BookController supports the following operations:

  • Listing all books: implemented by getBooks and used in the books.xhtml facelet.
  • Create and Edit: implemented by create and edit; they forward to editBook.xhtml.
  • Save: implemented by save; it returns to the book listing page, books.xhtml.
  • Delete: implemented by delete; it returns to the book listing page, books.xhtml.

Let's have a look at BookModel. We have implemented the book model as another managed bean, but it could also be implemented as an EJB.


import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

import javax.faces.bean.ManagedBean;
import javax.faces.bean.SessionScoped;

@ManagedBean
@SessionScoped
public class BookModel implements Serializable {

	private static final long serialVersionUID = -3104501356925971373L;

	private List<Book> books;

	public BookModel() {
		books = new ArrayList<Book>();
		reload();
	}

	private void reload() {
		this.setBooks(BookDAO.Factory.getInstance().findAll());
	}

	/**
	 * @return the books
	 */
	public List<Book> getBooks() {
		return this.books;
	}

	/**
	 * @param books the books to set
	 */
	public void setBooks(final List<Book> books) {
		this.books = books;
	}

	public void save(Book book) {
		BookDAO.Factory.getInstance().save(book);
		reload();
	}

	public Book find(Integer id) {
		return BookDAO.Factory.getInstance().findById(id);
	}

	public void delete(Integer bookId) {
		BookDAO.Factory.getInstance().delete(bookId);
		reload();
	}

}

BookModel is a very simple model bean; it just forwards calls to the DAO layer to perform the CRUD operations. You can implement different business logic in your own model beans.

Following are the DAO layer classes:

import java.util.List;

public class BookDAO extends BaseDAO {

	private BookDAO() {
	}

	public boolean delete(Integer id) {
		return delete(id, Book.class);
	}

	public Book findById(Integer id) {
		return (Book) findById(id, Book.class);
	}

	public List<Book> findAll() {
		return findAll("FROM Book");
	}

	public static class Factory {

		private final static BookDAO INSTANCE = new BookDAO();

		public static BookDAO getInstance() {
			return INSTANCE;
		}
	}

}


import java.util.List;

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.EntityTransaction;
import javax.persistence.Persistence;
import javax.persistence.spi.PersistenceProviderResolverHolder;

import org.apache.log4j.Logger;

/**
 */
public abstract class BaseDAO implements DAO {

	private final static Logger logger = Logger.getLogger(BaseDAO.class);

	protected final static EntityManagerFactory EMF;

	static {
		logger.info("Initializing EntityManagerFactory...");
		logger.info("providers:"
				+ PersistenceProviderResolverHolder
						.getPersistenceProviderResolver()
						.getPersistenceProviders());
		EMF = Persistence.createEntityManagerFactory("default");
	}

	public EntityManager getEntityManager() {
		return EMF.createEntityManager();
	}

	public boolean delete(Integer id, Class clazz) {
		EntityManager e = getEntityManager();
		EntityTransaction t = null;
		try {
			t = e.getTransaction();
			t.begin();
			Object entity = e.find(clazz, id);
			e.remove(entity);
			t.commit();
			return true;
		} catch (Exception ex) {
			if (t != null && t.isActive())
				t.rollback();
			logger.error(ex.getMessage(), ex);
			return false;
		} finally {
			e.close();
		}
	}

	public boolean save(Object object) {
		EntityManager e = getEntityManager();
		EntityTransaction t = null;
		try {
			t = e.getTransaction();
			t.begin();
			e.merge(object);
			t.commit();
			return true;
		} catch (Exception ex) {
			if (t != null && t.isActive())
				t.rollback();
			logger.error(ex.getMessage(), ex);
			return false;
		} finally {
			e.close();
		}
	}

	public Object findById(Integer id, Class clazz) {
		EntityManager e = getEntityManager();
		try {
			return e.find(clazz, id);
		} finally {
			e.close();
		}
	}

	public List findAll(String query) {
		EntityManager e = getEntityManager();
		try {
			return e.createQuery(query).getResultList();
		} finally {
			e.close();
		}
	}

}

import java.util.List;

public interface DAO {

	public boolean save(Object dao);

	public boolean delete(Integer id, Class clazz);

	public Object findById(Integer id, Class clazz);
	
	public List findAll(String query);
	
}

Following facelet is used to display books:

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

<html 
  xmlns="http://www.w3.org/1999/xhtml"
  xmlns:fn="http://java.sun.com/jsp/jstl/functions"
  xmlns:ui="http://java.sun.com/jsf/facelets"
  xmlns:h="http://java.sun.com/jsf/html"
  xmlns:f="http://java.sun.com/jsf/core" >

  <h:head>
    <meta name="keywords" content="" />
    <meta name="description" content="" />
    <meta http-equiv="content-type" content="text/html; charset=utf-8" />
    <meta name="copyright" content="ingresssolutions" />
    <meta http-equiv="Expire" content="0" />
    <meta http-equiv="Pragma" content="no-cache" />
  </h:head>

  <h:body>

  <h1>Books</h1>
  <h:form>
  <table>
	<tr>
		<td colspan="2" align="right">
			<h:commandButton id="add" action="#{bookController.create}"
					value="Add Book" />

		
		</td>
	</tr>

  <tr>
  <th>Id</th>
  <th>Title</th>
  <th>Author</th>
  <th>Publisher</th>
  </tr>


  <ui:repeat value="#{bookController.books}" var="book" varStatus="s">
    <tr>
    <td>#{book.id}</td>
    <td>#{book.title}</td>
    <td>#{book.author}</td>
    <td>#{book.publisher}</td>
    <td>
    <h:commandLink action="#{bookController.edit}" value="Edit">
      <f:setPropertyActionListener target="#{bookController.bookId}" value="#{book.id}"/>
    </h:commandLink>
    </td>
    <td>
    <h:commandLink action="#{bookController.delete}" value="Delete">
      <f:setPropertyActionListener target="#{bookController.bookId}" value="#{book.id}"/>
    </h:commandLink>
    </td>
    </tr>
  </ui:repeat>

  </table>
</h:form>
  </h:body>
</html>

In the above facelet we have used the head, body, form, commandButton, repeat, and commandLink tags. commandButton and commandLink are tied to server-side action handlers. setPropertyActionListener is used to pass the book id to the controller, identifying the book on which the operation is to be performed; JSF renders a link and passes this id when the link is clicked.

Now, let's have a look at editBook.xhtml, which is used to create new books and edit existing ones.

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

<html xmlns="http://www.w3.org/1999/xhtml"
	xmlns:fn="http://java.sun.com/jsp/jstl/functions"
	xmlns:ui="http://java.sun.com/jsf/facelets"
	xmlns:h="http://java.sun.com/jsf/html"
	xmlns:f="http://java.sun.com/jsf/core">

<h:head>
	<meta name="keywords" content="" />
	<meta name="description" content="" />
	<meta http-equiv="content-type" content="text/html; charset=utf-8" />
	<meta name="copyright" content="ingresssolutions" />
	<meta http-equiv="Expire" content="0" />
	<meta http-equiv="Pragma" content="no-cache" />
</h:head>

<h:body>

	<h1>Create New Book</h1>
	<h:form>
	<table>
		<tr>
			<td>Title</td>
			<td><h:inputText id="name" value="#{bookController.currentBook.title}" /></td>
		</tr>
		<tr>
			<td>Author</td>
			<td><h:inputText id="author" value="#{bookController.currentBook.author}" /></td>
		</tr>
		<tr>
			<td>Publisher</td>
			<td><h:inputText id="publisher" value="#{bookController.currentBook.publisher}" /></td>
		</tr>
		<tr>
			<td><h:commandButton action="#{bookController.save}"
				value="Save"/>
			</td>
		</tr>

	</table>
	</h:form>
</h:body>
</html>

When you click the Add Book button or an Edit link on books.jsf, a request is sent to BookController, which initializes a new book instance or retrieves an existing one from the database and forwards the call to editBook.xhtml. That page displays a form to fill in new book information or update an existing book.

editBook.xhtml uses #{bookController.currentBook.title} and the other properties to display book information. When the JSF runtime sees this expression it looks for a bean named bookController, creates an instance of it if needed, and calls its currentBook property and then that object's title property to display the book title.

Conclusion

JSF2 is a good framework with which you can easily create web forms in Java, validate them, run server-side event handlers, and use AJAX features. The example presented in this article is very simple, but it fully demonstrates the flow of JSF2 along with JPA. I will try to write further articles on the same topic covering more advanced features of JSF2 and JPA.

Large Attachments with MTOM and CXF

Recently I solved a large file attachment problem in CXF web services using MTOM attachments. Using MTOM attachments and the DataHandler API, a single user was able to upload GBs of data. First, let's cover some background on the problem.

MTOM Overview

Binary data in SOAP messages is normally sent in Base64 format because SOAP messages are plain text, and when binary data is converted to Base64 its size increases. In the case of large attachments this kills the application and brings down the server with memory problems.

The solution for large attachments with SOAP messages is MTOM (SOAP Message Transmission Optimization Mechanism), which keeps the element typed as base64Binary in the schema but sends the actual bytes as a binary attachment rather than embedding them in the SOAP message itself. MTOM is a W3C standard for attaching binary data to SOAP messages. It provides an elegant mechanism to transfer binary data such as PDF, MS Word, images, and other document types, and it uses XML-binary Optimized Packaging (XOP) to transmit the binary data.
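The size inflation is easy to quantify: Base64 encodes every 3 bytes of input as 4 output characters, roughly a 33% overhead before the whole text is also buffered in memory. A quick illustration (the 3 MiB figure is just an arbitrary example):

```java
import java.util.Base64;

public class Base64Overhead {
    public static void main(String[] args) {
        byte[] binary = new byte[3 * 1024 * 1024];               // 3 MiB of raw data
        String encoded = Base64.getEncoder().encodeToString(binary);
        // 4 output characters per 3 input bytes -> 4 MiB of text
        System.out.println(binary.length);    // 3145728
        System.out.println(encoded.length()); // 4194304
    }
}
```

For a multi-gigabyte upload, that extra third, held as an in-memory string inside the SOAP envelope, is exactly what MTOM avoids.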

MTOM Implementation

CXF supports MTOM with a XOP implementation. To enable MTOM in your services, define the element type in your WSDL as xsd:base64Binary for elements that will contain the binary data, as shown in the following snippet:

	<s:complexType name="FileUpload">
		<s:attribute name="ByteData" type="s:base64Binary" use="optional"></s:attribute>
		<s:attribute name="Name" type="s:string"></s:attribute>
		<s:attribute name="Size" type="s:long" default="0"></s:attribute>
	</s:complexType>

In the above example the ByteData attribute of FileUpload is defined as binary data. If you generate JAXB classes from the above, you will get the following code:

@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "FileUpload", propOrder = {
    "byteData"
})
public class FileUpload {

    @XmlAttribute(name = "ByteData")
    protected byte[] byteData;
    @XmlAttribute(name = "Name")
    protected String name;
    @XmlAttribute(name = "Size")
    protected Long size;

    // getters and setters for above properties
}

The above code defines the binary data element as xsd:base64Binary and maps it to a byte[] array, but this doesn't take full advantage of MTOM optimization: all the binary data will be included in the actual SOAP message rather than being streamed as an attachment.

To fully utilize MTOM you need to add the xmime:expectedContentTypes attribute to your binary data elements, typically with the value "application/octet-stream". You can use other MIME types, but "application/octet-stream" works in most cases. When you add the expectedContentTypes attribute, the binary data will be sent as an attachment and will not be included in the XML infoset. Following is the updated schema of the FileUpload type, which contains two base64Binary members, one of them with expectedContentTypes:

<s:schema xmlns:s="http://www.w3.org/2001/XMLSchema"
	targetNamespace="http://org.artstor.adam" xmlns:adam="http://org.artstor.adam"
	elementFormDefault="qualified" xmlns:xmime="http://www.w3.org/2005/05/xmlmime">

	<s:complexType name="FileUpload">
		<s:sequence>
			<s:element name="File" type="s:base64Binary"
				xmime:expectedContentTypes="application/octet-stream"></s:element>
		</s:sequence>
		<s:attribute name="ByteData" type="s:base64Binary" use="optional"></s:attribute>
		<s:attribute name="Name" type="s:string"></s:attribute>
		<s:attribute name="Size" type="s:long" default="0"></s:attribute>
	</s:complexType>

</s:schema>

If you generate Java classes for the above schema with JAXB, you will get the following code. As you can see, the byteData property still uses byte[], but the File element that we declared with the expectedContentTypes attribute has been generated with a DataHandler.

@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "FileUpload", propOrder = { "file" })
public class FileUpload {

	@XmlElement(name = "File", required = true)
	@XmlMimeType("application/octet-stream")
	protected DataHandler file;
	@XmlAttribute(name = "ByteData")
	protected byte[] byteData;
	@XmlAttribute(name = "Name")
	protected String name;
	@XmlAttribute(name = "Size")
	protected Long size;

// getters and setters for above properties
}

The DataHandler and DataSource APIs do the actual streaming part of the MTOM attachments. There are different DataSource implementations available that you can use on the client side to initialize your POJOs and send binary data to your web services. We will see how these are used in the following section.

So now we are done with the schema part of the MTOM attachment example. Next, let's enable the MTOM feature.

MTOM Configuration

You can enable MTOM through Java code or configuration; I have used cxf.xml to enable MTOM:

	  <jaxws:endpoint id="mtomService"  implementor="com.learn.cxf.mtom.MTOMServiceImpl" address="/soap">
	        <jaxws:properties>
	          <entry key="mtom-enabled" value="true"/>
	          <entry key="attachment-directory" value="/tmp/"/>
	          <entry key="attachment-memory-threshold" value="4000000"/>
	        </jaxws:properties>
	    </jaxws:endpoint>

As you can see, I've added three properties to the jaxws:endpoint:

  1. mtom-enabled: the primary property; setting it to true enables the MTOM feature.
  2. attachment-directory: the directory to which binary data is saved before streaming; it works together with the next property.
  3. attachment-memory-threshold: the memory threshold, in bytes, up to which binary data is kept in memory; data exceeding the threshold is written to the directory specified by attachment-directory.

MTOM Client

The important part of the MTOM client in our example is the use of the DataHandler and DataSource APIs to send binary data to the web service.

I have used the Apache Commons FileUpload utility to process the multipart data. In the following example I've used our FileUpload class, a Java bean used to upload MTOM attachments, together with DiskFileItemFactory to retrieve the multipart data from the client. The following extract shows the client code:


FileItemFactory factory = new DiskFileItemFactory();
ServletFileUpload upload = new ServletFileUpload(factory);

// we will upload multiple large files to our web service
List<FileUpload> imgFileList = new ArrayList<FileUpload>();

List<FileItem> items = upload.parseRequest(request);
for (FileItem fileItem : items) {
    // is it a file-upload multipart field?
    if (!fileItem.isFormField()) {
        FileUpload imgFile = new FileUpload();
        imgFile.setName(fileItem.getName());
        imgFile.setSize(fileItem.getSize());
        DataSource dataSource = new InputStreamDataSource(fileItem.getInputStream(), "application/octet-stream");
        DataHandler dataHandler = new DataHandler(dataSource);
        imgFile.setFile(dataHandler);
        imgFileList.add(imgFile);
    }
}

// call the web service and forward the uploaded images using the web service request POJO
if (imgFileList.size() > 0) {
    // UploadImageRequest is used as the input parameter to the uploadImage service
    UploadImageRequest uploadImageRequest = new UploadImageRequest();
    uploadImageRequest.getUploadedImages().addAll(imgFileList);
    // service is an object of a web service port
    service.uploadImages(uploadImageRequest);
}

The above code gives you an overview of how to use DataHandler and InputStreamDataSource from client code to upload large data files. If you don't want streaming and want to attach the whole payload as one byte array, you can use ByteArrayDataSource. If the file is stored locally, you can also use FileDataSource.

In the case of large files you need to keep the web service connection open until the whole file is streamed and transferred to the web service. To change the connection timeouts and enable MTOM on the client side using the Java API, use the following code:


       Binding binding = ((BindingProvider)port).getBinding();
       ((SOAPBinding)binding).setMTOMEnabled(true);

       Client cl = ClientProxy.getClient(port);

       HTTPConduit http = (HTTPConduit) cl.getConduit();

       HTTPClientPolicy httpClientPolicy = new HTTPClientPolicy();
       // one-hour timeouts
       httpClientPolicy.setConnectionTimeout(1000 * 60 * 60);
       httpClientPolicy.setReceiveTimeout(1000 * 60 * 60);

       http.setClient(httpClientPolicy);

On the web service end you can use the DataHandler.writeTo method to write the streamed data to a file, as shown below:

public void saveFiles(List<FileUpload> imgFileList) {
    for (FileUpload imageFile : imgFileList) {
        if (imageFile.getFile() != null) {
            FileOutputStream fos = null;
            try {
                fos = new FileOutputStream(new File(destDir, imageFile.getName()));
                imageFile.getFile().writeTo(fos);
            } catch (IOException ex) {
                // TODO handle it
            } finally {
                if (fos != null) {
                    try {
                        fos.close();
                    } catch (IOException e) {
                        // TODO handle it
                    }
                }
            }
        }
    }
}

In the end, I have written this article for intermediate programmers who can connect all the bits and pieces of the code pasted above, as I have not given a complete example but enough information on how to use MTOM in CXF. If you want to read more on web services, don’t forget to subscribe to this blog!

XML Schemas in a Nutshell

If you are working on web services, you should know many bits and pieces of XML schemas. XML Schema was approved as a W3C Recommendation in May 2001 and is now widely used for structuring XML documents for e-commerce and Web Services applications.

XML Schema Overview

The two major goals that the W3C XML Schema working group focused on during the design of the XML Schema standard were:

  • Incorporating object-oriented design principles found in common OO programming languages into the specification.
  • Providing rich datatyping support similar to the datatyping functionality available in most relational database systems.

XML Schemas provide a means of creating a set of rules governing the validity of the XML documents that you create. Schemas define the structure, content, and semantics of XML documents so that those documents can be shared between different types of computers and applications.

Now let’s talk about the different elements we use to create an XML schema.

Root Element

<?xml version="1.0" encoding="UTF-8"?> 
<schema xmlns="http://www.w3.org/2001/XMLSchema" xmlns:ac="http://www.agileconsultants.pk/schemas" targetNamespace="http://www.agileconsultants.pk/schemas" attributeFormDefault="qualified" elementFormDefault="qualified">
<!-- Additional schema contents. --> 
</schema>

The root element of an XML schema is always the schema element. The default namespace for an XML schema is always "http://www.w3.org/2001/XMLSchema". The default namespace gives access to the built-in data types such as string, int, etc. A custom namespace can be defined in the declaration, as "ac" is in the above example; new types created in the schema can then be referred to through this namespace. The attributeFormDefault and elementFormDefault attributes specify whether attributes and elements must be namespace-qualified. The default value is unqualified.

Elements

Element declarations are used to define the elements of an XML document in the schema. There can be different kinds of element declarations in a schema, such as global elements and elements of simple or complex type. Global elements are the direct children of the schema element. An element declaration can contain a simpleType or complexType child if no type attribute is specified for it. An XML schema can also define global attributes, which can later be used in schema declarations.

There are a number of attributes that can be used with elements. Following is a brief description of these attributes:

  • name: Used to specify the name of the element.
  • type: Used to specify the data type of the element. This attribute cannot be used if the element contains a simpleType or complexType child element.
  • default: Used to specify the default value of the element.
  • minOccurs: Used to specify the minimum number of occurrences of this element in the XML document.
  • maxOccurs: Used to specify the maximum number of occurrences of the element in the XML document. This attribute can be set to the special value "unbounded".
  • nillable: Used to specify whether the element can contain a nil/null value.

<?xml version="1.0" encoding="UTF-8"?> 
<schema xmlns="http://www.w3.org/2001/XMLSchema" xmlns:ac="http://www.agileconsultants.pk/schemas" targetNamespace="http://www.agileconsultants.pk/schemas">

<!-- This is a declaration of a global element which makes it possible to use the element in the root of an XML document (instance).
--> 
<element name="person" type="ac:Person"/>

<complexType name="Person"> 
        <sequence>
               <element name="firstName" type="string" nillable="false"/> 
               <element name="lastName" type="string" nillable="false"/> 
               <element name="age" type="int"/> 
               <element name="favColour" type="string" minOccurs="0" maxOccurs="unbounded"/> 
        </sequence>
</complexType> 

</schema>

Complex Types

Complex types describe how child elements are organized inside an element. Complex types represent the objects in your XML documents and can use the following elements to organize the properties of those objects:

  • sequence: Specifies that the child elements of the complexType must appear in the same order as they are defined inside the sequence element.
  • all: Specifies that the child elements of the complexType can occur in any order, each at most once. An all group may contain only individual element declarations; it cannot contain nested sequence, choice, or all groups.
  • choice: Specifies that exactly one of the elements listed inside the choice element should be provided.
  • simpleContent: Used to add attributes to a complex type whose content is a simple type. It can be used to extend or restrict other complex types with simple content.
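As an illustration of the choice group, the following fragment (the Contact type name is just for this example) requires exactly one of two child elements:

```xml
<!-- A contact must provide exactly one of email or phone. -->
<complexType name="Contact">
    <choice>
        <element name="email" type="string"/>
        <element name="phone" type="string"/>
    </choice>
</complexType>
```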

Simple Types

The simpleType element is used to restrict built-in simple types such as int, string, etc. Following is an example of a simpleType:

<!-- Define a new simple type used for a person's age. This type cannot be further extended (final="#all") and has to have a value in the range from 0 to 150. White-space in values of the age type will be collapsed, i.e. TAB, CR and LF will be replaced with spaces, two or more consecutive spaces will be replaced with one single space, and finally leading and trailing white-space will be removed.
--> 
<simpleType name="age" final="#all">
      <restriction base="integer"> 
            <minInclusive value="0"/>
            <maxInclusive value="150"/>
            <whiteSpace value="collapse"/>
      </restriction>
 </simpleType>

The elements contained in the restriction element are called facets. A facet represents a characteristic of the built-in type that is being restricted. Please refer to the XML Schema specification for a complete list of the facets available for the different built-in types.
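To make the whiteSpace="collapse" semantics concrete, here is a small plain-Java sketch that mimics the same normalization (this is just an illustration of the facet’s behavior, not part of any schema API):

```java
public class WhiteSpaceCollapse {

    // Mimic XML Schema's whiteSpace="collapse": replace TAB/CR/LF with
    // spaces, squeeze runs of spaces down to one, then trim both ends.
    static String collapse(String value) {
        return value.replaceAll("[\\t\\r\\n]", " ")
                    .replaceAll(" +", " ")
                    .trim();
    }

    public static void main(String[] args) {
        System.out.println(collapse("  42\t\n ")); // prints "42"
    }
}
```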

Facets

This section shows examples of how to use some common facets when defining simple types. Numerical facets were used in the above example. Let’s look at some other facets:

Pattern and Length

The pattern facet allows you to supply a regular expression that the value of the simple type must match, while the length facet fixes the exact length of the value. Notice that the pattern facet is available not only for the string built-in type but also for numeric types like float.
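For example, a simple type for a two-letter country code could combine both facets (the type name here is just for illustration):

```xml
<simpleType name="countryCode">
    <restriction base="string">
        <!-- Exactly two upper-case letters, e.g. "PK" or "US". -->
        <length value="2"/>
        <pattern value="[A-Z]{2}"/>
    </restriction>
</simpleType>
```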

Enumeration Facet

The enumeration facet is used to provide a list of values from which a selection must be made:

<simpleType name="colour">
     <restriction base="string">
          <enumeration value="Blue"/> 
          <enumeration value="Green"/>
          <enumeration value="Pink"/>
          <!-- Etc. -->
     </restriction> 
</simpleType>

Import and Include

Existing XML schema definitions can be imported or included in a new XML schema in order to reuse previous definitions and/or to split XML schema definitions into multiple files.

Import: Imports an XML schema that has a different namespace from the XML schema into which it is imported.

Include: Includes an XML schema that has the same namespace as the XML schema into which it is included.
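In schema syntax, the two look like this (the namespace and schemaLocation values are placeholders):

```xml
<!-- Reuse definitions from a schema with a DIFFERENT target namespace. -->
<import namespace="http://www.example.com/other"
        schemaLocation="other.xsd"/>

<!-- Pull in definitions from a schema with the SAME target namespace. -->
<include schemaLocation="common-types.xsd"/>
```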

Hibernate/JPA Annotations

Today I’m going to talk about how to use Hibernate/JPA annotations. We will have different articles on these annotations, starting with the basics and moving to more advanced uses. Most of the Hibernate annotations are similar to the ones found in the JPA specification, and I will specifically point out whenever I use one that doesn’t exist in both.

Let’s start with the simple annotations you will need to map your objects. Following is a list of the basic annotations you will need to persist your POJOs.

  1. Entity
  2. Table
  3. Id and GeneratedValue
  4. Column
  5. Transient
  6. Enumerated
  7. Temporal
  8. Type
  9. Basic

Entity, Table and Id Annotations

Every POJO that you want to persist in the database must be marked with the @Entity annotation. This annotation marks your POJO as ready for persistence.

The POJO to which you apply the @Entity annotation can also have a @Table annotation, which is used to specify the table in which your POJO data will be saved (if omitted, the entity name is used as the table name). The @Table annotation has the following attributes:

  • name: This attribute is used to specify the name of the target table.
  • uniqueConstraints: This attribute is used to specify the unique constraints you have on your table.

You can have many objects of the same entity, each with its own unique id, so you will also use the @Id and @GeneratedValue annotations to set the unique identifier and the strategy that will be used for its generation, as shown in the following example:

package org.learn.java.jpa.annotations.basics;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "user")
public class User {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "id")
    private int id;

}

The @GeneratedValue annotation is used to specify how the unique identifier of each record inserted by your POJO will be created. As you can see from the above example, it has an attribute named strategy, which can take the following values:

  • AUTO: This strategy leaves it to the provider to pick an appropriate mechanism for the underlying database: auto-increment columns, sequences, or other database features for creating unique row identifiers. AUTO is the easiest and most portable way of generating object keys. The drawback is that your objects may not get an id until they have been flushed through the EntityManager, because databases create the ids for identity columns only when they receive the insert statement.
  • IDENTITY: This strategy specifies that an identity column will be used to obtain the unique identifier of each row. Databases such as MySQL, PostgreSQL, SQL Server, and DB2 support identity columns. However, Oracle doesn’t support IDENTITY columns; it supports sequences instead. This is not portable across all databases, so if you are planning to use Oracle, be careful with IDENTITY.
  • SEQUENCE: If you want to use the SEQUENCE strategy, you will need to specify a generator name, and another annotation, @SequenceGenerator, is used to supply further information about your sequence. The commonly used @SequenceGenerator attributes are a) name, which must match the generator attribute of your @GeneratedValue; b) sequenceName, the name of the database sequence; and c) allocationSize, the number of ids the provider pre-allocates from the sequence at a time (initialValue sets the starting point).
  • TABLE: This is like the sequence generator in that you will need to specify an additional annotation, @TableGenerator, if your strategy is TABLE. @TableGenerator has five commonly used attributes: a) name; b) table, the name of the table that will be used to create keys; c) pkColumnName, used to specify the name of the primary key column; d) valueColumnName, used to specify the sequence value column name; and e) pkColumnValue, used to specify the row whose value column holds the current value of the sequence.

Following is sample code, together with the backing table, that can be used if your generation strategy is TABLE:

package org.learn.java.jpa.annotations.basics;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;
import javax.persistence.TableGenerator;

@Entity
@Table(name = "user")
public class User {

	@Id
	@GeneratedValue(strategy = GenerationType.TABLE, generator = "usr_tab_seq")
	@TableGenerator(name = "usr_tab_seq", table = "seq_tab", pkColumnName = "seq_name", valueColumnName = "seq_val", pkColumnValue = "usr_seq")
	private int id;

}

Sequence Table
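The generator above expects a table along the following lines (a sketch in MySQL-style SQL; the table and column names match the @TableGenerator attributes used above):

```sql
CREATE TABLE seq_tab (
    seq_name VARCHAR(50) NOT NULL PRIMARY KEY,  -- pkColumnName
    seq_val  INT NOT NULL                       -- valueColumnName
);

-- One row per generator; pkColumnValue selects this row.
INSERT INTO seq_tab (seq_name, seq_val) VALUES ('usr_seq', 0);
```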

Column, Basic, and Transient Annotations

The @Column annotation is used to map a property of your POJO to a table column. You don’t need the @Column annotation if the name of your POJO property is the same as the name of the column in the table. If this is not the case, then you will need to explicitly map POJO properties to table columns using the @Column annotation. The @Column annotation has the following commonly used attributes:

  • name: Used to specify the name of the column in which the current property’s data will be saved.
  • unique: Used to specify whether the column is unique.
  • nullable: Used to specify whether the column can contain null values.
  • updatable: Used to specify whether the column value can be updated.

@Basic is used to specify the fetching strategy of your POJO properties. You can specify either the LAZY or the EAGER fetching strategy. I will discuss fetching strategies in detail in the next part of this series. By default every non-static property of your POJO will be persisted unless you mark it with the @Transient annotation.

Enumerated, Temporal, and Type Annotations

The @Enumerated annotation is used for the enum-type properties of your POJO. You can save either the enum’s ordinal position or its name in your table. @Temporal is used for DATE, TIME, or TIMESTAMP columns; you specify which one using its TemporalType value.
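In plain Java terms, ORDINAL persists what Enum.ordinal() returns, while STRING persists what Enum.name() returns; the UserType enum below is hypothetical and only for illustration:

```java
public class EnumMappingDemo {

    enum UserType { ADMIN, MEMBER, GUEST }

    public static void main(String[] args) {
        // EnumType.ORDINAL would store 1; EnumType.STRING would store "MEMBER".
        System.out.println(UserType.MEMBER.ordinal()); // prints 1
        System.out.println(UserType.MEMBER.name());    // prints MEMBER
    }
}
```

Note that ORDINAL is fragile: reordering the enum constants silently changes the stored values, so STRING is often the safer choice.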

The @Type annotation is used to specify the data type class that will be used to persist the current property. This is not a JPA annotation; it is a Hibernate annotation. An example of using it: you want to use the Joda-Time DateTime class to represent user creation and modification dates, which you later use for the different operations supported by the Joda-Time library.

The following example shows all the annotations we have discussed so far:

package org.learn.java.jpa.annotations.basics;

import java.util.Date;

import javax.persistence.Basic;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.EnumType;
import javax.persistence.Enumerated;
import javax.persistence.FetchType;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;
import javax.persistence.Temporal;
import javax.persistence.TemporalType;

import org.hibernate.annotations.Type;
import org.joda.time.DateTime;

@Entity
@Table(name = "user")
public class User {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "id")
    private int id;

    private String username;

    private String password;

    @Column(name = "first_name", unique = false, nullable = false, updatable = true)
    @Basic(fetch = FetchType.EAGER)
    private String firstName;

    @Column(name = "last_name", unique = false, nullable = false, updatable = true)
    @Basic(fetch = FetchType.LAZY)
    private String lastName;

    @Enumerated(EnumType.ORDINAL)
    private UserType userType;

    @Temporal(TemporalType.TIMESTAMP)
    @Column(name = "last_logon", nullable = false)
    private Date lastLogon;

    @Column(name = "creation_date", nullable = false, updatable = false)
    @Type(type = "org.joda.time.contrib.hibernate.PersistentDateTime")
    private DateTime creationDate;

    @Column(name = "modification_date", nullable = false)
    @Type(type = "org.joda.time.contrib.hibernate.PersistentDateTime")
    private DateTime modificationDate;

}

To compile the above code, make sure you have the following jar files on your classpath:

  • ejb3-persistence.jar
  • hibernate-annotations.jar
  • joda-time-version.jar
  • joda-time-hibernate-version.jar

Take care until the next part of this article in which we will discuss Collections and other JPA annotations.

Responsibilities of a Developer

I’ve been doing development for more than a decade now. In my experience, a developer has the following responsibilities.

Task Ownership

Every developer should take ownership of his tasks. By task ownership I mean that the tasks given to a developer are his responsibility to analyze, code, test, and deliver. If a task depends on another resource, the excuse “my task is delayed because I was waiting for others” should not come up; instead, he should keep in touch with the others involved in his task and help them if required, so that his own task can be finished as well. In my view, the more ownership you take of your tasks, the better developer you will be.

Real-World Analysis

It is also the responsibility of a developer to do a real-world analysis of the tasks given to him. Sometimes the requirements he gets are missing vital information which, if he does not find it, will break his code in the real world. So, every developer should ask himself at least the following questions:

  • How is my code going to work in the real world, in peak and non-peak hours?
  • How will my code behave when it is integrated with others’ code?
  • Have I covered all the good and bad scenarios in which my code will run?
  • Have I put enough debugging information in place to find the root cause if anything goes wrong?

QA of his Code

Every developer should do the QA of his own code, but I have seen that many developers are too lazy to test their code. Some of them consider it the job of the QA team, but in fact it is one of the prime responsibilities of the developer to test his code before handing it over to QA. Test-driven development is good: it saves a lot of time and removes headaches for the developer in the long run, since with proper unit tests and manual QA he can find many bugs very early in the life cycle of his code. The QA team doesn’t know what you have written, so they cannot find many potential bugs which you can easily find with some attention to the quality of your code.

Code Documentation

Writing no documentation for your code is one of the biggest evils of software engineering. In my view, before implementing the core logic you should first do two things: 1) write unit tests, 2) write comments. Code documentation should come at the following levels:

  1. Class level documentation. Before starting a new class and implementing any methods in it, first write the following in that class’s comments:

    • Purpose of the class
    • Responsibility of the class (apply the SRP* design principle)
    • Usage of the class
    • Threading implications

    * SRP is a design principle which states that “Every class should have a single responsibility to exist. It should do only one thing and do it well.”

  2. Method level documentation. Before implementing any method, first write the following in that method’s comments:

    • Purpose of the method
    • Method signature
    • Possible exceptions
    • Threading implications
    • Overriding implications

  3. Code level documentation. Before implementing any core logic in a method, first write some comments about what you are trying to solve there, as not everyone is as smart as you are. In order to understand your complex logic, readers need some help, and good comments come to the rescue here.

Support of his code

Support is another very important responsibility of a developer. Whenever there is an issue in production, every developer should be available 24/7 to support his code. Making reliable production systems is not an easy job, and there will be times when your support department will call you to assist them because their customers are complaining about some malfunction in your code. When that time comes, make sure you are available and working to resolve their issues in the best possible time.

Grooming other resources

A time comes when you become senior and have members in your team who were recently hired or are not as experienced as you are. Now it’s your job to get them up to speed: teach them about the product they are working on with you, about the tools and technologies you are using, about the best practices you are implementing, and about the unit tests you have written, and work with them in harmony.

Refactoring of your code

A time will come when your code will require refactoring in many areas. Do not hesitate to do it once you have good unit tests to avoid the possibility of broken code. If you do not have unit tests, write them first.