Multi-Core and Massively Parallel Processors

Coming soon to a theater near you...

Parallel Programming in Java
Fortunately for Java programmers, the language was designed from the beginning with concurrency in mind. Java includes support for threads, which can be used to run parts of your code in parallel, and "monitors," which are special kinds of locks acquired using the synchronized keyword. The java.util.concurrent package makes managing threads much easier, and also provides a hash map designed for concurrent access and "blocking queues" that can be used to pass messages efficiently between threads. If you're a J2EE developer, you're even more fortunate, because J2EE application servers such as IBM WebSphere manage parallelism for you automatically. For example, multiple simultaneous requests to a Web site can be processed in parallel by multiple threads managed by the application server and running on multiple cores.

Creating a new thread in Java is as simple as extending the java.lang.Thread class and overriding its run method. Alternatively, you can construct a Thread, passing it an object that implements Runnable. See Listing 1 for a simple example of creating and running threads.
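The two styles might look like the following sketch (hypothetical code in the spirit of Listing 1, which is not reproduced here; class and variable names are illustrative):

```java
// A minimal sketch of the two thread-creation styles described above
// (hypothetical code in the spirit of Listing 1, not the listing itself).
public class TicToc {
    public static void main(String[] args) throws InterruptedException {
        // Style 1: extend Thread and override run()
        Thread tic = new Thread() {
            @Override
            public void run() {
                for (int i = 0; i < 3; i++) System.out.println("tic");
            }
        };
        // Style 2: pass a Runnable to the Thread constructor
        Thread toc = new Thread(new Runnable() {
            public void run() {
                for (int i = 0; i < 3; i++) System.out.println("toc");
            }
        });
        tic.start();   // both threads now run in parallel,
        toc.start();   // so "tic" and "toc" may interleave
        tic.join();    // wait for both to finish
        toc.join();
    }
}
```

Note that a thread does not begin executing until start is called; calling run directly would simply execute the method on the current thread.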

The java.util.concurrent package provides an alternative way to run code in parallel that removes some of the burden of managing threads directly. Listing 2 shows an example using the ExecutorService API. While this code appears somewhat more complex than the first example, one key difference is that the number of executing threads is no longer hard-coded into the application - only the amount of work done in each "task."
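A hedged sketch of that approach (assumed to be similar in spirit to Listing 2; the pool size and class name here are illustrative):

```java
// A sketch of running tasks through an ExecutorService instead of
// managing Thread objects directly (hypothetical, not Listing 2 itself).
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class TicTocPool {
    public static void main(String[] args) throws InterruptedException {
        // The pool, not the application code, decides how many threads
        // run; submitted tasks only define the units of work.
        ExecutorService pool = Executors.newFixedThreadPool(2);
        for (int i = 0; i < 3; i++) {
            pool.submit(() -> System.out.println("tic"));
            pool.submit(() -> System.out.println("toc"));
        }
        pool.shutdown();                              // accept no new tasks
        pool.awaitTermination(10, TimeUnit.SECONDS);  // wait for completion
    }
}
```

Changing the argument to newFixedThreadPool is all it takes to tune the degree of parallelism; none of the task code needs to change.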

You may notice that the two samples we've looked at so far generate output that is a random interleaving of the words "tic" and "toc," and that the interleaving changes on each execution. That happens because the threads execute essentially without regard to what's happening in other threads.

Now let's look at how Java helps you coordinate multiple concurrently executing threads. The primary mechanism used to coordinate access to shared data in Java is the monitor. In object-oriented programming, the class is a natural protection boundary for private instance data, so in Java every object is assigned a unique monitor. Methods declared with the synchronized keyword automatically enter the object's monitor when called and exit it on returning. Only one thread can be inside a monitor at any one time, which means that if instance data is accessed only inside synchronized methods, a data race can't occur. Listing 3 shows an example in which multiple threads update a common counter value, using a monitor to ensure that no two threads race.
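Such a monitor-protected counter might look like the following sketch (a hypothetical reconstruction in the spirit of Listing 3, which isn't reproduced here):

```java
// Sketch of a counter protected by a monitor via synchronized methods
// (hypothetical code in the spirit of Listing 3).
public class SyncCounter {
    private int count = 0;

    // Only one thread at a time can be inside a synchronized method of
    // this object, so the read-modify-write below cannot race.
    public synchronized void increment() { count++; }
    public synchronized int get() { return count; }

    public static void main(String[] args) throws InterruptedException {
        SyncCounter c = new SyncCounter();
        Thread[] workers = new Thread[4];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) c.increment();
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        System.out.println(c.get()); // always 40000, never a lost update
    }
}
```

Without the synchronized keyword, two threads could read the same value of count and both write back count + 1, silently losing one increment.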

In some cases, monitors incur too much overhead and simpler alternatives suffice. For example, if hundreds of threads run the counter example in Listing 3, performance can be dominated by the time spent entering and exiting the monitor rather than by useful work. The java.util.concurrent.atomic package provides lightweight alternatives to monitors for safely updating shared locations under such contention. For example, in Listing 4 an AtomicInteger object is used to safely increment a shared counter without any synchronization.
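The AtomicInteger version might be sketched as follows (hypothetical code in the spirit of Listing 4):

```java
// Sketch of the same counter using AtomicInteger instead of a monitor
// (hypothetical code in the spirit of Listing 4).
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounter {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger count = new AtomicInteger(0);
        Thread[] workers = new Thread[4];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) {
                    count.incrementAndGet(); // one atomic update, no lock held
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        System.out.println(count.get()); // always 40000
    }
}
```

Because incrementAndGet is a single atomic operation, threads never block waiting for a monitor, which keeps heavily contended counters cheap.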

In some cases, a thread will want to wait (or block) for a particular condition before proceeding. For example, a thread consuming data from a stack must wait for another thread to add an entry when the stack is empty. One way to do that would be to have the thread repeatedly check the stack size, but such busy-waiting wastes processor time. Java provides an easier and more efficient alternative: the consuming thread can call the wait method on the shared object and, when another thread adds an entry, that thread can call the notify method, which wakes up an arbitrary waiting thread. Listing 5 shows the use of a monitor together with wait and notify to operate on a simple stack of integers.
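Such a blocking stack might be sketched as follows (hypothetical code in the spirit of Listing 5; class and method names are illustrative):

```java
// Sketch of a blocking integer stack using wait/notify
// (hypothetical code in the spirit of Listing 5).
import java.util.ArrayDeque;
import java.util.Deque;

public class IntStack {
    private final Deque<Integer> items = new ArrayDeque<>();

    public synchronized void push(int v) {
        items.push(v);
        notify(); // wake one thread blocked in pop()
    }

    public synchronized int pop() throws InterruptedException {
        while (items.isEmpty()) {
            wait(); // releases the monitor until notify() is called
        }
        return items.pop();
    }

    public static void main(String[] args) throws Exception {
        IntStack stack = new IntStack();
        Thread consumer = new Thread(() -> {
            try {
                System.out.println("popped " + stack.pop()); // blocks until push
            } catch (InterruptedException e) { /* ignored in this sketch */ }
        });
        consumer.start();
        Thread.sleep(100); // demo only: let the consumer block first
        stack.push(42);
        consumer.join();
    }
}
```

The wait call is placed inside a while loop rather than an if, because a woken thread must re-check the condition before proceeding: another consumer may have emptied the stack first.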

Java offers many more useful features to help you in your parallel programming tasks. We encourage you to explore them and learn more about parallel programming in Java through the excellent resources listed at the end of this article.

What Does the Future Hold?
Even with all of this native support in Java, parallel programming can still be very difficult. First of all, Java provides the means to parallelize an application but does nothing to help you design a parallel program in the first place. Furthermore, even with a good parallel design, there are many challenges in achieving good performance and avoiding concurrency bugs. One common problem is synchronization causing an excessive number of threads to block; in the worst case, all threads block, leading to deadlock. Another, potentially more devastating, problem is a race condition that corrupts data. Such problems can bring the entire application down with a memory fault or, worse, intermittently produce incorrect results. They often go undetected in development environments, showing up for the first time under stress in a production environment. Further, these problems can be difficult to diagnose and debug, delaying fixes. While some good tools are available to help with problem determination, debugging remains vexing, and standard techniques such as adding print statements may actually make the problem disappear or move!

Of interest to Java developers is the recent announcement by IBM and several academic researchers of a new programming language called X10. X10 is a set of extensions to Java providing higher-level constructs specifically for parallel application development. The X10 programming environment is now an open source project at http://x10.sourceforge.net/.

The goals of X10 include managing both concurrency and the distribution of data and providing constructs to greatly simplify the task of concurrent programming. A central concept in X10 is the notion of a place. A place is an abstraction for a collection of related data and activities that operate on that data. A computation may have many places. Places serve as units of distribution - for instance, different places may be located at different nodes of a cluster. An object is created in one place and lives in that place throughout its lifetime. However, all places in a computation are part of the same address space. That is, an object located in one place may contain references to objects located at another place.

Objects are operated on by activities. Activities are much like threads in Java, except that they may be very lightweight - for instance, an activity may execute only a few instructions in its lifetime. An activity may read and write variables, invoke methods, execute control statements, catch and throw exceptions - in short, perform the actions that any Java thread can perform. X10 makes it very easy for a programmer to write code that creates a new activity: the statement async S specifies that the statement S is to be executed as its own separate activity, in parallel with the rest of the program. Listing 6 shows that the parallel Java tasks of Listing 1 are quite simple to express in X10.
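As a sketch of the idea (hypothetical code in the X10 style described above, not the article's Listing 6):

```
// X10-style sketch (hypothetical): each async body becomes a
// lightweight activity running in parallel with its parent.
async { System.out.println("tic"); }
async { System.out.println("toc"); }
```

Compare this with the Java version, where the programmer must create, start, and join Thread objects explicitly.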

Along with spawning activities, X10 supports the notion of joining activities, that is, determining when a collection of activities has terminated. The statement finish S specifies that statement S is to be executed and, if during the execution of S any activities are created, these activities must terminate before any following statement begins executing. Thus a programmer may use finish to specify an order on activities.
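A hedged sketch of how finish and async might combine, based on the semantics just described (hypothetical X10-style code):

```
// X10-style sketch (hypothetical): finish waits for every activity
// created inside its body before execution continues.
finish {
    async { System.out.println("tic"); }
    async { System.out.println("toc"); }
}
System.out.println("done"); // runs only after both activities terminate
```

This plays the role that Thread.join plays in Java, but applies to all activities spawned within the block, however deeply nested.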

Unlike Java, X10 doesn't support locks. Instead, X10 provides a very simple construct for the programmer to specify atomicity of execution. The statement atomic S is executed as if in a single step, with all other activities frozen. Listing 7 shows that the atomic increment of Listing 3 is equally easy to express in X10.
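In the X10 style described above, the counter update might be sketched as (hypothetical code, not the article's Listing 7):

```
// X10-style sketch (hypothetical): the whole block executes as one
// indivisible step; no monitor or AtomicInteger is needed.
atomic { count = count + 1; }
```

The programmer states *what* must be indivisible; the runtime decides *how* to enforce it.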

The wait/notify behavior shown in Listing 5 is accomplished in X10 with the single keyword when. Listing 8 shows the same simple integer stack implemented in X10. Notice that the pop() method uses when to make the calling activity wait for a specific condition. The notify is implicit in the action of push and doesn't require explicit coding by the programmer.
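A hedged sketch of a when-based pop (hypothetical X10-style code; the article's Listing 8 is not reproduced, and the names here are illustrative):

```
// X10-style sketch (hypothetical): pop blocks until the guard holds.
// push's update implicitly re-enables activities waiting on the guard,
// so no explicit notify is needed.
int pop() {
    when (size > 0) {
        size = size - 1;
        return contents[size];
    }
}
```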

We have only scratched the surface of X10 features for concurrent programming. More thorough and complex examples of parallel programming in X10 are provided at http://x10.sourceforge.net/.

While single-thread performance improvement may slow significantly over the coming years, processors will offer steadily expanding concurrency. For software performance to continue to improve, developers must begin thinking and coding in parallel. Fortunately, the Java programming language was designed for concurrency and provides all of the basic constructs for threading and locking. Parallel programming remains very difficult, however, due to inherent complexities such as dividing sequential tasks into balanced subtasks and avoiding race conditions and deadlocks. IBM has recently introduced a new programming language called X10 that runs on top of Java and significantly simplifies many of the tasks of parallel programming.


Resources

  1. www.ibm.com/developerworks/power/newto/
  2. www.sun.com/processors/niagara/
  3. www.ibm.com/power
  4. www.intel.com/multi-core/index.htm
  5. http://multicore.amd.com
  6. Brian Goetz, "Java Concurrency in Practice": www.briangoetz.com/pubs.html
  7. Brian Goetz's series of concurrency articles on developerWorks: www.ibm.com/developerworks/java/library/j-jtpcol.html
  8. www.oreilly.com/catalog/jthreads3/
  9. The X10 project: http://x10.sourceforge.net/

More Stories By J. Stan Cox

J. Stan Cox is a senior engineer with IBM's WebSphere Application Server performance group. In this role, he has worked to improve WebSphere application performance for J2EE, Web 2.0, Web services, XML and more. His current focus is WebSphere multicore and parallel foundation performance. Stan holds a B.S. in computer science from Appalachian State University (1990) and an M.S. in computer science from Clemson University (1992).

More Stories By Bob Blainey

Bob Blainey is a Distinguished Engineer in the IBM Software Group, responsible for the technical roadmap for software in the era of multi-core and related next-generation systems innovations. Bob is an expert in programming languages and compilers, having spent much of his career at IBM driving ever-greater performance and parallelism through program analyses and transformations. Immediately prior to his current position, Bob was CTO for Java at IBM. He is a member of the IBM Academy of Technology, an IBM Master Inventor, and, most impressive of all, manages to remain sane with two pre-teen daughters in the house.

More Stories By Vijay Saraswat

Vijay Saraswat joined IBM Research in 2003 after a year as a professor at Penn State, a couple of years at start-ups, and 13 years at Xerox PARC and AT&T Research. His main interests are in programming languages, constraints, logic, and concurrency. At IBM, he leads the work on the design and implementation of X10, a modern object-oriented programming language intended for scalable concurrent computing. Over the last 20 years he has lectured at most major universities and research labs in the U.S.A. and Europe. Vijay received a B.Tech. degree from the Indian Institute of Technology, Kanpur, and an M.S. and Ph.D. from Carnegie Mellon University. His thesis on concurrent constraint programming won the ACM Doctoral Dissertation Award in 1989, and a related paper won a best-paper-in-20-years award in its area.
