Published on ONJava.com (http://www.onjava.com/)

Atomic File Transactions, Part 1

by Jonathan Amsterdam

One of the many powerful features that transaction-processing systems provide is atomicity. An activity is atomic if it either happens in its entirety, or does not happen at all. Atomicity is crucial for writing correct software in many applications; for example, a bank's software may implement a transfer from account A to account B as a withdrawal from A followed by a deposit to B. If the first action happens, then the second had better happen as well.

Databases provide atomicity for data stored within them, but filesystems are not atomic with respect to their files. This two-part article series explains how to achieve atomicity for standard filesystem actions. The code for such a system is implemented in the Java package com.astrel.io.atomic, available here. In part one, I will explain transactions and atomicity in more detail. I'll then discuss how to start using the package.

It's not hard to find programs that could benefit from atomic filesystem operations. Installers are a prime example: they do a lot of filesystem manipulation, and if there is an error or they crash, one would like the filesystem back the way it was. Word processors, spreadsheets and other standard "office" programs write, delete and rename files. Even business software that primarily accesses a database may use the filesystem to store non-critical data like user preferences. Crashes during these operations, or sequences of operations, can potentially corrupt the data. Finally, if you don't require powerful search capabilities, using flat files may be faster than dealing with a database. All of these programs will (and do) work without atomicity, but all would be more robust with it.

You may be under the impression that filesystems already provide atomicity. After all, when you delete a file, either the delete succeeds and the file is gone, or it fails and the file is still there. That is correct: modern filesystems make sure file deletion and other basic operations, like creation and renaming, are atomic; in other words, they prevent themselves from being corrupted (for what else would you call a filesystem in which a file was half deleted and half not?). But they do not ensure that file writes are atomic: once you overwrite some data in a file, you can't get it back. They also do not provide for the atomicity of sequences of actions: you can delete atomically and rename atomically, but you can't do a delete followed by a rename atomically.

It is possible to build a transactional filesystem from the ground up -- that is, from low-level filesystem operations like block reads and writes. In fact, databases do just that. But here I will describe how to make an existing filesystem atomic.

If you are concerned about the performance of atomic filesystem operations, your concern is well-placed; atomicity comes at a cost. You will have to determine whether the increase in robustness is worth the slowdown in your application. In many cases, you can do the I/O in a background thread, mitigating the performance impact.

The ACID Test

In standard parlance, a transaction is a sequence of operations with four properties, called the ACID properties, for Atomicity, Consistency, Isolation, and Durability.

Atomicity means that a transaction can end in only one of two ways: either successfully, in which case all its effects take place, or unsuccessfully, in which case it has no effect -- for all intents and purposes, the transaction never happened.

Consistency just means that a transaction is written correctly, so that when it completes successfully, the system is in a consistent state.

Isolation means that the transaction appears to execute completely alone, even if, in fact, other transactions are running simultaneously. Isolation is important, and can be enforced automatically, but my package does not attempt to do so (atomicity alone is hard enough). You can use Java's thread synchronization features to enforce isolation within a single virtual machine, and you can use the File.createNewFile method to implement a file-locking protocol that will enforce isolation across virtual machines. For the rest of this article, we'll assume that isolation has been done for us: that simultaneously running transactions never access the same file.
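As a sketch of the cross-VM approach, here is a minimal advisory file lock built on File.createNewFile. The class, the helper methods, and the lock-file naming convention are my own illustration, not part of the package:

```java
import java.io.File;
import java.io.IOException;

public class FileLockDemo {
    // createNewFile atomically creates the file and returns true, or
    // returns false if it already exists -- so only one process (or
    // thread) can "win" the lock at a time.
    public static boolean acquireLock(File lockFile) throws IOException {
        return lockFile.createNewFile();
    }

    public static void releaseLock(File lockFile) {
        lockFile.delete();
    }

    public static void main(String[] args) throws IOException {
        File lock = new File("accounts.dat.lock");
        if (acquireLock(lock)) {
            try {
                // ... operate on accounts.dat in isolation ...
            } finally {
                releaseLock(lock);
            }
        } else {
            // another transaction holds the lock; wait and retry, or give up
        }
    }
}
```

Note one weakness of this simple protocol: a crash while holding the lock leaves a stale lock file behind, so a production version needs some way to detect and break stale locks.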

Durability means that a successful transaction's changes are permanent. In practice, "permanent" means "on disk." Since our transactions involve the filesystem, durability comes at no extra charge.

Since there's no standard term for something that has atomicity and durability but not isolation, I will use the word "transaction," but will occasionally write "atomic transaction" to indicate that these are not full-blooded transactions.

Using Transactions

Almost everyone who's programmed for a database knows how to use transactions:

  1. You begin a transaction.
  2. You do a bunch of things as part of the transaction -- "under" the transaction, we will say.
  3. You attempt to commit the transaction, locking in your changes.

The attempted commit may succeed or fail. If it succeeds, then the transaction is committed, and its effects have taken place for good. If the commit fails, the transaction mechanism guarantees that you'll be back at the beginning: none of the transaction's effects will have occurred, or all those that have occurred will be undone.


There is another way to end a transaction besides committing it: you can abort it, or (synonymously but more politely) "roll it back." You may choose to do this because one of the actions you attempted under the transaction didn't work the way you'd hoped, or perhaps you got new information that made the transaction's effects unnecessary or ill-advised. In any case, rolling back a transaction undoes all its effects, returning the system to the initial state.

In the event of a crash during a transaction, the transaction will be rolled back automatically when the transaction-processing system restarts. This is called recovery.

Working with the Atomic Transaction Package

Now that you understand the basics of transactions, you are ready to use the com.astrel.io.atomic package. Users of the package need to understand only two non-exception classes: TransactionManager and Transaction.


A TransactionManager (TM) represents a set of transactions that are part of the same program. You begin by constructing a TransactionManager, giving the constructor a directory where the TM will hold all of the files it needs in order to provide atomicity:

TransactionManager tm = new TransactionManager("trans");

The TM must wholly control that directory; it is a grievous error for other programs, or even other TMs, to use the same directory. Using a relative pathname for the directory, as shown here, is wise only if you know your program will always be started from the same directory. Otherwise, you should use an absolute or user-relative pathname.

Your program can use multiple TMs, each with its own directory, and each TM can support multiple simultaneous transactions. But it is up to you to ensure that no two simultaneous transactions access the same file -- the package does not provide isolation.

The TM constructor also handles recovery: if there was a crash the last time you ran your program, then by the time the constructor has returned, any affected transactions have been rolled back.

Once you have a TM, you invoke its beginTransaction method to get a Transaction:

Transaction t = tm.beginTransaction();

We'll see how to use the Transaction object in a moment. The other major method of TransactionManager is close. Call close when you are done with the TM. It will roll back any active transactions and close all open files.


The Transaction class has five methods for supporting filesystem actions -- openOutputStream, openInputStream, openRandomAccess, delete, and rename -- and two methods for commit/rollback.

The openOutputStream method returns a FileOutputStream. Although

FileOutputStream out = t.openOutputStream(f);

appears equivalent to

FileOutputStream out = new FileOutputStream(f);

the openOutputStream method works behind the scenes to ensure atomicity. Once you have the FileOutputStream, you use it normally. You can close it when you're done, or you can let the transaction close it automatically when it completes.

The openInputStream method opens a file for reading, returning a FileInputStream. Since reading a file does not change the filesystem state (except possibly to alter the last-accessed time of the file, which we will not attempt to make atomic), it has no effect on atomicity; openInputStream is provided only for convenience and symmetry. Thus the call

FileInputStream in = t.openInputStream(f);

is equivalent to

FileInputStream in = new FileInputStream(f);

except that it will close the file when the transaction ends.

The openRandomAccess method is similar to the other open methods, except that it opens and returns a RandomAccessFile.

The other two action methods are similar. A call to t.delete(f) is like f.delete(), except that it is atomic. It also throws exceptions, either FileNotFoundException or DeleteException (my own class, part of this package), instead of returning false. This, in my opinion, remedies a flaw in the design of File.delete.

Similarly, the rename method is the atomic version of File.renameTo, and throws a FileNotFoundException or a RenameException instead of returning false. The rename method will never overwrite an existing file, as some implementations of File.renameTo will; you must first delete the destination file.
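To see why that restriction exists, here is what delete-then-rename looks like with plain java.io.File alone; the class and helper name are my own. Note that this plain version is not atomic -- it only shows the protocol:

```java
import java.io.File;
import java.io.IOException;

public class Replace {
    // Replace dst with src: delete the destination first, because
    // File.renameTo is not guaranteed to overwrite an existing file
    // on all platforms. This plain version is NOT atomic: a crash
    // between the delete and the rename loses dst unrecoverably.
    public static void replaceFile(File src, File dst) throws IOException {
        if (dst.exists() && !dst.delete())
            throw new IOException("could not delete " + dst);
        if (!src.renameTo(dst))
            throw new IOException("could not rename " + src + " to " + dst);
    }
}
```

Under the package, the same sequence becomes t.delete(dst) followed by t.rename(src, dst), and the transaction machinery makes the pair atomic.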

Two methods handle transaction commit and rollback. It may seem obvious that one method does commit and another does rollback, and that is indeed the design of typical transaction systems (including JDBC), but there is another design that fits better with Java exception handling.

Every transaction has a boolean variable that determines whether it should commit or roll back when it is finished. The boolean is false by default. You set the boolean by calling a transaction's shouldCommit method, and you complete the transaction by calling its end method.

An example will show how this design makes it easy to code the most common case, in which the transaction will be rolled back if any exceptions are thrown while it is in progress:

Transaction t = tm.beginTransaction();
try {
    FileOutputStream out = t.openOutputStream("file1");
    // write to out
    t.rename("file3", "file4");
    t.shouldCommit(true);
} finally {
    t.end();
}
Since shouldCommit is initially false, and isn't set to true until after everything has completed successfully, exceptions during the transaction will roll it back. If you want to continue your transaction in the face of an exception, you can write a try-catch within the try-finally.

If instead one had to call commit or rollback methods explicitly, then the code for the common case would be a bit more cumbersome:

Transaction t = tm.beginTransaction();
try {
    FileOutputStream out = t.openOutputStream("file1");
    // write to out
    t.rename("file3", "file4");
    t.commit();
} catch (Exception e) {
    t.rollback();
    throw e;
}
Here there is also the danger that the programmer will forget to roll back, or forget to re-throw the exception, thereby stifling it.

A third possible design, in which rollback is called automatically if there is an error, is no good either.

Writing a try-finally around your transaction takes care of the basic commit-rollback logic, but somewhere you will have to deal with exceptions. Transaction methods can throw three different exceptions, reflecting three different kinds of errors.

Making Actions Atomic

Let us now go behind the scenes to see how atomic file transactions are implemented. We begin with some assumptions, namely that the filesystem makes certain basic actions atomic. Most important of these are writing a single block to the disk, deleting a file, and renaming a file, but we also assume that opening and closing a file are atomic. Our job is to provide atomicity for file writes, and for sequences of actions.

Let's begin with the problem of writing out a new version of a file. The standard way to do this, of course, is to simply open the file and write to it. But an error or crash during writing will leave you with a corrupted file, and no way to recover the original.

One way to prevent the problem is to make a backup copy of the file before writing. If the write fails, the backup can be restored. If the program or machine crashes during the write, then restoring the backup can still happen, but it will have to wait until the program is restarted. In detail, the algorithm looks like this:

Atomic File Write: Execution

E1. Rename or copy the original file to a backup file. If there is an error doing this, then delete the backup file, if any, and report failure.

E2. Write the file. If there is an error doing so, then restore the backup and report failure.

Although our desired action has happened, there is one more crucial step:

Atomic File Write: Commit

C1. Delete the backup and report success.

If there is a crash during execution or commit, the program can attempt to recover the next time it starts:

Atomic File Write: Recovery

R1. If there is no backup file, assume the operation succeeded. (This is why deleting the backup is crucial. It is the signal that the entire operation was successful.)

R2. Otherwise, restore the backup.

R3. Delete the backup.

This brings the program back to the state it was in before the file write was attempted. The program can then try it again, if that is desired. In general, the job of the recovery process is to bring the system back to the state before the crash.

Observe that this is a somewhat subtle algorithm, even for something as simple as writing a single file. Much of the subtlety is in error-handling. The above description doesn't even cover all the possibilities: what if there is an error restoring the backup? Or deleting it?

Moreover, the algorithm is not even correct if step E1 uses copying instead of renaming to create the backup file. Consider what will happen if there is a crash while copying. Upon recovery, that partial backup will overwrite the original file in step R2, and the file will be corrupted. For this reason, it's better to rename the original file in E1 rather than copy it (remember that we assume the filesystem will rename atomically).
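The complete write protocol, including recovery, can be sketched in plain Java like this. The class and method names are my own illustration (the real package generalizes this machinery), and the steps are labeled with the E, C, and R numbers from above:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class AtomicWrite {
    private static File backupOf(File f) {
        return new File(f.getPath() + ".bak");
    }

    // E1, E2, C1: write a new version of `file` atomically.
    public static void write(File file, byte[] contents) throws IOException {
        File backup = backupOf(file);
        // E1: rename (not copy!) the original to the backup name
        if (file.exists() && !file.renameTo(backup))
            throw new IOException("could not create backup for " + file);
        try (FileOutputStream out = new FileOutputStream(file)) {
            out.write(contents);        // E2: write the new version
        } catch (IOException e) {
            file.delete();              // restore the backup...
            backup.renameTo(file);
            throw e;                    // ...and report failure
        }
        backup.delete();                // C1: commit by deleting the backup
    }

    // R1-R3: call at startup, before using `file`.
    public static void recover(File file) {
        File backup = backupOf(file);
        if (!backup.exists())
            return;                     // R1: no backup, so the last write committed
        file.delete();                  // R2: discard the possibly partial new version
        backup.renameTo(file);          // ...restore the backup; the rename
                                        // also removes the backup file (R3)
    }
}
```

Observe that recovery relies on the rename in E1: if the backup were a partial copy, R2 would overwrite the original with garbage.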

Part of the difficulty in thinking about atomicity is that there are five sources of error:

  1. An action performed under the transaction may fail.
  2. Undoing an action during rollback may fail.
  3. An action performed during recovery may fail.
  4. The program or machine may crash at any point during the transaction.
  5. It may also crash at any point during recovery.

Reasoning about the correctness of an atomic protocol thus involves considering every possible crash point, both in the original sequence of actions and during recovery, and making sure that the recovery process can always eventually get back to the initial state. This isn't easy to do. If you're not convinced, let's consider another example.

Say a program wishes to atomically create a file F, then delete a file G, then later write a new file, also named G. Backing up a deletion, like backing up a write, involves renaming the file. If a new file is being created, however, no backup file is necessary -- we just have to remember to delete the file if there is an error. The process looks like this, then:

  1. Create and write to F. If this fails, delete F and report failure.
  2. Rename G to a unique backup name, say G.bak. If this fails, then G did not get renamed (rename is atomic). Delete F and report failure.
  3. Create and write to G. If this fails, then delete G, rename G.bak to G, delete F, and report failure.
  4. Delete G.bak and report success.

That's complicated, but it seems right: the successful result is what we want, and a failure anywhere along the way will undo all actions up to that point. But remember that we have to design a recovery algorithm as well, in case of a crash. Let's say that when the program is restarted, it sees files F and G, but not G.bak. What situations can give rise to that state? There are two:

  1. The transaction completed successfully: step 4 deleted G.bak, and G is the new file. Recovery should do nothing.
  2. The program crashed after step 1 but before step 2's rename took effect: F exists (perhaps only partially written), and G is still the original file. Recovery should delete F.

There is no way for the recovery process to tell which state holds. This is a serious problem.

By now, you should have seen enough to convince you that the problem of atomic file transactions is a difficult one. Crashes may occur at any point in the file writing and deleting process, and an atomic system needs to be able to revert to the previous state of the system if anything goes wrong at any point.

In part two, I will present the algorithm that implements atomicity, then conclude with a tour of the design of my package, with an emphasis on how you can extend it with new filesystem actions. Finally, I will offer an informal justification of my package's correctness.

Jonathan Amsterdam is Senior Consulting Engineer at DataSynapse, Inc. Prior to that, he was president and founder of Astrel, Inc., a Java training and consulting firm.


Copyright © 2009 O'Reilly Media, Inc.