In the real world, applications keep growing in size and complexity, and change frequently; thus, the necessity for continuous testing constantly increases. Extreme programming (XP) prescribes automated acceptance testing so that tests can be run often, while facilitating regression testing at a low cost. XP also insists that the customers specify the acceptance tests and keep them updated as the requirements change, and use these tests for test-driven development (TDD).

Automated unit tests are quite common nowadays; however, most acceptance tests remain manual. Many commercial test automation tools are available, but their cost and required effort are so high that most project teams resort to doing the acceptance testing manually. The major roadblock to automating user acceptance testing has been the lack of easy-to-use tools and frameworks. In this article, I show you how the Framework for Integrated Test (Fit) makes it easy to automate acceptance tests; it can also serve as an effective tool for communication and collaboration between users and developers.

A typical development team may include many roles, such as user, customer, domain expert, tester, developer, or architect. For the sake of simplicity, I use just two roles in this article: analyst and developer. You can think of an analyst as a user, customer, domain expert, tester, or anyone who provides and/or clarifies requirements, and is involved in the acceptance testing process. A developer could translate to architect, programmer, or anyone developing the actual product itself.

What is Fit?

Framework for Integrated Test (Fit) is an open source framework for user acceptance testing, and a tool for enhancing the communication and collaboration between analysts and developers. Fit lets analysts write acceptance tests using simple HTML tables. Developers write fixtures to link the test cases with the actual system itself. Fit compares these test cases, written using HTML tables, with actual values, returned by the system using fixtures, and highlights the results with colors and annotations.

Just two steps are required to automate user acceptance tests using Fit:

  1. Express a test case in the form of a Fit table
  2. Write the glue code in Java, called a fixture, that bridges the test case and system under test

That's it! You are all set to execute the tests automatically for the rest of the application's lifetime.

To work with Fit, you must know and understand four basic elements:

  1. Fit table
  2. Fixture
  3. Core fixtures
  4. Test runner

Fit table

A Fit table is a way of expressing business logic using a simple HTML table. The examples in a Fit table help developers better understand the requirements, and they double as acceptance test cases. Analysts create Fit tables using a tool like MS Word, MS Excel, or even a text editor (which assumes familiarity with HTML tags). There are different types of Fit tables, which I discuss later in this article.
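Under the hood, a Fit table is nothing more than an ordinary HTML `<table>`. As a sketch, the rating table used later in this article could be marked up like this (the `border` attribute and cell layout are illustrative; Fit only cares about the table structure):

```html
<table border="1">
  <!-- Row 1: fully qualified name of the fixture -->
  <tr><td colspan="6">sample.VerifyRating</td></tr>
  <!-- Row 2: input attribute labels; () marks a calculated value -->
  <tr>
    <td>team name</td><td>played</td><td>won</td>
    <td>drawn</td><td>lost</td><td>rating()</td>
  </tr>
  <!-- Rows 3-n: one test case per row -->
  <tr>
    <td>Aston Villa</td><td>38</td><td>20</td>
    <td>2</td><td>16</td><td>54</td>
  </tr>
</table>
```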


Fixture

A fixture is an interface between the test instrumentation (in our case, the Fit framework), the test cases (Fit tables), and the system under test (SUT). Fixtures are Java classes, usually written by developers.

Figure 1. Relationship between Fit table, fixture, and SUT

In general, there is a one-to-one mapping between a Fit table and fixture. The simplicity of Fit lies in the idea that you can express and test your requirements using one or more of the core fixtures.

Core fixtures

Fit provides three core fixtures:

  1. Column fixture for testing calculations
  2. Action fixture for testing the user interfaces or workflow
  3. Row fixture for validating a collection of domain objects

Test runner

Fit provides a main driver class, fit.FileRunner, that can be used to execute tests. FileRunner takes two parameters: the name of an input HTML file that has one or more test cases expressed as Fit tables and the name of an output file where Fit records test results.

Fit in action

To illustrate how to use Fit in a real project scenario, I'll walk you through a development cycle highlighting the key activities in the requirements-definition, development, test-case-preparation, and testing phases.


A sports magazine decides to add a new feature to its Website that will allow users to view top football teams based on their ratings. An analyst and a developer get together to discuss the requirements. The outcome of the discussion is a user story card that summarizes the requirements, a set of acceptance tests, and an Excel file with sample data, as illustrated in the following three figures.

Figure 2. Front side of the user story card: Requirements
Figure 3. Back side of the user story card: Acceptance tests
Figure 4. Excel file with sample data

Now that we have the first-cut requirements, let's try our hand at test-first development using Fit. We start with the first user test, automate it, develop the code required for the test to pass, and repeat the cycle for all the tests.

Test calculations using column fixture

For a team, given the number of matches played, won, drawn, and lost, we need to verify that the ratings are calculated properly. The first step is to express the logic using a Fit table. The sample table created using MS Excel during the requirements discussion could be easily converted into a Fit table by just adding a fixture name and modifying the labels.

sample.VerifyRating
team name | played | won | drawn | lost | rating()
Aston Villa | 38 | 20 | 2 | 16 | 54

The above Fit table (shown here abbreviated to its first test case) represents the acceptance test that verifies the rating calculation. The full table has seven rows. The top cell has the fully qualified name of the fixture (sample.VerifyRating) used to execute the test cases represented by this Fit table. The second row lists the input attribute names (Columns 1 through 5) and the name of the calculated value (Column 6). The parentheses () after the attribute name rating in the sixth column denote that it's a calculated value. Rows 3 through 7 are the test cases. Figure 5 is a dissection of the above Fit table.

Figure 5. Fit table: Column fixture

The Fit table corresponding to a column fixture follows a certain format as described by the table below.

The Fit table for a column fixture has the following rows:

Row 1: Fixture identifier
  • Notes for the analyst: The first cell has the name of the column fixture written by the developer. Fit uses the first column only and ignores the rest.
  • Notes for the developer: Fully qualified name of a class that extends fit.ColumnFixture.

Row 2: Labels
  • Notes for the analyst: Add a column for each input attribute or expected calculated value. The labels for calculated values include parentheses () so that Fit can recognize them as calculated values.
  • Notes for the developer: An input attribute translates to a public field (public <type> <labelName>) in the fixture. A calculated value translates to a public method with the signature public <type> <labelName>(). Follow camel notation to translate a label to a field/method name.

Rows 3 - n: Test case(s)
  • Notes for the analyst: Specify the input values and the expected calculated values. Each calculated value is a test case.
  • Notes for the developer: Fit converts the input attributes to the appropriate types and sets them on the corresponding fields in the fixture. For each calculated value, Fit calls the appropriate method to get the actual result and matches it against the expected value.

Now that we have created the Fit table, we need to write the glue code that will bridge the test case to the system under test:


package sample;

import businessObjects.Team;
import fit.ColumnFixture;

public class VerifyRating extends ColumnFixture {

    public String teamName;
    public int played;
    public int won;
    public int drawn;
    public int lost;

    Team team = null;

    public long rating() {
        team = new Team(teamName, played, won, drawn, lost);
        return team.rating;
    }
}

The domain object representing a football team is shown below:


package businessObjects;

public class Team {

    public String name;
    public int played;
    public int won;
    public int drawn;
    public int lost;
    public int rating;

    public Team(String name, int played, int won, int drawn, int lost) {
        super();
        this.name = name;
        this.played = played;
        this.won = won;
        this.drawn = drawn;
        this.lost = lost;
        calculateRating();
    }

    private void calculateRating() {
        float value = ((10000f * (won * 3 + drawn)) / (3 * played)) / 100;
        rating = Math.round(value);
    }
}
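The arithmetic in calculateRating() is easy to sanity-check against the Fit table values. The following standalone sketch (RatingCheck is a hypothetical class name, not part of the article's code) replicates the formula:

```java
public class RatingCheck {

    // Same formula as Team.calculateRating() above:
    // rating = round(((10000 * (won*3 + drawn)) / (3 * played)) / 100)
    static int rating(int played, int won, int drawn) {
        float value = ((10000f * (won * 3 + drawn)) / (3 * played)) / 100;
        return Math.round(value);
    }

    public static void main(String[] args) {
        // Aston Villa: 38 played, 20 won, 2 drawn -> 54, matching the Fit table
        System.out.println(rating(38, 20, 2));
        // 38 played, 35 won, 1 drawn -> 93, which is why a test case
        // expecting 100 for these inputs fails later in the article
        System.out.println(rating(38, 35, 1));
    }
}
```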

The test case we have at hand is related to calculations, so we create a new fixture, sample.VerifyRating, that extends fit.ColumnFixture. For each input attribute represented by Columns 1 through 5 in the second row of the Fit table, there is a public member with the same name as the label, using camel notation. Notice that between the table and the fixture, "team name" translates to teamName. A public method, public long rating(), corresponds to the calculation in the sixth column. The rating() method in VerifyRating creates a Team object using the input data specified by the test case and returns the rating from the Team object; this is where the bridging between the test case and the system under test happens.

Let's execute the test using the FileRunner:

 java -cp fit.jar;ratings.jar fit.FileRunner VerifyRatingTest.html VerifyRatingResults.html
4 right, 1 wrong, 0 ignored, 0 exceptions

Wow, we have automated the first user acceptance test case! The test results generated by Fit are shown below.

sample.VerifyRating
team name | played | won | drawn | lost | rating()
Aston Villa | 38 | 20 | 2 | 16 | 54
Dummy | 38 | 35 | 1 | 2 | 100 expected / 93 actual

Here is what happens when you run the test: Fit parses the table and creates an instance of sample.VerifyRating. For each row in Rows 3 through 7, Fit uses reflection to set the values specified in Columns 1 through 5 to the corresponding fields in the fixture. The rating() method is executed to get the actual value to be compared against the expected value specified in the sixth column. If the expected value matches the actual value, then the test passes; otherwise it fails. Fit produces an output table like the one shown above. It's the same table that we used as input, with the cells colored in green to indicate that the test case passed. The failed ones are colored in red with the expected value and the actual value listed in the cell.
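The reflection steps described above can be sketched in isolation. The class below (ReflectionSketch is a hypothetical name; Fit's real implementation differs in detail) sets public fields from "input cells" and invokes the calculated-value method, just as a ColumnFixture run does:

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;

public class ReflectionSketch {

    // Stand-in for a ColumnFixture subclass: public fields for the
    // input columns, a public method for the calculated column.
    public static class Fixture {
        public int played;
        public int won;
        public int drawn;

        public long rating() {
            float value = ((10000f * (won * 3 + drawn)) / (3 * played)) / 100;
            return Math.round(value);
        }
    }

    public static void main(String[] args) throws Exception {
        Fixture fixture = new Fixture();
        Class<?> c = fixture.getClass();

        // Step 1: set the input-cell values on the matching public fields
        c.getField("played").set(fixture, 38);
        c.getField("won").set(fixture, 20);
        c.getField("drawn").set(fixture, 2);

        // Step 2: invoke the calculated-value method; the runner would
        // compare this against the expected cell (54 for Aston Villa)
        Method ratingMethod = c.getMethod("rating");
        long actual = (Long) ratingMethod.invoke(fixture);
        System.out.println(actual);
    }
}
```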

Note: By convention, fixtures use green to indicate success, red to indicate failure, yellow for an exception, a gray background to indicate that the cell was ignored, and gray text to indicate a blank cell filled with the actual value from the system.

Refactoring test cases

Let's move on to the next test case: Search for top two teams using the screen and validate the search results. This step involves a screen, an example of which appears in Figure 6, through which the user provides input, clicks on a button, and seeks verification that the results returned match a collection of objects as expected.

Figure 6. Screen prototype

As I mentioned earlier, we use an action fixture to test the user interface and a row fixture to examine a collection of objects. The second test case includes both. So how can we express it using a Fit table? It's simple: Split it into two test cases so they can fit into Fit—one test case for the user interface actions and another to validate the results. I call this process of breaking a complex test case into smaller ones, expressing the case in terms of the core fixtures without losing the intent of the original test case, refactoring test cases.

Testing a screen using an action fixture

The action fixture supports four commands that you can use to simulate the actions that a user would perform on a screen. The commands are start, enter, press, and check. The following table summarizes the action commands, the parameters they take, and what they do.

start (Parameter 1: name of the fixture to execute; Parameter 2: n/a)
  • Notes for the analyst: Starts the custom fixture developed for this test case.
  • Notes for the developer: Starts a new fixture specified by Parameter 1. The remaining commands (enter, press, and check) act on this fixture, a descendant of fit.Fixture.

enter (Parameter 1: name of the input element; Parameter 2: input data)
  • Notes for the analyst: Simulates entering test data through the screen.
  • Notes for the developer: The input data is passed to the fixture specified in the start command. The fixture has a method with the signature public void <nameOfTheInputElement>(<type> param), which stores the data passed through the parameter for later use.

press (Parameter 1: name of the button; Parameter 2: n/a)
  • Notes for the analyst: Simulates a button click.
  • Notes for the developer: The fixture has a method with the signature public void <nameOfTheButton>(), which simulates an action such as submitting data and searching for results.

check (Parameter 1: name of the element to check; Parameter 2: value)
  • Notes for the analyst: Specifies the expected value (Parameter 2) of the element identified by Parameter 1.
  • Notes for the developer: The fixture has a method with the signature public <type> <nameOfTheElement>(), which returns the actual value to be matched against Parameter 2.

Let's tackle testing the user interface part. We are back to creating a Fit table again, for an action fixture this time. A typical usage of the screen could be described something like The user types 2 in the number of top teams text box, clicks on the Search button, and expects to see the top two teams displayed. The following table represents this typical usage scenario in a way Fit can understand.

fit.ActionFixture
start | sample.VerifyWorkflow
enter | number of top teams | 2
press | search
check | number of results | 2

The value fit.ActionFixture in the top cell tells Fit to use ActionFixture; the remaining rows are commands to ActionFixture. The second row tells ActionFixture to start a new fixture, sample.VerifyWorkflow (to be coded by the developer shortly). The third row simulates entering test data on the screen using the enter command. The fourth row simulates a click on the Search button using the press command. The fifth row uses the check command to specify the expected number of results to be matched against the actual value returned by the system. Figure 7 is a dissection of the above Fit table.

Figure 7. Fit table: Action fixture

The format of the Fit table corresponding to an action fixture can be summarized as follows:

Row 1: Fixture identifier
  • Notes for the analyst: Use fit.ActionFixture or fit.TimedActionFixture.

Rows 2 - n: Action commands
  • Notes for the analyst: Add a row for each action command you would like to use to simulate the user interface actions. Each row has two or three columns, depending on the command: Column 1 is the action command (start, enter, press, or check); Column 2 is Parameter 1 (refer to the action command summary); Column 3 is Parameter 2 (refer to the action command summary), which applies only to the enter and check commands.
  • Notes for the developer: Refer to the action command summary.

Next, we need to come up with the glue code corresponding to this Fit table:


package sample;

import java.util.Collection;

import businessObjects.SearchUtil;
import businessObjects.Team;
import fit.Fixture;

public class VerifyWorkflow extends Fixture {

    private int topN;
    private Collection<Team> results;

    public void numberOfTopTeams(int n) {
        topN = n;
    }

    public void search() {
        results = SearchUtil.getTopTeams(topN);
    }

    public int numberOfResults() {
        return results.size();
    }
}


The SearchUtil mock object code is shown below:


package businessObjects;

import java.util.ArrayList;
import java.util.Collection;

public class SearchUtil {

    public static Collection<Team> getTopTeams(int n) {
        Collection<Team> results = new ArrayList<Team>();
        Team team = new Team("Chelsea", 38, 35, 1, 2);
        results.add(team);
        team = new Team("Arsenal", 38, 31, 2, 5);
        results.add(team);
        return results;
    }
}

When you execute the test, here is what happens: Fit creates an instance of ActionFixture. Following the start command, the ActionFixture creates an instance of VerifyWorkflow and calls the public void numberOfTopTeams(int n) method with a parameter value of 2 to fulfill the enter command; then calls public void search() to fulfill the press command; and then calls public int numberOfResults() to match the returned value against the value specified in the Fit table to complete the check command. The test result generated by Fit is shown below:

fit.ActionFixture
start | sample.VerifyWorkflow
enter | number of top teams | 2
press | search
check | number of results | 2

Testing nonfunctional requirements using timed action fixture

What about the time required for completing the search? How do we test the response time? Just use fit.TimedActionFixture, a descendant of fit.ActionFixture. TimedActionFixture provides visual feedback on how long the functions take to execute. The result produced by TimedActionFixture includes the time when each command begins and how long it takes to execute, referred to as split. Note: The split is reported only when it is more than 1,000 milliseconds.
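As a sketch, switching the earlier action fixture test to a timed run only requires changing the fixture identifier in the top cell; the rest of the table is unchanged (TimedActionFixture adds the time and split columns to the output on its own):

```html
<table border="1">
  <tr><td colspan="3">fit.TimedActionFixture</td></tr>
  <tr><td>start</td><td>sample.VerifyWorkflow</td><td></td></tr>
  <tr><td>enter</td><td>number of top teams</td><td>2</td></tr>
  <tr><td>press</td><td>search</td><td></td></tr>
  <tr><td>check</td><td>number of results</td><td>2</td></tr>
</table>
```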

Validate a collection of objects using a row fixture

Let's express the validate-search-results test case as a Fit table. Given the sample data, the expected top teams (with their attribute values) can be listed as rows:

sample.VerifyResults
name | played | won | drawn | lost | rating
Chelsea | 38 | 35 | 1 | 2 | 93
Arsenal | 38 | 31 | 2 | 5 | 83
Dummy | 38 | 35 | 1 | 2 | 100

The top cell has the name of the row fixture to be executed. The second row lists all the attributes of the domain object you expect as part of the results collection. Each row from 3 through 5 represents an object, with the specified values, expected to be part of the results collection. Figure 8 is a dissection of the above Fit table.

Figure 8. Fit table: Row fixture

The format of the Fit table corresponding to a row fixture can be summarized as follows:

Row 1: Fixture identifier
  • Notes for the analyst: Name of the fixture to be used.
  • Notes for the developer: A descendant of fit.RowFixture.

Row 2: Labels
  • Notes for the analyst: Each column represents an attribute of the domain object in the collection.
  • Notes for the developer: Override the public Class getTargetClass() method to return the type of the domain objects in the collection. Fit uses this method to determine which type to reflect on while comparing the results.

Rows 3 - n: Collection of expected results
  • Notes for the analyst: Each row represents an object with specified attribute values in the expected results collection.
  • Notes for the developer: Override the public Object[] query() throws Exception method to return the actual collection of objects.

The glue code corresponding to the validate-search-results test case is shown below:


package sample;

import businessObjects.SearchUtil;
import businessObjects.Team;
import fit.RowFixture;

public class VerifyResults extends RowFixture {

    @Override
    public Object[] query() throws Exception {
        return SearchUtil.getTopTeams(2).toArray();
    }

    @Override
    public Class getTargetClass() {
        return Team.class;
    }
}

The VerifyResults class extends RowFixture and overrides the query() and getTargetClass() methods. The query() method interacts with SearchUtil (the system under test) and returns an array of objects that represents the actual results. The getTargetClass() method returns Team.class, the type of the domain object represented by the test case under consideration. The test results produced by Fit are shown below:

Dummy missing | 38 | 35 | 1 | 2 | 100

Fit uses the object array returned by the query() method to match against the expected results specified in the Fit table. The matching algorithm starts with the left-most attribute as the key and proceeds to the right, matching each of the attributes in turn. The columns of the matching rows are highlighted in green to indicate that the object exists in the actual results collection. Rows missing from the actual results, and surplus rows present only in the actual results, are annotated as such in the left-most column.

Refining requirements with feedback

Once the tests execute, the analyst and developer get together to review the examples and test results. During review, they figure out things like:

  • When you know three of the four attributes (played, won, drawn, lost), the fourth one can be calculated.
  • There can be more than one team with the same rating. How do we show the results in that case?

You can use this feedback to refine the requirements and make design decisions. Continue to iterate through this process until you get to a satisfactory level.

Summary fixture

You can use fit.SummaryFixture to provide a summary of the test results on the output. The Fit table for summary fixture is a one-by-one table containing the fixture name. Usually it is included as the last Fit table on the input file so that it can produce a collective summary of all the tests on a given input file.
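As a sketch, the summary table is just a single cell carrying the fixture name (using the fixture name as given in this article), placed after the other tables in the input file:

```html
<table border="1">
  <tr><td>fit.SummaryFixture</td></tr>
</table>
```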

Primitive fixture

You may have noticed that the core fixtures depend on reflection to do their job. Fit also provides a primitive fixture that does not depend on reflection, which facilitates porting Fit to languages without reflection support.

Let's implement the fixture for rating calculation using PrimitiveFixture:


package sample;

import businessObjects.Team;
import fit.Parse;
import fit.PrimitiveFixture;

public class PrimitiveVerifyRating extends PrimitiveFixture {

    String teamName;
    int played;
    int won;
    int drawn;
    int lost;
    Team team = null;

    @Override
    public void doRows(Parse rows) {
        super.doRows(rows.more); // skip the label row
    }

    @Override
    public void doCell(Parse cell, int column) {
        switch (column) {
            case 0: teamName = cell.text(); break;
            case 1: played = (int) parseLong(cell); break;
            case 2: won = (int) parseLong(cell); break;
            case 3: drawn = (int) parseLong(cell); break;
            case 4: lost = (int) parseLong(cell); break;
            case 5:
                team = new Team(teamName, played, won, drawn, lost);
                check(cell, team.rating);
                break;
            default: ignore(cell); break;
        }
    }
}

PrimitiveVerifyRating and VerifyRating accomplish the same task, but in different ways. When using a PrimitiveFixture, you must override the doRows() and doCell() methods and decide what to do with each row and cell, whereas the core fixtures use reflection to accomplish the same.

Note: Fit implementations are available for the following platforms: .Net, Python, Perl, Smalltalk, and C++. Converting the Java examples in this article to any of these platforms is easy.


Conclusion

User-friendliness, expressiveness, and the application-agnostic nature of Fit make it unique. When I got started with Fit, I used to get stuck on some of the complex requirements and didn't know how to express them using Fit tables. Here is an analogy that helped me get rid of the mental block: Think of Fit as something similar to the binary number system, where you have only 0 and 1 to represent numbers. Even though we humans are used to the decimal number system, we can still represent any number using binary notation. In fact, using Fit is much easier than working with binary numbers. Should you feel constrained by the core fixtures' ability to express your requirements, you may want to look at FitLibrary for more sophisticated fixtures, or you can roll your own fixture to suit your needs. You may also want to consider using FitNesse, a wiki, collaboration, and testing tool built on top of Fit, to ease adoption.

Narayanan Jayaratchagan is an architect working for Cognizant Technology Solutions. He has a master's degree in computer applications and a bachelor's degree in mathematics from Bharathidasan University, India.

Learn more about this topic