1. Why do we have code redundancy?
Code redundancy is a real problem: it makes us less productive because we write more code and test more code. Code redundancy presents itself in two forms: application-level code redundancy and enterprise-level code redundancy.

Application-level code redundancy is hard to notice in many cases. It originates in the principles of object-oriented design. Objects by definition consist of data and behavior, which compels us to implement all behavioral functionality inside the object and reduce external dependencies. Placing the emphasis on the object interface and encapsulation makes it acceptable to code the same data transformation many times. We do not perceive it as redundant code, since each implementation operates on different object properties.

Enterprise-level code redundancy is easier to notice by looking at the enterprise information system as a single application. A big enterprise will have many applications implementing the same functionality: retrieving customer information, running credit checks, implementing payment workflows, etc. Because of the scale, code redundancy at this level causes serious damage to the enterprise – increased development cost, increased maintenance cost and reduced agility. Just like at the application level, code redundancy at the enterprise level is caused by the enterprise software development paradigm, which focused on individual applications.
2. Addressing code redundancy
Economic forces are pushing companies to address enterprise-level code redundancy. The general consensus is to apply a new paradigm for enterprise application development – Service Oriented Architecture (SOA). It fights code redundancy by implementing enterprise-level services that are shared by all enterprise applications.
We are currently in the top-down phase of SOA adoption, characterized by implementing enterprise services and forcing existing silo applications to share those services.
Like any new paradigm, SOA is immature and still evolving. At this stage it lacks specific implementation guidance, which forces early adopters to pay a higher price: hiring more skillful architects and developers and absorbing increased project cost. We are learning valuable lessons from those experiences, though:
  • Understand the importance of enterprise architecture and SOA governance.
  • Discover the limitations of the existing application architectures (from the era of silo application development).
The software industry is making visible progress in understanding the importance of enterprise architecture and implementing SOA governance, as confirmed by the long list of companies offering products for SOA governance. A new breed of application architectures (designed with SOA in mind) is needed to move SOA into the mainstream. Such architectures will hide the complexities of implementing collaborative applications and help developers embrace the new programming paradigm.
3. New breed of application architectures
Shipka JDF pioneered this trend by introducing a service-based application architecture that consists of three service interfaces and an application process. The services are generalized using the virtualization concept:
  • Data virtualization service – provides access to and manipulation of data objects of all types.
  • Operation virtualization service – consolidates all programmatic data manipulations.
  • Presentation virtualization service – consolidates all reusable presentation components.
  • Application process – implements application functionality by consuming components from the virtualization services.
The virtualization services were chosen to provide clear separation of responsibilities. The main objective of this architecture is to promote reusability at the application and enterprise level.
Promote reusability at the application level:
  • Implement a successful strategy for separating data and programming logic that eliminates code redundancy. It is explained below.
  • Provide a presentation component model that enables the implementation of user interface functionality without engaging in data management or operation implementation.
Promote reusability at the enterprise level:
  • All virtualization services are defined outside of the application process. This means they have no dependencies on the application context and may be shared by other applications.
  • Virtualization services come with two important capabilities:
    • Remoting – any component provided by the virtualization services (data, operation or presentation component) may be implemented remotely by another application.
    • Exporting – any component provided by the virtualization services may be exposed to other applications with appropriate access control attached to it.
The architecture provides unparalleled flexibility to enterprise applications. When all virtualized components (data, operation and presentation components) are implemented locally, the result is a standalone application. When some virtualized components are implemented remotely or exposed to other applications, the application participates actively in the enterprise information system. The number of remote component implementations determines the level of integration.

Since the implementation of virtualized components is configured at the virtualization service, developers can change it at any time without affecting application functionality. This gives enterprise architects more options to fight redundancy: they can share services from the context of individual applications or consolidate them in enterprise repositories – Enterprise Object Repository, Enterprise Operation Repository and Enterprise Presentation Repository.
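A minimal sketch of the idea that component bindings live in the virtualization service rather than in application code (the names here are illustrative, not the actual Shipka JDF API): the application looks up an operation by name, and rebinding it from a local to a remote implementation changes only the configuration, never the calling code.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical operation virtualization service: resolves an operation
// name to whatever implementation the configuration currently binds.
interface Operation {
    Object execute(Object... args);
}

class OperationService {
    private final Map<String, Supplier<Operation>> bindings = new HashMap<>();

    // In the real architecture this binding would come from configuration;
    // here we register factories directly to keep the sketch self-contained.
    void bind(String name, Supplier<Operation> factory) {
        bindings.put(name, factory);
    }

    Operation lookup(String name) {
        Supplier<Operation> factory = bindings.get(name);
        if (factory == null) throw new IllegalArgumentException("Unknown operation: " + name);
        return factory.get();
    }
}

public class VirtualizationDemo {
    public static void main(String[] args) {
        OperationService service = new OperationService();

        // Bound to a local implementation today...
        service.bind("credit-check", () -> a -> "LOCAL:approved");
        System.out.println(service.lookup("credit-check").execute("customer-42"));

        // ...rebound to a (stubbed) remote implementation tomorrow.
        // The calling code above would not change at all.
        service.bind("credit-check", () -> a -> "REMOTE:approved");
        System.out.println(service.lookup("credit-check").execute("customer-42"));
    }
}
```

The level of integration then becomes a deployment decision: how many names in the service resolve to remote implementations.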

4. Separation of code and data at the application level
Data and programming logic have very distinct characteristics:
  • Data – usually there are many instances of the same data object. Data objects may be persisted. They are easily converted to a machine-independent format, sent to another machine and processed there.
  • Programming logic – only one object instance (implementing the programming logic) is needed. Operation implementations are machine-specific (with some exceptions, like Java) and generally not transferable between machines; instead, they may be called remotely by passing data to them. Operations may need version support.
Separating data from programming logic means that developers do not put code in their data objects and do not put data in their operation objects. That does not mean that data objects will not have custom methods (or behavior). It only means that the custom methods will not be coded as part of the data object; instead they will reference operations provided by the operation service.
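The separation can be sketched in plain Java (illustrative names only, not the Shipka JDF API): two unrelated data interfaces need the same transformation, and instead of each coding it inline, both reference one shared operation.

```java
// Hypothetical shared operation: lives with the operation service,
// coded exactly once, referenced by any data object that needs it.
class TrimUpperOperation {
    static String apply(String s) {
        return s == null ? "" : s.trim().toUpperCase();
    }
}

// Data interfaces expose behavior, but the behavior only *references*
// the operation -- no transformation logic lives in the data object.
interface Customer {
    String getName();
    default String getDisplayName() { return TrimUpperOperation.apply(getName()); }
}

interface Product {
    String getTitle();
    default String getDisplayName() { return TrimUpperOperation.apply(getTitle()); }
}

public class SeparationDemo {
    public static void main(String[] args) {
        Customer c = () -> "  jane doe ";
        Product  p = () -> "  widget ";
        System.out.println(c.getDisplayName()); // JANE DOE
        System.out.println(p.getDisplayName()); // WIDGET
    }
}
```

Had each object implemented the trim-and-uppercase logic itself, the redundancy would be invisible at the object level – exactly the application-level redundancy described in section 1.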

It is easy to see how this will reduce code redundancy, but the real problem is getting developers to comply. The only way to do that is by providing the functionality in a way that benefits developers (instead of being a burden or responsibility) and preserves the familiar object-oriented programming paradigm.

Shipka JDF accomplishes both objectives in the following way:
  • Eliminate data object classes – use data interfaces only. Developers cannot add programming code to data objects, since data object classes do not exist.
  • Developers configure data objects in a configuration file and map their properties to the respective persistent storage – DB, XML data, remote application, etc.
  • Developers configure custom methods and map them to an entity or operation expression. Those expressions may reference operations in the operation repository, data object properties and custom method parameters.
  • Data interfaces are generated from the configuration file(s). They include the requested get/set/is/create methods for the object properties and all custom methods.
  • Data object instances are retrieved from a central object repository, which instantiates them at runtime.
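One way such runtime instantiation can work is sketched below with a dynamic proxy (this is an assumption for illustration; the article does not describe Shipka JDF's actual mechanism): the repository hands back an instance of the data interface even though no class implementing it was ever written, backing the properties with a map.

```java
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

// A data interface as the developer would see it -- no implementing class exists.
interface Account {
    String getOwner();
    void setOwner(String owner);
}

// Hypothetical central object repository: instantiates any data
// interface at runtime via a dynamic proxy over a property map.
class ObjectRepository {
    @SuppressWarnings("unchecked")
    static <T> T create(Class<T> dataInterface) {
        Map<String, Object> props = new HashMap<>();
        return (T) Proxy.newProxyInstance(
            dataInterface.getClassLoader(),
            new Class<?>[] { dataInterface },
            (proxy, method, args) -> {
                String name = method.getName();
                if (name.startsWith("get")) return props.get(name.substring(3));
                if (name.startsWith("set")) { props.put(name.substring(3), args[0]); return null; }
                throw new UnsupportedOperationException(name);
            });
    }
}

public class RepositoryDemo {
    public static void main(String[] args) {
        Account account = ObjectRepository.create(Account.class);
        account.setOwner("Acme Corp");
        System.out.println(account.getOwner()); // Acme Corp
    }
}
```

Because the instance is synthesized at runtime, there is literally no place for a developer to attach redundant transformation code.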
What is lost with this solution?
  • Data object classes (in favor of data interfaces).
  • Ability to write redundant code (since data object classes do not exist).
  • Differences between data object types (XML, memory, DB, remote objects, etc.). Data interfaces look the same for all object types.
What is gained with this solution?
  • Applications are more concise – developers do not write code for data objects.
  • Better code reusability – developers can implement operation functionality in operation classes and reference it from many data objects.
  • Data objects have change-tracking capabilities. Developers may accumulate changes over multiple screens and persist them with a single call.
  • Data objects are transactionable: if a data object participates in a transaction that fails and is rolled back, its state is restored to the one prior to initiating the transaction. For example, any primary keys generated during the transaction are removed.
  • Object persistence is implemented through configuration, including object validation, data conversions, data mapping, optimistic concurrency, etc.
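The transactionable behavior can be sketched with a simple snapshot-and-restore scheme (an illustrative sketch, not the Shipka JDF implementation): the data object copies its property map when a transaction begins and restores it on rollback, which also discards values generated during the transaction.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical transactionable data object backed by a property map.
class TrackedObject {
    private final Map<String, Object> props = new HashMap<>();
    private Map<String, Object> snapshot;

    void set(String key, Object value) { props.put(key, value); }
    Object get(String key)             { return props.get(key); }

    void beginTransaction() { snapshot = new HashMap<>(props); }  // remember prior state
    void commit()           { snapshot = null; }                  // keep accumulated changes
    void rollback()         { props.clear(); props.putAll(snapshot); snapshot = null; }
}

public class TransactionDemo {
    public static void main(String[] args) {
        TrackedObject order = new TrackedObject();
        order.set("status", "draft");

        order.beginTransaction();
        order.set("id", 1001);            // e.g. a primary key generated mid-transaction
        order.set("status", "submitted");
        order.rollback();                 // the transaction failed

        System.out.println(order.get("id"));      // null  -- generated key removed
        System.out.println(order.get("status"));  // draft -- prior state restored
    }
}
```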
Examples of custom method configuration
  • Using entity expressions. Entity expressions allow access to all data within the object graph. In this example we configure a custom method for retrieving an action configuration from a collection of actions on a ScreenConfig object.
<method method-name="getAction" return-type="com.shipka.jdf.config.ActionConfig" expression=".actions[$$name=@name]" description="Get action config for provided action name">
  <parameter parameter-name="name" object-type="java.lang.String" description="Action name" />
</method>
The expression ".actions[$$name=@name]" instructs the data management system to search the collection of actions on the current object entity and return the entity whose action name matches the value of the provided method parameter. It adds the following code to the ScreenConfig interface:
/** Get action config for provided action name
 *   @param name Action name
 *   @return Action config for provided action name
 */
ActionConfig getAction(String name);

Method invocation:
ActionConfig actionConfig = screenConfig.getAction("success");
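In plain Java, the lookup that the expression describes would amount to something like the following sketch (illustrative only; the real evaluation happens inside the data management system, and these class shapes are assumptions):

```java
import java.util.List;
import java.util.Objects;

// Assumed minimal shape of an action configuration entity.
record ActionConfig(String name, String handler) {}

public class ExpressionDemo {
    // Equivalent of ".actions[$$name=@name]": find the entity in the
    // collection whose name matches the supplied parameter.
    static ActionConfig getAction(List<ActionConfig> actions, String name) {
        return actions.stream()
                .filter(a -> Objects.equals(a.name(), name))
                .findFirst()
                .orElse(null);
    }

    public static void main(String[] args) {
        List<ActionConfig> actions = List.of(
                new ActionConfig("success", "NextScreenHandler"),
                new ActionConfig("failure", "ErrorScreenHandler"));
        System.out.println(getAction(actions, "success").handler()); // NextScreenHandler
    }
}
```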
  • Using operation expressions. Operation expressions invoke methods available from the operation repository. In this example we have a LineItem object with 'unitPrice' and 'quantity' properties, and we want to create a custom method that calculates the total amount.
<attribute name="quantity" object-type="int" db-table-column="li.quantity" access-methods="get-set" />
<attribute name="unitPrice" object-type="java.math.BigDecimal" db-table-column="li.unitprice" access-methods="get-set" />
<method method-name="getTotal" return-type="java.math.BigDecimal" expression="big-decimal.multiply(value-1=$unitPrice, value-2=$quantity)" description="Calculate the total cost of a line item" />
The custom method 'getTotal' invokes the 'big-decimal.multiply' operation from the operation store and passes the object attributes 'unitPrice' and 'quantity' as parameters. The method itself does not need input parameters.
Method invocation:

BigDecimal total = lineItem.getTotal();

Developers can write their own operations, register them with the operation store and use them to implement custom methods.
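A sketch of what registering and invoking such an operation could look like (the store API below is an assumption for illustration; only the operation name 'big-decimal.multiply' and its value-1/value-2 parameters come from the example above):

```java
import java.math.BigDecimal;
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiFunction;

// Hypothetical operation store: operations are registered by name
// and invoked generically, the way a configured custom method would.
class OperationStore {
    private static final Map<String, BiFunction<Object, Object, Object>> ops = new HashMap<>();

    static void register(String name, BiFunction<Object, Object, Object> op) {
        ops.put(name, op);
    }

    static Object invoke(String name, Object value1, Object value2) {
        return ops.get(name).apply(value1, value2);
    }
}

public class OperationDemo {
    public static void main(String[] args) {
        // A developer-written equivalent of the 'big-decimal.multiply' operation:
        OperationStore.register("big-decimal.multiply",
                (price, qty) -> ((BigDecimal) price).multiply(BigDecimal.valueOf((Integer) qty)));

        // What the generated getTotal() would do for unitPrice=19.99, quantity=3:
        Object total = OperationStore.invoke("big-decimal.multiply",
                new BigDecimal("19.99"), 3);
        System.out.println(total); // 59.97
    }
}
```

Once registered, the operation is available to every data object whose configuration references it, which is where the reuse comes from.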
5. Conclusions
The service-based application architecture provides a bottom-up approach to solving the code redundancy and application integration problems. It enables a smooth transition from the architecture of individual applications to the enterprise architecture. At the core of this approach are standard service interfaces that are available at the application level as well as the enterprise level. They are not application-specific; instead they represent generic interfaces to access data, execute operations and access presentation components.

Service-oriented application architectures provide tremendous value to the concept of SOA – a clear vision of how to develop enterprise-class applications and how to make them collaborate. More importantly, they enable enterprise architects to implement SOA iteratively instead of using a waterfall approach.

Service-oriented application architectures represent a shift in SOA strategy that puts more emphasis on enabling collaboration within individual applications versus acquiring expensive infrastructure like an ESB.

You can find more information regarding Shipka JDF by visiting our website: http://shipka.com



