4.3.1: Security, Scalability, and Interoperability

Focus on Enterprise Security, Scalability, and Interoperability

Other Information:

The flexibility of SOA comes with some costs in terms of complexity and the need to manage a greater number of moving parts. The issues are notionally similar to those in traditional environments. As a result, organizations need to proactively address security, scalability, and interoperability in the design and implementation of their Service Oriented Infrastructure.

Security: Because of the number of environments, domains, and platforms that will potentially be crossed in executing a business process based on SOA, a federated approach to security must be adopted. While work is ongoing to produce government-wide security architectures and standards, defined communities of interest (COIs) should perform pragmatic risk/reward analysis to define level-of-service requirements for common security issues. The fundamental security areas to address include the following:

• Authentication and identity management across domains and environments
• Authorization and confidentiality (access control)
• Integrity (no inappropriate modifications are made)
• Availability (reliable service, no denial of service)
• Non-repudiation (positive identification; parties cannot deny providing or receiving services)
• Audit and monitoring
• Security administration and policy management

Industry standards, including the WS-Security and Liberty Alliance specifications, are under development and rapidly improving. However, while the standards available today are necessary, they are not yet sufficient for the most stringent government use cases.

Scalability: With traditional applications, the number of users is typically known beforehand, and the performance of the system can be tuned to that user base. In an SOA environment, by contrast, the number of users or consumers of a service is (almost intentionally) unknown. Loose coupling implies that the service is not aware of the number or types of ways in which it will be accessed. In reality, however, to meet the terms of a service level agreement (SLA), the service provider must have some indication of the level of demand and must develop mechanisms to scale up to accommodate spikes in demand. SOA Governance should provide this by having consumers register to consume a service from a provider. This allows providers to understand who their potential consumers are (and consumers to understand the SLA that the provider is offering) and to develop demand expectations before provisioning the service. Some operating environments, such as Java Enterprise Edition (JEE), have built-in mechanisms to provide scalability; however, this must still be anticipated and provided for. Thus, as a best practice, service providers should have SLAs in place with potential users and use this information to design scalability into the service offering as appropriate.
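As a minimal illustration of the consumer-registration mechanism described above, the following sketch shows one way a provider might record registered consumers with their expected demand and agreed service levels, and then aggregate that demand as an input to capacity planning. Java is used only because JEE is mentioned above; all names (ServiceConsumerRegistry, Registration, expectedRequestsPerDay, and so on) are hypothetical and do not correspond to any prescribed interface or product.

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative sketch only: a hypothetical registry a service provider could
    // keep under SOA Governance. Registering consumers and their expected demand
    // lets the provider size capacity and write realistic SLAs before provisioning.
    public class ServiceConsumerRegistry {

        // One registered consumer of a service, with its agreed service levels.
        public static class Registration {
            final String consumerOrg;           // organization consuming the service
            final String serviceName;           // service being consumed
            final long expectedRequestsPerDay;  // consumer's stated demand estimate
            final int maxResponseTimeMillis;    // response-time target in the SLA
            final double availabilityTarget;    // e.g. 0.999 for "three nines"

            Registration(String consumerOrg, String serviceName,
                         long expectedRequestsPerDay, int maxResponseTimeMillis,
                         double availabilityTarget) {
                this.consumerOrg = consumerOrg;
                this.serviceName = serviceName;
                this.expectedRequestsPerDay = expectedRequestsPerDay;
                this.maxResponseTimeMillis = maxResponseTimeMillis;
                this.availabilityTarget = availabilityTarget;
            }
        }

        private final List<Registration> registrations = new ArrayList<>();

        // Record a consumer's intent to use a service and its expected demand.
        public void register(Registration r) {
            registrations.add(r);
        }

        // Aggregate expected demand for one service: an input to capacity planning.
        public long totalExpectedRequestsPerDay(String serviceName) {
            return registrations.stream()
                    .filter(r -> r.serviceName.equals(serviceName))
                    .mapToLong(r -> r.expectedRequestsPerDay)
                    .sum();
        }
    }

The point of the sketch is the governance pattern rather than the data model: by aggregating declared demand before provisioning, the provider can anticipate spikes instead of discovering them in production, and both parties can negotiate SLA terms against a shared estimate.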
Interoperability: The concept of interoperability in an SOA-enabled routable network is fundamentally different from interoperability in a traditional point-to-point information exchange architecture. In the latter, the job is to guarantee that one known system can synchronize with another. In the former, the job is to allow a virtually infinite number of unknown nodes, supported by an unlimited number of known and unknown services, to interoperate. Both traditionally and with SOA, the use of standards across domains is a necessary but not sufficient approach. Engineers also need pragmatic reference implementations: examples of standard components bundled effectively to solve a critical problem. SOA engineers need these interoperable reference implementations for both semantic interoperability and run-time infrastructure.

There are two fundamentally different approaches to achieving semantic interoperability among disparate organizations, systems, and domains. The first approach is pre-instantiation: negotiating semantic consistency and then implementing it through common definitions (e.g., metadata schema). Achieving this commonality has proven extremely difficult. It has, however, been pursued by a few very focused COIs (e.g., the NIEM process between DOJ, DHS, and others) and has proven to be well worth the effort. Many efforts to negotiate semantic standards have collapsed under their own weight, often because participants resisted the compromises needed for common definitions.

The second approach to semantic interoperability is post-instantiation: using adapters and translators to reconcile different data sources for common processing. Most middleware, including EAI (enterprise application integration), EII (enterprise information integration), and ESB (enterprise service bus) technologies, contains capabilities to enable this. The drawback is that in cases where the need for rigor is high, adapters are not capable of translating or aggregating disparate data sources. An emerging development in the second approach is the use of semantic technologies and ontologies to establish precise relationships among data, which can be used with inference engines to uncover additional relationships and enable interoperability across domains. While these technologies are at an early stage, they appear to hold significant promise for the future. The increasing “openness”, granularity, and modularity of the service implementation infrastructure (e.g., Java Business Integration (JBI) or Service Component Architecture (SCA)) allow considerable cross-enterprise interoperability at reasonable cost and time scales.

Service Management: Service management becomes increasingly important as the number of services and collaborating organizations increases. Agencies should incorporate run-time and build-time service management functionality to define, monitor, enforce, and adjust the service level agreements (SLAs) between service providers and their consumers. The service management details (quality of service) should be rolled up to populate management dashboards for use at the operational, tactical, and strategic levels, in a manner similar to how network operations centers monitor network performance.
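As an illustration of the run-time side of service management, the following sketch records completed calls for a single service and rolls the results up against an SLA response-time target, producing the kind of quality-of-service summary that could feed a management dashboard. The names (SlaMonitor, recordCall, rollUp) are again hypothetical and shown only to make the monitoring idea concrete; a production implementation would normally rely on the management facilities of the chosen SOA platform.

    import java.util.concurrent.atomic.AtomicLong;

    // Illustrative sketch only: run-time SLA monitoring for a single service.
    // Each completed call is recorded, and the roll-up compares observed behavior
    // with the agreed response-time target so the result can feed a dashboard.
    public class SlaMonitor {

        private final String serviceName;
        private final long responseTimeTargetMillis;  // target taken from the SLA

        private final AtomicLong totalCalls = new AtomicLong();
        private final AtomicLong callsWithinTarget = new AtomicLong();
        private final AtomicLong failedCalls = new AtomicLong();

        public SlaMonitor(String serviceName, long responseTimeTargetMillis) {
            this.serviceName = serviceName;
            this.responseTimeTargetMillis = responseTimeTargetMillis;
        }

        // Record one completed service call.
        public void recordCall(long responseTimeMillis, boolean succeeded) {
            totalCalls.incrementAndGet();
            if (!succeeded) {
                failedCalls.incrementAndGet();
            } else if (responseTimeMillis <= responseTimeTargetMillis) {
                callsWithinTarget.incrementAndGet();
            }
        }

        // Quality-of-service summary suitable for a management dashboard roll-up.
        public String rollUp() {
            long total = totalCalls.get();
            double withinTarget = total == 0 ? 1.0
                    : (double) callsWithinTarget.get() / total;
            double availability = total == 0 ? 1.0
                    : 1.0 - (double) failedCalls.get() / total;
            return String.format(
                    "%s: %d calls, %.1f%% within %d ms target, %.2f%% available",
                    serviceName, total, withinTarget * 100,
                    responseTimeTargetMillis, availability * 100);
        }
    }

In practice such roll-ups would be aggregated across many services and presented at the operational, tactical, and strategic levels described above, much as a network operations center aggregates link-level measurements into an overall view of network health.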

Indicator(s):