NIS ITA Legacy

International Technology Alliance in Network and Information Science

The Network and Information Science (NIS) International Technology Alliance (ITA) was a landmark collaborative research programme initiated by the UK Ministry of Defence (MoD) and the US Army Research Laboratory (ARL), which was active for 10 years from May 2006 to May 2016. The programme was executed by an Alliance of ARL and the UK Defence Science and Technology Laboratory (Dstl), integrated with a consortium of 25 leading academic and industrial organizations from both the US and UK with deep expertise in the fields of network and information science.

This page gives details of different aspects of the NIS ITA legacy:

  • Science Library Publications
  • Science Library Statistics
  • ITA book
  • Experimentation framework
  • Open source assets
  • Programme Research Plans
  • Quarterly Progress Reports (QPRs)
  • Peer Review
  • Capstone events
  • Consortium Management Committee (CMC)

NIS ITA Book

In the final year of the NIS ITA programme, a book covering the programme was written and published (with a version suitable for eReaders).

Amongst other things, the book lists the key achievements of the alliance:

  • Network Tomography for Coalitions: ITA researchers developed the scientific principles underlying monitoring of dynamically changing coalition networks with minimum overhead. The insights can be used to instrument and observe a variety of networks with minimum possible probing.
  • Distributed Dynamic Processing: The ITA programme developed the concept of bypassing network bottlenecks in the coalition edge by moving processing within the network, and analysed approaches for mapping distributed applications onto hybrid coalition networks. It has created new techniques for distributing streaming and transaction oriented applications, analyzing their performance, and improving the effectiveness of distributed applications.
  • Policy-based Security Management: ITA researchers developed new paradigms for security management using a policy-based approach, creating new frameworks for policy negotiation, policy refinement, and policy analysis. They applied them to create constructs like self-managing cells, and manage coalition information flows. The team developed techniques for determining security policies that can preserve privacy and sensitive data while allowing partners to make limited queries on that information.
  • Cryptography Applications in Coalition Contexts: The ITA has made fundamental advances in making cryptographic techniques applicable in the context of coalition networks. These include the development of new identity-based encryption paradigms, efficient implementation-friendly reformulation of fully homomorphic encryption algorithms, and outsourcing computation securely to untrusted devices belonging to coalition partners.
  • Advances in Argumentation Theory: ITA researchers provided the theoretical glue to accommodate trust, inconsistency and uncertainty in distributed networked information systems, and proposed a principled method for linking provenance data with the evaluation of competing hypotheses to counter the cognitive biases inherent in human analysts.
  • Insights into fundamental limits and properties of mobile network structures: ITA researchers developed a variety of models characterizing the scaling properties of the mobile ad hoc hybrid networks found in coalitions. These models determined the fundamental communication capacity of disruption-tolerant networks, established limits on structures with mathematically tractable topologies, identified information-theoretic limits on capacity under security constraints, and characterized the performance of multi-path and multi-point communications.
  • Energy Efficiency Techniques: The alliance invented a variety of approaches to reduce battery power consumption and improve energy efficiency in ad hoc networks. The approaches include distributed beamforming using cooperative communications and techniques for improving duty-cycling behaviour in networks using self-organization.
  • Coalition Communications Interoperability: The alliance created new paradigms for inter-domain routing, identified differences in coalition cultural norms, defined a new paradigm for shared understanding, and used declarative technologies for networking and security in coalition environments. Another related activity was the creation of a collaborative planning model.
  • Quality of Information (QoI): The ITA pioneered the concept of QoI, and created the framework, algorithms, and various use-cases surrounding the use of QoI in ISR and sensor networks. The concept had a significant impact on the research community, including the start of the I2QS workshop and becoming a major thrust in the Network Science CTA programme.
  • Mission-Aware Information Networking: The ITA developed a variety of techniques to adapt the network to meet the requirements of a mission, including approaches for optimizing networks to meet mission needs, matching assets to missions, and isolating faults in information networks. One of the key transition outputs was a Sensor Asset Matching tool for matching missions to assets available in the field to perform that task.
  • Dynamic Distributed Federated Databases: ITA researchers created a model to represent sensor information flows as distributed databases, and devised the principles that allowed them to be federated dynamically in a manner that is both self-organizing and scalable. The work resulted in the Gaian Database technology, which has had multiple transitions to other programmes in MOD and the U.S. Army.
  • Advances in Cognitive Modelling: The ITA made significant advances in the state of cognitive modelling, including computational modelling of specific cognitive processes, using the ACT-R cognitive architecture for understanding collective agent and human interactions, and conducting cognitive social simulations.
  • Controlled Natural Language/Controlled English: The ITA programme made several advances in using a limited subset of English to improve the usability of computing systems by soldiers in the field in a variety of contexts, including mission planning, asset allocation, and policy specifications. Controlled English led to several transition activities through the development of the ce-store.

 

Experimentation Framework

The ITA Experimentation Framework was created with the following aims:

  • Provide a common prototyping, emulation and simulation framework for integrating and validating algorithms and theories in realistic contexts and environments;
  • Facilitate the investigation of scenarios across multiple research areas (information sharing, performance optimisation, etc.);
  • Provide a framework and unclassified services accessible to ITA researchers and university collaborators;
  • Enable a faster experimentation cycle than would otherwise be possible;
  • Support repeatable experimentation;
  • Facilitate the packaging and sharing of experiment bundles with others;
  • Accelerate routes to transition for rapid exploitation of research results;
  • Provide a capability to demonstrate research results.

 

Open Source Assets

Making ITA technology available as open-source software has several important advantages: it makes the technology widely available to defence and civil sector supply chain ecosystems, it avoids procurement programmes becoming locked into a single supplier, and it supports innovation.

ITA open-source software comprises the technologies described below: ce-store, CE Node, Edgware Fabric, and the Gaian Database.

Before they became fully open-source, earlier versions were frequently placed in the public domain.

A number of other ITA technologies are available within the public domain, but not as open source, on the IBM developerWorks® website, including the Watson Policy Management Language (WPML).

The majority of the technology maturation, testing and validation required to enable ITA technologies to be made available within the public domain and/or as open-source software has occurred through transition tasks funded by the MOD and DoD: primarily by Dstl and Defence Equipment and Support (DE&S) in the UK, and by ARL and the CWP in the US. However, not all of the development has been funded in this manner: a significant fraction has been achieved using other funding sources available to industry and academia.

ce-store

ce-store is available from GitHub: github.com/ce-store

CE is an unambiguous subset of English that can be both directly processed by a machine and understood by a human. Thus both humans and machines can work with the same symbolic representation of the world; it is not necessary to transform the human representation into a computer language that can only be used by a very small number of technology experts.

The user describes a domain model in terms of concepts, properties and relationships, and then populates this model by stating facts and rules: all in the form of CE sentences. ce-store stores the domain model and facts, and can reason using the rules to create new facts and insights.
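
As a rough illustration, a tiny domain model, some facts, and a rule expressed in CE might look like the following. The exact grammar accepted by ce-store may differ, and the domain, names and values here are invented purely for this example:

    conceptualise a ~ location ~ L.
    conceptualise a ~ vehicle ~ V that has the value W as ~ weight ~.
    conceptualise the vehicle V ~ is located at ~ the location L.
    conceptualise the vehicle V ~ is co-located with ~ the vehicle V2.

    there is a location named 'bridge alpha'.
    there is a vehicle named 'v1' that has '2000' as weight.
    the vehicle 'v1' is located at the location 'bridge alpha'.

    [ co-location rule ] if ( the vehicle V is located at the location L ) and ( the vehicle V2 is located at the location L ) then ( the vehicle V is co-located with the vehicle V2 ).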

ce-store is an open-source technology, available on GitHub, with which CE can be created and experimented with for the representation of knowledge and the application of reasoning. Fact extraction from natural language can also be performed using the features of CE and ce-store. An application programming interface (API) is provided for programmatic agents that convert incoming text into sentences, parse those sentences into raw parse trees, and turn the parse trees into phrases, all expressed using CE sentences. CE rules are then applied to extract facts, their relationships, and their properties. Facts are expressed as CE sentences in the context of a domain model described by the user.

CE Node

CE Node is available from GitHub: github.com/flyingsparx/CENode

CE Node is a lightweight CE processing environment implemented in JavaScript that can be easily deployed in a variety of contexts, including Web browsers, mobile apps, and servers. CE Node is lightweight in the sense that it does not aim to be a fully-fledged CE engine—for example offering only limited inference and natural language processing—and requires relatively little network bandwidth to download and operate. Once loaded, a CE Node instance can function independently without any network connection. This makes it well-suited to deployments at the network edge.

Edgware Fabric

Edgware Fabric is available from GitHub: github.com/edgware

Edgware Fabric (the open source name of the ITA Information Fabric) is a lightweight agile service bus that provides many of the features found in an enterprise service bus (such as discovery, routing, a registry, and message transformation) but which is built for resource constrained, dynamic and/or unreliable environments. Thus it integrates systems at the very edge of the network into a service-oriented architecture running on (or alongside) the devices that it connects. It is designed to be self-managing; it tracks which systems are connected, what services they offer and when they are being used. The discovery protocol enables neighbouring nodes to quickly join together, so that an ad-hoc network of communicating nodes can be viewed logically as a single bus. Actors simply request data from the bus for one or more data feed services (hardware or software components), requiring no knowledge of the structure of the network itself.
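
This interaction model can be sketched roughly as follows. The interfaces and names here are hypothetical illustrations of the pattern just described; they are not Edgware Fabric's actual API:

    import java.util.List;
    import java.util.function.Consumer;

    // Hypothetical types illustrating the bus interaction pattern;
    // these are NOT Edgware Fabric's actual API.
    interface DataFeed {
        String name();                              // e.g. "building-3/temperature"
        void subscribe(Consumer<String> onMessage); // push each reading to the subscriber
    }

    interface FabricBus {
        // Discover feeds by name anywhere on the bus; the caller never needs
        // to know which node hosts a feed or how to reach it.
        List<DataFeed> discover(String feedName);
    }

    final class TemperatureActor {
        void run(FabricBus bus) {
            for (DataFeed feed : bus.discover("building-3/temperature")) {
                feed.subscribe(reading -> System.out.println(feed.name() + ": " + reading));
            }
        }
    }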

New software services can be deployed on the bus, enabling local processing of information at or near its point of origin, saving valuable network bandwidth and helping to manage the large volumes of data that can be generated by edge devices such as sensors.

Edgware Fabric enables information from the edge to be easily integrated into existing applications and used in new and innovative ways. It can be used with existing hardware (e.g. physical assets such as sensors), software and networking technologies.

The Gaian Database

Gaian Database is available from GitHub: github.com/gaiandb

Federating and aggregating information distributed across a coalition, or indeed within any organisation, is a major operational challenge. Doing so efficiently, transparently and with minimal management overhead has been an unachieved goal, particularly in resource constrained and dynamic environments. The Gaian Database addresses this challenge.

Gaian embodies the concept of a dynamic distributed federated database (DDFD). This is a self-organizing network of federated nodes that combines ideas from data federation, distributed databases, network topology and the semantics of data. It is an information virtualization middleware component that is ideally suited to the ad hoc queries and processing operations that are needed to maximize business intelligence.

Gaian uses a store locally query anywhere (SLQA) paradigm giving global access to data from any participating node. Moreover, Gaian makes it possible for a set of heterogeneous data sources to be accessed as a single federated database, including sources as diverse as SQL and non-SQL databases, document repositories, spreadsheets and text files. Applications can transparently perform database queries across a multiplicity of data sources in a single operation.
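
For example, an application might issue a single standard JDBC query against whichever Gaian node it is connected to and transparently receive results contributed by every federated data source behind it. In this sketch the host, port, database name and logical table name are placeholder assumptions rather than confirmed GaianDB defaults:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class GaianQueryExample {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details; Gaian exposes a standard Derby
            // network endpoint, but the host, port and database name below
            // are assumptions for illustration only.
            String url = "jdbc:derby://localhost:6414/gaiandb";

            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement();
                 // "SENSOR_READINGS" is an assumed logical table name; the query
                 // is answered by all federated nodes exposing a matching source,
                 // not just the node we happen to be connected to.
                 ResultSet rs = stmt.executeQuery("SELECT * FROM SENSOR_READINGS")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }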

Access to data and the flow of data can both be controlled using formal policy based mechanisms that provide fine-grained management of security constraints. This is achieved using distributed policy enforcement point (PEP) and policy decision point (PDP) components at all database nodes where policy is to be enforced.
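
Conceptually, each such node pairs an enforcement point with a decision point, along the lines of the following minimal sketch; the type names are illustrative only and are not Gaian's actual policy components:

    // Minimal sketch of the PEP/PDP pattern; names are illustrative only.
    interface PolicyDecisionPoint {
        boolean permits(String subject, String action, String resource);
    }

    final class QueryEnforcementPoint {
        private final PolicyDecisionPoint pdp;

        QueryEnforcementPoint(PolicyDecisionPoint pdp) {
            this.pdp = pdp;
        }

        // The PEP intercepts each incoming query and asks the local PDP
        // for a decision before any data is touched.
        String execute(String subject, String sql) {
            if (!pdp.permits(subject, "query", sql)) {
                throw new SecurityException("Query denied by policy for " + subject);
            }
            return runAndForward(sql); // placeholder for the real federated execution
        }

        private String runAndForward(String sql) {
            return "result of: " + sql;
        }
    }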

The Gaian Database uses an extension to the standard Kerberos protocol to maintain security and access control within its distributed environment. The key to achieving efficient query performance is the way in which the nodes logically connect themselves together in order to minimise the cost of performing distributed database operations. The mechanism used is based on fundamental ITA research on network growth and emergent graph properties. The attachment mechanism results in a connected graph structure that has predictable properties that directly impact database query performance and scale efficiently with network size. Overall this helps ensure efficient performance with minimal overhead.
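
The precise attachment rule and its parameters are described in the ITA publications; the following simplified sketch only illustrates the general idea of degree-biased (preferential) attachment, in which a joining node is more likely to connect to nodes that are already well connected:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    // Simplified preferential-attachment sketch: not GaianDB's actual rule.
    final class AttachmentSketch {
        private final List<Integer> degree = new ArrayList<>(); // degree.get(i) = links of node i
        private final Random random = new Random();

        // Add a node and connect it to 'links' existing nodes, each chosen with
        // probability proportional to its current degree (plus one, so isolated
        // nodes can still be picked). Duplicate links are not prevented here.
        void addNode(int links) {
            int newNode = degree.size();
            degree.add(0);
            for (int l = 0; l < links && newNode > 0; l++) {
                int target = pickByDegree(newNode);
                degree.set(newNode, degree.get(newNode) + 1);
                degree.set(target, degree.get(target) + 1);
            }
        }

        private int pickByDegree(int nodeCount) {
            int total = 0;
            for (int i = 0; i < nodeCount; i++) total += degree.get(i) + 1;
            int r = random.nextInt(total);
            for (int i = 0; i < nodeCount; i++) {
                r -= degree.get(i) + 1;
                if (r < 0) return i;
            }
            return nodeCount - 1;
        }
    }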

Gaian does not replace existing systems; instead it federates them in a transparent, scalable and secure manner. It introduces a new agile model of information integration that revolutionizes the way that coalitions and organizations can access and exploit the information held within their IT systems. Its small footprint and efficiency make it ideal for use everywhere from the enterprise to mobile and other constrained environments.


 

Programme Research Plans

Throughout the 10-year duration of the NIS-ITA programme, the research direction was provided by a series of "programme plans" that were jointly defined by the consortium member organisations in collaboration with the government members.

The first such programme plan was defined in 2006 and is referred to as the "IPP" (Initial Programme Plan); it had a duration of one year.

Following this, a series of Biennial Programme Plans (BPPs) were defined every two years, in 2007, 2009, 2011, and 2013.

The Final Programme Plan (FPP) was another one-year plan, which concluded the research activities of the NIS-ITA.

The documents that define each of these plans are listed below:


 

Quarterly Progress Reports (QPRs)

Throughout the 10-year duration of the NIS-ITA research programme, quarterly reports were created for every project undertaken by the Consortium. The creation of these was led by the project champion for each project, with the material generated by each principal investigator involved in the work. These reports were then reviewed by the Technical Area Leaders (TALs) on a quarterly basis, in time for the quarterly meeting of the CMC and for overall aggregation and reporting.

These standard quarterly progress reports were in addition to the many other documents regularly generated as a result of the ongoing research, for example: journal and conference papers, workshop papers, technical reports, patents, etc.

Copies of these QPR documents for the entire duration of the programme can be found below.


 

Peer Review

Throughout the 10-year NIS ITA programme, a series of regular peer reviews were carried out by a senior panel of independent peer reviewers composed of external academic, industry and government experts with deep experience in fields relevant to the NIS-ITA research. The peer reviewers consistently found the programme to be well aligned with its stated goals, deeply collaborative, and producing strong scientific results. The peer reviewers frequently gave constructive advice that enhanced the overall programme.

A formal peer review was held every year during the first half of the programme. In the second half of the programme, a formal peer review was held every other year, with an informal peer review carried out in the intervening years.

In their final report, the peer reviewers concluded that the NIS ITA programme had proven itself to be highly successful. Specifically, they stated that the NIS-ITA was "...an outstanding example of true, deep and enduring International Research Collaboration..." that has "...significantly advanced the state-of-the-art in network and information science through multi-disciplinary research...".

Each of the peer review reports can be found below:


Capstone Events

To conclude the 10-year NIS-ITA programme two high-profile "Capstone" events were run: one in the US and one in the UK. This page contains links to all of the materials used within these Capstone events and various news items relating to them.


US event - March 16-17 at ARL, Adelphi, Maryland

Plenary slides

Science Demonstrations

Transition Demonstrations

Booths

  • Open Science Library - poster

  • Achieving deep collaboration in cross-sector international research - poster

UK event - April 6-7 at IET Savoy Place, London

Plenary slides

Science Demonstrations

Transition Demonstrations

Booths

  • Open Science Library - poster
  • Achieving deep collaboration in cross-sector international research - poster

 

Consortium Management Committee (CMC)

Throughout the programme the NIS-ITA had a Consortium Management Committee ("CMC") which consisted of a representative from each member organisation. The CMC was chaired by the representative from the Consortium Lead (IBM). Each member organisation had one voting representative on the CMC to support programmatic and management-related activities and decisions. The CMC was responsible for the management and integration of the Consortium’s efforts under the NIS-ITA, including programmatic, technical, reporting, financial, and administrative matters. The CMC made recommendations concerning the membership of the Consortium, the definition of tasks and goals of the member organisations, and the distribution of funding to the member organisations. Quarterly meetings were conducted by the CMC.