Nursing semantic networks – a different take on interoperability. Part 3

In part two of this ever-expanding blog about interoperability I introduced the ‘triple’, the smallest unit of knowledge and the key to interoperability in a semantic network. This, the third article, looks at the ‘nuts and bolts’ of the triple and considers how machines read it.

Last time we looked at a simple triple example describing the relationship ‘Bob knows Alice’.

Why Bob and Alice? Bob and Alice are characters from a 1969 movie called ‘Bob & Carol & Ted & Alice’. The movie was a critical and commercial success, and its characters are often said to have inspired the ‘Alice and Bob’ names traditionally used to illustrate human-human and human-computer interactions, especially in cryptographic exchanges. Anyway, back to the triple, you may ask: “what is inside it?”

A real-life ‘Bob knows Alice’ triple is composed of three Uniform Resource Identifiers (URIs). The URL in the address bar of your browser is one form of URI. The following example shows how a machine-readable triple may look if Bob and Alice work in the same hospital.

Machine-readable triple

<http://hospital.example/emergency/staff/bob>         (Subject)

<http://xmlns.com/foaf/0.1/knows>                     (Predicate)

<http://hospital.example/surgical/staff/alice>        (Object)

It is clear from the above example that a triple is just three URIs that point to, describe and name resources on the Semantic Web. We can see that Bob and Alice work in the same hospital: Bob works in the emergency unit and Alice works in the surgical unit. The triple may be linked to further resources that include Bob and Alice’s addresses, employment history, education level and role in the hospital. So, how do machines, which don’t like surprises, understand the triple? And how do they know what ‘knows’ means?

Triples, like the one above, are written in a rigid and predictable structure called the Resource Description Framework (RDF). RDF grew out of the eXtensible Markup Language (XML), another rigid and predictable framework. The predicate URI in the preceding triple contains the word ‘knows’, which delineates Bob and Alice’s relationship. The word ‘knows’ is a standard term in a vocabulary of human-relationship predicates called ‘FOAF’.
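Under the hood, a machine can treat a triple as nothing more than three URI strings. The following Python sketch shows a tiny triple store and a query over it; the hospital addresses are invented for illustration, and only the foaf:knows URI is a real FOAF identifier.

```python
# A minimal sketch of a triple store: each triple is a (subject, predicate,
# object) tuple of URI strings. The hospital URIs are invented; only the
# foaf:knows URI is a real FOAF identifier.
FOAF_KNOWS = "http://xmlns.com/foaf/0.1/knows"

bob = "http://hospital.example/emergency/staff/bob"
alice = "http://hospital.example/surgical/staff/alice"

triples = [
    (bob, FOAF_KNOWS, alice),
]

def objects(store, subject, predicate):
    """Return every object linked to `subject` by `predicate`."""
    return [o for s, p, o in store if s == subject and p == predicate]

print(objects(triples, bob, FOAF_KNOWS))  # Alice's URI comes back
```

The point is that the machine never ‘understands’ Bob or Alice; it just matches URI strings, which is exactly why the predicate URI must point at an agreed standard.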

The Friend-Of-A-Friend (FOAF) vocabulary

The FOAF vocabulary provides a collection of basic predicates that can be used in triples to describe people and their activities. For example, the ‘knows’ predicate in the ‘Bob knows Alice’ triple points to the ‘knows’ term in the FOAF vocabulary, which is defined in the following specification:

Property: foaf:knows

knows – A person known by this person (indicating some level of reciprocated interaction between the parties).

Status: Stable

Domain: Having this property implies being a person

Range: Every value of this property is a person.
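The domain and range rules above are exactly the kind of thing a machine can act on. The sketch below shows one way a program might apply them, assuming the standard FOAF and RDF URIs; the ‘ex:’ names are shorthand invented for illustration.

```python
# Sketch of how a reasoner might use the foaf:knows domain/range rules:
# if (s, knows, o) holds, the specification implies both s and o are Persons.
FOAF_KNOWS = "http://xmlns.com/foaf/0.1/knows"
FOAF_PERSON = "http://xmlns.com/foaf/0.1/Person"
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

def infer_types(triples):
    """Add rdf:type Person triples implied by foaf:knows domain/range."""
    inferred = set(triples)
    for s, p, o in triples:
        if p == FOAF_KNOWS:
            inferred.add((s, RDF_TYPE, FOAF_PERSON))  # domain rule
            inferred.add((o, RDF_TYPE, FOAF_PERSON))  # range rule
    return inferred

facts = {("ex:bob", FOAF_KNOWS, "ex:alice")}
result = infer_types(facts)
print(("ex:bob", RDF_TYPE, FOAF_PERSON) in result)
```

Nobody had to state that Bob is a person; the machine derived it from the shared specification of ‘knows’.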

Vocabularies are one solution to interoperability on the Semantic Web. In the Semantic Web context, interoperability is defined as an agreement between the sender and receiver, usually two dissimilar systems, that any communication between them is understood by both parties.

Predicate vocabularies provide predicates whose meanings have reached consensus, and so facilitate interoperability by ensuring that everyone using them knows they are self-descriptive, understandable and standardised for both parties. The Semantic Web is flexible: an ontology designer may invent their own ‘in-house’ predicate vocabulary or use standard predicates from the global FOAF vocabulary. Either way, using a vocabulary’s predicates in a triple ensures that the triple is linked to a standard, peer-reviewed specification. So, basically, you can connect two dissimilar systems because the predicate in the triple is a known and understandable standard which all parties in the communication agree on.
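To make the ‘two dissimilar systems’ point concrete, here is a hedged sketch of two imaginary hospital systems that use different internal field names; because both map their local field to the shared foaf:knows URI, their exported triples come out identical. All the system and field names are invented.

```python
# Two imaginary systems with different internal schemas, both exporting to
# the same shared predicate URI. Only the foaf:knows URI is a real standard.
FOAF_KNOWS = "http://xmlns.com/foaf/0.1/knows"

SYSTEM_A_MAP = {"colleague_of": FOAF_KNOWS}   # emergency-unit system
SYSTEM_B_MAP = {"works_with": FOAF_KNOWS}     # surgical-unit system

def export(record, field_map):
    """Translate a local record into standard triples."""
    return [(record["id"], field_map[f], v)
            for f, v in record.items() if f in field_map]

a_triples = export({"id": "ex:bob", "colleague_of": "ex:alice"}, SYSTEM_A_MAP)
b_triples = export({"id": "ex:bob", "works_with": "ex:alice"}, SYSTEM_B_MAP)
print(a_triples == b_triples)  # both systems agree on the meaning
```

The internal field names never leave each system; agreement happens entirely at the level of the shared vocabulary.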

In the next blog I will introduce a real ontology that was drawn by a front-line nurse to describe her surgical unit ‘reality’.



Hand washing event logger – working prototype

Hi, my hand washing event logger is now a working prototype. When I worked as a nurse in a medical unit, a person was employed to stand and record hand washing events with a clipboard. My device uses a piezo-electric transducer to provide ‘1s and 0s’ to an Arduino microcontroller while water is flowing. The Arduino logs the date/time and duration of the hand wash event on an SD card.
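The logging logic can be sketched in a few lines of Python (the real device runs on the Arduino; the threshold value and one-second sample period here are assumptions for illustration): a stream of filtered piezo samples is thresholded into water-on/water-off, and the start time and duration of each wash event are recorded.

```python
# Simulation of the logger's core logic. THRESHOLD and SAMPLE_PERIOD are
# invented for illustration; the real trigger level depends on the
# transducer, filtering and amplification.
THRESHOLD = 0.5      # trigger level after filtering/amplification
SAMPLE_PERIOD = 1.0  # seconds between samples

def log_events(samples):
    """Return (start_time, duration) for each continuous run of water flow."""
    events, start = [], None
    for i, level in enumerate(samples):
        t = i * SAMPLE_PERIOD
        if level >= THRESHOLD and start is None:
            start = t                           # water started flowing
        elif level < THRESHOLD and start is not None:
            events.append((start, t - start))   # water stopped: log event
            start = None
    if start is not None:                       # flow still running at end
        events.append((start, len(samples) * SAMPLE_PERIOD - start))
    return events

print(log_events([0.0, 0.9, 0.8, 0.7, 0.1, 0.0, 0.9, 0.0]))
# → [(1.0, 3.0), (6.0, 1.0)]
```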


The LCD shows the time and duration of the last logged hand wash. The picture below is a screenshot of the time, day, month and duration being logged on the SD card.


The aim of the project is to provide hand washing data collection using embedded technology. It doesn’t look very embedded at the moment! It is sitting on our kitchen bench logging sink events. Hopefully it will form the basis of a study in a hospital.


A hand-washing event logger

The most effective way of combating the spread of superbugs in our hospitals is simple, low-tech hand washing. The aim of this project is to accurately log the date/time and duration of each hand wash event at a sink. The logger has to be battery powered, unobtrusive and safe. I trialed Radio Frequency IDentification (RFID) with a wrist band on the clinician. The band triggered the logger, but these devices have a very limited read range.

I thought about a water sensor or an Infrared beam to trigger the logger but these devices produce a considerable lag-time. I am now trialing a piezo-electric transducer. The transducer produces a small current which can be filtered and amplified to trigger an Arduino micro controller. So far, I have solved the false-positive triggering and floating earth problems which produce erroneous readings. The proof of concept trial depicted in the following pictures is very encouraging.
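One common way to suppress spurious triggering is a debounce rule. The post doesn’t describe the actual filter, so the sketch below is an assumption: the trigger state only flips after several consecutive samples agree, so a single noise spike from the transducer can’t start or stop an event.

```python
# Hedged sketch of a debounce filter for the trigger line; DEBOUNCE is an
# invented tuning constant, not a value from the real prototype.
DEBOUNCE = 3  # consecutive samples required to change state

def debounce(raw):
    """Turn a noisy 0/1 trigger stream into a debounced stream."""
    state, run, out = 0, 0, []
    for bit in raw:
        run = run + 1 if bit != state else 0
        if run >= DEBOUNCE:       # the new level has persisted long enough
            state, run = bit, 0
        out.append(state)
    return out

# one isolated spike is ignored; a sustained run flips the state
print(debounce([0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0]))
# → [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0]
```

The trade-off is a small, fixed detection lag of DEBOUNCE samples, which is far less than the lag of a water sensor or infrared beam.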

The picture below shows the wave-shaping prototype connected to the Arduino. The display shows no activity and a previous 8-second flow of water from the faucet.


The picture below shows filtered and shaped pulses coming from the transducer when the faucet is running.

Next I will add a real-time clock and SD card for logging purposes. The ultimate goal is WiFi linking to a central monitor. That shouldn’t be too hard!

Nursing semantic networks – a different take on interoperability. Part 2

In part one, we saw that resources, including the largest resource of all, ontologies, seek to connect with similar ‘better’ resources on the Semantic Web. For example, say we want to examine the life of ‘Flipper’, the 1960s TV star. We can connect a whole bunch of resources on the Semantic Web which will bring a sharper focus to Flipper’s slice of reality. This connectivity is achievable by using an element called a ‘triple’.


The basic unit of knowledge on the semantic web is the ‘triple’. A triple is like a simple sentence containing subject, predicate and object; ‘Bob knows Alice’ was an example of a triple in part one. The predicate ‘knows’ is the relationship between subject ‘Bob’ and object ‘Alice’.

In Figure 1 we can see triples connecting Flipper to other resources describing his world. Triples easily identifiable in Figure 1 are: ‘Flipper is a dolphin’, ‘Flipper is an animal TV star’, ‘Flipper knows Sandy Ricks’ and ‘Flipper lives at Coral Key’. Using this simple Flipper example, we can see triples provide a sharper focus on Flipper and his life. Each resource connected to Flipper by a triple contains lots of extra information. The resources in Figure 1 don’t have to be ontologies; they can be anything, including databases, comma-delimited documents, triple stores or virtually any other resource. Suddenly, you, or a machine, can navigate about Flipper’s life and discover more and more stuff.

Figure 1: Flipper’s life on the Semantic Web
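The Flipper triples can also be sketched as a tiny graph that a machine can walk outward from any resource; the tuples below mirror the links described above.

```python
# A tiny graph of the Flipper triples; readable names stand in for the URIs
# a real triple store would use.
triples = [
    ("Flipper", "is a", "dolphin"),
    ("Flipper", "is a", "animal TV star"),
    ("Flipper", "knows", "Sandy Ricks"),
    ("Flipper", "lives at", "Coral Key"),
]

def neighbours(store, resource):
    """Every resource reachable from `resource` in one hop."""
    return sorted({o for s, p, o in store if s == resource})

print(neighbours(triples, "Flipper"))
```

From any of those neighbouring resources the machine can hop again, which is exactly the ‘discover more and more stuff’ behaviour described above.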



The Semantic Web naturally lends itself to interoperability because of the Linking Of Data standards embedded in it. The thing is, it doesn’t matter what form or structure the resources come in, disparate resources can be linked on the Semantic Web by triples because triples are independent of the resources. Triples achieve linking by recognising similar semantics in the other resource. Triples are full of semantics.


Semantics, in the context of Linking Of Data, are not just the meaning of one word or phrase; they are the sum of many descriptors associated with a triple. For example:

  • Names of concepts (terms)
  • Names of relationships
  • Any annotation that is placed to describe the concept or relationship for the benefit of humans
  • Any constraint which sets the rules of class membership.

A machine or person whose job it is to link ontologies is not limited to the above semantics when linking resources together. A machine called a ‘reasoner’ will scan the ontology and infer a relationship between Bob and Alice because Bob and Alice may work in the same hospital unit, share the same individual constraints or belong to the same club. Also, a machine could trawl the Semantic Web ranking the linguistic ‘closeness’ of terms and relationships, automatically linking stuff by the probability that resources or people are connected in some way. For nurses, we can take a ‘snapshot’ of a nursing unit and analyse the processes that occur. If we can visualise the processes, we can ‘tweak’ them to provide greater efficiencies which flow on to better patient outcomes.
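As a hedged illustration of the reasoner’s job, the sketch below infers a ‘knows’ link between any two people linked to the same hospital unit. The ‘works in’ predicate and the unit name are invented for the example; a real reasoner works over formally defined rules, not hard-coded ones.

```python
# Toy reasoner: if two people work in the same unit, infer that they know
# each other. Predicate and unit names are invented for illustration.
def infer_knows(triples):
    """Infer (a, knows, b) whenever a and b work in the same unit."""
    by_unit = {}
    for s, p, o in triples:
        if p == "works in":
            by_unit.setdefault(o, []).append(s)
    inferred = set()
    for staff in by_unit.values():
        for a in staff:
            for b in staff:
                if a != b:
                    inferred.add((a, "knows", b))
    return inferred

facts = [("Bob", "works in", "surgical"), ("Alice", "works in", "surgical")]
print(("Bob", "knows", "Alice") in infer_knows(facts))  # → True
```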

Anyway, things get “curiouser and curiouser” from here on in. For instance, how does a machine know what ‘knows’ means in the ‘Bob knows Alice’ triple? I was only going to write two articles, but I may as well continue down the rabbit hole in upcoming articles. I will explain the ‘knows’ question and the difference between the machine-readable triple and the human-readable graph in the next instalment.

Nursing semantic networks – a different take on interoperability. Part 1


Interoperability, the measure of how well disparate networks are able to communicate, is woven into the very fabric of the ‘web of data’, the so-called, Semantic Web (SW). As its name suggests, the SW uses ‘semantics’ to facilitate interoperability between islands of seemingly unrelated knowledge. First up, I will outline the Semantic Web in part 1, and in part 2, I will describe ‘semantics’ and how they are used to facilitate interoperability.


The Semantic Web

Sir Tim Berners-Lee conceived the global SW, a web of linked data, with the same basic architecture as the existing World Wide Web (WWW). Consequently, both webs can cohabit and mesh together. However, there is one big difference between the two: unlike the WWW, which is organised for human consumption, the SW is entirely machine-readable. The SW is made up of linked data called ‘resources’. Resources can be anything under the sun, including concrete entities like people, and abstract entities like thoughts and ideas.


Linking of data

Because machines don’t handle surprises very well, resources are organised into a rigid Resource Description Framework (RDF) which provides a predictable linking structure on the SW. How are resources linked? Resources are linked by common relationships. For example, a resource called ‘Bob’ may be linked to another resource called ‘Alice’ by a relationship called ‘knows’. So, using two resources and their relationship, a tiny bit of knowledge is made: Bob knows Alice.
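In code, that tiny bit of knowledge is almost embarrassingly small; this sketch just lines up the three parts to show there is nothing more to it.

```python
# Two resources plus a relationship make one unit of knowledge: a triple.
bob, alice = "Bob", "Alice"
knows = "knows"

triple = (bob, knows, alice)   # (subject, predicate, object)
print(" ".join(triple))        # → Bob knows Alice
```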

Berners-Lee’s idea that the usefulness of resources is enhanced by linking to ‘better’ resources underpins the SW. So, if enough resources are linked, they form a kind of ‘map of knowledge’. The SW contains billions of these maps, called ‘domain ontologies’. You can imagine ontologies as islands of specific knowledge floating in a sea of resources such as documents, pictures, databases and descriptions. So, like islands, ontologies are fine by themselves, but they are far more useful if ‘trade routes’ link them to other islands and resources.


Domain ontologies

A domain ontology is a ‘snapshot’, or abstract, of some part of human reality. The snapshot is constructed by linking resources and their relationships; the more resources and relationships, the sharper the focus. To this end, resources are always looking for similar resources to connect to; they use the gravity of their relationships to pull together and form new ontologies, like galaxies after the Big Bang. Suddenly, they are machine-readable snapshots of human reality on the SW that machines can ‘read’ and analyse.


Meh, so what?

In the hospital setting, domain ontologies may describe hidden nursing knowledge and processes. Because ontologies are machine-readable, robots called ‘intelligent agents’ can analyse hospital units such as surgical, emergency or administration, looking for dependencies or errors in the logic of the unit. The analysis of ontologies will save time and money by opening the door to automated auditing, freeing up nurses. Also, nurses will use ontologies to add and subtract resources and interventions in a unit, which will provide enhanced efficiencies and better patient outcomes.

In the next instalment, we will look at semantics in the context of the SW and how semantics facilitate interoperability by connecting resources together to describe even larger ontologies, such as a hospital.