A tool for gathering situated impressions in order to create individual, vernacular and poetic readings of various inputs (such as space, image, text)




How do we bring multi-vocality into the work of annotation? The Annotation Compass builds composites from aggregated vernacular impressions, rich in their subjectivity and situatedness. It is the outcome of a three-month journey questioning the relationship between vernacular languages and natural language processing tools.

First experiments:

1. The living-room:

For this experiment, four of us gathered in a living-room and annotated a floor plan of the space with sticky notes.

After removing the floor plan and looking at the subjective annotations of this experiment, we observed that each outcome forms another 'space'. Each person's set of annotations brings a unique perspective of the living room, an 'individual map'. We then layered the individual maps, and the compilation resulted in a vernacular picture of the space. This alternative understanding of the space can only be given to a reader through those descriptions.

2. Photograph of a room:

The same method was applied to the photograph of a room. Each of us used a different set of coloured sticky notes and took 5 minutes to physically annotate the picture on the same surface. The picture was then removed from the background, resulting in an outcome similar to the experiment described above.

From these observations, our interest grew in subjective annotations that could flow into a common understanding of an image. As a tool to collect situated impressions, we elaborated the idea of the Annotation Compass.

On a given surface, such as an image, the tool facilitates the collection of annotations and their coordinates from various users simultaneously. These annotations represent individual knowledges and perspectives with regard to the given surface.


To use this tool, let's call the "host" any person interested in gathering annotations on a specific image, and the "guest" any person invited by the host to annotate the image.

Process for the host of an image:

  1. upload an image
  2. add a text that explains the context of the image or gives instructions and helpful advice to the guests
  3. send the link to guests and invite them to annotate
  4. download a JSON or text file containing the data collected so far
  5. try the different functions of SI16 to filter the collected data
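Steps 4 and 5 could be sketched in Python. This is a minimal sketch, assuming the downloaded file is named `labels.json` and that filtering means selecting labels by region or keyword; the function names are illustrative, not SI16's actual interface:

```python
import json

def load_labels(path):
    # Step 4: load the downloaded JSON file, a list of labels.
    with open(path) as f:
        return json.load(f)

def filter_by_region(labels, x_min, y_min, x_max, y_max):
    # Step 5 (illustrative): keep labels placed inside a rectangle.
    return [
        label for label in labels
        if x_min <= label["position"]["x"] <= x_max
        and y_min <= label["position"]["y"] <= y_max
    ]

def filter_by_keyword(labels, word):
    # Step 5 (illustrative): keep labels whose text mentions a word.
    return [label for label in labels if word.lower() in label["text"].lower()]
```

A host could, for instance, keep only the annotations placed in one corner of the image, or only those mentioning a particular word, and compare the resulting 'individual maps'.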

Process for the guest:

  1. open the link sent by the host
  2. read the information attached to the image by the host
  3. use the cursor to select a specific area that you want to annotate
  4. write and insert your annotation(s)

The data: The tool not only archives the annotations, but also additional metadata that can be helpful to analyze the outcome. The collected data is stored in a JSON file that comes as a list of labels. In each label, one can find the file name of the annotated image, the coordinates of the annotation, the dimensions of the annotation 'box', the annotation text itself, a timestamp and a user identification:


Example label:

    {
        "image": "map.jpg",
        "position": {"x": 12, "y": 97},
        "size": {"width": 43, "height": 18},
        "text": "This is a text! Is this a text?",
        "timestamp": "Wed, 01 Dec 2021 14:04:00 GMT",
        "userID": 5766039063
    }
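As a sketch of how such a list of labels could be read back, the snippet below (Python; the grouping logic is our illustration, not part of the tool) groups annotations by userID to reconstruct each guest's 'individual map':

```python
import json
from collections import defaultdict

# A small list of labels in the shape produced by the tool (values are illustrative).
labels = json.loads("""[
    {"image": "map.jpg", "position": {"x": 12, "y": 97},
     "size": {"width": 43, "height": 18},
     "text": "This is a text! Is this a text?",
     "timestamp": "Wed, 01 Dec 2021 14:04:00 GMT", "userID": 5766039063},
    {"image": "map.jpg", "position": {"x": 50, "y": 20},
     "size": {"width": 30, "height": 12},
     "text": "A second impression.",
     "timestamp": "Wed, 01 Dec 2021 14:05:00 GMT", "userID": 1234567890}
]""")

# Group the annotation texts by userID: each group is one guest's "individual map".
individual_maps = defaultdict(list)
for label in labels:
    individual_maps[label["userID"]].append(label["text"])

for user_id, texts in sorted(individual_maps.items()):
    print(user_id, texts)
```

Layering these per-user groups back onto the image is what produces the composite, vernacular picture described above.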

The outcome provided by the Annotation Compass is ever-changing: whenever an individual adds an annotation, the data grows.

After applying the tool to different projects, we observed that the collected data can offer a reflection on the so-called "objective": it provides individual perceptions and builds a common experience by including a multiplicity of impressions rather than one objective definition. In conclusion, the tool can be used to provide alternative ways to define images, images of space, texts, and anything else annotatable.

Possible applications of the tool: