ooc but as a workshop

OOC - Summer Session - V2

Performance Web VR

This is some sort of devlog from the Summer Session residency at V2. Object Oriented Choreography got selected by Sardegna Teatro and we were sent to the Netherlands* to work for the whole summer**.

* actually i was already here
** actually my plan for the summer was really different but what can i do

Take everything as a WIP because it's literally that

v2 from above

Workshop?

The third iteration of OOC could be a two-part workshop: in the first part, participants co-design a VR environment with a custom multiplayer 3D editor. The second half is a dance workshop with a choreographer who guides and explores with the participants the virtual reality they created together. The VR headset is passed from hand to hand as a catalyst, transformed from a single-user device into a wandering and plural experience.

  1. design the VR environment together

    • to encode information with spatial and body knowledges
    • to achieve meaningful and expressive interactivity through participation
    • to take into account multiple and situated points of view

    • the starting point is the essay on the Zone written for the previous iteration

    • the essay
    • excerpts used as prompts, focusing on using space as an interface with the body
  2. explore the collective VR environment

    • to decode information with spatial and body knowledges
    • to transform VR into a shared device
    • who is inside the VR trusts the ones outside
    • the ones outside the VR take care of who's inside

    • performative workshop

    • stretching and warming up
    • exercises: moving with hybrid space
    • improvisation

Outcomes:

1. documentation of the workshop
2. a different 3D environment for each iteration of the workshop, i.e. a digital gallery?
3. the 3D editor

first part - design the VR environment

what's the plan:

what's the point?

what are our roles here?

Mapping the algorithm

Our technological environment is made of abstract architectures built of hardware, software and networks. These abstract architectures organize information, resources, bodies, time; in fact they organize our life. Yet, they can be really obscure and difficult to grasp, even to imagine.

Within VR we can transform these abstract architectures into virtual ones: spaces that are modelled on the nature, behaviour, and power relations around specific technologies. Places that constrain the movements of our body and at the same time can be explored with the same physical knowledge and awareness.

Starting from one specific architecture we model and map it together with the public.

This iteration of OOC is a performance with the temporality of a two-part workshop: in the first part participants model the virtual environment together with a custom VR editor that lets them create the space at 1:1 scale.

The second half is a performative workshop with a choreographer who guides and explores with the participants the virtual reality they created together. The VR headset is passed from hand to hand as a way to tune in and out of the virtual space, transformed from a single-user device into a wandering and plural experience.

Since an abstract architecture is composed of several entities interacting together, the dramaturgical structure can be written following them. The narration of the modeling workshop, as well as the performative exercises from the warm-up to the final improvisation, can be modeled on the elements of the architecture.

~

The idea of having the public modeling the space and exploring with the performer responds to several needs:

To give an example: the first OOC was modeled on a group chat. The connected participants were represented as clients placed in a big circular space, the server. Within the server, the performer acted as the algorithm, taking messages from one user to the other.

Could it be done in a different way?

Here are three scenarios:

Workshop

Installation:

Platform:

A draft timetable

Sparse ideas

Tracker as point lights during performance (see FF light in cave)

References

An overview for Sofia:

notes from 02/22

concept

Our technological environment is made of abstract architectures built of hardware, software and networks. These abstract architectures organize information, resources, bodies, time; in fact they organize our life. Yet, they can be really obscure and difficult to grasp, even to imagine.

Being in space is something everyone has in common, an accessible language. Space is a shared interface. We can use it as a tool to gain awareness and knowledge about complex systems.

Within VR we can transform these abstract architectures into virtual ones: spaces that are modelled on the nature, behaviour, and power relations around specific technologies. Places that constrain the movements of our body and at the same time can be explored with the same physical knowledge and awareness. (like what we did for the chat)

Starting from one specific architecture (probably the food delivery platforms typical of the gig economy that move riders around) we model and map it together with the public. Since an abstract architecture is composed of several entities interacting together, a strong dramaturgical structure can be written following the elements of the architecture.

how to - two options

  1. performance as a workshop

a performance with the temporality of a two-part workshop: in the first part participants model the virtual environment together with a custom VR editor that lets them create the space at 1:1 scale.

Then a performative workshop with a choreographer / performer who guides and explores with the participants the virtual reality they created together. The VR headset is passed from hand to hand as a way to tune in and out of the virtual space, transformed from a single-user device into a wandering and plural experience.

  2. performance as an installation

The VR editor is used as an installation. Besides the normal functionality to model the environment, it contains a timeline with the structure of the workshop recorded as audio. The performer activates the installation following the script. The text is written with the choreographer / performer. It's a mix between the two moments of the workshop version described before. After the performance, participants (up to three at the same time) can follow the audio and be guided in the creation of the environment.

~

Both options can be activated multiple times, with different results. The resulting 3D environments can be archived in a dedicated space (like a showcase website) in order to document (communicate, and $ell the project again for further iterations)

         ___..._
    _,--'       "`-.
  ,'.  .            \
,/:. .     .       .'
|;..  .      _..--'
`--:...-,-'""\
        |:.  `.
        l;.   l
        `|:.   |
         |:.   `.,
        .l;.    j, ,
     `. \`;:.   //,/
      .\\)`;,|\'/(
       ` `itz `(,
   BREAKING CHANGES HERE

Meeting with Sofia and Iulia

ok ok ok no workshop let's stick to what we have and polish it

~

- - [ ] - - [ ] - - [ ] - ->

what do we need:

- timeline
- model for the application, a series of blocks like this:
    * text
    * duration
    * interaction
    * scene

28/7 - Prototype setup

app design

small prototype:

29/7 - Prototype Setup & other

The building block is the Stage. Each stage is a description of what's happening at the edge of the performance: what the screen is displaying, what's inside the VR, what's happening on users' smartphones.

We can place a series of stages on a timeline and write a dramaturgy that is based on the relation between these three elements.

The model of the stage is something like this:
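Something like this, just as a sketch (field names taken from the block list above, values are made up):

```js
// hypothetical shape of a single stage block on the timeline
const stage = {
  text: "keep your finger on the screen", // prompt shown to the public / used in vvvv
  duration: 120,                          // seconds this stage stays active
  interaction: "presence",                // which interaction the clients should enable
  scene: "intro",                         // which scene vvvv uses for the VR and the screens
};

// the whole performance is then an ordered list of stages
const timeline = [stage /* , ...more stages */];
```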

text and scene are meant to be used in vvvv to build the VR environment and the screen display

interaction is meant to be sent via websocket to the server and from there to the connected clients

it could be useful to keep track of the connected users.

It could be something like:

  1. when someone accesses the website a random ID is generated and stored in the local storage of the device; in this way, even if the user leaves the browser or refreshes the page, we can retrieve the same ID from the storage and keep track of who is who without spawning a new user every time there is a reconnection (which with ws happens a lot!) (see the sketch after this list)
  2. maybe the user could choose a username? it really depends on the kind of interaction we want to develop. also I was thinking of ending credits: "with the participation of" followed by the list of users
  3. when connecting and choosing a username, the client sends it to the server, which sends it to vvvv, which stores the users in a dictionary with their ID. Every interaction from the user will be sent to the server and then to vvvv with this ID; in this way interactions can be organized and optimized, as well as linked to the appropriate user.
  4. tell me more about surveillance capitalism
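A minimal browser-side sketch of the first points, assuming a plain WebSocket client; the message shapes, the storage key, and the URL are placeholders, not the real implementation:

```js
// reuse the same ID across refreshes and reconnections
function getUserId() {
  let id = localStorage.getItem("ooc-user-id"); // hypothetical storage key
  if (!id) {
    id = crypto.randomUUID(); // random ID generated once per device
    localStorage.setItem("ooc-user-id", id);
  }
  return id;
}

const ws = new WebSocket("wss://example.org/ws"); // placeholder URL

ws.addEventListener("open", () => {
  // announce ourselves with the persistent ID (and an optional username)
  ws.send(JSON.stringify({ type: "join", id: getUserId(), username: "frog" }));
});

// every interaction carries the same ID, so vvvv can link it to the right user
function sendInteraction(payload) {
  ws.send(JSON.stringify({ type: "interaction", id: getUserId(), ...payload }));
}
```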

about text - interaction

even if we can take excerpts from the essay we wrote, this reading setup is totally different. here our texts need to be formulated like a call to action, or a provocation to trigger the interaction.

a way to acknowledge the public

31/07 - Prototype setup: vvvv

The websocket implementation I'm using is simple. It just provides these kinds of events:

In order to distinguish between different types of message I decided to serialize every message as a JSON string with a field named type. When a message event is fired the server looks at the type of the message and then acts accordingly. Every message type triggers a different reaction, i.e. it calls a different function.

In the previous versions the check on the message type was a loong chain of if statements, but that didn't feel right, so I searched a bit for how to manage it in a better way.

In the server (node.js) I created an object that uses the message types as keys and the associated functions as values. javascript switch object
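Roughly like this, as a sketch of the idea with the ws package; handler names and the context fields are illustrative, not the actual server:

```js
const { WebSocketServer } = require("ws");

// shared state instead of globals scattered around
const context = { users: new Map(), vvvv: null };

// one function per message type, keyed by the `type` field
const handlers = {
  join: (ctx, msg, client) => ctx.users.set(msg.id, { username: msg.username, client }),
  interaction: (ctx, msg) => ctx.vvvv?.send(JSON.stringify(msg)), // forward to vvvv
  leave: (ctx, msg) => ctx.users.delete(msg.id),
};

const wss = new WebSocketServer({ port: 8080 });
wss.on("connection", (client) => {
  client.on("message", (data) => {
    const msg = JSON.parse(data);
    const handler = handlers[msg.type]; // no long if/else chain
    if (handler) handler(context, msg, client);
  });
});
```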

For vvvv I asked for some suggestions on the vvvv forum and ended up using a simple factory pattern with a common interface IMessage, which is then used to process the incoming message based on its type. replacing long if chain

In order to deal with the state of the application (each message operates in a different way and on different things) I created a Context class that holds the global state of the performance, such as the websocket clients and the connected users. The IMessage interface takes this context as well as the incoming message, so it can operate on the patch.

happy with it! it's much more flexible than the long if snake

1-2/08 - two Displays & Prototype setup

One screen mounted vertically

Yesterday, together with Richard, we set up the two screens to show the public what's happening inside the VR. Initially they were mounted next to each other, vertically.

With Iulia we thought about how to place them. Instead of keeping them together, it would probably be better to use them at the edges of the interactive zone. Even if the screen surface seems smaller, it's a creative constraint & it defines the space of the performance more.

Ideally the viewer can see both screens and the performer at the same time. The screens can display either the same or different things.

Two screens with frogs from Katamari

Two screens mapping the same space

And now some general thoughts:

the username should be central in the visualization of the interaction, since it's the main connection point between what's happening outside and inside? could it be something different than a name? could it be a color? using a drawing as an avatar?

OOC title + hand drawn avatar

types of interaction

the idea of presence, of being there, together and connected

touching the screen <---means to be connected with ---> the performer

keep touching to be there

a light in the environment

and when the performer gets closer to the light the connected phone plays a notification

maybe it could be enough ?

just use the touchscreen as an xy pointer and make the nature of the pointer change
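A rough client-side sketch of both ideas (keep touching to be there, touchscreen as an xy pointer), reusing the hypothetical sendInteraction helper from before; everything here is a guess, not the final implementation:

```js
const surface = document.getElementById("touch-area"); // hypothetical full-screen element

// presence: just say "I'm here" while the finger is down
surface.addEventListener("touchstart", () => sendInteraction({ kind: "presence", down: true }));
surface.addEventListener("touchend", () => sendInteraction({ kind: "presence", down: false }));

// xy pointer: normalized position, later mapped into the VR space by vvvv
surface.addEventListener("touchmove", (e) => {
  const t = e.touches[0];
  sendInteraction({
    kind: "xy",
    x: t.clientX / window.innerWidth,  // 0..1
    y: t.clientY / window.innerHeight, // 0..1
  });
});
```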

3/08 - Prototype Setup and doubts

Finished setting up the xy interaction between the clients and vvvv.

The setup with nuxt is messy since it's stuck between nuxt 2 and vue 3. There are a lot of errors that don't depend on the application but rather on the dependencies, and it's really really annoying, especially since it gets in the way of solid design principles.

I'm thinking of rewriting the web app using only Vue, instead of Nuxt, but I'm a bit afraaaaaidd.

4/08 - Script

I'm trying to understand which setup to use to rewrite the application without Nuxt. Currently I'm looking into fastify + vite + vue, but it's too many things at once and I'm a bit overwhelmed.

So now a break: let's try to list what we need and the ideas floating around, to organize the work for next week.

Hardware Setup:

Performance Structure

0. before the performance

1. performance starts, first interaction: touch

[]need a transition[]

2. second interaction: XY

would be nice to have a camera system that lets you position the camera in preview mode and then push it to one of the screens, overriding the preset

5-08

Notes from the video of OOC@Zone Digitali. The names of the movements refer to the essay triggers.

list of triggers:

~

~

Need to finish this analysis, but for now here is a draft structure for the performance. Eventually I will integrate it with the previous two sections: the Performance Structure and the trigger notes.

Structure?

I

Invitation and definition of the domain: touch interaction and public participation

????

III from participation to collective ritual

6-08

Two ideas for the performance:

a. Abstract Supply Chain

--> about the space where the performer dances

The space in the virtual environment resembles an Abstract Supply Chain more than an architectural space. It's an environment not made of walls, floor, and ceiling, but rather a landscape filled with objects and actors, the most peculiar one being the performer.

We can build a model that scales with the connection of new users. Something that makes sense with 10 people connected as well as with 50. Something like a fractal, legible at different scales and intensities.

Something between a map, a visualization, a constellation. Something that makes sense in a 3D environment and on a 2D screen or projection.

Lots of interesting input here: Remystifying supply chains

b. Object Oriented Live Action RolePlay (LARP)

--> about the role of the public

We have a pool of 3D objects related to our theme: delivery packages, bikes, delivery backpacks, Kiva robots, drones, minerals, racks, servers, GPUs, containers, etc. A proper bestiary of the Zone.

Every user is assigned an object at login. The object you are also influences, more or less, your behavior in the interaction. I'm imagining it in a subtle way, more something related to situatedness than to theatrical acting. An object oriented LARP.

How wide or specific should our bestiary be? A whole range of different objects and consistencies (mineral, vegetal, electronic, etc.) or just one kind of object (shipping parcels, for example) explored in depth?

From here --> visual identity with 3D scan?

bike amazon package

The Three Interactions

All the interactions are focused on the physical use of the touchscreen. They are simple and intuitive gestures that dialogue with the movements of the performer.

There are three sections in the performance and one interaction for each. We start simple and gradually add something, in order to introduce the mechanism slowly.

The three steps are:

  1. presence
  2. rhythm
  3. space

Presence is the simple act of touching and keeping pressure on the screen. Ideally it is an invitation for the users to keep their finger on the screen the whole time. A way for the user to say: hello, I'm here, I'm connected. For the first part of the performance the goal is to transform the smooth surface of the touchscreen into something more. A sensitive interface, a physical connection with the performer, a shared space.

Rhythm takes into account the temporality of the interaction. The touch and the release. It gives a little more freedom to the users, without being too chaotic. This interaction is used to trigger events in the virtual environment, such as the coming into the world of the objects.

Space is the climax of the interaction and maps the position on the touchscreen into the VR environment. It allows the user to move around in concert with the other participants and the performer. Here the plan is to take the unreasonable chaos of the crowd interacting and build something choreographic out of it, with the same approach as the collective ritual ending of the previous iteration.
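Just to fix the idea, the xy position could be mapped onto the floor of the virtual environment with something like this (size and axes are arbitrary, not the actual patch logic):

```js
// map a 0..1 touch position onto a square area of the virtual floor
function touchToWorld(x, y, size = 10) { // size in meters, arbitrary
  return {
    x: (x - 0.5) * size, // left / right
    y: 0,                // stay on the ground
    z: (y - 0.5) * size, // screen y becomes depth
  };
}
```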

Each section / interaction is developed in two parts:

Tech Update

Started having a look at reactive programming. Since everything here is based on events and messages flowing between clients, server and vvvv, the stream approach of reactive programming makes sense to deal with the flows of data in an elegant way.

Starting from here: The introduction to Reactive Programming you've been missing

For notifications and audio I'm planning to use howler.js, probably with sound sprites to pack different sfx into one file. https://github.com/goldfire/howler.js
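Something like this, with placeholder file names, sprite names, and offsets:

```js
import { Howl } from "howler";

// one file, many sfx: each sprite is [offset in ms, duration in ms]
const sfx = new Howl({
  src: ["/audio/sfx.webm", "/audio/sfx.mp3"],
  sprite: {
    notification: [0, 400], // performer gets close to a light
    tap: [500, 150],        // feedback for the rhythm interaction
  },
});

sfx.play("notification");
```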

10/08 and 9/08 and 11/08

second interaction

how to call the Three Interactions? TI? 3I ? III I ? ok stop

it's useful to imagine the lifecycle of the object to think about the three interactions.

1 presence___presence____being there

2 rhythm_____quality_____in a certain way

3 space______behaviour___and do things

So, for the second interaction:

Following the timeline of the performance we could set up a flow of transformation for every object: at the beginning, randomly displacing the object and messing around with its parts. We could gradually dampen the intensity of these transformations, reaching at the end the regular model of the object.

These transformations are not continuous, but triggered by the tap of the user. They can be seen as snapshots or samples of the current level of transformation. In this way, with either a high or a low sample rate, we get a rich amount of variation. This means that if we have a really agitated moment with a lot of interactions, the transformations are rich as well, with a lot of movement and randomness. But the same remains true when the rhythm of interaction is low and calmer: it just gets the right amount of dynamics.

One aspect that worries me is that these transformations could feel totally random, without any linearity or consistency. I found a solution to this issue: applying some kind of uniform transformation to the whole object, for example a slow, continuous rotation. In this way the object feels like a single entity even when all its parts are scattered around randomly.
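As a sketch of this logic (an intensity envelope that fades along the section, a displacement sampled only on tap, plus the uniform slow rotation on top), with invented names and numbers:

```js
// 1 at the start of the section, 0 at the end
function intensity(elapsed, sectionDuration) {
  return Math.max(0, 1 - elapsed / sectionDuration);
}

// called once per user tap: sample a new random offset for every part of the object
function onTap(parts, elapsed, sectionDuration) {
  const amount = intensity(elapsed, sectionDuration);
  for (const part of parts) {
    part.offset = {
      x: (Math.random() - 0.5) * amount,
      y: (Math.random() - 0.5) * amount,
      z: (Math.random() - 0.5) * amount,
    };
  }
}

// called every frame: the whole object rotates slowly and uniformly,
// so it still reads as a single entity even when its parts are scattered
function onFrame(object, dt) {
  object.rotationY += 0.1 * dt;
}
```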

The transformation between the displaced and the regular states should take into account what I called incremental legibility, that is:

in this way we could obtain some kind of convergence of the randomness.

Actually the prototype works fine with just the decreasing intensity; I haven't tried yet to transform the different features individually or in a certain order.

Also: displacing the textures doesn't look nice. It just feels broken and glitchy, not really an object.

as for the display:

  1. on one screen we cluster all the objects in a plain view, something like a grid (really packed, I presume? it depends on the amount; see the grid sketch below)
  2. on the other we could keep them as they were in the first interaction, and present them through the point of view of the performer, keeping the sound notification when she gets closer and working as a close-up device.

we could also display the same thing on both screens, to lower the density of objects and focus more on the relationship between the performer and the public as a whole, attuning the rhythm
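For the grid clustering, a quick sketch of how the positions could be computed from the number of connected objects (spacing and axes are arbitrary):

```js
// place n objects on a roughly square grid centered at the origin
function gridPositions(n, spacing = 1.5) { // spacing is arbitrary
  const cols = Math.ceil(Math.sqrt(n));
  return Array.from({ length: n }, (_, i) => ({
    x: ((i % cols) - (cols - 1) / 2) * spacing,
    y: (Math.floor(i / cols) - (cols - 1) / 2) * spacing,
    z: 0,
  }));
}
```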

how does this interaction interact with the choreography? is it enough for the performer to be just a point of view?

the big practical recap

0. Intro

1. Presence

Presence is the simple act of touching and keeping pressure on the screen. Ideally it is an invitation for the users to keep their finger on the screen the whole time. A way for the user to say: hello, I'm here, I'm connected. For the first part of the performance the goal is to transform the smooth surface of the touchscreen into something more. A sensitive interface, a physical connection with the performer, a shared space.

2. Rhythm

Rhythm takes into account the temporality of the interaction. The touch and the release. It gives a little more freedom to the users, without being too chaotic. This interaction is used to trigger events in the virtual environment, such as the coming into the world of the objects.

3. Space

Space is the climax of the interaction and maps the position on the touchscreen into the VR environment. It allows the user to move around in concert with the other participants and the performer. Here the plan is to take the unreasonable chaos of the crowd interacting and build something choreographic out of it, with the same approach as the collective ritual ending of the previous iteration.

4. Outro

12/08 vvvv app design

placeholders:

0.

output manager: a system where you can decide where to render:

- screen 1
- screen 2
- both screens

13/08 and 14/08 and 15/08

Last Mile

For the objects we will focus on Last Mile Logistics. The moment in which things shift from the global to the local, from an abstract warehouse to your doorstep. Last Mile Logistics is tentacular, it is made of vectors that head toward you.

We asked Nico for some suggestions for good quality 3D models and he replied with a list from Sketchfab. Thanks a lot.

Here are the ones we imported already:

Will need to credit the authors of the models:

"pallet truck" (https://skfb.ly/6UV88) by Kwon_Hyuk is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/).

"Platform Trolley" (https://skfb.ly/6RJto) by louis-muir is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/).

"Hand Truck" (https://skfb.ly/6VGAH) by HippoStance is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/).

"Warehouse Shelving" (https://skfb.ly/on6oy) by jimbogies is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/).

"Pallet" (https://skfb.ly/os7YC) by Marsy is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/).

"Plastic Crate" (https://skfb.ly/orQTM) by Virtua Con is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/).

"Set of Cardboard Boxes" (https://skfb.ly/onr6S) by NotAnotherApocalypticCo. is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/).

"8" (https://skfb.ly/6Aovp) by Roberto is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/).

"Plastic Milk Crate Bundle" (https://skfb.ly/ov7Nn) by juice_over_alcohol is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/).

"Compostable Burger Box .::RAWscan::." (https://skfb.ly/onGUV) by Andrea Spognetta (Spogna) is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/).

"Barrel" (https://skfb.ly/6TInO) by Toxic_Aura is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/).

"Foodpanda Bag" (https://skfb.ly/6TUGF) by AliasHasim is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/).

"Japanese Road Signs (28 road signs and more)" (https://skfb.ly/o8WBK) by bobymonsuta is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/).

"Coffee Paper Bag 3D Scan" (https://skfb.ly/6W88p) by grafi is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/).

"Traffic cone (Game ready)" (https://skfb.ly/6SqHs) by PT34 is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/).

"Warning Panel" (https://skfb.ly/6BQJS) by Loïc is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/).

"An Office Knife" (https://skfb.ly/SZKT) by runflyrun is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/).

"Tier scooter" (https://skfb.ly/opxDJ) by Niilo Poutanen is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/).

"Bike Version 01" (https://skfb.ly/6UHzZ) by Misam Ali Rizvi is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/).

"CC0 - Bicycle Stand 4" (https://skfb.ly/ovLvI) by plaggy is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/).

"Wooden Crate" (https://skfb.ly/otvLA) by Erroratten is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/).

"8-Inch GE Dr6 Traffic Signals" (https://skfb.ly/otwAu) by Signalrenders is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/).

"Power Plug /-Outlet /-Adapter | Connector Strip" (https://skfb.ly/ooUPN) by BlackCube is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/).

"Simple turnstile" (https://skfb.ly/o96tU) by LUMENE is licensed under Creative Commons Attribution-NonCommercial (http://creativecommons.org/licenses/by-nc/4.0/).

"Microwave Oven" (https://skfb.ly/6RrWD) by aqpetteri is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/).

"Camera" (https://skfb.ly/ooLVM) by Shedmon is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/).

"Cutting pliers" (https://skfb.ly/otMQv) by 1-3D.com is licensed under Creative Commons Attribution-ShareAlike (http://creativecommons.org/licenses/by-sa/4.0/).

OO Graphics Dasein with Iulia

Iulia came for the weekend to work on the visuals! We spent a full-immersion graphic design visual dasein weekend to glue everything together and the results are nice.

For the screens we decided on a physical UI, condensing everything into a 3D quad inserted into the scene with the other objects.

For the website we went for the same big square concept and used the same palette. Total black and blue. Grazie Ragazzi & Forza Atalanta.

*

TODO: img

Varia

Aqua Planning

TODO:

24/08/2022

Our video shooting is going to be on Sunday the 28th. We will film a bit of the rehearsal with Sofia and say something about the project. Need to prepare something so as not to look too dumb or complicated.

Finally decided to approach the assets problem: how to load an incredible amount of 3D models and materials into the patch? The answer is: via the Stride Game Studio. I don't like it, but it works fine and gives us fewer import and loading problems, since every asset is compiled and pre-loaded when the patch opens. Or something like that, I'm not super sure.

So these are the specifics to load things:

Things can be organized in scenes, which could be an interesting way to deal with the different interaction moments. Let's see.

TODO:

25/08

Last night I loaded the first batch of objects into Stride. It required a bit of time to set up a proper workflow.

This morning I refactored the patch to work with assets from the Stride project. Now there is an Object node that takes the name of the model and returns an entity with the various parts of the 3D object as children, with the right materials etc. In this way we can set individual transforms on the elements and decompose the objects into pieces.

next for today:

OK SELECTA 3D Objects

(the way to apply the instancing transform to the single elements of the 3D model is still a bit clunky, but let's see) (maybe it's enough to implement it in a modular way so the two scenes work with different nodes?)

Francesco Luzzana is a digital media artist from Bergamo, Italy.

Francesco Luzzana (he/him) develops custom pieces of software that address digital complexity, often with visual and performative output. He likes collaborative projects, in order to face contemporary issues from multiple perspectives. His research aims to stress the borders of the digital landscape, inhabiting its contradictions and possibilities. He graduated in New Technologies at Brera Academy of Fine Arts and is currently studying in the Experimental Publishing master at the Piet Zwart Institute.

propic

and this will be the picture or maybe the nice one from carmen! should ask her!

Ok ok enough

now for the interview:

two way binding:

attuning to the choreography of objects moved by digital platforms to grasp their

modality

contents

- last mile logistics and the very body of the supply chain
- used as an interface between our daily lives and the accidental megastructure of digital platforms
- object oriented ontology and object oriented programming

for the shooting:

Also today I got a mail from Leslie 💌 and look at this: pzwiki.wdka.nl/mediadesign/Calendars

Mom, I'm famous, I'm in the PZI Wiki!

todo: send pic to mic

28/08 - 31/08 First rehearsal with Sofia

timeline:

- intro loop
    - website: username and confirm
    - sofia enters and puts on the hmd
    - transition fade out and music starts
- presence
    - ambient light off 0
    - point light off 0
    - website: waiting room
    - sofia faces the screen, back to the public
    - slowly turns
    - fingertips
    - st thomas --> cue presence interaction
    - website: presence button, sound notification
    - point light on, text posi
    - website: presence interaction
    - ambient light on 1, slow transition --> from 7:30 ~ to 8:00 (0:30 min), super ease-in
    - swap sock transition (--)--> )() --> from 8:20 ~ to 10:30 (2:00 min), ease-in circ ~ ease-in-out circ ??? (easing formulas sketched after this timeline)
    >>> stop in the middle and then explode at 10:40 !!!! setup light shaft ?
- rhythm
    - website: rhythm interaction 10:40
    - 10:40 cue audio
    - rhythm interaction: smartphone gradually to white
    - 13:00 force all smartphones white, disable interaction
    - 13:10 --> 14:10 transition to performer POV (1 min) - linear ease-out -  point light on ambient light off directional light off
    - 16:00 --> ambient light on .125 directional light on 1 transition
    - 16:45 --> audio cue to transition
    - 17:00 --> obj reconstruction, fluid, ~0:40 min, not interactive
    - 17:40 --> transition to general view, linear, 1 min
- space
    - website: space interaction 18:30
    - 18:40 zoom out transition endless super slow 100 m away - 21:00
    - website: text "the zone is the zona" at 20:50
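For reference, the easing curves named in the cues above as plain functions (standard formulas; the usage comment is just an example):

```js
// t goes from 0 to 1 over the duration of the transition
const easeInCirc = (t) => 1 - Math.sqrt(1 - t * t);

const easeInOutCirc = (t) =>
  t < 0.5
    ? (1 - Math.sqrt(1 - (2 * t) ** 2)) / 2
    : (Math.sqrt(1 - (-2 * t + 2) ** 2) + 1) / 2;

// e.g. easeInOutCirc(elapsed / 120) for the 8:20 --> 10:30 transition (2:00 min)
```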

The blurb & about object orientation

Object Oriented Choreography proposes a collaborative performance featuring a dancer wearing a virtual reality headset and the audience itself. At the beginning of the event, the public is invited to log on to o-o-c.org and directly transform the virtual environment in which the performer finds herself. The spectators are an integral part of the performance and contribute to the unfolding of the choreography.

The work offers an approach to technology as a moment of mutual listening: whoever participates is not a simple user anymore, but someone who actively inhabits and creates the technological environment in which the performance happens. The performer is not only connected with each one of the spectators, but also acts as a conduit linking them together. In this way the show re-enacts and explores one of the paradigms of our contemporary world: the Zone.

The Zone is an apparatus composed of people, objects, digital platforms, electromagnetic fields, scattered spaces, and rhythms. An accidental interlocking of logics and logistics, dynamics and rules that allow it to exist, to evolve, and eventually to disappear once the premises that made it possible come undone. The Zone could be an almost fully automated Amazon warehouse as well as the network of shared scooters scattered around the city. It could be a group of riders waiting for orders outside your favorite take-away, or a TikTok house and its followers.

The research that OOC develops is influenced by the logistic and infrastructural aspects that support and constitute this global apparatus. The very title of the work orbits a gray zone between the theoretical context of Object Oriented Ontology and the development paradigm of Object Oriented Programming. Moving between these two poles the performance explores the Zone: both with the categories useful to interact with hyperobjects such as massive digital platforms, and across the different layers of the technological Stack, with a critical approach to software and its infrastructure. A choreography of multiple entities in continuous development.

06/09

Recap 4 december