I am alone with my daughter this week, which means no experiments outside of the kitchen, but at least some time for reading and research during her naps.
From my kitchen window I can see a lot of dead antennas that birds use to rest on.
Bertolt Brecht: Der Rundfunk als Kommunikationsapparat (1932)
This text by Bertolt Brecht is now ninety years old. He argues for a democratisation of the broadcast apparatus. I lack the time to summarize his arguments here, but it is exciting to map them onto the current state of the web and the politics we are subject to today: content blocked by providers, photos deleted by algorithms.
I did further research into radio transmission and how to do it yourself, and I was wondering what else is out there in the sky that we cannot see. Electromagnetic waves? It seems this semester is turning into an Alvin Lucier legacy research project – at least for me.
Max Neuhaus’ Drive-In Music is an interesting piece of radio art.
Jens gave me a hint about Steve Mann’s topic of Metaveillance – seeing sight itself.
Goethe is someone to dig into – Sorry, I left school too early! See you next week with less loose strings.
Expectations and goals for a free project:
Keep it simple – it is not helpful to include too many meta layers and try to squeeze multiple concepts into one short semester – there will be more to follow.
Leave the virtual realm and comfort zone – most of my working time has been spent inside digital space. I want to take this semester as an opportunity to explore the mechanical realm, where simplicity can create complexity. (e.g. Lucier’s Music on a Long Thin Wire, the movie Memoria)
Although it is about exploring, the project should be related to sound. Over all these years I have been experimenting with various visual projects, but I already know sound is my place – or at least a combination of both worlds. I am well aware I am still enrolled in VK, but for the sake of sustainability I expect to end up with something I can develop further afterwards and use for acoustic projects, instead of putting it in a box in my attic.
Gain theoretical knowledge and make the process accessible to others, or at least allow others to experience what my topic of interest was. In this case it will be Just Intonation, an alternative tuning system based on whole-number frequency ratios built from primes, which has fascinated me for two years now. (Plus: why can materialism and phenomenology not be friends?)
After putting all this together, back to point one – keep it simple, you only have eight weeks left.
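Since Just Intonation is the tuning system named above, a minimal sketch of how its intervals can be computed – the scale below is the common 5-limit major scale; the degree labels and function names are my own, not canonical:

```python
import math

# 5-limit just intonation major scale: frequency ratios built from
# the primes 2, 3 and 5.
JI_MAJOR = {
    "unison": (1, 1), "second": (9, 8), "third": (5, 4), "fourth": (4, 3),
    "fifth": (3, 2), "sixth": (5, 3), "seventh": (15, 8), "octave": (2, 1),
}

def cents(num: int, den: int) -> float:
    """Interval size in cents (1200 cents = one octave)."""
    return 1200 * math.log2(num / den)

def ji_frequency(base_hz: float, num: int, den: int) -> float:
    """Frequency of a just interval above a base pitch."""
    return base_hz * num / den

for name, (n, d) in JI_MAJOR.items():
    et = round(cents(n, d) / 100) * 100  # nearest 12-TET step
    print(f"{name:8s} {n}/{d}: {cents(n, d):7.2f} cents "
          f"(deviation from 12-TET: {cents(n, d) - et:+.2f})")
```

The deviations in the last column are exactly what makes JI sound “out of place” to ears trained on equal temperament – the just third, for instance, sits noticeably below its 12-TET neighbour.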
Current sketches, ideas and problems are the following:
Inspired by Karen Barad’s essay On Touching – The Inhuman That Therefore I Am, which starts with the observation that, from a scientist’s perspective, human touch is an electromagnetic interaction with no “real” touch involved (Barad, p. 155). What we actually feel is the electromagnetic repulsion between the electrons of the atoms that make up our fingers and those that make up the mug; two electrons can never be brought into direct contact with each other. But Barad, like quantum field theory, questions the actual cause of this repulsion and what actually happens around and inside us. The void is not empty; it is an ongoing play of in/determinacies, and physical particles are inseparable from the void (Barad, p. 159). To summarize Barad’s explanation: all touching entails an infinite alterity, so that touching the other is touching all others, including the “self”, and touching the “self” entails touching the strangers within.
Additionally there is the German philosopher Georg Simmel’s understanding of hearing, which describes listening as the only sense we cannot shut off. We can close our eyes and hold our nose, but simply covering our ears will not free us from experiencing vibrations, which are essentially sound waves at very low frequencies.
(cf. Simmel, Georg (1999): Georg Simmel Gesamtausgabe, Band 11, p. 730)
R. Murray Schafer, founder of Soundscape Studies, describes hearing as touching at a distance.
(cf. Schafer, R. Murray (1993): The Soundscape: Our Sonic Environment and the Tuning of the World, p. 195)
With TOUCH* I want to bring these two perspectives – quantum physics and phenomenology – together.
*I leave the catchy name open for discussion
First I had the idea of creating a room installation based on various metal strings driven by an oscillating electromagnetic field – guitar players might know this principle from the EBow, which is used to create long sustaining notes. At the same time I wanted to sense the electromagnetic field of humans to control the amplification and overtones of the individual strings inside the space.
I found out that the human electromagnetic field is in fact much weaker than that of most electronic devices, so the installation would mainly react to the electronic wearables people carry with them. I did some research into OpenCV, Kinect and infrared sensors, but most of those ideas ended up being way too technical and complicated again. What I want is to convey a simple concept that evolves into a sensual experience.
The electromagnetic vibration is just the starting point of the feedback loop I want to create here.
It is important to leave the feedback loop open and let it interact with its surroundings, to come back to Barad’s explanation of touch. I am aware there is much more to explore in the realm of science, ethics and sound, but before I get lost in there I should finally start with a first prototype and see how to translate those ideas into the real: I will first set up strings of various lengths, “tune” them, and see how I can drive them with the electromagnetic field of solenoids.
The room installation and the measurement of electromagnetic fields was the initial idea. An alternative would be a single sound object inspired by the monochord, which first showed up in Sumerian writings and was reinvented by Pythagoras. “A string, tied at A, is kept in tension by W, a suspended weight, and two bridges, B and the movable bridge C, while D is a freely moving wheel; density may be tested by using different strings.” (Source: https://en.wikipedia.org/wiki/Monochord)
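The monochord geometry maps directly onto the physics of an ideal string (Mersenne’s law) – a small sketch for my own tuning experiments; all numbers below are illustrative, not measured values:

```python
import math

def string_frequency(length_m: float, tension_n: float, mu_kg_per_m: float) -> float:
    """Fundamental of an ideal string: f = (1 / 2L) * sqrt(T / mu),
    where L is the vibrating length (bridge B to the movable bridge C),
    T the tension set by the suspended weight W, and mu the string's
    mass per unit length (its 'density')."""
    return math.sqrt(tension_n / mu_kg_per_m) / (2 * length_m)

# Moving bridge C to halve the vibrating length doubles the pitch;
# to double the pitch via the weight W, the tension must quadruple.
f_full = string_frequency(1.0, 60.0, 0.001)   # ~122 Hz
f_half = string_frequency(0.5, 60.0, 0.001)   # one octave up
```

This is also why the monochord is the classic demonstration of Just Intonation: the simple length ratios along the string are exactly the whole-number frequency ratios of the tuning.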
The acoustic signal will be picked up and fed into an amp stack, which will fill the space with sound. The volume will be set to a level where the sound itself drives the rods again and becomes perceptible on a physical level. This means not only hearing itself – which, coming back to Simmel and Schafer, is a form of touch – but also feeling it in one’s stomach. This physical experience is the closing link between the scientific definition of touch and acoustic signals we can experience with our bodies.
By opening the acoustic system – playing the picked-up sound into the space and letting it interact with the installation’s strings and its surroundings, and vice versa – the sound is not only “touching” itself through electromagnetic vibrations and other physical parameters, it is also in touch with the other.
Temperature, pressure, wind, humans…
This system should be seen as the foundation for a project open for further development: the (automatic) sliders already mentioned, making it playable as an instrument, maybe even inviting other artists to compose for the sound installation.
Ok, nice – but can’t we just take a guitar?
I want to free the installation from the symbolic connotations attached to creating sound signals. Additionally, an object is more open for modification and allows the observer to interact with it in an open way, while guitars are already constructed to be held and played in a certain manner.
Is it possible to follow my train of thought, or does it just sound like: oh, you want to make something loud and are trying to find some kind of justification for it?
What is your experience with similar student projects? Do you know someone who could give tips, or does this work remind you of someone else’s? In the context of UdK I only know about Yair’s project from 2012 and GenComp’s latest “Six Strings” project, which has a different technical background. (Plan B)
Basic circuit of Lucier’s concept
An interpretation of Lucier’s Music On A Long Thin Wire.
Two or more wires in a space, to experience diffraction patterns through phasing signals or to create rhythmic patterns by tuning the strings to frequencies close to each other.
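Tuning two wires close together produces beating at the difference of their frequencies – a minimal sketch of that effect (the 220/223 Hz pair is a hypothetical example, not a planned tuning):

```python
import numpy as np

def beat_signal(f1: float, f2: float, duration: float = 2.0, sr: int = 44100) -> np.ndarray:
    """Sum of two sine waves; the amplitude envelope pulses at |f1 - f2| Hz."""
    t = np.arange(int(duration * sr)) / sr
    return np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Two strings at 220 Hz and 223 Hz beat three times per second;
# the closer the tuning, the slower the rhythmic pulse.
signal = beat_signal(220.0, 223.0)
beat_rate = abs(220.0 - 223.0)  # 3 Hz
```

So the “rhythmical pattern” is not played, it emerges from the tuning itself – detuning one wire slightly is the whole compositional gesture.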
Technically this seems much simpler for me to implement. I would spend more time experimenting with the actual room setup and the sound itself than on the mechanical part; I would somehow just need to create a story or event around it. My first idea, in contrast, is much more about exploration and process, which I would actually prefer, but it is also riskier in terms of time management.
I think more possibilities will reveal themselves through the process of building – time to take action.
I started the week by trying to send audio through a basic 1D convolutional neural network with random weights and no additional training. It was mostly about finding out whether it is possible to implement something similar to David Tudor’s Neural Synthesis instrument.
The first thing I wanted to figure out is how the signal path affects the incoming audio when the output is continuously fed back into the input. This process was not happening in real time. I spent a while setting up an automation structure (and improving my Python skills ; ) for processing and saving the files, so that I can just set the parameters and check the output later in the corresponding folder.
It is super boring, but important for generative processes – otherwise I will end up with loads of files in different locations.
At first the output sounded as if it was not affected, but after a few iterations the signal already started to “degrade”, and it seems the network’s own resonant frequencies took over.
The output reminded me a lot of Alvin Lucier’s “I Am Sitting in a Room”, in which he recorded himself, played the recording back into the room, recorded that, played it back and recorded it again… on and on… until the audible audio becomes a drone based on the room’s resonant frequencies.
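The iteration idea can be sketched in a few lines of numpy – a single fixed random 1-D convolution with a tanh nonlinearity, fed its own output over and over. This is a simplification of the actual Conv1D network; the kernel size, scaling and names are my own choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_conv_layer(signal: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """One 'layer': 1-D convolution with fixed random weights plus a
    tanh nonlinearity, roughly mimicking one untrained Conv1D layer."""
    return np.tanh(np.convolve(signal, kernel, mode="same"))

def iterate(signal: np.ndarray, kernel: np.ndarray, n: int) -> np.ndarray:
    """Feed the output back into the input n times, like re-recording a
    playback in a room; normalise so the level does not collapse."""
    out = signal
    for _ in range(n):
        out = random_conv_layer(out, kernel)
        out = out / np.max(np.abs(out))
    return out

kernel = 0.1 * rng.normal(size=33)  # small weights keep tanh quasi-linear
noise = rng.normal(size=44100)      # one 'second' of white noise
drone = iterate(noise, kernel, 50)  # the kernel's resonances take over
```

After enough passes the spectrum collapses onto the peaks of the random filter’s frequency response – the same mechanism as Lucier’s room, with the kernel playing the role of the room.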
So far so good – it was an experiment, but don’t we already know that continuous iterations through the same “apparatus” will leave traces on the flowing matter?
This brings me to Karen Barad and her concept of agential realism. Barad understands apparatuses as practices that shape matter and its representation. They are not neutral tools of observation; they are structures that already anticipate the outcome.
The constant reconfiguration of matter, as well as its shift in meaning, was also one of the main topics of Cox’s publications.
In his paper Sound Art and the Sonic Unconscious he argues:
“noise as the ground, the condition of possibility for every significant sound, as that from which all speech, music and signal emerges and to which it returns.”
Noise is matter. Cox refers to Leibniz, who wrote:
As I already mentioned last week, the understanding of different tunings and their background is something that deserves more focus, and not only from a musical perspective – for example, when designing interfaces for instruments. The first thought I had – since Barad was speaking about the flow of matter through an apparatus – was to build some kind of wearable, portable resonator that amplifies various tuning scales, focuses on certain frequency ranges, and makes it possible to explore places with another “perspective” – or tuning.
Similar to sound walks, it might be possible to explore spaces sonically, but focused on specific details. This also means “excluding” frequencies!
Side note: I am keen on “making” machines for the sake of just building a machine, or some kind of prototype that merely carries the concept of something a computer can already do.
What grips me here is a kind of concept, filter or algorithm to look/tune into the constantly changing constellation of matter (sound) around us. This is something I should play and experiment with this week, trying to orient myself inside the apparatus of the NewMedia Klasse – and, for sure, dig deeper into Barad and apparatuses.
In my video I already spoke about my background and dropped a few topics I would like to deal with inside this project: space, continuity, states (Zustände), drone, noise, silence, tuning, feedback systems.
Tuning: to tune an instrument outside of the Western 12-tone scale. To play sounds that do not fit our usual understanding of sounding “in place”/Western, without resorting to cultural appropriation. To raise awareness that there is more than what our culture defines as “well tuned”.
Further information on this topic: LEIMMA by Khyam Allami.
What I would like to achieve is a feedback loop/interaction between a surrounding space and the sound-creating object. What I have to bring together are concepts of sound, space, cybernetics and interaction.
How do I define interaction?
What I don’t want is to set up some kind of installation where people randomly poke at buttons, touchpads and other interfaces, making all kinds of bleeps and blobs. I am intending something more durational – something that cannot be grasped with a short attention span. It has to be continuous – not direct – but the longer you listen and interact, the more you find out about the system’s dynamics.
How could people participate in space?
Physical interaction: touching, moving, knocking/slapping, speaking, singing, shouting, whispering, fluids, breathing, pulsing -> sensed by: sensors, counters, cameras, image recognition, microphones, smells, others -> in interaction through devices (interfaces), social media, or passively by being measured.
Buzzword of the year 2020, I guess, but there are some interesting aspects to this whole topic.
Creating sounds from seeds fed into a generator trained on music/vocals is pretty easy to access these days, in commercial (https://www.aiva.ai/) as well as experimental form (Unagan/Jukebox/MelGAN). All of these need a lot of time, energy and computational power to create models based on a well-sorted dataset before they deliver satisfactory results. The human factor is still needed for selecting, processing and arranging the generative output. It is also important to feed in additional variables and data, otherwise the system repeats itself (sounds similar to the input). Emptyset and Holly Herndon are musical examples -> Herndon describes neural networks as collaborators.
What most artists have in common is a certain “vagueness” when explaining their process – playing with the mystery of artificial intelligence for promotional reasons.
But the idea of collaborating with a set of data – a neural network – is still kind of intriguing for me. Learning more about the background of machine learning made me aware of how much value my data has for big companies, and it also explains why the average streaming productions are lacking any originality these days 😉 (-> calculating the mean value)
One issue with sound generation by neural networks is its quite tedious and slow workflow. Either one uses pre-trained models, or one takes the time and resources to record sounds, train the model, and feed the output back into the model.
David Tudor – Neural Synthesis
Tudor used one of the first Intel neural network chips as the core of a synthesizer, with various insert points for feedback patching into the network’s neurons:
Especially the following diagram made me curious, since it has seven outputs which can be streamed into the room via speakers and then fed back into the signal path. Properties of the space, other sounds, objects and subjects become part of the signal chain and can interact with the network, which creates signals based on the summed output functions of its neurons.
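To make that topology concrete for myself: a toy sketch in plain numpy of a fixed-weight network whose several outputs are mixed back into the input – here the “room” is crudely stood in for by averaging the outputs. This is not the actual ETANN architecture; every size, name and scaling below is my own assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

class FeedbackNet:
    """Toy fixed-weight network with Tudor-style feedback patching:
    seven outputs whose mix is fed back into the input. A sketch of
    the topology only, not of the original chip."""
    def __init__(self, n_inputs=4, n_neurons=8, n_outputs=7, feedback=0.5):
        self.w_in = rng.normal(scale=0.5, size=(n_neurons, n_inputs))
        self.w_out = rng.normal(scale=0.5, size=(n_outputs, n_neurons))
        self.feedback = feedback
        self.prev = np.zeros(n_inputs)

    def step(self, excitation: np.ndarray) -> np.ndarray:
        """One time step: external excitation plus fed-back output."""
        x = excitation + self.feedback * self.prev
        h = np.tanh(self.w_in @ x)      # neuron layer
        y = np.tanh(self.w_out @ h)     # the seven 'speaker' outputs
        # the room would mix the outputs; here we just average them back
        self.prev = np.full_like(self.prev, y.mean())
        return y

net = FeedbackNet()
outputs = np.array([net.step(rng.normal(size=4) * 0.1) for _ in range(1000)])
```

In the real setup the averaging step would be replaced by microphones in the space, so room acoustics, other sounds and bodies become part of the feedback path.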
Unfortunately it is hard to get exactly the same chip architecture these days (the project was developed in the 1990s). In theory it should be possible to implement something similar in Python with Keras, but I lack the skills, especially with real-time processing. I am pretty sure I can get some feedback on this idea from George, who teaches AI seminars at Kunst und Medien. Maybe this is something for the experiment? What also makes neural networks interesting for me is the “art” of creating datasets – also understood as creating a reality based on the programmer’s values. For example, renting out apartments based on income or race rather than the individual situation of the applicant. Also, how our own perception and understanding of our surroundings is determined by “training” is an important strand of thought that comes up when dealing with this topic.
-> What I want to achieve is an immersive acoustic experience, related to the environment and the acoustic properties of the space.
Acoustic recognition could be a tool to let a neural network trigger events, which are sent out into the surroundings; the surroundings can react to and interact with the triggered sound, which then feeds back into the system and is analysed again.
Ideally the “processed” sound will change the output of the system again. Or maybe it stays static until larger sound events occur?
Where could this place be? Public space?
This reminds me of Neuhaus’ sound installation Times Square, which is set up underneath the grates of the subway.
It plays back a continuous drone, almost invisible inside the soundscape of New York.
Only when you listen closer, take your time and step out of the city’s pulse do you experience the installation consciously.
What would it mean if something like this could create more than just a moment of pause? A moment of continuity we are not aware of, something that evolves beneath our usual perception of time and the sensory overflow of the city. Some kind of ongoing composition, slowly modulated by its surroundings?
What is my statement, what do I want to achieve?
I want to experience time/now. Taking time – stepping back. Becoming aware of the surrounding and its changes. Following a tone, not being able to leave or focus on other things -> at least for a moment and without force.
-> Sleepovers. (I think Fang and other students did something similar on the island last year – I was once invited to “perform” a DJ set at a venue where everyone was sleeping/listening throughout the night.)
-> Performance by Kali Malone, Stephen O’Malley and Lucy Railton at Haus der Kunst.
“The longer you listen to a tone the more detail you are hearing.”
Kali Malone 30.04.2022 Artist Talk
The first entry of my journal for the summer semester 2022.
Further reading: Macbeth Documentation