Approaches to Space and Sound

Interview with Raviv Ganchrow
Arie Altena

Raviv Ganchrow creates sound installations that make space audible through sound interventions. In his work he researches the relation between space and sound at the most fundamental level. He bridges the fields of architecture and sound, music and spatiality, and studied both architecture and sonology. When talking about his work he touches just as frequently on art history and architectural theory as he does on modern classical music, the history of sound recording technology and the changes in listening behaviour. I interviewed him in his house in Amsterdam in October 2009.

AA: You have a background in architecture and sonology, and you are primarily interested in the relation between space and sound. How do you connect those disciplines?

RG: The relation between sound and architecture has been a blind spot for a long time, at least in the way architecture is being taught. My personal interest in sound extends to a period before my studies in architecture when, back in Israel, I was creating sculpture and installations that often had a sonic component. The particular school I attended in New York to study architecture allowed for independent research, and already in the first year I was auditing a course on audiology at a nearby medical school because I was interested in the biological structure of the ear and the listening apparatus. At the time I was trying to find material on sound and architecture, but at the school of architecture the only book on acoustics was Wallace C. Sabine’s Collected Papers on Acoustics, an original publication from 1923, filled with dust, and it seemed as if nobody had ever borrowed it from the library. I was trying to piece together a history of sound and architecture that I thought was there, but that turned out to be virtually nonexistent. At that time it existed only in pockets that were not necessarily connected. Only in the last ten years or so has there been a substantial increase in the number of books published around this topic.

AA: What are the pockets where thinking on sound and architecture was present? I immediately think of the architecture of concert halls, and the spatialization of sound in electroacoustic music.

RG: Various disciplines have touched on the connection between sound and space and on relations between listening and the environment of sound. It ranges from anthropology and physics to art history, media studies and music theory. You can find interesting aspects in less obvious fields, such as archaeology, which has recently coined the term ‘archaeoacoustics’. While studying in New York I was rather naive about the European history of early electronic music, which had been dealing with questions of phonography and spatialization since the late 1940s, not to mention the earlier histories of polychoral music. Since my time at Sonology, I have become much more attuned to the connection between sound and space in the history of music, as well as in the development of purpose-built acoustic spaces, for instance the history of the concert hall. Greek amphitheatres already show a rather clear understanding of tectonic arrangements that facilitate an efficient transmission of voice. But an architectural construction founded upon an applied knowledge of acoustics is a rather recent development. It was only in the late nineteenth century that the acoustic fingerprint of the Neues Gewandhaus in Leipzig was successfully reproduced in the design of the Boston Symphony Hall by utilizing Wallace Sabine’s newly discovered coefficients of absorption. And in many ways we are still replicating the aural yardstick propagated by the Leipzig ‘shoe box’ design. A more obscure instance of acoustics applied to building practices can be found in the example of the so-called sound mirrors – a proposed network of listening structures forming an early warning system, or listening shield, along the eastern coast of Britain. In the late 1920s the military built several large-scale prototypes of these mirrors. It ended up being a transitional technology, so they were never really used in wartime, but the project achieved a sophisticated implementation of acoustic principles by relating frequency sizes to the dimensioning of built structures, as well as achieving amplification using only physical acoustics. I conducted research into the remaining mirrors at the Denge site on the Kent coast and have published some thoughts on the topic. Aside from looking into how these structures operate physically, I was interested in reading the case of the sound mirrors as a formative moment within the broader reconfiguration of listening habits – when an optic model of viewing is replaced with an acoustic model of listening. There are other instances where one can possibly locate paradigmatic shifts in the understandings of sound in relation to the techniques of listening. Early collaborations between Marshall McLuhan and the anthropologist Edmund Carpenter produced a pointed critique of the ocular-centric nature of Western cultures. Some of the most compelling evidence they introduce comes from a comparison of navigation methods and approaches to depiction between the Inuit culture of Northern Canada – where there is a much greater reliance on the ear – and common practices within our own traditions. According to their reading, the reliance on the ear constitutes a completely different conception of space, an ‘Acoustic Space’, that can be contrasted with our own normative ‘Ocular Space’.
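
A minimal sketch of the calculation that Sabine’s coefficients of absorption feed into: his reverberation-time formula, RT60 ≈ 0.161 · V / A, where V is the room volume in cubic metres and A is the sum of each surface area multiplied by its absorption coefficient. The figures below are illustrative placeholders, not measurements of any actual hall.

```python
def sabine_rt60(volume_m3, surfaces):
    """Estimate RT60 in seconds from room volume and (area_m2, absorption_coefficient) pairs."""
    total_absorption = sum(area * alpha for area, alpha in surfaces)  # in sabins (m^2)
    return 0.161 * volume_m3 / total_absorption

# Hypothetical shoebox hall: wood-panelled walls, plaster ceiling, occupied seating.
surfaces = [
    (1200.0, 0.15),  # walls
    (600.0, 0.04),   # ceiling
    (600.0, 0.80),   # floor with audience
]
print(f"RT60 ≈ {sabine_rt60(8000.0, surfaces):.2f} s")
```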

AA: Why has sound played such a small role in the realm of architecture until now?

RG: One of the problems with architecture and sound is that architecture, as a design practice, must operate primarily in a realm of representation. The design of a building is usually completed long before construction begins. The challenge for the architect is to be able to comprehend and convey the characteristics of an ‘experience’ solely from within the realms of drawings, models and possibly writing, in other words through languages of representation. Sound recording, which enables us to ‘capture’ and ‘reproduce’ sounds, has existed only since Edison’s invention in 1877; this is a recent event in terms of the history of architecture. It is significant that only since the development of the phonograph have we literally been able to hold a piece of sound and replay it. In contrast, preoccupations with vision and light in architecture go back to antiquity, as evidenced in works such as Euclid’s or Ibn al-Haytham’s books on optics. The development of lenses in the Middle Ages and Renaissance allowed an exact understanding and control of light phenomena. Subsequently the knowledge of foreshortening and the understanding of how shadows fold around three-dimensional surfaces were utilized in architectural designs quite early. In the European context, the understanding of central point perspective, along with the development of different forms of representation and drawing during and after the Renaissance, allowed for complex orchestrations of ocular–spatial events. The techniques to do the same with sound were simply not available at that time. In that sense it may not only be an intentional ocular-centrism in architecture; the delayed entry of sound into the design process may also be due to a lack of proper tools for handling sound. But I also think there is a more theoretical issue that needs to be addressed: sonic preoccupations will only become relevant in building practices when acoustics are incorporated fundamentally as an equal participant in the structuring of ‘form’; in other words, as an essential component in the production of space. The shift towards this kind of understanding of ‘form’ may be asking for too much; nonetheless, the tools that allow for such articulations are under development.

AA: Isn’t it possible now to make interactive acoustic models of a space with computer programs, and test, for instance, the reverberation of spaces before building them...

RG: It’s moving in that direction. The technique of ray-tracing, incorporated in image-rendering programs, is also applicable to calculations in sound, and there are already programs that can produce such acoustic simulations. The problem is that it is very calculation-intensive. You can do it for one point in space, but you really have to virtually walk through the space to perceive the acoustic differences. I think it will take a few more years before we see such techniques incorporated into standard computer drafting programs. That said, I’m somewhat sceptical that acoustics will be incorporated in a meaningful manner into commercial CAD packages. For instance, just look at the transformation from drafting boards to computer screens: in terms of visual representation, the default rendering of space in CAD programs is based on a Renaissance idea of linear perspective and axonometric projection. Most of these programs have naturalized representation to such an extent that it reduces the potency of the representational ‘hinge’ in the development of a project. By using the standardized interface, one is immediately working in a quasi-3D space that has many presumptions about how the eye works and how space ‘is’. But representation is never neutral and this is only one way of imagining space. With sound there is a very different perception of space in the first place; it lies much more in the mediation between body and movement. It’s about engaging space. If you want to have a CAD representation that engages this notion of space, I would argue that the interfaces have to be fundamentally altered, possibly in a manner that begins to affect the way ocular space is represented as well.
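
A minimal sketch of the ‘one point in space’ calculation mentioned above, assuming a first-order image-source model of a rectangular room rather than a full ray-tracing engine: it lists only the direct path and the six first wall reflections for a single listener position, with 1/r amplitude falloff and no wall absorption.

```python
import math

C = 343.0  # speed of sound, m/s

def first_reflections(room, src, lst):
    """Yield (delay_s, relative_level) for the direct sound and the six first wall reflections."""
    sources = [src]
    # Mirror the source once across each of the six room boundaries.
    for axis, size in enumerate(room):
        for wall in (0.0, size):
            img = list(src)
            img[axis] = 2 * wall - img[axis]
            sources.append(tuple(img))
    d_direct = math.dist(src, lst)
    for s in sources:
        d = math.dist(s, lst)
        yield d / C, d_direct / d  # 1/r falloff relative to the direct path

room = (10.0, 6.0, 3.0)  # illustrative shoebox room, metres
for delay, level in first_reflections(room, src=(2.0, 3.0, 1.5), lst=(8.0, 2.0, 1.5)):
    print(f"{delay * 1000:6.1f} ms   level {level:.2f}")
```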

AA: You were looking for an architecture of sound. Can you give an example of a project you were working on during your time in New York?

RG: My thesis project started with the question of relations between ‘form’ and ‘acoustics’. Passages is an architectural design that dealt with physical acoustics and intentionally excluded the loudspeaker. The loudspeaker is a Pandora’s box for space and acoustics because it can literally create space without the need for physical structure. In terms of ‘form’, acoustics relates to transformations in reverberation, diffusion and diffraction, but the resulting sonic formations depend on what happens in the environment at any given moment. I was very attracted to the idea of a perpetually unfinished form, a form that is continually being finished by events occurring beyond the control of the architect. You can control the basis of interactions, but the form sound takes in architectural terms is never complete and is based on a continual renewal.

AA: Isn’t that a description of how sound behaves in any space?

RG: Yes, but I was interested in incorporating this unruly aspect as an intentional part of the design. The proposal intended to create a container that organizes a certain orchestration of sound but not the audible events. Passages is a design for a pedestrian underpass at a busy urban intersection. The design consisted of one continuous open field of interactions – a fitting tectonic for the nature of acoustics. The fundamental difference between light and sound is that sound has no clear-cut borders. If you put up a wall you are visually separated from the space that is behind it. But sound will always manage to seep through solid surfaces; it changes its form and filters its spectrum, but there are no hard cuts. In Passages there are no defined paths through the space. Some areas are tuned to vocal frequencies, increasing the awareness of ‘self-presence’, while others filter traffic sounds. There is a sound mirror embedded in a portal that transfers sound to another precise location. There is a silenced zone constructed of sound-damping materials. There is a very reverberant drum space with a metallic walkway above it. There are Helmholtz resonators of different dimensions that go down to the traffic, so you get a sequence of hissing sounds. The design was an interesting experiment, and it remains a prototype. Once I finished it I didn’t have complete trust in the experiential outcome, thinking that it was possibly too subtle an intervention. I was afraid these careful tunings and orchestrations would go unnoticed. In hindsight, what was overlooked are the structuring capacities of listening itself. The visual training through architecture had certainly enhanced my vision, but what I was not aware of at the time was the fact that listening and the ear are equally malleable. Listening differently structures the audible world in a different way. Attention to hearing literally changes the experience of ‘surroundings’, possibly in a more potent manner than equivalent tunings of vision, because with vision you always have the relatively static material referents to fall back upon. In sound, the space you experience is in flux – it is exactly what you make of it. It is a quintessential perceiver-centric space. In that sense addressing the sonic aspect of architecture is not so much about adding sound into the built environment; it is really about rethinking listening.
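
A hedged sketch of the kind of tuning such resonators involve, using the standard Helmholtz resonance formula f = (c / 2π) · sqrt(A / (V · L_eff)); the neck and cavity dimensions below are hypothetical, chosen only to show how cavity size shifts the resonant pitch, and are not the dimensions used in Passages.

```python
import math

C = 343.0  # speed of sound in air, m/s

def helmholtz_hz(neck_area_m2, neck_length_m, cavity_volume_m3, neck_radius_m):
    # End correction: roughly 0.85 * radius added at each (flanged) end of the neck.
    l_eff = neck_length_m + 2 * 0.85 * neck_radius_m
    return (C / (2 * math.pi)) * math.sqrt(neck_area_m2 / (cavity_volume_m3 * l_eff))

r = 0.03  # 6 cm diameter neck
for volume in (0.002, 0.02, 0.2):  # cavity volumes in cubic metres
    f = helmholtz_hz(math.pi * r**2, neck_length_m=0.1, cavity_volume_m3=volume, neck_radius_m=r)
    print(f"cavity {volume:>5} m^3  ->  {f:6.1f} Hz")
```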

AA: In Passages you propose orchestrating a sound experience for people who are passing by using the everyday urban sounds, specifically traffic. It is not about cancelling out those ‘bad’ sounds, which would have been maybe closer to the approach of the R. Murray Schafer school of acoustic ecology...

RG: In The Soundscape of Modernity: Architectural Acoustics and the Culture of Listening in America, 1900–1933, Emily Thompson draws attention to the fact that the early history of architectural acoustics is contemporaneous with concerns for noise abatement in the urban context. The minute you can measure decibels, you can say things are ‘too loud’. Thompson shows how the early Sabine work on acoustics and the first electronic recording devices that could be used outdoors in urban spaces coincided with the early skyscraper constructions and an over-densification of the street-level arteries in New York. So the history of early building acoustics is essentially intertwined with the early days of the noise abatement movement. It is somehow unfortunate that a similar narrative of sound suppression is found in acoustic ecology. One of the critiques I would have towards the acoustic ecology movement is its moralizing of sound: one of the key points of departure for this movement is a presumed degradation of a natural acoustic environment. I would challenge such a moralistic view of the ambient soundscape and even more so the implicit idea that there is such a thing as a ‘natural acoustic’ space. There is a wonderful moment in an interview with John Cage, where he is seated in his New York apartment overlooking traffic, and he remarks how pleasurable these sounds are because they are different every time you listen to them. On the other hand, one very important aspect of R. Murray Schafer’s writings is the idea of learning to listen to sounds. If tonality is taught as the basis for music, you have to spend the rest of your life trying to get away from tonal systems. One of the problems of common practice tonal music is that it has its basis in periodic signals and relations between values of periodicity. With the exception of ‘timbre’, such ideas of music draw a very hard line around what counts as music and consider all non-periodic signals to be ‘noise’. If you teach children to listen, rather than teaching ‘scales’, tonality becomes just one option amongst many other musical possibilities. That is why timbre was so important to the Futurists as well as to several modernist and contemporary classical composers. The moment you start to listen to timbre, you are opening the door to listening to the environment. I believe that listening itself is a synthesis of multiple, simultaneous factors – it is inclusive of an acoustic environment as well as of personal, subjective and cultural influences. The more I learn to listen in a certain way, the more the everyday environment surrounding me seems to have changed. When I am bicycling I am attentive to the difference in reflected ambient sound bouncing off various surfaces in the city. You can hear the difference between a brick wall and a facade with vegetation – a glass bus-stop along the bike path is an enormous acoustic event. I am listening to those kinds of things. There was a certain moment, during my sonology research, when I began to notice that ambient sounds would reflexively call up in my imagination equivalent waveforms or amplitude envelopes; maybe it had to do with crossing some comprehensive threshold in relation to sound – these noises were not only linked to the objects that produced them but also had independent, palpable ‘shapes’. And the same applies to the spatial dimensions of sound: once you understand this invisible spatial layer of interactions it begins to inform and shape your experience. When I am in a concert nowadays, my attention is not focused on the stage anymore, because I have become so attentive to the three-dimensional spatial qualities of sound.

AA: Is there a heightened awareness of space in contemporary music? I see it in many concerts at DNK in Amsterdam, a series that has also featured your work. There seems to be a redefinition of listening that comes out of noise music, and there is also the interest in drones that has resurfaced in the last few years...

RG: The example of drones is a very good one. From a compositional point of view drones have been attacked for their structural simplicity and avoidance of compositional questions. I see that as a misunderstanding of the genre – drones take the experience of sound as the starting point of the music, instead of approaching music as a structuring of sounds that can then be experienced. To listen to drones is to be explicitly immersed in a fog of sound. Likewise, music that works with spatiality demands an effort on the part of the audience, and I think people are hungry for such experiences today. In these kinds of aural situations there is no possibility of ‘passive’ listening – or consumptive listening. Such difficult listening experiences can be rewarding just for the fact that they cannot be reproduced in other environments; it really has to do with the experience of an event in a particular location at a particular moment in time. In other forms of spatial performance, as is the case with drones, the social context of music is brought to the fore: you come together to experience something unusual that cannot be recreated in the privatized audio environments of headphones or stereo systems. Maybe it has to do with an updating of certain notions of ‘ritual’. I see it as an important counterbalance to today’s total accessibility of music. Another influencing factor is that, now that multi-channel FireWire sound interfaces have become cheaper, musicians and composers are experimenting even more with multi-source sound, though often without clear intentions or without knowing exactly what is happening. It breeds new types of spatial music on which it is still difficult to comment, as these are still in an experimental phase of development. That said, I try to distance myself a bit from discussions of spatialization. Adding more loudspeakers or sources to a piece of music can also be a smoke screen of spectacularity that covers up half-baked aesthetic intentions, and in general I hesitate to endorse space as an essential turn in music. In many modes of music the question of space is not a relevant question at all, and I think it is still debatable whether space is really a musical parameter in and of itself. We should not forget that even a single sound from one loudspeaker, or coming out of one side of a headphone in your ear, is already completely spatial. There is no such thing as non-spatial sound. If the dimensionality of the sound is important, or the intelligibility of complexity is, then it can be interesting to work with spatialization.

AA: Can you give an example of music in which space is indeed important?

RG: My quintessential examples are the later compositions by Luigi Nono, from the moment he starts working in Freiburg using live electronics. He then gets into an idea of space that incorporates spatialization yet is not at all about it. For instance, his opera Prometeo. His string quartet Fragmente – Stille, an Diotima also has an incredible spatial intensity that has nothing to do with spatialization. These works have to do with a certain mode of listening and a very different appearance of tonality in relation to the experience of the sounds.

AA: Does it have anything to do with the way that he deals with time in those works?

RG: How exactly he does it is a complex question. I would suggest that in this work, space and time become one continuous mass of space-time. Prometeo has a very pronounced spatial agenda in the way it deals with the placement of audience and performers. There is an understanding of sound in the Alvin Lucier sense of tones-occupying-space, but it doesn’t take that as a starting point for the piece, as Lucier does, which for me is the major difference between Lucier and Nono.

AA: What do you think about the work of Iannis Xenakis? As an architect and composer, it seems that he was interested in some of the same issues that you are exploring...

RG: I am not an expert on Xenakis. I seem to be missing the mathematics gene in my biological makeup to really understand his Formalized Music. But from what I have encountered, it would seem that his stochastic approach to sound organization is one of his more radical contributions, more so, I would say, than his formulations of architecture and sound. I find the influence of his background as an engineer very interesting. His thesis was about reinforced concrete, then a cutting-edge technology and a very technical, mathematics-intense subject. I once gave a speculative reading of his work in which I compared his approach to sound distribution with the understanding of tension and compression distribution in reinforced concrete. If you understand those calculations, you can find many analogies to the way in which he structures and distributes sound. If Varèse has ‘chemical’ or ‘atmospheric’ understandings of spatial sound then I would say Xenakis has ‘tensile’ or ‘thermodynamic’ understandings. As an expert on cast concrete you must grasp that solid material is actually very much alive; it is full of forces that are interacting with each other, forces that are locked into the solid material. In general buildings tend to breathe, they are alive just in terms of the contraction and expansion through the influence of light in the day–night cycle. My proposition was that if you understand solid materials in terms of fluid dynamics, you would better understand his distribution patterns of sound as well as light in three-dimensional space. His Diatope project, from 1978, which was presented in the forecourt of the Centre Pompidou, is interesting seen from the tradition of the Gesamtkunstwerk, though that is a history I am critical of, particularly the idea of a ‘total experience’ where all aspects of the work of art are under supervision. Still, Xenakis remains an enigmatic figure. He was not part of the media art discourse when he was creating his audiovisual environments and he was never really part of either the French or the German camps of electronic music; it is only now that we are beginning to contextualize his projects. He was an outsider in the best sense of the word, a touchstone for conversations between architecture and music.

AA: The spatial aspect might hardly have been researched by composers in the past; on the other hand, it was already there in Renaissance music, for instance...

RG: In the Western tradition it is probably true that the spatial aspect has not often played a major role, but Javanese gamelan and Balinese ketjak music, for example, are very spatial, and without wanting to oversimplify or exoticize these forms of music, there are some striking parallels to the acoustic vernacular of rainforests: those acousmatic listening situations where dense vegetation confines vision to the close-at-hand while the ear perceives all these different layers of sound events, from the drone of cicadas to the yelps of monkeys and the pattering of rain on the canopy. Likewise the Indian Khyal tradition of vocal music, and of course, in connection to drones, cosmologies of vibration are related to certain aspects of Indian music. In the Western tradition we do have early evidence of spatial orientations in music with the cori spezzati in Venice. But the development of that style is closely connected to the history of Venice and its affluence. There was a certain excessiveness to Venice at the time that allowed the cori spezzati to take hold. The layout of St Mark’s Basilica was very specific and allowed for the distribution of various choir lofts throughout the interior space, and composers started experimenting with space–sound relationships. The technique reached its peak in the work of Giovanni Gabrieli and the style migrated to various other centres in Europe. One explanation that I have heard Lucier voice in an interview is that the invention of modern notation techniques effectively cut off the Western tradition of music from its spatial history. The invention of print allowed for the spread of notation, but also necessitated its standardization. Because one only notates time and pitch, and not space, the spatial aspects of music disappear from view. So while notation made it possible to transmit a musical tradition over great geographic distances, without having to transport the instruments and the musicians, at the same time it started to exclude from music aspects that were already at a very high level of development in Gregorian chanting – another important form of spatial music – as well as in the cori spezzati.

AA: Is this changing now?

RG: There is indeed more interest in sound and space in general, and in these early histories as well. There also seems to be an openness on the part of the general listening public to experience such auditory events. I believe this is partly connected to the manner in which technical media influence the way we are listening. In my youth, I was hardwired to a Walkman and there are certain periods in my life where I can point out the one cassette that constitutes the soundtrack for that period. Then for many years I had no portable player, until a few years ago when I bought an MP3 player for recording purposes, but also loaded some music onto it. I only listened to it once, outdoors while walking, and all I could hear was the sound of the music, occurring inside my ears, overlapping with the sounds of the environment. It was one continuous thing, and I said to myself, maybe listening to the sounds of the environment is enough. Today’s vernacular listening habits are very peculiar. People spend a substantial portion of their lives locked up in cars, listening to music. I find this very interesting because you have the sound of the motor, the traffic and the music together. If you become aware of that, they all start to act together. An important flip of consciousness towards sound is happening now through our listening behaviours; it has to do with the contemporary use of sound technologies. The current lo-fi tendencies in audio development amaze me. I really enjoy seeing the ‘hoodies’ on the tram listening to their bass-heavy hip-hop blaring out of the tiny loudspeakers on cell phones. The boombox phenomenon I could sort of understand, but this is really bizarre. Of course it is about the assertion of a personal space, an expression of identity, but at the same time such a listening experience nullifies the entire audiophile discussion of ‘qualities’ in musical reproduction. This is a different form of listening altogether.

AA: What do you think about 5.1 surround sound in cinemas, another contemporary listening situation where you can have sound coming from the back whereas the image is in front of you?

RG: This technology does not really have to do with spatial understandings of sound. In my view it is an extension of ideas of stereo, which in itself is an analogy to binocular vision and a ‘picture window’ idea of sound. Stereo setups fit very well with the frontality of the standard cinematic experience. Surround sound formats merely extend that frontality to include sounds-to-the-side or sounds-from-the-back. It is when you use technologies like Wave Field Synthesis in a cinema that you start to create very weird effects. If a soundscape similar to your experience of everyday sound envelops you in the auditorium, but you are still looking at a flat screen at the end of the hall, a noticeable disjunction is created between expressions in image and sound. One sensation approximates a realism whereas the other remains a synthetic representation. Incidentally, the Fraunhofer Institute in Germany continued to develop Wave Field Synthesis after the patent of Delft University, where the research started in the 1980s, expired. The intention was to develop it as the next standard for surround sound in cinemas. The story goes that when an industry representative from Hollywood came over to inspect the technology, they found that the sound was becoming so dominant that it began to detract from the visual experience and as such was counter-productive for the cinematic industry. So now we have an audio technology with many years of research and development behind it, and we are not sure what to do with it. Wave Field Synthesis is a very complex system for creating something that is actually quite simple: it’s a playback system that begins to approximate the sound of the environment; in other words, the way we hear sounds in everyday life.

AA: Can you tell me a bit about the Wave Field Synthesis project you were involved in?

RG: In 2006, following my research with phased arrays at Sonology, I was commissioned by the Game of Life Foundation to develop a Wave Field Synthesis system for spatially oriented electroacoustic music concerts. I was responsible for designing and building the system, and Wouter Snoei, with the assistance of Jan Trützschler von Falkenstein, worked on the programming and the interface. I developed the original algorithm for phased-array techniques during my research at Sonology. It’s a mobile system that can be set up in a variety of spaces and configurations and is made up of 192 coaxial speakers and 8 subwoofers. It is a platform in which you can experiment with the spatial dimensions of sound by adding a choreographic, or tectonic, component to composition strategies. I was interested in this as a design project, as I understood it to be an armature that created ‘space’ only through sound. You can see some aspects of this interest in the working out of the detailing and overall presence of the system. One unexpected characteristic of the system is that it seems to give a weight to sounds; sound is almost imbued with a dimensional gravity. From a perceptual point of view the sound quality is quite different from other multi-channel systems. Nearly any sound that you put into it becomes attractive to listen to. But that is also one of the dangers of the system: it allows for pyrotechnics of sound, it is an open door for unnecessary spectacularity.
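
A much-simplified sketch of the phased-array idea underlying Wave Field Synthesis: each loudspeaker in a straight line is fed the source signal delayed and attenuated according to its distance from a virtual source placed behind the array. This is only the geometric core; it is not the driving function of the Game of Life system, which also involves spectral filtering and more careful amplitude handling.

```python
import math

C = 343.0  # speed of sound, m/s

def wfs_delays_gains(num_speakers, spacing_m, virtual_source_xy):
    """Return per-speaker (delay_s, gain) for a linear array along the x axis at y = 0."""
    sx, sy = virtual_source_xy  # virtual source sits behind the array (sy < 0)
    half = (num_speakers - 1) * spacing_m / 2.0
    out = []
    for i in range(num_speakers):
        x = i * spacing_m - half
        r = math.hypot(x - sx, 0.0 - sy)              # speaker-to-virtual-source distance
        out.append((r / C, 1.0 / max(r, spacing_m)))  # farther speakers: later and quieter
    return out

# Illustrative numbers: a 24-speaker segment, 12.5 cm spacing, source 2 m behind the array.
for i, (delay, gain) in enumerate(wfs_delays_gains(24, 0.125, (-0.5, -2.0))):
    print(f"speaker {i:2d}: delay {delay * 1000:5.2f} ms, gain {gain:.2f}")
```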

AA: It should become interesting from a compositional perspective?

RG: Yes, that is why I gave the example of Luigi Nono. In his piece Omaggio a György Kurtág he employs the Halaphon, his ‘digital spatializer’, to grab the rapidly decaying tones emitted by live performers and transfer the sound through a sequence of loudspeakers. His intention is not just to have the sound move around, but to ‘prolong-the-leaving’. What is about to disappear from audibility lingers on just a little bit longer. Often the problem with spatialization methods is the spatial metaphors on which they are based. Systems like quadraphonics and 5.1 have a very Cartesian understanding of space: there is an imagined empty space within which loudspeakers are placed and within which the sounds are imagined to be moving around. Wave Field Synthesis is based on a very different conceptualization and understanding of sound. I call it ‘Phased Space’. I use the term to describe an understanding of acoustic space that exists prior to the act of listening, the space of the wave interactions themselves. My recent work intervenes in this Phased Space, yet is oriented towards acts of listening. Phased Space is never how we perceive the sound – we will always graft qualities onto sound – yet the term also recognizes that there is an inaccessible aspect to sound behaviour that the perceived tones only hint at. I am currently working on a project that gives space to both aspects of sound: it has to do with a transcription of Phased Space while at the same time not denying the indexical nature of sound. One of my Strategies Toward Space, titled Inwound, captured some of these qualities. It was a subterranean listening chamber at Potsdamer Platz, and was first realized as part of Tuned City Berlin. In this project you are listening through the environment itself in a manner that reveals a very dense field of vibrational interactions that follow the contours of audible sound, but which have a very different presence from that experienced in normative listening. I am very interested in that aspect of sound that hovers on a threshold between the indexical and the abstract, in trying to communicate something that is simultaneously abstract and completely everyday and obvious.
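
A hypothetical sketch, not of the Halaphon itself but of the ‘prolong-the-leaving’ idea it serves here: one decaying tone is fanned out to a series of output channels, each copy entering slightly later and slightly quieter, so the tail keeps arriving from the next loudspeaker as it fades from the previous one.

```python
import numpy as np

def prolong_the_leaving(mono, sr, num_channels=6, step_s=0.4, decay=0.7):
    """Return an array of shape (num_channels, n) holding staggered, decaying copies of mono."""
    step = int(step_s * sr)
    n = len(mono) + step * (num_channels - 1)
    out = np.zeros((num_channels, n))
    for ch in range(num_channels):
        start = ch * step
        out[ch, start:start + len(mono)] = mono * (decay ** ch)
    return out

sr = 44100
t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
dying_tone = np.sin(2 * np.pi * 440 * t) * np.exp(-3 * t)  # a rapidly decaying tone
channels = prolong_the_leaving(dying_tone, sr)
print(channels.shape)  # (6, total_samples): one staggered copy per loudspeaker
```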

AA: Can you explain how you dealt with space and sound in your piece Aggregate in the DNK series in Amsterdam?

RG: In Aggregate I wanted to do something with the context of ‘performance’, where you have the expectation of seeing a sound event in a location, normally with a start and end time – as opposed to a sound installation that you can negotiate in various time frames. I wanted to make a piece that dealt with that social context of listening, so that it becomes part of the inclusive spatial strategies of the sounds. Aggregate is part of a series that I call Strategies Toward Space. Each strategy has a different sound appearance based on the space and the context to which it is applied. The strategies consist of approaches towards certain aspects of the sonic environment. Aggregate focuses on the latent aural spatialities tucked within the various cavities, surfaces and materials, in this case of the DNK-SMART Project Space hall in Amsterdam. The sonic transformations aimed at intensifying and collapsing these spaces into one another. For example, the physical dimensions of the space correspond with particular resonances that are also a set of frequencies, defining the acoustic fingerprint of the space. Also, all the materials that make up the hall, including the furniture and fixtures, along with the physical characteristics of the space, are propositions for frequency interactions. Microphones were set up to capture the sounds from the empty hall and specialized transducers were attached to various surfaces and materials in the space, picking up the very subtle vibrations of those materials. Over the course of the 32-minute cycle, you are listening to space through its own physical presence. One of the methods I employed is a technique pioneered by Alvin Lucier to set these frequencies into resonance through a process of looped recording and playback of the ambient sounds of the hall. By feeding back into it loops of various durations, you get the resonant frequencies of the materials as well as a presence of the audience affecting the cycles. The resonance of the room also starts to infect the frequencies of material resonances, and it becomes one chaotic system of interactions. The event started with a simple role reversal where the audience stood in the location usually reserved for the performers. The sound cycle itself was set off by the mechanical drone of the empty tiered seating being folded in. That, in turn, set off a series of recording-playback loops that were cycled through the various materials and through the empty space. With such strategies you need to work in the space itself over quite an extensive period of time; each new location needs to be gauged and tested in its own right. The durations and tunings from one space inevitably do not correspond to another. A strategy like Aggregate would take on a completely different presence in another location.
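
A minimal simulation of the Lucier-style record-and-playback cycle referred to above: each pass is another convolution with the room’s impulse response, so the room’s own resonant frequencies gradually come to dominate whatever was played into it. The impulse response here is a toy stand-in with two invented modes, not a measurement of the actual space.

```python
import numpy as np

sr = 8000
t = np.arange(0, 0.5, 1 / sr)

# Toy room impulse response: two decaying modes at 110 Hz and 317 Hz.
room_ir = (np.exp(-8 * t) * np.sin(2 * np.pi * 110 * t)
           + 0.5 * np.exp(-10 * t) * np.sin(2 * np.pi * 317 * t))

signal = np.random.default_rng(0).standard_normal(sr)  # one second of noise to start with
for generation in range(6):
    signal = np.convolve(signal, room_ir)[: len(signal)]  # one play-record pass through the 'room'
    signal /= np.max(np.abs(signal))                      # renormalize each pass
    spectrum = np.abs(np.fft.rfft(signal))
    peak_hz = np.argmax(spectrum) * sr / len(signal)
    print(f"pass {generation + 1}: strongest frequency ≈ {peak_hz:.0f} Hz")
```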

AA: You also presented another work at DNK on the same night...

RG: Undertow was installed in the entrance space, near the reception desk. It picks up magnetic fluctuations in the audible spectrum, for instance those coming from computers and the communication and electrical infrastructure built into the reception desk, and makes those vibrations audible through an array of loudspeakers. It is about displacing and presencing those vibrations. In fact much of what I do in my work is about making something present, more in the pictorial sense than in terms of a music tradition. That is one of the reasons why I shy away from performing. I have no background in music. I do not see what I am doing as extending the practice of music, although it may relate to it in more ways than one. Through my work, I try to get at a more fundamental connection between space and sound, one that is not symbolic, not synesthetic, but one that probes such relations at the level of fluctuations.

AA: It’s about physics?

RG: It is empirical, or maybe phenomenal is a better word. In the sense that experience is phenomenal, it deals with that plane of experience and existence.

AA: What do you call yourself? A sound artist?

RG: When I was in the military in Israel, there was a club scene, and on Friday nights just one place served something to eat after 2 o’clock in the morning. You could get grilled cheese sandwiches – and that was it. The peculiar thing about the place was that you couldn’t ask the guy for a grilled cheese sandwich; it was like a game, you had to ask him for something like a salmon ciabatta, put some mustard on it, and some pastrami too. Only then would he hand you a grilled cheese sandwich. So you can call me an artist, sound artist, architect, even composer, but I will end up handing you a grilled cheese sandwich. For a long time I have had great difficulty in even naming for myself what I do. It does not really exist as a cultural domain. It touches on philosophy, it touches on architectural theory, it touches on histories of art, on music, and it sits somewhere in between all these. Now I think it is a strength to keep it as an intermediate zone. It relates to different domains, but it is not a category unto itself. One of my role models in this respect would be Aby Warburg, viewed not as an art historian but rather as an artist, because of his methods and approach, which interrogate relations between forms, cultures and contexts. Sometimes you have to violate the definitions of disciplines to get to more fundamental understandings. Part of my drive is to get a grip on that which has always interested me. It’s something that has always been with me, and the closer I get, the less it has a name.

This interview was published in The Poetics of Space, 2010.
