
SOUND ART

On this page, I will share my artistic research process (thesis and antithesis), including philosophical concepts, historical facts, conceptual ideas, and techniques, together with its results (synthesis): electroacoustic compositions, sound installations, and more.


CONTENTS


The Seed of Life (SoL ~ so-ul)
  1. Ancestral Dance: Mangrove Forest Wetland Soundscape

  2. The Travellers: Freshwater Lake Wetland Soundscape

  3. The Black Forest: World-Wide-Forest Soundscape

  4. Rest and Shelter: Cave Limestone Soundscape

  5. Harvesting the Spirit of the Rice Paddy: Rice Paddy Field Soundscape


The Shadow
  1. Tingkap Nada: Live Audio Streamer Installation

  2. Gerak Nada: Interactive 3-D Motion Sound Event

Miroirs of Malay Rebab
  1. Gajah Menangis

  2. Menghadap

  3. Enchanted Rain

  4. Nu

  5. San


The Seed of Life (SoL ~ so-ul)

Keywords: Proto-Malay, wetland, mangrove forest, soundscape, electroacoustic, acoustemology

The Seed of Life (SoL) is an artistic research project for the creative production of a cycle of five electroacoustic soundscape compositions: 1) Ancestral Dance, 2) The Travellers, 3) The Black Forest, 4) Rest and Shelter, and 5) Harvesting the Spirit of the Rice Paddy. In a pilot set of found-soundscape recordings from the Tanjung Piai mangrove forest park, Malaysia, a sonic juxtaposition and superimposition could be perceived between the 'unheard' cacophony of chaotic gunshot-like sounds from the snapping shrimp inhabiting the mangrove forest and the euphony of the regular pulses produced by crickets, cicadas, and birdsong. This raised several questions about the inter-relationships between nature and vibration as sound, and between vibration and life as culture, rituals, and habits, which brought about the idea of the 'Seed of Life', accompanied by a Proto-Malay eco-cultural narrative. Subsequently, a cycle of electroacoustic soundscape compositions was developed to express the 'Seed of Life' concept and narrative by employing electroacoustic composition techniques such as sound diffusion manipulation (spatialisation), sampling synthesis, granular synthesis, and real-time interactive audio manipulation, which will be discussed extensively below. The objectives of this artistic research project are therefore: 1) to explore the 'unheard' natural soundscapes of Malaysian cultural-heritage sites, the Proto-Malay ethnographic narrative, and their soundworld for archival and sound-culture analysis; and 2) to disseminate these natural soundscapes, the Proto-Malay eco-cultural sound narrative, and the soundworld through electroacoustic soundscapes as an alternative form of art music for the appreciation of cultural and natural heritage and identity.
The 'unheard' natural soundscapes of Malaysian cultural-heritage sites are explored by means of literature studies of these sites and the Proto-Malay eco-cultural (ethnographic) narrative, archival research of existing Malaysian natural soundscape recordings, field recordings of 'unheard' Malaysian natural soundscapes, and the epistemological study of soundscapes (acoustemology) for cultural-aesthetic insight.

Electroacoustic Soundscape: A Hybrid Genre
What is a soundscape, and where do the term and concept originate? The term was introduced by R. Murray Schafer (not to be confused with Pierre Schaeffer) in his book The Soundscape: Our Sonic Environment and the Tuning of the World, and it derives from the term landscape and its concept. A landscape describes the visible natural or artificial physical features of an area (x-y) and space (z) of land (including water bodies such as seas, lakes, and ponds), either in its natural, original form (the real world) or in a recreated or reproduced form (an artificial world of fantasy and imagination), for example in paintings, photos, and videos. The visual composition and arrangement of the physical features occupying a landscape space (its topography) can be described according to the perceived depth (z) of those features, distributed across a three-tier (layered) visual-spatial concept of background, middle ground, and foreground, in which any layer may include a distinctive visible geographical feature known as a landmark (Simensen, T. et al. 2018).

"Among Schafer's (1993) main features of soundscape, 'soundmark' is the most important. This feature is essentially an auditory landmark: a sound that is uniquely and recognisably related to a particular location and/or culture. The remainder of the soundscape may be divided into the remaining two features: 'keynote', the background audio that sets the thematic tone for the piece and which other sounds modulate around; and 'sound signal', describing foreground sounds that are usually more consciously attended to. The use of soundmarks, keynotes and sound signals to establish an identifiable sense of space and time may therefore be referred to as 'soundscape composition'. Thus, an instrumental piece without any natural sound may qualify as a soundscape composition, whilst a folk song, performed on a public TV show and isolated from its original place and social, cultural or religious context would not." (Deng, Z. et al. 2015)

Therefore, a soundscape is a term and concept describing the audible features, natural (nature-made: biophony, geophony) and/or artificial (man-made: anthrophony), of an area and space of land (including water bodies such as seas, rivers, and swamps). In its natural, original form, perceived in real time in the real world, the term soundscape (or raw, neutral, encounter-soundscape) applies, for example to the sounds emanating from the surrounding environment of a cafe as perceived by a customer. In its recreated (reproduced) form, perceived at any time in an artificial space or imaginative world, the term found soundscape (or processed, artificial, counter-soundscape) applies, for example to an archive of rural environment sound recordings, a live stereo-field stream of natural soundscapes, a bank of loops or samples, or (found-) soundscape music (Truax, B. 2008), just to mention a few.

"In any sound or combinations of sounds, some aspects or elements will sound more prominent while others will seem to recede. We use the spatial terms foreground, middleground and background to help us differentiate the various elements we hear. Something loud, high, low, or which contrasts with its surroundings will sound as if it is 'closer' than something softer or less distinguished. Furthermore, the foreground, middleground and background can constantly change and shift. They do not necessarily have to be fixed." (Vella, R. et al. 20 , Erickson, R. 1975)

Furthermore, a soundscape (encounter-soundscape) contains the natural features of a real-world acoustic space-area (spatial information) and is perceived as a real human listening experience of a real event, while a found soundscape (counter-soundscape) may contain near-natural and/or artificial features of an acoustic space-area based on a 'programmed' sound-field setting: stereo-field, binaural, or ambisonic soundscape recording and playback. A sound source is perceived as louder when it is nearer to the listener, but one may not 'focus' on or pay attention to the louder sound. The mechanics and analogy of the visual-spatial concept (background, middle ground, and foreground) therefore do not accurately describe the way one perceives the depth or distance of a sound source (the magnitude of sound), but rather its 'contents' from a sound (music) composition point of view. Hence, the landscape visual-spatial concept may assist in describing the 'texture topography' dimension of a soundscape composition, audibly describing the arrangement of sound elements in a soundscape, as opposed to audibly describing the perceived 'sense' of depth or distance of those elements.

Soundscape is both a term and a concept, used for scientific and/or artistic purposes: the scientific study of the soundscape is the subject of acoustic ecology or soundscape ecology, while its artistic study is the subject of sound art and soundscape music composition, first explored in the 1960s in Canada (Farina, A. 2013, Truax, B. 2008). Music is fundamentally perceived as organised sound and silence within a time-frame (a focused event) (Cage, J., Campbell, I. 2017); hence a soundscape (encounter- or counter-soundscape) can be perceived as music (focused listening within a time-frame): soundscape music, 'composed' by nature and sounding objects within the environment's space-area. From experiencing a soundscape (encounter-soundscape), one may record and reproduce it in various forms of found soundscape (counter-soundscape), such as an 'original' music composition (found soundscape music), a found soundscape installation, or a bank of found soundscape samples and loops.

 

To put it another way, the art of found soundscapes may expand into electroacoustic music performance, mixed media, or installation art, whereby the 'found sound' in a soundscape (encounter-soundscape) may be used by composers or sound artists to express an artistic concept, including environmental, cultural, and political awareness, in the form of found soundscape music (Truax, B. 2008, Westerkamp, H. 2002, Hendy, D. 2013). Soundscape music can be distinguished by the way (technique and style) the sound is organised (composed): selected, derived, structured, layered, and arranged (background, foreground, and middle ground) within a time-frame, as described by R. Murray Schafer; for example, two (found-) soundscape compositions by Francisco López, Hyper-Rainforest and La Selva. Is a (found-) soundscape composition a piece of electroacoustic music? And what of the reproduction of found-soundscape audible features (biophony, geophony, and anthrophony) to invoke memories of a place, a culture, and more by composing with any combination of found objects, acoustic musical instruments, and 'raw' found-soundscape recordings, such as Jonas Baes's DALUY (1994), five soundscapes for five groups of performers and audience? To answer this, we need to revisit what soundscape music composition and electroacoustic music composition are: their methods, techniques, and styles.

"At first, the simple exercise of 'framing' environmental sound by taking it out of context, where often it is ignored, and directing the listener's attention to it in a publication or public presentation, meant that the compositional technique involved was minimal, involving only selection, transparent editing, and unobtrusive cross-fading. This 'neutral' use of the material established one end of the continuum occupied by soundscape compositions, namely those that are the closest to the original environment, or what might be called 'found compositions.' Other works use transformations of environmental sounds and here the full range of analog and digital studio techniques comes into play, with an inevitable increase in the level of abstraction. However, the intent is always to reveal a deeper level of signification inherent within the sound and to invoke the listener's semantic associations without obliterating the sound's recognizability." (Truax, B.)

Through an experimental music approach and novelty seeking, music composers of the second industrial revolution era in the early 20th century, notably Ferruccio Busoni, Luigi Russolo, and Edgard Varèse, embarked on the new soundworld of electronic music. Following this, influenced by the rapid advancement of electrical technology and computing, composers such as Karlheinz Stockhausen, Pierre Schaeffer, and Iannis Xenakis, to mention a few, ventured further into electroacoustic music (Collins, N., et al. 2013). Electroacoustic is a term describing a method of sound production that involves the back-and-forth transformation of the sound 'medium': from sound wave (air) to audio signal (electric) and back to sound wave (air). Electroacoustic music, therefore, is music that involves this back-and-forth transformation and features audible sound (including musical) composition characteristics based on electroacoustic composition techniques and styles.

 

The composition technique and style usually begin by transforming acoustic sound sources (sound waves) into variable electrical signals (audio waves) through a microphone (transducer), which are then sampled (digitised) for manipulation (sound design and sound synthesis) via digital signal processing (DSP) in computers, synthesizers, or audio effects hardware. The processed audio 'collection' is then organised in a time-frame (composed), arranged, and mixed in a Digital Audio Workstation (DAW) or on an audio tape deck and mixer (tape music), and the composition is disseminated (performed) through speakers, either as a fixed medium (playback mix) or in a stage performance setting as live electronics (live mix) (Manning, P. 2013). Moreover, the audio manipulation approach (timbre) and the 'organising' of the sound objects (textures, gestures, harmonic relations, spatialisation, contraction and expansion, and more) serve as the audible features of electroacoustic music styles and techniques, such as acousmatic music, spatial surround music, and immersive music, to list a few, and these techniques and styles can be fused and crossed over. Thus, electroacoustic music is music composed for and performed by electroacoustic instruments and devices (such as computers and speakers), which generate a composition of sound objects through complex manipulation and organisation of audio signals (sound design, sound synthesis) and of the spatial acoustic space, i.e. spatio-temporally (time- and space-based) within a programmed time-frame.

 

R. Murray Schafer coherently defined the audible compositional characters of a found-soundscape 'piece' of music based on the 'texture-topography' of the soundscape (mixture of sounds): 'keynote' as background, 'sound signal' as foreground, and 'soundmark' as middle ground, a texture-topography that may constantly shift and interchange as a result of the act of composing. However, can we do more with a found soundscape and its composition (background, foreground, and middle ground) to express our thoughts and feelings artistically (counter-soundscape)? Schafer coined the term 'schizophonia' to describe the act of 'splitting' or 'detaching' a soundscape (encounter-soundscape) from its original, natural compositional characters and forms (the reality soundworld), causing the found soundscape to 'lose' the natural-original origin and identity of its soundmarks, keynote, and sound signals, an 'unintentional act' he likened to schizophrenia.

 

This sits in some dissonance with the concept of the acousmatic (Pierre Schaeffer), whereby the 'splitting' or 'separating' of a sound from its source is done intentionally, with purposes and causes: reduced listening, semantic listening, causal listening (Chion, M. 2019). Truax, B., in his article Soundscape Composition as Global Music: Electroacoustic Music as Soundscape, stated that "Schafer (1969) originally described the electroacoustic listening experience as 'schizophonic', suggesting it was an aberration. Today, such 'aberration' is increasingly the norm"; for example, a work by Chris Watson, Climax (from the Number One album, 2006). Nevertheless, it is possible to explore further by merging the two paths (found soundscape music and electroacoustic music) and balancing their 'character identities' by 'echoing' the composer's cultural narrative expression. Even so, it would be much clearer to assimilate the term 'electroacoustic' into (found-) soundscape music composition as a 'hybrid' genre (style): electroacoustic soundscape music (as opposed to found soundscape music and soundscape music).

Sound Field Recording & Playback: Reproduction Method for Found-Soundscape

The application of sound field recording and playback is discussed here to identify the best solution for reproducing found-soundscape recordings for the SoL artistic output. Sound field recording is a method of sampling and collecting a sound, or mixture of sounds, from the far field (acoustically diffused sounds) and/or near field (direct sounds) in an open, free-spaced area (an outdoor location), in contrast to the studio recording method with its controlled acoustic environment. Sound field recording has traditionally been practised to record location sound for cinema and for on-site news broadcasting: ambient sound, speech and dialogue, movement sounds, and sound effects, using tools such as the sound field microphone and application techniques such as microphone array setup (rig) and placement.

 

Apart from recording specific sound sources (object-based), such as speech and found sound, via direct microphone methods using a body (lavalier) microphone or a boom (shotgun) microphone, the sound field recording method is also used to record an outdoor 'stage' sound and its outdoor acoustic environment using omni- and/or bi-directional condenser microphone arrays, whereby a larger capsule diaphragm (transducer) is usually applied to record a 'detailed', lower-frequency, higher-amplitude sound source, for example thunder (Krause, B. 2016). A field recording microphone is usually tailored and equipped with additional tools to capture very soft, detailed sound free of self-noise or wind noise, using, for example, a low self-noise microphone, a suspended microphone, and a windshield or blimp, to list a few. In addition, microphone pick-up point placement plays an important role in isolating the picked-up sound source from external noise and unwanted acoustic feedback. These microphone techniques prevent heavy sonic discoloration (audio filtering) during post-production of the raw natural recorded sound. A sound field recordist may also use other types of microphone transducers to pick up (and record) 'unheard' sounds in various media, for example a hydrophone for sound in water, a geophone (piezo) for sound in solids, and an 'RF-phone' for radio-frequency (electromagnetic) sound in electric currents, to mention a few. Various microphone characteristics and arrays can thus be applied and experimented with in sound field recording to produce a specific sound-field audio reproduction (playback monitoring) with a 'signature' blend of sonic qualities, which will be discussed in further detail for each SoL electroacoustic soundscape composition movement.

 

Sound field recordings feature the location's acoustic environment information: sound localisation (displacement) and acoustic cues from the mixture of sounds in a soundscape. These depend on the found-soundscape recording-playback method (and its techniques) and format applied by the recording engineer or sound artist, such as stereo-field or immersive-field (binaural, surround, and ambisonic), to produce a realistic (natural) auralisation experience and an 'engaging' sensation, as one would perceive at the actual sounding location in real time (the soundscape or encounter-soundscape). We see the world in a stereo-field eyesight view, usually paired with the way we hear the sounding world, which serves to 'make sense' of our real-world experience. The stereo-field format is commonly used in audio production to produce sounds as if coming from a 'stage', with 'natural' sound localisation information and an even distribution of sound pressure (loudness) between the left and right of the stereo-image field (centre). This is done by feeding the two inter-channel audio differences (left and right) from a stereo microphone into a two-speaker array arranged in a coincident stereo-field projection, creating stereophonic images based on the level difference (loudness) and time difference perceived from the arrangement of the two speakers relative to the listener (Figure 1).

 

There are four common stereo microphone array techniques for recording stereo-field sound using two (or three) microphones simultaneously: spaced pair (AB), coincident (XY), near-coincident (ORTF, NOS, DIN), and mid-side (three microphones). Each setup provides a different acoustic-field pickup sensitivity and character (Figure 2), and the most nearly realistic stereo-field reproduction comes from the spaced pair technique (Rumsey, F. & McCormick, T. 2014, DPA 2019). Additionally, an acoustic baffle such as a spherical disk (known as a Jecklin disk) or a shoebox-like object can be applied in a stereo-field array to produce distinctive separation and a well-balanced distribution of sound localisation (spatial resolution and sonic displacement cues) in stereo-field speaker playback monitoring (Billingsley, M., & Bartlett, B. 1990). This is done by inserting the acoustic baffle between the microphones of a spaced pair or near-coincident array (Figure 3) (Tagg, J. 2012, Wildtronics, Hass, C. 2017, Smith, I.C. 2021).

Figure 1. Stereo-field speakers for audio reproduction (playback monitoring), emulate a listening perspective from the audience seat towards the stage.

Figure 2. Stereo microphone arrays.

Figure 3. Acoustic baffle inserted in a stereo microphone array.
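The level-difference principle behind stereophonic imaging described above is commonly implemented in mixing software as a constant-power pan law: as a source moves across the stereo image, the left and right gains follow a quarter cycle of cosine and sine so the total power stays even. A minimal sketch (the function name and the [-1, 1] position convention are illustrative assumptions, not from the text):

```python
import math

def constant_power_pan(position):
    """Constant-power stereo pan.

    position: -1.0 = hard left, 0.0 = centre, +1.0 = hard right.
    Returns (left_gain, right_gain); left^2 + right^2 == 1 everywhere,
    so perceived loudness stays even across the stereo image.
    """
    theta = (position + 1.0) * math.pi / 4.0   # map [-1, 1] -> [0, pi/2]
    return math.cos(theta), math.sin(theta)
```

At the centre, both gains are 1/sqrt(2) (about -3 dB), the familiar centre attenuation of equal-power panning.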

However, we can perceive sound immersively (as if listening at the centre of the stage) from left, right, front, back, up, and down, beyond our stereo-field eyesight view; this 3-D sound-field perspective is known as periphony. Following this, the reproduction of immersive sound fields in binaural, surround, and ambisonic (spherical) methods, techniques, and formats has been explored by audio engineers and acousticians to emulate the immersive (real-world) auralisation experience. This has been realised through: 1) specific 'immersive' microphone types and setup techniques, such as the binaural microphone (dummy head-and-ears microphone), surround array microphones, and ambisonic (triangulated spherical) array microphones, to record the immersive acoustic field; 2) multi-channel speakers in regular array settings, comprising horizontal planar surround arrays (a 5.1 surround array or an octophonic ring array), periphonic surround arrays (a Dolby Atmos 5.1.2 array or an octophonic cube array), and higher-order ambisonic (HOA) arrays (triangulated spherical arrays); and 3) sound-field manipulation techniques, such as sound-field decoders and spatialisation DSP algorithms, to convert (decode) any immersive microphone recording format into a compatible immersive speaker array configuration (Rumsey, F. & McCormick, T. 2014, Waves 2017, DPA 2020).

A binaural microphone is an extended version of the stereo spaced pair array (inter-channel difference) with an acoustic baffle inserted between the microphone pair: a life-size human head-like object (dummy) with a matched pair of omnidirectional microphones installed at the left and right artificial ear canals, complete with auricle or pinna (outer ears), positioned at human listening height. It emulates the natural human auralisation experience of the immersive 3-D soundworld through headphone listening, based on the Head-Related Transfer Function (HRTF) principle (Kromhout, M. J. 2017). Each omnidirectional microphone picks up both the incoming sounds projected towards the centre region around the dummy head and the sounds diffused (filtered) by the head itself. As a result, immersive sound with natural, distinctive separation and a natural distribution of sound localisation (spatial resolution and sonic displacement cues) can be perceived through binaural headphone playback monitoring (Figure 4) (Rumsey, F., & McCormick, T. 2014).
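One of the localisation cues encoded by the HRTF principle mentioned above, the interaural time difference, can be approximated with Woodworth's classic spherical-head formula. A sketch under textbook assumptions (average head radius of about 8.75 cm, speed of sound 343 m/s; the function name is illustrative):

```python
import math

def interaural_time_difference(azimuth_deg, head_radius=0.0875, c=343.0):
    """Woodworth's spherical-head approximation of the ITD, in seconds.

    azimuth_deg: source angle from straight ahead (0..90 degrees);
    head_radius: head radius in metres; c: speed of sound in m/s.
    """
    theta = math.radians(azimuth_deg)
    # path difference around a sphere: r * (theta + sin(theta))
    return head_radius / c * (theta + math.sin(theta))
```

A source directly ahead gives zero delay; a source at 90 degrees gives roughly 0.65 ms, near the maximum delay a human head produces between the two ears.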

 

Be that as it may, binaural recordings can be reproduced over a stereo-field speaker array (transaural) using the crosstalk cancellation audio process invented by Atal and Schroeder at Bell Labs, which is perceived effectively only at a fixed listening 'sweet spot': "With headphones each driver talks to each ear separately (isolate). With speakers, however, each speaker talks (spills) to both ears and destroys the desired effect of the binaural recording. Crosstalk cancellation methods take a little of the left signal and feed it to the right speaker with the right delay (and phase) to have it combine with the actual right speaker signal and cancel the part that goes to the left ear" (Audyssey Labs 2016). Nevertheless, there are several other sound-field decoder plugins to convert binaural recordings for playback on horizontal planar surround, periphonic surround, and higher-order ambisonic (spherical) speaker arrays, and vice versa (Vennerød, J. 2014).

Figure 4. The binaural periphonic-sound field region

A surround array microphone uses multiple microphones (typically five), consisting of a spaced surround array of cardioid and omnidirectional microphones (inter-channel difference), which produces immersive sound with better spatialisation manipulation and resolution for sound-field playback encoders compared to a binaural microphone (Figure 5) (DPA 2020). However, this immersive microphone method is costly, has low mobility, and is space- and time-consuming to set up: for example, five microphones each requiring individual pick-up point alignment, and multiple audio channels each requiring individual routing and configuration. It is thus suited to a controlled acoustic environment and not practical for found-soundscape field recording, especially at remote outdoor locations with unpredictable conditions.

 

Figure 5. Surround array microphone.

This leads to the ambisonic microphone array, invented in the 1970s and patented by Peter Craven and Michael Gerzon. It consists of a single microphone 'body' with four cardioid (or sub-cardioid) capsules closely arranged (fixed) to form a tetrahedral structure, each fixed pick-up position pointing outward towards the sound source in the spherical sound-field region (spherical harmonic domains). It has only four signal channel outputs, and this compact, highly portable type of microphone is also known as a tetra-mic (or ambi-mic or VR-mic). The four capsules provide four inter-channel differences, Left-Front-Up (LFU), Right-Front-Up (RFU), Left-Back-Down (LBD), and Right-Back-Down (RBD); this raw audio output format is known as A-format. The four A-format channels do not by themselves provide spatial information in the spherical sound-field region, and hence are encoded (reconstructed) into B-format channels (W, X, Y, Z), which provide spherical sound-field spatial information known as the first-order ambisonic spherical harmonics, with sound-wave planes X, Y, and Z. This is done by matrixing the A-to-B format calibration, including signal levelling, phase correction, and filtering (FIR filters and HF filter) (Figure 6) (Adriaensen, F. 2007).
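Before the calibration steps (levelling, phase correction, filtering), the A-to-B matrixing described above is at its core a fixed sum-and-difference matrix over the four capsule signals. A minimal sketch using the conventional capsule labelling FLU, FRD, BLU, BRD (which differs slightly from the capsule labels above; the exact signs depend on the capsule orientation of a particular microphone):

```python
import numpy as np

def a_to_b_format(flu, frd, blu, brd):
    """Encode tetrahedral A-format capsule signals into first-order B-format.

    flu, frd, blu, brd: numpy arrays of capsule samples
    (front-left-up, front-right-down, back-left-up, back-right-down).
    Returns (W, X, Y, Z): the omnidirectional pressure component plus the
    three figure-of-eight components along front/back, left/right, up/down.
    """
    w = 0.5 * (flu + frd + blu + brd)   # pressure (omni)
    x = 0.5 * (flu + frd - blu - brd)   # front-back
    y = 0.5 * (flu - frd + blu - brd)   # left-right
    z = 0.5 * (flu - frd - blu + brd)   # up-down
    return w, x, y, z
```

As a sanity check, an identical signal on all four capsules (a directionless pressure) encodes as pure W with X = Y = Z = 0.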

 

The ambisonic microphone array is based on the mid-side stereo-field concept, and one can decode (derive) the native B-format components to virtually emulate various microphone polar patterns: omnidirectional, cardioid, hyper-cardioid, figure-of-eight, or anything in between, anywhere in the tetra-mic's spherical sound-field region (a format-'agnostic' approach) (Rumsey, F., & McCormick, T. 2014). Following this, the B-format components (W, X, Y, Z) can be decoded using several spatial audio algorithms to reproduce accurate, high-resolution 'artificial' surround sound spatial compositions, from the first spherical harmonic order (Figure 7) up to higher orders (HOA), usually the third (depending on the processing capacity of the ambisonic channel decoder plugins, such as the Envelop for Live plugins), for any immersive (periphonic) audio playback monitoring system, and can be decoded further into a binaural audio format (HRTF filters) for periphonic monitoring via headphones (Adriaensen, F. 2007, Malham, D. G., & Myatt, A. 1995).
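The virtual-pattern decode described above can be sketched for the horizontal plane: a weighted mix of W with the X/Y figure-of-eights steered to an azimuth, where the weight p sets the polar pattern (p = 1 omni, p = 0.5 cardioid, p = 0 figure-of-eight). This sketch assumes the classic Furse-Malham convention in which W carries a 1/sqrt(2) gain; the function name is illustrative:

```python
import math

def virtual_mic(w, x, y, azimuth_deg, p=0.5):
    """Derive a virtual horizontal microphone signal from B-format samples.

    w, x, y: B-format sample values (Furse-Malham weighting, W at -3 dB);
    azimuth_deg: direction the virtual mic points; p: pattern blend
    (1.0 = omni, 0.5 = cardioid, 0.0 = figure-of-eight).
    """
    th = math.radians(azimuth_deg)
    return p * math.sqrt(2.0) * w + (1.0 - p) * (math.cos(th) * x + math.sin(th) * y)
```

For a source encoded straight ahead (w = 1/sqrt(2), x = 1, y = 0), a virtual cardioid aimed forward returns full level, while one aimed backward returns zero, matching a physical cardioid's rear null.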

 

In addition, according to Sitt, P. (2017), "an HOA encoded sound field requires \((M+1)^{2}\) channels (B-format native components, X, Y, Z, W) to represent the scene, for example, four native channels (tetra) for first-order, nine for second, sixteen native channels for third. We can see that very quickly we require a very large number of audio channels even for relatively low orders. However, as with first-order Ambisonics, it is possible to do rotations of the full sound field relatively easily, allowing for integration with head tracker information for Virtual Reality (VR) or Augmented Reality (AR) environment purposes. The number of channels remains the same no matter how many sources we include. This is a great advantage for Ambisonics". Later, starting from the four channels of the tetra (ambisonic) microphone array, more capsules were added in triangulated spherical form to create native higher-order ambisonics, for higher spatial precision and resolution beyond third-order HOA. Several ambisonic microphone arrays have been compared (subjectively and objectively) in research papers by Seán Dooney, Marcin Gorzel, Hugh O'Dwyer, Luke Ferguson, and Francis M. Boland (Spatial Audio Research Group, Trinity College, and Google, Dublin), which will be discussed in further detail for their application in the fourth artistic research project, Miroirs of the Malay Rebab.
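The channel counts quoted above follow directly from the \((M+1)^{2}\) rule:

```python
def hoa_channel_count(order):
    """Number of ambisonic channels needed to carry an HOA scene of a given order."""
    return (order + 1) ** 2

# first order: 4 channels (W, X, Y, Z); second order: 9; third order: 16
counts = [hoa_channel_count(m) for m in (1, 2, 3)]
```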

Figure 6. Tetrahedral pick-up point in a spherical (ambisonic)-sound field.

Figure 7. Spherical harmonics HOA

Nevertheless, various recording methods and their combinations can be applied experimentally (recording arts), for example an ambisonic tetra-hydrophone (Farina, A. et al. 2011), to produce unique found-soundscape sonic-prints and the sound artist's (composer's) expression. Each of the sound-field recording and playback methods above is explored to produce a variety of 'engaging' listening experiences (which may be heightened by immersion) for the SoL artistic output (five electroacoustic soundscape movements), mixing real and/or artificial sound-field sources using various spatial panning methods: 2-D (horizontal planar) vector-based amplitude panning (VBAP), 3-D VBAP, multiple distance-based amplitude panning (M-DBAP), and ambisonic panning (Pulkki, V. 1997, Frank, M. et al. 2014, Carlier, V. 2020). Playback monitoring is via binaural headphones (fixed point), and may be decoded further for fixed-point horizontal planar speaker arrays (stereo and surround) and periphonic speaker arrays (fixed- and free-point cube or spherical arrays). The specific approach for each electroacoustic soundscape composition will be discussed in more detail.
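2-D VBAP (Pulkki 1997), cited above, treats the direction vectors of the two nearest speakers as a basis, solves for the gains that reconstruct the source direction, and then normalises them for constant power. A minimal sketch (the speaker angles and azimuth convention are illustrative assumptions):

```python
import numpy as np

def vbap_2d(source_az_deg, speaker_az_deg=(-30.0, 30.0)):
    """Pairwise 2-D vector-based amplitude panning.

    Solves L @ g = p, where the columns of L are the unit direction
    vectors of the two speakers and p is the source direction vector,
    then normalises the gains so total power is constant.
    """
    def unit(az_deg):
        a = np.radians(az_deg)
        return np.array([np.cos(a), np.sin(a)])

    L = np.column_stack([unit(a) for a in speaker_az_deg])
    g = np.linalg.solve(L, unit(source_az_deg))
    return g / np.linalg.norm(g)
```

A source centred between speakers at plus and minus 30 degrees gets equal gains of 1/sqrt(2); a source exactly at a speaker position collapses to that speaker alone.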

Sound Design and Synthesis

Sound design is a method of editing a sampled audio signal (from a recording or a synthesised sound source) recorded on an audio track; it involves trimming the track's duration, layering tracks, adjusting track gain, and wave sculpting (filtering) through audio signal processors. Meanwhile, audio (digital) and sound (analog) synthesis is a process carried out on analog or digital electronic sound production devices or instruments, such as analog synthesizers. The produced sound may imitate (synthesise) and simulate (model) the timbre of acoustic instruments such as the violin or flute, or present a new timbre or mixed timbres (texture), by 'manipulating' an audio wave sourced from a tone generator (oscillator) and/or a sampled wave sourced from sound sampling (sound wave recording) (Vail, M. 2014). The sound synthesis method involves any combination (re-synthesis) of synthesis techniques such as subtractive, additive, granular, wavetable, and modulation (FM, AM, and ring) synthesis (Figures 8a-8g), as demonstrated in Pure Data, an open-source audiovisual programming language by Miller Puckette (Kreidler, J. 2009, Floss Manuals. 1991).

 

Composing (organising) the found soundscapes and found sounds derived from the sound design and synthesis process is done in a time-based multi-track audio environment. The Ableton Live digital audio workstation (DAW) and its internal signal-processing plugins are used as the main platform for audio editing, synthesis processing, and composing for SoL: Ancestral Dance and for sound-archival production. In addition, external signal-processing plugins and software are explored, including Ableton Granulator II, Pure Data granular re-synthesis, Sennheiser Ambeo Binaural, Wave Arts Panorama Stereo-Binaural, and the IEM Binaural Ambisonic plugins. The collection of soundscape field recordings is imported into the DAW working folder and multi-track window for post-production of the found-sound and studio-quality soundscape recording collection. The specific approach to each electroacoustic soundscape composition will be discussed further.

FM synthesis pure data

Figure 8a. Audio frequency-modulation synthesis demonstrated in Pure Data. The carrier frequency is modulated through a set of modulator objects (right); the resulting FM audio is displayed on the FM oscillogram.

additive synthesis pure data

Figure 8b. Audio FM-additive synthesis demonstrated in Pure Data. The FM output is re-synthesised additively, producing harmonic and/or inharmonic partials.

ring modulation pure data

Figure 8c. Audio ring-modulation synthesis demonstrated in Pure Data. The sound source is multiplied by a bipolar modulator swinging between 1 and -1, shown on the oscillogram.

amplitude modulation pure data

Figure 8d. Audio amplitude-modulation synthesis demonstrated in Pure Data. The sound source is multiplied by a unipolar envelope between 0 and 1, displayed on the amplitude oscillogram.

Subtractive synthesis pure data

Figure 8e. Audio subtractive synthesis demonstrated in Pure Data. The sound source (noise) frequencies are subtracted (removed) through frequency filters. An FFT spectrum analyser may be used to visualise the change in sonic quality more clearly than an oscillogram (above right).

 

Figure 8f. Audio wavetable synthesis demonstrated in Pure Data. 

Figure 8g. Audio granular synthesis demonstrated in Pure Data. 

Unheard Soundscapes Narrative: Malaysia Biophony and Geophony

Malaysia is a tropical country with rich natural-heritage landscapes consisting of unique geographical features: caves, mountains, coastlines, swamp lakes, rainforests, and coral reefs, inhabited by mega-diverse flora (plant) and fauna (animal) species, including endangered ones, within the Sundaland biogeographical region. These rich natural-heritage landscapes are celebrated and appreciated through the ecotourism industry as part of conservation and economic activity. However, there is little awareness or appreciation of the potential of Malaysia's natural-heritage soundscapes (sound ecology) as conservable components and as a sound-arts medium among local communities and institutions.

 

Several natural-heritage found-soundscape recordings, mainly from Malaysia's rainforest-mountain landscapes (including water bodies such as rivers and swamps), have been produced as studio-quality sound-field recordings and published online by the Wild Ambience and Nature Soundmap projects based in Australia, founded by Marc Anderson; they are openly accessible via their websites. Soundscapes from the rest of Malaysia's natural-heritage landscapes, however (caves, coastline, swamp lake (underwater), and coral reefs), remain 'unheard'. Therefore, to draw communities to notice and appreciate the 'unheard' soundscapes, field recordings will be made at specific Malaysian natural-heritage landscape locations selected for the historical-cultural background (including myths and legends) of the landscapes' living communities: the Orang Asli (indigenous peoples), particularly those descended from the Proto-Malay ethnic groups, ancestors of the Malays in modern Malaysia (Zainuddin, Z. 2012), in line with the theme of the Nada Bumi artistic research project.

Malaysian government institutions have officially recognised 18 ethnicities of the Orang Asli group in Malaysia, classifying them into three categories based on their genetics, language, culture, physical appearance, and settlement characteristics: the Negrito (Semang), the Senoi, and the aboriginal Malays (Proto-Malay) (JKOA, 2021). The Proto-Malay people, also known as the Melayu Asli (aboriginal Malay), Melayu Purba (ancient Malay), or Melayu Tua (old Malay), are the diaspora of Austronesian speakers, possibly from mainland Asia, who travelled and migrated to the Malay Peninsula and the Malay Archipelago between 2500 and 1500 BC (Norhalifah, H.K. et al. 2016; Zainuddin, Z. 2012). In Malaysia, the Proto-Malay are classified under the native Orang Asli group of Peninsular Malaysia, officially comprising the Seletar, Jakun, Temuan, Semelai, Temok, Orang Kuala, and Orang Kanaq (Figure 9) (JKOA, 2021).

peninsular_malaysia_map_edited.png

Figure 9. Orang Asli diaspora regions in Peninsular Malaysia: the Negrito, Senoi, and Proto-Malay groups.

The Orang Seletar (also known as Selitar or Slitar) are considered part of the Orang Laut, natives of the Straits of Johor separating Singapore from Peninsular Malaysia; despite their proximity to developed areas, the Orang Seletar largely retain a traditional way of life (Zainuddin, Z. 2012). Interestingly, Ali, M. (2012) states that the Seletar people sometimes call themselves the Kon Seletar, where the prefix "Kon-" means a person belonging to a certain ethnic or social group and is theorised to originate from the Mon-Khmer languages. Most Seletar people have now assimilated into urban life; for a long time, however, they practised a nomadic way of life within the mangrove forests and marshes along the straits and rivers, adhering to animistic beliefs. The mangrove is a unique repository of rich and diverse aquatic plants and natural resources, the use of which built this people's traditional economy, and among the mangrove jungle the Seletar remained virtually lost to the outside world (Lim, J. 2014).

Batu Belah Batu Bertangkup is the story of a mother who lives with her son and daughter. The mother loves to eat the eggs of the ikan tembakul (mudskipper), and looks forward to eating them one particular day. However, she discovers her children have eaten all they have, leaving none for her. In despair, she goes to a nearby rock with a cleft in it, and begs it to swallow her whole. The rock does so: the children are left alone and regret their actions. It’s a sad story with a moral of filial piety, and a warning against ingratitude. (Toh, T. 2020).

Among local communities, the mangrove forest swamp is well known as mudskipper habitat, and the mudskipper is associated with a local Malay folktale, Batu Belah Batu Bertangkup, about a cursed, devouring, clamp-like giant rock; in the folktale the mudskipper is known in Malay as Ikan Tembakul or Ikan Belacak. This folktale is used as part of the cultural narrative in the Ancestral Dance electroacoustic soundscape composition. Accordingly, mangrove forest swamp locations were selected to produce the 'unheard' found soundscape of the wetland mangrove forest swamp, while found-soundscape recordings of the Seletar community served as secondary data (Figures 10a, 10b & Video 1), later used in the electroacoustic soundscape production inspired by the location's historical-cultural background.

Figure 10a. 'Unheard' found-soundscape field recording locations relating to the Seletar people's historical-cultural narrative, at Pulau Kukup and Tanjung Piai mangrove swamp parks and the Seletar village, Kampung Simpang Arang, Johor, Malaysia, conducted in July 2020 and March 2021 (by delegate).

Figure 10b. Found-soundscape field recordings at Pulau Kukup and Tanjung Piai mangrove swamp parks.

Figure 10c. Found-soundscape field recordings at the Seletar indigenous people's village, Kampung Simpang Arang, Johor, Malaysia.

Video 1. Field recordings (video footage) on Seletar people settlement, way of life, and language at Kampung Simpang Arang, Johor Malaysia by Siti, N. & Nizam, Z. (2021).

Tasik Chini is rich in myth and legend and an ancient Khmer city is believed to be at the bottom of the lake. A dragon or legendary giant serpent called the Naga Seri Gumum by the Jakuns (sometimes referred to as Malaysia’s version of the Loch Ness monster) is said to live in the lake and guard the sunken city. (Star Media Group, 2008, P.A. Van der Helm 2008)

The Jakun people, also called Orang Ulu or Orang Hulu (meaning "people of the upstream"), mostly inhabit various remote geographical areas, from wet swamp to dense tropical rainforest, in the southern Malay Peninsula, in the interior of southwest Pahang and north Johor (COAC, 2012). Most Jakun people adhere to animistic beliefs closely tied to their natural surroundings: they believe that not only people have souls but also animals, plants, and even inanimate objects such as mountains, hills, settlements, rivers, rocks, and caves (Ali, M. et al. 2002). Some remote Jakun settlements have become eco-tourism hotspots, primarily because of the untouched natural landscapes around them, such as Tasik Chini (Lake Chini), which is also known for the local myth and legend of a sunken ancient city of the Khmer Empire guarded by the Naga Seri Gumum serpent. Therefore, the Tasik Chini underwater found soundscape will be produced as the 'unheard' underwater wetland found soundscape (Figures 11a & 11b), later applied in an electroacoustic soundscape production inspired by the location's historical-cultural background.

 

Figure 11a. Field recordings located at Tasik Chini, Pahang Malaysia for 'unheard' underwater found soundscapes made in August 2020

Figure 11b. Field recordings at Tasik Chini, Pahang Malaysia for 'unheard' underwater found soundscapes made in August 2020

To be updated: sound and human (habitual); Orang Asli diaspora, a mix between the Negrito, Proto-Malay, and Senoi groups (cave sound); paddy field (artificial landscape); peat-swamp forest (artificial biosphere, FRIM); and other 'unheard' soundscape narratives.

SoL Movement No 1
 Ancestral Dance: Mangrove Forest Wetland Soundscape
SoL: Ancestral Dance

Ancestral Dance is an electroacoustic soundscape composition and the first movement of the Seed of Life artistic research production. The objective of this work is to celebrate the 'unheard' found soundscape and its eco-cultural identity, from an acoustemological point of view, in an alternative new art form, the electroacoustic soundscape, a hybrid genre (Audio 1.1), while archiving (preserving) the pre-production 'unheard' sonic prints for future reference and appreciation through the Nada Bumi webpage. In line with the Nada Bumi theme, this work accentuates the 'unheard' biophonic and geophonic features of the mangrove forest inhabited by the Seletar, a Proto-Malay indigenous community near Johor Bahru along the southern coastline of Peninsular Malaysia (Audio 1.2).

 

The explosive dispersal of the mangrove seeds biophonically suggests the re-birth of nature, celebrated by other natural beings in an organised, ritualistic, trance-like manner and gravitating toward the centre of the ecotone between the real and unreal soundworld dimensions, which the Ancestral Dance expresses through immersive surround sound, the binaural sound-field. From the 'unheard' biophony featuring the vocalisation of a newly described soniferous mudskipper species, Periophthalmodon septemradiatus, in the Tanjung Piai mangrove forest swamp, to the 'unheard' voice of the Seletar people's mind and language, the work symbolises the threatened eco-culture of both habitat and way of life under insensitive economic development around their landscapes. Moreover, a low, rumbling, rock-like synthesised found sound personifies the local folktale narrative, Batu Belah Batu Bertangkup, the cursed devouring clamp-like giant rock associated with the mudskipper (Ikan Tembakul) inhabiting the mangrove forest swamp. Geophonic material retrieved from the aquatic ambient tone of the Pulau Kukup underwater found soundscapes was abstractly transformed through audio synthesis, resulting in 'biomorphic' sound-transformation gestures that allude to the idea of origin and evolution. The approach, methods, and techniques for materialising the thoughts and expressions of the Ancestral Dance soundworld will be discussed further in detail.

Audio 1.1. The Seed of Life: Ancestral Dance, a binaural* mix electroacoustic soundscape composition (Ver.1) *please use headphone to listen

Tanjung Piai and Pulau Kukup soundscapes final mix for the Ancestral Dance (11:36)

Audio 1.2. A sound layering between two natural soundscape field recordings of the mangrove forest at Pulau Kukup and Tanjung Piai, which served as the 'key' sonic composition for the Seed of Life: Ancestral Dance.

A. Sound Field Recording and Sampling 

The mangrove forest soundscapes and found-sound recordings were made using several stereo sound-field recording techniques, chosen on several key factors: the aesthetic and sonic qualities of the 'featured' sounds available on site, the recording tools available, and recording safety (weather, environment, and location accessibility). The Pulau Kukup and Tanjung Piai mangrove forests have high biodiversity: they provide habitats for many aquatic fish species, serve as important stopover sites for migratory waterbirds, and contain representative, rare, or unique examples of natural or near-natural wetland types within the appropriate biogeographic region, with aquatic life such as mudskippers, mud lobsters, fiddler crabs, and tree-climbing crabs, and non-aquatic life such as snails, birds, cicadas, and crickets, to list a few (Johor National Parks, 2019). Hence, varied biophony with a broad sound-spectrum range, including ultrasound, may be encountered during the sound-field recordings. In addition, existing Pulau Kukup and Tanjung Piai sound recordings, available on internet archives only as video documentaries (amateur and professional), were examined to determine the types of sounds and their surrounding environment and to identify the 'unheard' biophony and geophony.

 

Pulau Kukup and Tanjung Piai are gazetted as part of the Malaysian national park system, and permission to conduct the sound-field recordings was granted by the Johor National Parks Corporation Research and Conservation Department, subject to opening days and visiting hours. The recordings were made on sunny, breezy days from approximately 3.30 pm to 5.30 pm: at Tanjung Piai on 15 June 2020 and at Pulau Kukup on 8 July 2020. Due to Covid-19 movement control, the sessions were kept short in a 'capture and go' mode, with each recording at a specific walkable path limited to between 4 and 12 minutes. Three small-diaphragm microphone models and positions were used simultaneously: 1) an Oktava MK012 matched stereo pair of cardioid microphones in an A-B spaced array, installed on either side of an acoustic barrier; 2) a Zoom XYH-6 stereo microphone in an X-Y coincident array; and 3) a Rode NTG-4 shotgun microphone in a blimp, with all microphone input signals (2+2+1) fed into a battery-operated multi-track audio recorder, the Zoom H6. All microphone sets were placed in line and as near to one another as possible to avoid comb-filtering effects during post-production, caused by 'out of phase' summing of the multi-stereo signals due to the time difference of sound reaching each microphone capsule's diaphragm (Everest, F. A., & Pohlmann, K. C. 2015; Krause, B. 2016). The multitrack recordings were sampled at a 44.1 kHz sampling rate with 24-bit depth audio resolution.
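The comb-filtering concern can be quantified: summing two copies of the same sound that arrive at slightly different times produces notches at odd multiples of half the inverse delay. A small illustrative Python helper (not part of the recording workflow; unit microphone gains are assumed):

```python
import math

def summed_mic_response(freq_hz, delay_s):
    """Magnitude at freq_hz when a unit-gain copy of the signal arriving
    delay_s later is summed with the direct signal: |1 + e^(-j*2*pi*f*tau)|,
    which simplifies to 2*|cos(pi * f * tau)|."""
    return 2.0 * abs(math.cos(math.pi * freq_hz * delay_s))
```

For example, a 1 ms arrival difference (about 0.34 m of extra path at the speed of sound) nulls 500 Hz, 1.5 kHz, 2.5 kHz, and so on, which is why the microphone arrays were aligned as closely as possible.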

"The AB setup provides a pleasant reproduction of the reverberant sound field and provides useful spatial information. This setup takes advantage of the omnidirectional (pressure) microphones' rich low-frequency response. The directional information, however, is slightly less distinctive compared to other setups." (DPA Microphone, 2019)

IMG_20200616_140014_970.jpg

Figure 1.1. A-B stereo microphone recording setup using a self-made 'Jecklin disk'-like unit between two matched cardioid microphones with windshields.

A pilot free-field recording with a stereo pair of A-B cardioid microphones positioned 0.2 to 0.25 metres apart provided a near 'natural-ambient' listening experience on playback, but lacked sound-source localisation cues (directional intensity and clarity) compared with X-Y cardioid microphones angled at 120 degrees (Eargle, J. 2012). The stereo-image intensity gap between the X-Y and A-B positions widened when the A-B cardioid capsules were replaced with omnidirectional capsules, and more unpleasant background noise appeared. However, the A-B cardioid microphones gave a better near 'natural-ambient' listening experience with clearer localisation cues when a self-made shoebox-like acoustic baffle (inspired by the Jecklin disk, originally invented by Jürg Jecklin, serving as an acoustic barrier and filter) was fitted between the A-B microphone pair at 0.20 metres apart (Billingsley, M., & Bartlett, B. 1990; Smith, I.C. 2021). This stereo field-recording rig is known as a Head-Spaced Parallel Barrier Array (Olsen, C.R. 2020). Hence, the A-B cardioid microphones with the shoebox acoustic baffle and the X-Y cardioid microphones were used to manipulate the 'true' stereo-image sound intensity and source-localisation clarity in the studio during post-production (electroacoustic soundscape composition). Even so, other stereo techniques, such as the X-Y Stereo Ambient Arrays Microphone (SAAM) and the Stereo Parabolic Microphone (SPM) by Wiltronics, may be explored further in future recordings to produce a real 'natural-ambient' stereo-field listening experience (Hass, C. 2017; Tagg, J. 2012).

Zoom XYH-6 microphone: Tanjung Piai raw soundscapes recording (00:50)
Oktava MK012 microphone: Tanjung Piai raw soundscapes recording (00:50)

Audio 1.3. Sonic quality comparison* between Zoom XYH-6 and Oktava MK012 microphone raw soundscapes recording snippets  *please use headphone to listen

Three microphone models with slightly different frequency-response characters and sensitivities within the audible range of 20 Hertz (Hz) to 20 kilohertz (kHz) were chosen to produce a varied mixture and signature of sonic coloration with 'tolerable' self-noise (Audio 1.3). Low self-noise from the microphone's electronic circuits is crucial when capturing very low sound-pressure-level (SPL) fluctuations (soft sounds) of ambience and 'small-sounded' sources such as ants; otherwise the self-noise becomes audible and interferes with the recorded source as the microphone input gain is increased. Unwanted sound (noise) from external vibration of the recording rig, such as strong wind hitting the microphone diaphragm or shaking the microphone stand and cable, can be reduced through physical intervention: attaching a high-quality microphone blimp or windshield (also known as a windscreen), pointing the microphone pick-up pattern away from the wind direction, sheltering the rig from the wind, and using a heavy-based microphone stand and suspension mount, to suggest a few (Smith, I. 2021; Krause, B. 2016). Electronic self-noise and other unwanted sounds such as wind noise and electromagnetic (EM) hum can also be reduced or removed through audio filtering such as high-pass, low-pass, and multi-band filters, and through audio 'filter-sculpturing' such as multi-band equalisers and spectral editors (Kromhout, M. J. 2017; Robjohns, H. 2005). However, some discoloration of the 'natural-original' sound will occur, depending on the degree of noise printed onto the recordings and the tools and techniques applied (Audio 1.2 and 1.3).
In addition, 'targeted' audio filtering may be used to 'cosmetically' enhance certain sound colours by increasing or decreasing the amplitudes of targeted frequencies; this was used significantly in the post-production of the mangrove forest recordings for the SoL: Ancestral Dance electroacoustic soundscape composition and for sound archival, including a sound bank and audio-loop collection for future artistic research projects.
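The high-pass filtering used against low-frequency rumble can be illustrated with a first-order RC-style high-pass in plain Python/NumPy (a deliberately simple sketch; the DAW's low-cut filters are much steeper multi-pole designs, and the function name here is hypothetical):

```python
import numpy as np

def highpass(x, cutoff_hz, sr=44100):
    """First-order high-pass (RC) filter: attenuates low-frequency
    rumble such as wind noise while passing content above the cutoff."""
    rc = 1.0 / (2 * np.pi * cutoff_hz)   # RC time constant
    alpha = rc / (rc + 1.0 / sr)         # per-sample feedback coefficient
    y = np.zeros_like(x, dtype=float)
    y[0] = x[0]
    for n in range(1, len(x)):
        y[n] = alpha * (y[n - 1] + x[n] - x[n - 1])
    return y
```

A one-second burst of DC (the extreme case of rumble) decays to essentially zero, while a 5 kHz birdsong-range tone passes almost unchanged through an 80 Hz cutoff.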

 

Furthermore, an omnidirectional hydrophone, the Aquarian Audio H2A, was used to record the mangrove forest underwater soundscapes and found sounds, exploring the 'unheard' aquatic soundscapes and soniferous aquatic life. The single hydrophone signal was captured on the Zoom H6 portable recorder. The aquatic recordings were made during high tide on Pulau Kukup island, with the H2A submerged below a walking path and placed near the mangrove roots. Unwanted sounds picked up by the H2A from cables vibrating in the wind or from footsteps on the walkway were reduced by padding the XLR cable with cloth to prevent direct contact with the walkway floor, keeping the XLR cable short enough to increase its tension, and isolating the recording site approximately 5 metres from possible human activity. Due to Covid-19 movement restrictions, no human visitors were observed during the field recording activities at Pulau Kukup and Tanjung Piai, which helped minimise unwanted sound printed in the recordings.

No audible biophonic components from mudskippers, fish, or other soniferous aquatic life were present during the mangrove forest underwater recordings at Pulau Kukup. However, a mudskipper vocalisation found sound was retrieved through audio archival research from a research paper, 'Acoustic Communication at the Water's Edge: Evolutionary Insights from a Mudskipper' by Polgar, G., Malavasi, S. et al. (2011), in which the 'unheard' mudskipper vocalisations were detected via sound amplification and recorded with a miniature omnidirectional hydrophone dipped into shallow muddy water near the mudskippers' habitat space, while the sampled mudskippers were out of the water feeding competitively, for bio-communication observation in a controlled water-tank environment. From that research, "an analysis of intraspecific variability showed that specific acoustic components of the sampled mudskipper vocalisations may act as tags for individual recognition, further supporting the sounds' communicative value". Following this, the vocalisation of a newly described species from Tanjung Piai, Periophthalmodon septemradiatus, was retrieved from the research project and analysed for acoustical and musical insights (Video 2) from an acoustemological point of view. The analysis reveals a repeated set of steady, low, monotonous tones in regular pulsating beats, sometimes starting with a glottal, speech-like tone (uh-oh oh, oh, oh) and closing with an accented high-tone cadence (Uh!) derived from a low-pitch slide, progressing similarly to a colotomic structural cycle.
Nevertheless, the mudskipper vocalisation is not audible in our natural listening state, and many field recordists were surprised to learn that the mudskipper is a soniferous being (able to produce sound) when this soundscape artistic project was presented at the Acoustic Commons Creative Technical Workshop #2 in January 2021.

Due to limited resources and Covid-19 pandemic restrictions, the found sound of the Seletar Proto-Malay people's 'voices' was retrieved from found video footage of an interview session with Seletar villagers, delegated to personal associates in Johor, Malaysia. Based on the recordings (Video 1), the Seletar language (a Malayic language) is very similar to Malay and may count as a dialect of it through cultural assimilation (G. Benjamin, 2019). The speaking population is unknown but likely numbers a few thousand. Few in-depth or serious measures to conserve knowledge of the Seletar language are evident, and for these reasons the language is considered severely endangered by UNESCO and listed in the UNESCO Atlas of the World's Languages in Danger (UNESCO, 2017).

Video 2. Mudskipper vocalisation during competitive feeding.

Species: Periophthalmodon septemradiatus

Location: Mangrove Forest/ Beach Kuala Selangor, Malaysia

 

Transcribed by Ainolnaim (ig: anllume)

Audio recording by Gianluca Polgar et al.

Video excerpt (illustration only) https://www.youtube.com/watch?v=AWLhdPpPMyQ

 

Transcription tools:

iAnalysis (Sonogram/ Oscillogram/ Intensity)

Ableton (Wave)

Sibelius (Musical notation)

Audio recording tools: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3125184/

B. Audio Processing, Sound Design & Synthesis 

 

The collection of mangrove forest soundscape field recordings was auditioned in stereo multi-tracks, the ZOOM0002_LR audio file from the X-Y Zoom XYH-6 microphone and the ZOOM002_Tr12 audio file from the A-B Oktava MK012, to identify the raw sonic-prints, and then filtered and sculpted towards a 'clean and clear' soundscape sonic identity (Figure 1.2). An internal filter-sculpturing tool, the Ableton eight-band equaliser (EQ-8), was applied by activating the first band as a high-pass filter with a steep low-cut slope (48 dB) to remove noise, particularly low-rumbling wind noise, and activating bands two to five with a combination of 'bell' settings to attenuate printed unwanted sound, such as hum from the genset power unit at the nearby mangrove forest park office. Monitoring on open-back studio-quality headphones (Beyerdynamic DT-990 Pro), a make-up EQ output gain of 12 decibels (dB) was applied to audibly 'zoom in' on the soundscape sonic-prints. Noise attenuation was then refined by adjusting the 'scale' parameter, which increases, decreases, or inverts the bands' gain globally (Ableton Tutorial), and by narrowing the band slope widths (Q values) to further isolate the noise frequencies and minimise discoloration of the 'natural-original' soundscape.
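The 'bell' bands used to notch out the genset hum correspond to a standard peaking biquad. Below is a sketch of the coefficient computation following the widely used Audio EQ Cookbook formulas by Robert Bristow-Johnson (the function name is illustrative; EQ-8's exact filter design is Ableton's own):

```python
import math

def peaking_eq_coeffs(f0_hz, gain_db, q, sr=44100):
    """Biquad 'bell' coefficients (RBJ Audio EQ Cookbook): boosts or
    cuts a band around f0_hz, e.g. attenuating a generator hum."""
    amp = 10 ** (gain_db / 40.0)          # amplitude factor A
    w0 = 2 * math.pi * f0_hz / sr         # normalised centre frequency
    alpha = math.sin(w0) / (2 * q)        # bandwidth term
    b = [1 + alpha * amp, -2 * math.cos(w0), 1 - alpha * amp]
    a = [1 + alpha / amp, -2 * math.cos(w0), 1 - alpha / amp]
    # normalise so that a[0] == 1
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]
```

By construction, the filter's gain at f0_hz equals gain_db exactly, and a 0 dB setting yields a unity-gain (transparent) band.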

Screen Shot 2021-03-08 at 5.03.29 PM.png

Figure 1.2. Audio filter-sculpturing using the Ableton eight-band equaliser (EQ-8) with audio bypass and audition enabled. A high-pass filter removes wind noise by cutting frequencies at bands 1 to 2.5, and the humming noise is 'sculpted' at bands 4 and 5 by attenuating the harsh noise fundamental and its harmonics.

The ZOOM0002_LR track was duplicated and layered below the original with re-equalised settings, slightly different filter-sculpturing shapes from the original track, to highlight the biophonic frequencies printed across the multiple bands, such as birds, cicadas, and monkeys. The same approach was applied to the rest of the soundscapes, with different filter-sculpturing shapes for each microphone model owing to the microphones' individual frequency-response characters. Throughout the process, the raw and modified recordings were compared to minimise discoloration of the 'natural-original' soundscapes and to induce the 'natural-original' on-site listening experience.

"Multiband compressors and dynamic EQs are some of the most useful tools available to audio engineers. They allow for dynamic control of defined frequency ranges, providing some of the functionality and benefits of both EQs and compressors. Their ability to correct “problem” frequencies in a detailed and generally transparent way makes them extremely helpful for balancing a single sound or a full mix." (Brown, G. 2020).

Moving forward, a hybrid filter-sculpturing and compressor plugin, a dynamic equaliser such as the iZotope Ozone Dynamic EQ, was adopted to 'tame' dynamic spikes in specific frequency bands, in this case the snap-like sounds of exploding mangrove seeds occurring along the Pulau Kukup ZOOM0004 stereo audio track (Figure 1.3). A multiband compressor offers individual compression on each of three to four frequency bands (dividing 20 Hz-20 kHz), whereas a dynamic equaliser offers compression at targeted multi-band frequencies with eight-band equaliser features, such as band slope-width adjustment (Q values) and band filter shapes (bell, high shelf, low shelf, notch, and more), giving great detail and control in crafting the audio wave. The ZOOM0004 audio was processed through several Ableton EQ-8 instances to remove unwanted sound and to highlight the biophony and geophony (the mangrove seed explosions) throughout the track, and then fed through the iZotope Ozone dynamic equaliser to push down the dynamic level of the targeted bandwidth beyond the dynamic threshold. The peculiar, 'unheard', energetic, random, organic rhythms and unique sonic texture of the mangrove seed explosions will be re-synthesised and used in SoL: Ancestral Dance (Audio 1.4).
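The 'push down beyond the threshold' behaviour of the dynamic equaliser follows the standard static compression curve. Here is a minimal sketch of that input/output law (illustrative only; the detector, attack, and release stages of a real dynamic EQ are omitted, and the function name is hypothetical):

```python
def compressor_gain_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Static downward-compression curve: band levels above the
    threshold are reduced so that `ratio` dB of input above the
    threshold yields only 1 dB of output above it. Returns the gain
    (in dB, zero or negative) to apply to the band."""
    if level_db <= threshold_db:
        return 0.0  # below threshold: unity gain, band untouched
    compressed_level = threshold_db + (level_db - threshold_db) / ratio
    return compressed_level - level_db
```

With a -20 dB threshold and a 4:1 ratio, a band peaking at -8 dB is pulled down by 9 dB, taming the mangrove-seed snaps without touching material below the threshold.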

 

Screen Shot 2021-03-11 at 3.31.47 PM.png

Figure 1.3. The iZotope Ozone dynamic equaliser 'taming' a dynamic spike at a targeted frequency bandwidth.

Pulau Kukup Clean Mix (01:01)
Pulau Kukup Raw Mix (01:01)

Audio 1.4. Sonic quality comparison* between the Pulau Kukup clean and raw soundscape recording mix snippets, resulting from the Ableton EQ-8 and iZotope Ozone Dynamic EQ audio processors.

*please use headphone to listen

The Pulau Kukup (P.Kukup) and Tanjung Piai (T.Piai) clean mix, which works as the sonic 'key' or sound theme for SoL: Ancestral Dance, was further re-sampled and processed for timbral manipulation via audio synthesis, yielding a collection of found sounds with various sonic-spectrum ranges, timbres, and textures. A sample-based subtractive synthesis technique was applied to the mangrove forest clean mix to produce a found sound with a new timbre and a contrasting low spectrum, with a hint of the 'foggy' upper-band biophony to preserve some of the sonic 'identity' and origin of the soundscape. This was done by processing the mixed audio through multiple internal Ableton processing chains: EQ-8 to subtract multiple tones (frequency filtering) for timbral variation and contrast. In addition, a granular technique, Ableton Grain Delay (delayed granular audio) in a low-feedback setting, was used after the third processing chain (Figure 1.4) to produce a low-feedback rhythmic texture from the mangrove seed explosion sounds, gently articulated further through the Ableton Saturator (a waveshaping effect); the result mimics a chanting drone while recalling a subtle (EQ-8) primitive wooden double-barrel drum rhythm.

Screen Shot 2021-03-11 at 4.32.36 PM.png
Screen Shot 2021-03-11 at 4.32.54 PM.png
Screen Shot 2021-03-11 at 4.33.34 PM.png

Figure 1.4. T.Piai & P.Kukup soundscape sample-based subtractive and granular synthesis through the multiple audio signal-processing chains above (the audio signal flows left to right, from the top chain to the bottom).

T.Piai & P.Kukup Soundscape Sample-based Subtractive Synthesis MixArtist Name
00:00 / 00:47

Audio 1.5. T.Piai & P.Kukup soundscape sample-based subtractive synthesis mix. *Please use headphones to listen.

Other found sounds collected from the field recordings and audio archive, such as mudskipper vocalisations, exploding mangrove seeds, pestle and mortar, muted string fret glissandi, and more, were used in granular synthesis and wavetable synthesis to create new sound textures and timbres that symbolise and recall the eco-cultural narrative keywords: trance-like, ritualistic, bewildering, real and unreal worlds, unholy, wonder, threatened, and endangered, personifying the local folktale, the Seletar people's habits, and the natural landscape they inhabit (Audio 1.6). Granular synthesis is an audio synthesis process that slices an audio wave into very small (short) 'grains' (typically between 1 and 100 milliseconds), and the grains (as a sound source) are repeatedly organised in sequence to form a complete, audible waveform. According to Brown (2019), "after creating a new sequence of grains, volume cross-fades will be applied to blend from one grain to the next in a process called 'smoothing'. The shape and length of the cross-fades (called the 'window') have important roles in determining the tone of the resulting sound." In contrast, wavetable synthesis uses an audible one-cycle period (fixed length) of an audio waveform as a sound source; various waveform shapes (sine, triangle, or anything else), each one cycle long, are listed in a table array and then read back periodically (back and forth) to form a new audio waveform (Figure 1.5) (Roads 2004).

grain.png

Figure 1.5. Granular synthesis and Wavetable synthesis concept
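The grain-slicing and 'smoothing' process described above can be sketched in a few lines. This is an illustrative implementation under assumptions of my own (a Hann window as the cross-fade 'window', grains cut at random source positions and overlap-added), not the code of any Ableton device:

```python
import numpy as np

def granulate(source, grain_len=2048, hop=512, seed=0):
    """Minimal granular synthesis sketch: grains are cut at random
    positions, shaped by a Hann window (Brown's 'smoothing'), and
    overlap-added into a new output sequence."""
    rng = np.random.default_rng(seed)
    window = np.hanning(grain_len)
    out = np.zeros(len(source) + grain_len)
    for start in range(0, len(source) - grain_len, hop):
        # Random grain position, akin to 'Flux'-style slice randomisation.
        pos = rng.integers(0, len(source) - grain_len)
        out[start:start + grain_len] += source[pos:pos + grain_len] * window
    return out[:len(source)]

sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t)  # stand-in for a found sound
cloud = granulate(tone)
```

Shorter grains and larger position jitter push the result further from the source, which is the texture-generating behaviour exploited in the piece.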

"The Grain Delay effect slices the input signal into tiny particles (called ”grains”) that are then individually delayed and can also have different pitches compared to the original signal source. Randomizing pitch and delay time can create complex masses of sound and rhythm that seem to bear little relationship to the source. This can be very useful in creating new sounds and textures" (DeSantis, D. et al. 2019).

In the Ancestral Dance found-soundscape composition, Ableton Grain Delay and Granulator II (an external Max for Live plugin) are applied by layering multiple grains of found sounds (Figures 1.5a & 1.5b) and combining the grains periodically in graintable form, a concept similar to wavetable synthesis (Figure 1.6) (Audio 1.7). Grain slice position and size were set at random (Flux); however, multiple parameters such as grain filters, pitch, delay beat division, time jitter (randomisation), and more are modulated over time using automation.

Screen Shot 2021-03-27 at 3.43.52 PM.png
Screen Shot 2021-03-27 at 3.44.03 PM.png

Figure 1.5a. Ableton Granulator II, a granular synthesis audio processor with grain and filter settings, developed by Robert Henke.

Screen Shot 2021-03-27 at 3.48.35 PM.png

Figure 1.5b. Ableton Grain Delay, a granular synthesis audio processor that slices incoming audio into very small chunks of fixed size, called grains, and emits each grain after a delay.

Screen Shot 2021-03-27 at 4.21.32 PM.png

Figure 1.6. Ableton Wavetable synthesis using preset noise waveforms, to be layered with granular synthesised found sounds
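The single-cycle reading principle behind wavetable synthesis (Roads 2004) can be sketched as follows. This is a generic phase-accumulator oscillator of my own, not the Ableton Wavetable device; the table here holds one sine cycle, but any stored waveform shape works the same way:

```python
import numpy as np

TABLE_SIZE = 1024
# One fixed-length waveform cycle stored in the table (here, a sine).
table = np.sin(2 * np.pi * np.arange(TABLE_SIZE) / TABLE_SIZE)

def wavetable_osc(freq, dur, sr=44100):
    """Read the stored cycle repeatedly at the desired pitch using a
    phase accumulator with linear interpolation between table entries."""
    n = int(dur * sr)
    phase = (freq * TABLE_SIZE / sr) * np.arange(n) % TABLE_SIZE
    i0 = phase.astype(int)
    i1 = (i0 + 1) % TABLE_SIZE
    frac = phase - i0
    return table[i0] * (1 - frac) + table[i1] * frac

note = wavetable_osc(220.0, 0.5)  # half a second at 220 Hz
```

Morphing between several stored shapes over time (reading 'back and forth' through a table array) is what gives wavetable synthesis its evolving timbres.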

Audio 1.6. Multiple granular-synthesised sounds were (re-)synthesised by chaining them into other audio signal processors to produce new sonic textures and timbres.

Audio 1.7. Wavetable-synthesised sounds were (re-)synthesised by chaining them into other audio signal processors and layered with sample-based synthesised sounds to produce new sonic textures and timbres.

Ableton Simpler, a powerful hybrid synthesiser built into Ableton, is also used in Ancestral Dance. Simpler performs sample-based synthesis, whereby recorded sounds are used as the sound source (sound generator) for the audio synthesis process. In Simpler, a specific part of the found sound's audio wave can be isolated and re-sampled within a resizeable loop window. The resulting 're-found' sound can then be triggered according to the piano-roll note range of the sample mapping (Figure 1.8). The loop range (blue) can be narrowed to grain size, and multisampling can produce layers of grain sounds across the piano-roll range. As with Granulator II, the time-based position of the loop window can be placed freely anywhere along the found sound's audio wave to produce various rich re-synthesised found-sound textures and timbres. Moreover, time-based jittering (randomisation) of the parameters can be achieved through automation or a random-generator patch in Pure Data via a virtual MIDI send.

Screen Shot 2021-03-27 at 6.43.00 PM.png
Screen Shot 2021-03-27 at 6.40.47 PM.png

Figure 1.8. Ableton Simpler, a sample-based synthesiser with a granular synthesis approach, whereby a found sound file (mudskipper) is re-sampled within the loop window (blue) and mapped onto a piano roll by retuning the looped re-found sound to a set of note pitch frequencies (green horizontal lines).
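The re-pitching that happens when a Simpler loop is mapped across the piano roll can be sketched as playback-rate resampling. This is an assumed, simplified model (a sine stand-in for the found sound, a hypothetical `repitch` helper, linear interpolation, no time-stretching), not Simpler's internal engine:

```python
import numpy as np

sr = 44100
# Stand-in for a found sound (e.g. a mudskipper vocalisation snippet).
found = np.sin(2 * np.pi * 330 * np.arange(sr) / sr)

# Isolate a 'loop window' from within the found sound.
loop = found[1000:1000 + 4096]

def repitch(segment, semitones):
    """Resample by linear interpolation; a playback rate of 2**(st/12)
    shifts pitch by st semitones (and shortens/lengthens the buffer)."""
    rate = 2 ** (semitones / 12)
    idx = np.arange(0, len(segment) - 1, rate)
    i0 = idx.astype(int)
    frac = idx - i0
    return segment[i0] * (1 - frac) + segment[i0 + 1] * frac

up_fifth = repitch(loop, 7)  # seven semitones up -> shorter buffer
```

Playing the same window at many rates at once is what produces the layered 'multisample' texture across the keyboard range.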

C. Electroacoustic 'Symphony-ing' and Soundscapes 'Abstracting'

 

 

To be updated: symphony - sounds together

Need to re-clear the ideas, terms, definitions, and meanings!

To be updated: ecotone, a region of transition between two biological communities > compositional concept (metaphor): a soundworld region of transition between natural soundscapes and found sound. Seed of Life: transition/morph. What is a soundworld? A sound object?

From stereo-field recordings to Binaural mix - virtual space-area.

Illuminating the organised natural flora and fauna sounds of the Tanjung Piai Mangrove Forest soundscape, Johor, Malaysia, recorded in July 2020.

 

Transcription Tools:

iAnalysis (Spectrogram)

Sibelius (Music notation)

Ableton (Wave audio)

Sound Field Recording Tools:

Zoom H6 Recorder

Zoom H6 Stereo X-Y Microphone

Oktava Stereo Pair Microphones with Jecklin Disc

Aquarian Audio Hydrophone Microphone

SoL Movement No. 2
The Travellers: Freshwater Lake Wetland Soundscape
SoL: The Travellers

To be updated: underwater recording techniques.

SoL Movement No. 3
 The Black Forest: World-Wide-Forest Soundscape
 
SoL: The Black Forest

To be updated: cave tone

SoL Movement No. 4
Rest and Shelter: Limestone Cave Soundscape
 
SoL: Shelter & Rest

To be updated: found sound recording (boom mic)

SoL Movement No. 5
 Harvesting the Spirit of Paddy: Rice Paddy Field Soundscape
 
SoL: H. Spirit of Paddy

To be updated: found sound recording (boom mic)

 

 

References:

Ableton. (n.d.). Blog tutorial: Make Most of EQ-Eight. Retrieved from

https://www.ableton.com/en/blog/make-most-eq-eight/

Acoustic ecology: Soundscape links. (n.d.). Retrieved Jan 03, 2020, from

https://aeinews.org/aeiarchive/soundscapelinks.html

Adriaensen, F. (2007, March). A tetrahedral microphone processor for ambisonic recording. In Proc. of the 5th

Int. Linux Audio Conference (LAC07), Berlin, Germany (pp. 64-69). Retrieved from https://kokkinizita.linuxaudio.org/papers/tetraproc.pdf

Ali, M. (2002). Singapore's Orang Seletar, Orang Kallang, and Orang Selat. Tribal Communities in

the Malay World, 273.

Benjamin, G. (2019). ‘Why you should study Aslian languages' in Research Mosaics of Language Studies

in Asia: Differences and Diversity. Salasiah Che Lah & Rita Abdul Rahman Ramakrishna (eds). Penang: Penerbit Universiti Sains Malaysia, pp. 8–60. [ISBN 9789674613693.]. Retrieved from https://d1wqtxts1xzle7.cloudfront.net/65583767/Benjamin_Why_you_should_study_Aslian_languages.pdf?1612270076=&response-content-disposition=inline%3B+filename%3DWhy_you_should_study_Aslian_languages.pdf&Expires=1616842878&Signature=LDPEJisd5DD1Md1bSYxv0gXWMW0H1-oMqBAQPep0xH6QJmAhwlDPjxGCiStsnZXIQKw8OXCYIFhUrubHL4YgiKemzZ6iF0BL2XxPmJ~1Cam14vfACQr9Etau-ZBw263r54qCEJoxGoROx5Mtjjnib0cQuBrr-Y~TaGHZ4FvBeiiZT9tOnIfGxb0HV9HbzKPGynvXC9hCeFUiPX4VPbdJ~gvKnrYELJMAdJ2xOJ5VVYe5fGoCGUh9pp7IpZ0HPBNwK9v6asrUAALpSu7ARc~OvCL1~UC4kGr7wjvF1zZSRCK3ohk5jTYyY~NLVu4a28QpavsIj5x2SznBCSC-hRtBRw__&Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA

Billingsley, M., & Bartlett, B. (1990). Practical field recording applications: an improved stereo

microphone array using boundary technology. Journal of the Audio Engineering Society, 38(7/8), 553-565.

Brown, G. (2020). Multiband Compressors vs. Dynamic EQs: Differences and Uses. Retrieved

from https://www.izotope.com/en/learn/multiband-compressors-vs-dynamic-eqs.html

Brown, G. (2019). The Basics of Granular Synthesis.  Retrieved from

https://www.izotope.com/en/learn/the-basics-of-granular-synthesis.html

 

Brosius, C., & Polit, K. M. (Eds.). (2020). Ritual, heritage and identity: The politics of culture and

performance in a globalised world. Taylor & Francis.

Campbell, I. (2017). John Cage, Gilles Deleuze, and the Idea of Sound. Parallax, 23(3), 361-378. Retrieved

from https://www.tandfonline.com/doi/abs/10.1080/13534645.2017.1343785

 

Carlier, V. (2020). Panning types. FLUX. Retrieved from https://desk.flux.audio/hc/en-

us/articles/360008231154-Panning-types

Collins, N., Schedel, M., & Wilson, S. (2013). Electronic music (Ser. Cambridge introductions to music).

Cambridge University Press. Retrieved from https://www-cambridge-org.bris.idm.oclc.org/core/books/electronic-music/4FD04D2A538AB21D7504B9CEA054DB4F

Chion, M. (2019). Audio-vision: sound on screen. Columbia University Press.

Deng, Z., Kang, J., & Wang, D. (2015). 'Soundscape composition as a distinct music genre'. In: R.

Timmers, N. Dibben, Z. Eitan, R. Granot, T. Metcalfe, A. Schiavio, & V. Williamson (Eds.). Proceedings of ICMEM 2015. International Conference on the Multimodal Experience of Music. Sheffield: The Digital Humanities Institute, 2015. ISBN 978-0-9571022-4-8. Available online at: <https://www.dhi.ac.uk/openbook/chapter/ICMEM2015-Deng>

DeSantis, D., Hughes, M., Gallagher, I., Haywood, K., Knudsen, R., Behles, G., Rang, J., Henke, R., Slama,

T. (2019). Ableton Reference Manual Version 10. Retrieved from https://www.ableton.com/en/manual/live-audio-effect-reference/

DPA. (2020). Immersive Sound/ Object-based Audio - and Microphones. Retrieved

from https://www.dpamicrophones.com/mic-university/immersive-sound-object-based-audio-and-microphones

DPA. (2019). Stereo Recording Techniques and Setup. https://www.dpamicrophones.com/mic-

university/stereo-recording-techniques-and-setups

Everest, F. A., & Pohlmann, K. C. (2015). Master handbook of acoustics. McGraw-Hill Education.

Farina, A. (2013). Soundscape ecology: principles, patterns, methods and applications. Springer Science

& Business Media.

Farina, A., Armelloni, E., & Chiesi, L. (2011). Experimental evaluation of the performances of a new

pressure-velocity 3D probe based on the ambisonics theory. In 4th international conference and exhibition on Underwater Acoustic Measurements: Technologies and Results. Retrieved from https://www.researchgate.net/profile/Enrico-Armelloni-2/publication/228962451_Experimental_evaluation_of_the_performanceS_of_a_new_pressure-velocity_3D_probe_based_on_the_Ambisonics_theory/links/09e415093da4b79670000000/Experimental-evaluation-of-the-performanceS-of-a-new-pressure-velocity-3D-probe-based-on-the-Ambisonics-theory.pdf

Floss Manuals (1991). Pure Data (Version 2). Retrieved from http://write.flossmanuals.net/pure-

data/introduction2/

Geographical. (2016, May). Malaysia Biodiversity Information System (MyBIS). Retrieved March 22, 2021,

from https://www.mybis.gov.my/art/143.

Godøy R.I. (2018) Sonic Object Cognition. In: Bader R. (eds) Springer Handbook of Systematic

Musicology. Springer Handbooks. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-55004-5_35 

Gracey and Associates. Acoustic Glossary. Retrieved from https://www.acoustic-glossary.co.uk/sound-

fields.htm#free-sound-field

Hass, C. (2017). Microphones for nature recording 1: Types and arrays. Retrieved

from https://www.wildmountainechoes.com/equipment/choosing-microphones-2/

Hendy, D. (2013). Noise: A human history of sound and listening. Profile books.

Hug, L. A., Baker, B. J., Anantharaman, K., Brown, C. T., Probst, A. J., Castelle, C. J., ... & Banfield, J. F.

(2016). A new view of the tree of life. Nature microbiology, 1(5), 1-6.

Hyperphysics. (2016). Sound. Georgia State University. Retrieved from http://hyperphysics.phy-

astr.gsu.edu/hbase/Sound/soucon.html#soucon

Jackson, A. (1968). Sound and ritual. Man, 3(2), 293-299. 

 

JKOA. (2021). Suku Kaum (Ethnicity). Jabatan Kemajuan Orang Asli Malaysia (The Department of Indigenous

People Development). Retrieved from https://www.jakoa.gov.my/en/suku-kaum/

Johor National Parks. (2019, July 3). RAMSAR Sites. Retrieved from

https://www.johornationalparks.gov.my/v3/ramsar-site/

Krause, B. (2016). Wild soundscapes: discovering the voice of the natural world. Yale University Press.

Kreidler, J. (2009). Programming Music in Pd. Retrieved from http://www.pd-

tutorial.com/english/index.html

Kromhout, M. J. (2017). Noise resonance: Technological sound reproduction and the logic of filtering.

Universiteit van Amsterdam.

Malham, D. G., & Myatt, A. (1995). 3-D sound spatialization using ambisonic techniques. Computer music

journal, 19(4), 58-70. Retrieved from https://www.jstor.org/stable/3680991?seq=1#metadata_info_tab_contents

Manning, P. (2013). Electronic and computer music (Fourth). Oxford University Press. Retrieved

from https://oxford-universitypressscholarship-com.bris.idm.oclc.org/view/10.1093/acprof:oso/9780199746392.001.0001/acprof-9780199746392

Marentakis, G., Zotter, F., & Frank, M. (2014). Vector-base and ambisonic amplitude panning: A

comparison using pop, classical, and contemporary spatial music. Acta Acustica united with Acustica, 100(5), 945-955. Retrieved from http://it.hiof.no/~georgiom/publications/papers/marentakis_2014c.pdf

Maran, T. (2017). Mimicry and meaning: Structure and semiotics of biological mimicry. Springer

International Publishing.

McQuinn, A. (2020). Becoming Audible: Sounding Animality in Performance (Vol. 18). Penn State Press.

https://www.jstor.org/stable/10.5325/j.ctv1c5csfk 

Mohd Sam, S.A. & Seow, T. W. (2013). Practice cultural of orang asli Jakun at Kampung Peta. Retrieved

from https://core.ac.uk/download/pdf/42954306.pdf

Montgomery, W. (2018). Writing the field recording : sound, word, environment. (S. Benson, Ed.).

Edinburgh University Press. Retrieved from https://www-jstor-org.bris.idm.oclc.org/stable/10.3366/j.ctv7h0w3f

Mousavi Haji, S. R., Mousavi, S. M., & Aryamanesh, S. (2020). Reflection and Analysis of the Tree of Life

and its Transformation into the Flower of Life in the Near East. The International Journal of Humanities, 27(2), 70-89.

Norhalifah, H. K., Syaza, F. H., Chambers, G. K., & Edinur, H. A. (2016). The genetic history of Peninsular

Malaysia. Gene, 586(1), 129-135. Retrieved from https://www.sciencedirect.com/science/article/abs/pii/S0378111916302566

Olson, C.R. (2020). Stereo Microphone Arrays for Ambient Field Recording. Retrieved

from https://www.trackseventeen.com/mic_rigs.html

Oktava Microphone. (2016). MK-012 cardioid capsule MSP. Retrieved from https://www.oktava-

shop.com/MK- 012-100-Series-modular-system/Capsules/Oktava-MK-012-cardioid-capsule-MSP.html
 

P.A. Van der Helm. (2008). Stories of the Lake or Cerita Dongeng Tasik Chini. Retrieved

from https://ppw.kuleuven.be/apps/petervanderhelm/srigumum/doc/stories.html#legenda_naga_tasik_chini

Polgar, G., Malavasi, S., Cipolato, G., Georgalas, V., Clack, J. A., & Torricelli, P. (2011). Acoustic communication at

the water's edge: evolutionary insights from a mudskipper. PloS one, 6(6), e21434. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3125184/ 

Popper, A. N., & Fay, R. R. (2005). Sound source localization (Ser. Springer handbook of auditory research,

v. 25). Springer.

Pulkki, V. (1997). Virtual sound source positioning using vector base amplitude panning. Journal of the

audio engineering society, 45(6), 456-466. Retrieved from http://lib.tkk.fi/Diss/2001/isbn9512255324/article1.pdf

Roads, C. (2004). Microsound. MIT press.

Robjohns, H (2005). Sound on Sound: Q. What's the difference between filtering and EQ?

retrieved from https://www.soundonsound.com/sound-advice/q-whats-difference-between-filtering-and-eq

Rosli, M. H. W., Cabrera, A., Wright, M., & Roads, C. (2015). Granular model of multidimensional spatial

sonification. Sound and Music Computing, SMC. Retrieved from https://static1.squarespace.com/static/5ad03308fcf7fd547b82eaf7/t/5b75a255352f53388d8ef793/1534435933359/EvolutionGranSynth_9Jun06.pdf

Rumsey, F., & McCormick, T. (2014). Sound and recording : applications and theory (7th ed.). Focal

Press. Retrieved from https://ebookcentral.proquest.com/lib/bristol/detail.action?docID=1638630

Schafer, R. M. (1993). The soundscape: Our sonic environment and the tuning of the world. Simon and

Schuster.

Simensen, T., Halvorsen, R., & Erikstad, L. (2018). Methods for landscape characterisation and mapping: A

systematic review. Land use policy, 75, 557-569.

Sitt, P. (2017). What is... Higher Order Ambisonic. Retrieved from https://www.ssa-

plugins.com/blog/2017/07/18/what-is-higher-order-ambisonics/

Smith, I. C. (2021). Field Recording A Technical Introduction. Retrieved

from https://moonblink.info/FieldRecording?fbclid=IwAR0njHkLCbnHw16lGMKUxe7xxiSacFQZLDQLwuK3MteLJ9_V6c0STkN7F1g

Star Media Group. (2008). Lake of myth and legend. Retrieved

from https://www.thestar.com.my/news/nation/2008/12/15/lake-of-myth-and-legend

Tagg, J. (2012, October). A Microphone Technique for Improved Stereo Image, Spatial Realism, and

Mixing Flexibility: STAGG (Stereo Technique for Augmented Ambience Gradient). In Audio Engineering Society Convention 133. Audio Engineering Society.

Toh, T. (2020). Malaysian graphic novel 'Batu Belah' gives a familiar folk tale a dark spin. Retrieved

from https://www.thestar.com.my/lifestyle/culture/2020/05/17/malaysian-graphic-novel-039batu-belah039-gives-a-familiar-folk-tale-a-dark-spin

Truax, B. (2008). Soundscape composition as global music: Electroacoustic music as soundscape.

Organised Sound, 13(2), 103.

Vail, M. (2014). The synthesizer : a comprehensive guide to understanding, programming, playing, and

recording the ultimate electronic music instrument. Oxford University Press, USA. http://public.ebookcentral.proquest.com/choice/publicfullrecord.aspx?p=1602530.

Vella, R., Drummond, J., White, G., & Marcellino, R. (n.d.). Creative Thinking with Sound and Textures.

Retrieved on 2nd March 2021 from http://www.squelch.com.au/creativethinking/fmb.html

 

Vennerød, J. (2014). Binaural Reproduction of Higher Order Ambisonics. Norwegian University of Science

and Technology, Department of Electronics and Telecommunications. Retrieved from https://core.ac.uk/download/pdf/30806443.pdf

Waves. (2017) Ambisonics Explained: A Guide for Sound Engineers. Retrieved

from https://www.waves.com/ambisonics-explained-guide-for-sound-engineers

 

Westerkamp, H. (2002). Linking soundscape composition and acoustic ecology. Organised Sound, 7(1),

51-56.

Willis, K., & McElwain, J. (2014). The evolution of plants. Oxford University Press. 

Zainuddin, Z. (Ed.). (2012). Genetic and Dental Profiles of Orang Asli of Peninsular Malaysia (Penerbit

USM). Penerbit USM.

bottom of page