
Ssynth: a Real Time Additive Synthesizer with Flexible Control

Vincent Verfaille, Julien Boissinot, Philippe Depalle, Marcelo Wanderley
Sound Processing and Control Lab / Input Device and Music Interaction Lab

CIRMMT: Centre for Interdisciplinary Research in Music Media and Technology

Abstract

This research project concerns the simulation of interpretation in live performance using digital instruments. It addresses mapping strategies between gestural controls and synthesizer parameters. It requires the design and development of a real time additive synthesizer with flexible control, allowing for morphing, interpolating and extrapolating instrumental notes from a sound parameters database. We present the synthesizer, its additive and spectral envelope control units, and the morphing they allow for.


The combination of heavily computational synthesis techniques (e.g. additive synthesis) with gestural control devices in performance situations offers the sound quality of offline applications together with the control quality of real time applications. However, it requires considering synthesis from the control viewpoint, in terms of design and implementation. The Ssynth additive synthesizer includes flexible control of additive and source-filter models of sound.


Figure 2: Mapping strategy, involving an abstract parameter layer (left). The first mapping stage (right) simulates coupling between physical parameters (Wanderley et al., 1998).

Figure 6.13: A possible pedagogical use of mapping strategies.

A beginner could make use of the simpler mapping in order to concentrate on fingering, for instance. In this case the performance of the ESCHER instrument would be similar to playing a recorder. Another user might prefer to concentrate on a different aspect of the instrument's behavior, and the mapping layer could then be adapted accordingly: for instance, removing the timbral variation tied to the embouchure in order to concentrate on a correct embouchure, while keeping the virtual flow effect. An expert player, on the other hand, would benefit from the complex mapping, since it reproduces the behavior of the acoustic instrument.

Controlling additive synthesis

The gestural control of additive synthesis from a sound parameters database requires morphing (Depalle, Garcia and Rodet, 1995; Haken, Tellman and Wolfe, 1998) to infer new sounds, since not all sounds exist in the database in terms of fundamental frequency, intensity and dynamics. Specific morphing strategies are derived from (Tellman, Haken and Holloway, 1995).
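As a concrete illustration (a sketch in the spirit of the paper, not Ssynth's actual code; the `Frame` type and `morph_frames` function are hypothetical names), the simplest morphing step is a linear interpolation of matched partial parameters between two database frames:

```c
#include <stddef.h>

/* One analysis frame: amplitudes and frequencies of n partials. */
typedef struct {
    size_t n;           /* number of partials        */
    const double *amp;  /* linear amplitudes         */
    const double *freq; /* partial frequencies in Hz */
} Frame;

/* Linear interpolation of matched partials between two frames:
 * alpha = 0 yields frame a, alpha = 1 yields frame b.  Full morphing
 * (Tellman, Haken and Holloway, 1995) also matches partials when the
 * two sounds have unequal numbers of them; here we assume both frames
 * expose the same n partials. */
static void morph_frames(const Frame *a, const Frame *b, double alpha,
                         double *amp_out, double *freq_out, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        amp_out[i]  = (1.0 - alpha) * a->amp[i]  + alpha * b->amp[i];
        freq_out[i] = (1.0 - alpha) * a->freq[i] + alpha * b->freq[i];
    }
}
```

Interpolating within one instrument's mesh and across two instruments' meshes with the same scheme gives, respectively, interpolation/extrapolation and morphing.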

6.4 Conclusions

This chapter presented ESCHER, a prototyping system for human-computer interaction in the context of musical performance using IRCAM's jMax real-time processing environment. The system allows the definition of sound synthesis control algorithms providing instrument-like behavior in a modular way. ESCHER was designed to provide an intuitive control of sound synthesis models in real time. A strong accent is given to an easy modeling of the relation between a particular human action and the produced sound, permitting an easy adaptation to a potentially wide range of different controllers and/or different sound synthesis methods.

A practical example of an acoustic single reed instrument was presented, where various mapping strategies were used to experiment with the instrument's response to performer actions. A detailed description of this implementation and comments on its use and on pedagogical outcomes have also been discussed.

[Figure 3 diagram: parametric and table models of the spectral envelope (correlation function r(k), cepstral coefficients c(k), formants F(j), Bw(j), A(j), AR filter coefficients a(k), reflection coefficients k(j), envelope table E(f)), linked by conversion modules such as poly2cep, cep2poly, ac2table, table2ac, cep2table, table2cep, dcep2table, table2dcep, ac2rc and rc2ac; some conversions are exact, others approximated.]
Ssynth implements:
- a 3rd-order phase polynomial model (McAulay and Quatieri, 1986),
- interpolation/extrapolation of data from the database, and morphing,
- polyphonic sound synthesis,
- handling of OSC messages (Wright and Freed, 1997) to carry control information.
It is implemented in C and can be compiled as a stand-alone program or as a Pd object, using the Pd scheduler for audio output.

The sound parameters database:
- McGill master samples database (Opolko and Wapnick, 1987),
- additive analysis using standard techniques implemented in Additive,
- fundamental frequency estimation using HMMs (Doval and Rodet, 1993),
- frames organized as a 3-dimensional mesh as in (Haken, Tellman and Wolfe, 1998), according to pitch, dynamics and instrument,
- instruments: clarinet, oboe, trumpet and saxophone,
- spectral envelope models.


Figure 3: Interoperability system for conversion of spectral envelope representations

Controlling the spectral envelope

The spectral envelope is a function of frequency. It simplifies the amplitude control of partials in Ssynth, and its modification is useful to morph sounds. Gestural control of the spectral envelope may require conversions from one model into another, more suited to provide a spectral envelope corresponding to a stable filter for a given control. Fig. 3 depicts the implemented conversions. Indirect conversions are then derived by combination of basic conversions.

Figure 1: Example of trajectory in the database, involving interpolation and extrapolation of pitch and dynamics, as well as morphing (interpolation between instruments).

Modular mapping structure to provide a gestural control (see Figure 2):
- 1st part: Pd patches converting the transducer data into abstract parameters by rendering the acoustical couplings that exist between lip pressure, air pressure and fingerings, in order to provide fundamental frequency, intensity and dynamics (Wanderley, Schnell and Rovan, 1998).
- 2nd part: additive synthesizer with abstract parameter and spectral envelope parameter inputs. An internal mapping layer converts the abstract parameters into additive parameters (partial frequencies and amplitudes) by interpolating/extrapolating the database.

To conclude

In the context of gestural control of additive synthesis for interpolating and extrapolating instrumental notes, our contribution lies in the systematic design of the synthesis environment for allowing flexible control. This implies a potential control of the additive part by spectral envelopes, parameterized in various forms.

Bibliography

Depalle, P., G. Garcia, and X. Rodet (1995). Reconstruction of a castrato voice: Farinelli's voice. In Proc. IEEE Workshop Appl. of Digital Sig. Proc. to Audio and Acoustics, New Paltz, USA, pp. 242-5.
Doval, B. and X. Rodet (1993). Fundamental frequency estimation and tracking using maximum likelihood harmonic matching and HMM's. In Proc. IEEE-ICASSP'93, Minneapolis, USA, Volume 1, pp. 221-4.
Haken, L., E. Tellman, and P. Wolfe (1998, Spring). An indiscrete music keyboard. Computer Music J. 22(1), 30-48.
Opolko, F. and J. Wapnick (1987). McGill University Master Samples. Montreal, Canada.
Oppenheim, A. V. and R. W. Schafer (1975). Digital Signal Processing. Prentice Hall, Englewood Cliffs.
Schwarz, D. and X. Rodet (1999). Spectral envelope estimation and representation for sound analysis-synthesis. In Proc. Int. Comp. Music Conf. (ICMC'99), Beijing, China, pp. 351-4.
Serra, X. and J. O. Smith (1990). A sound decomposition system based on a deterministic plus residual model. J. Acoust. Soc. Am., sup. 1, 89(1), 425-34.
Tellman, E., L. Haken, and B. Holloway (1995). Timbre morphing of sounds with unequal numbers of features. J. Audio Eng. Soc. 43(9), 678-89.
Wanderley, M., N. Schnell, and J. B. Rovan (1998). ESCHER - modeling and performing composed instruments in real-time. In Proc. IEEE Int. Conf. Syst., Man and Cybernetics, San Diego, USA, pp. 1080-4.
Wright, M. and A. Freed (1997). Open Sound Control: a new protocol for communicating with sound synthesizers. In Proc. Int. Comp. Music Conf. (ICMC'97), Thessaloniki, Greece, pp. 101-4.