FRIDAY MORNING, 7 JUNE 2002

LE BATEAU ROOM, 8:00 TO 9:50 A.M.

Session 5aBBa

Biomedical Ultrasound/Bioresponse to Vibration: Interactions of Ultrasound with Tissue

Christy K. Holland, Cochair
Department of Radiology, University of Cincinnati, 234 Goodman Street, Cincinnati, Ohio 45219-2316

Mark E. Schafer, Cochair
Sonic Tech, Incorporated, 275 Commerce Drive, Suite 323, Fort Washington, Pennsylvania 19034

Chair’s Introduction—8:00

Contributed Papers

8:05
5aBBa1. Reduction of tissue injury without compromising stone comminution in shock wave lithotripsy. Yufeng Zhou, Brian Auge, Glenn M. Preminger, and Pei Zhong (Depts. of Mech. Eng. and Mater. Sci. and Urologic Surgery, Duke Univ., Box 90300, Durham, NC 27708)

To ameliorate vascular injury without compromising stone comminution in shock wave lithotripsy, we have recently developed an in situ pulse superposition technique to suppress large intraluminal bubble expansion [Zhong and Zhou, J. Acoust. Soc. Am. 110, 3283–3291 (2001)]. This strategy was implemented using a simple modification of an HM-3 lithotripter reflector. In this work, further optimization of the reflector geometry was carried out based on theoretical analysis and in vitro pressure waveform measurements using a fiber-optic hydrophone. Using the upgraded reflector, no rupture of a cellulose hollow-fiber (i.d. = 0.2 mm) vessel phantom could be observed around the lithotripter beam focus even after 200 shocks at 24 kV. In comparison, fewer than 50 shocks were needed to cause a rupture of the vessel phantom using the original reflector at 20 kV. At corresponding output settings, stone comminution is comparable between the two reflector configurations, although the fragments produced by the upgraded reflector are slightly larger. In addition, preliminary results from animal studies have demonstrated a significant reduction in tissue injury using the upgraded reflector, which confirms the validity of this approach in vivo. [Work supported by NIH.]

8:20
5aBBa2. Effects of an acoustic diode on lithotripter shock wave, cavitation, and stone fragmentation. Songlin Zhu (Dept. of Mech. Eng. and Mater. Sci., Duke Univ., Box 90300, Durham, NC 27708), Thomas Dreyer, Marko Liebler (Univ. of Karlsruhe, Karlsruhe, Germany), and Pei Zhong (Duke Univ., Durham, NC 27708)

Recent studies suggest that reducing the large intraluminal bubble expansion in small blood vessels may ameliorate the potential for vascular injury in shock wave lithotripsy. To achieve this objective without compromising stone comminution, a selective truncation of the tensile component of the lithotripter shock wave (LSW) is needed. In this work, an acoustic diode (AD) of Riedlinger’s design was constructed and evaluated. The AD consists of two peripherally secured membranes whose opposing surfaces are held in direct contact under partial vacuum. The AD permits transmission of the leading compressive component of an LSW; yet the membranes may separate under tension, thus blocking the transmission of the trailing tensile component. Following each LSW, the membranes again establish direct contact because of the partial vacuum between them. Using the AD at a vacuum level of 10.75 in. Hg, the collapse time of the LSW-induced bubble cluster at the beam focus of an HM-3 lithotripter at 20 kV was found to be reduced by 29%, whereas the compressive pressure and stone comminution were reduced only slightly, by 4% and 5%, respectively. Thus the AD may be used to reduce tissue injury produced by LSWs. [Work supported by the Whitaker Foundation and NIH.]

8:35
5aBBa3. Cavitation bubble cluster activity in the breakage of stones by shock wave lithotripsy. Yuriy A. Pishchalnikov, Oleg A. Sapozhnikov (Dept. of Acoust., Phys. Faculty, M. V. Lomonosov Moscow State Univ., Moscow 119899, Russia, [email protected]), James C. Williams, Jr., Andrew P. Evan, James A. McAteer (School of Medicine, Indiana Univ., Indianapolis, IN), Robin O. Cleveland (Boston Univ., Boston, MA), Tim Colonius (California Inst. of Technol., Pasadena, CA), Michael R. Bailey, and Lawrence A. Crum (Ctr. for Industrial and Medical Ultrasound, Appl. Phys. Lab., Seattle, WA)

High-speed photography was used to investigate cavitation at the surface of artificial and natural kidney stones during exposure to lithotripter shock pulses in vitro. Numerous individual bubbles formed over virtually the entire surface of the stone, but these bubbles did not remain independent; they combined with one another to form larger bubbles and bubble clusters. The movement of bubble boundaries across the surface left portions of the stone bubble free. The biggest cluster grew to envelop the proximal end of the stone (6.5-mm-diameter artificial stone), then collapsed to a small spot that over multiple shots formed a crater in that face of the stone. The bubble clusters that developed at the sides of stones tended to align along fractures and to collapse into these cracks. High-speed camera images demonstrated that cavitation-mediated damage to stones was due not to the action of solitary, individual bubbles, but to the forceful collapse of dynamic clusters of bubbles. [Work supported by NIH DK43881.]

8:50
5aBBa4. 30 MHz backscatter and Doppler signals from individual microbubbles undergoing inertial cavitation. Johanna M. Yoon (Eng. Dept., Swarthmore College, Swarthmore, PA 19081) and E. Carr Everbach (Swarthmore College, Swarthmore, PA 19081)

Short pulses (1–2 μs duration) of 30 MHz ultrasound were used to interrogate individual Optison™ and PESDA microbubbles convected in a coaxial jet flow or adhering to a thin Mylar™ window. Backscattered signals were recorded as the bubbles were forced into symmetrical or asymmetrical collapse by application of 1 MHz ultrasound pulses from a focused source in water. Peak-to-peak amplitudes of the backscattered signal were converted to radius–time curves via comparison with similar signals from a monodisperse population of polystyrene spheres of known diameter. Additionally, backscattered signals were mixed with reference sinusoids at 30 MHz and low-pass filtered to yield Doppler signals. Results are consistent with theoretical models and provide a possible method to quantify asymmetrical bubble collapse via the Doppler signature.

J. Acoust. Soc. Am., Vol. 111, No. 5, Pt. 2, May 2002
143rd Meeting: Acoustical Society of America
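The mixing-and-filtering step described in 5aBBa4 (multiply the backscattered rf by reference sinusoids at the 30 MHz interrogation frequency, then low-pass filter to obtain a Doppler signal) can be sketched in a few lines. All numeric choices here (sample rate, Doppler shift, filter cutoff) are illustrative assumptions, not values from the abstract.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 500e6   # sample rate (Hz); illustrative, not from the abstract
f0 = 30e6    # interrogation frequency (Hz)
fd = 40e3    # hypothetical Doppler shift to recover (Hz)
t = np.arange(0, 100e-6, 1/fs)

echo = np.cos(2*np.pi*(f0 + fd)*t)          # toy backscattered signal

# Mix with quadrature 30-MHz reference sinusoids; the low-pass filter keeps
# only the difference-frequency term, which is the Doppler signal.
sos = butter(4, 1e6, fs=fs, output='sos')   # 1-MHz low-pass
z = (sosfiltfilt(sos, echo * np.cos(2*np.pi*f0*t))
     + 1j*sosfiltfilt(sos, echo * -np.sin(2*np.pi*f0*t)))

# Recover the Doppler frequency from the phase slope of the complex signal.
f_est = np.mean(np.diff(np.unwrap(np.angle(z)))) * fs / (2*np.pi)
print(f"{f_est/1e3:.1f} kHz")   # close to 40 kHz
```

For a real bubble echo the recovered phase slope would be time-varying, tracking the bubble-wall velocity rather than a single constant shift.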

9:05
5aBBa5. Broadband noise emissions produced by pulsed 1-MHz ultrasound exposures in the presence or absence of Optison, and their relationship to the hemolytic bioeffect. Andrew A. Brayman, Wen S. Chen, Thomas J. Matula, and Lawrence A. Crum (Ctr. for Indust. Med. Ultrasound, Appl. Phys. Lab., Univ. of Washington, 1013 NE 40th St., Seattle, WA 98105-6698, [email protected])

Gas-based contrast agents are known to increase ultrasound-induced bioeffects, presumably via an inertial cavitation (IC) mechanism. The relationship between IC ‘‘dose’’ (ICD) (cumulated rms broadband noise amplitude in the frequency domain) and 1.1-MHz ultrasound-induced hemolysis in whole human blood was explored with additions of Optison or degassed saline; the hypothesis was that hemolysis would correlate with ICD. Four experimental series were conducted, varying: (1) peak negative acoustic pressure [P−]; (2) Optison concentration; (3) pulse duration; and (4) total exposure duration together with Optison concentration. The P− thresholds for hemolysis and for ICD above noise levels were ~0.5 MPa. Enhancement of ICD and hemolysis was detected even at the lowest Optison concentration tested (0.1%) at P− = 3 MPa. At 2 MPa P− (0.3% Optison), significant hemolysis and ICD were detected with pulse durations as brief as 2 and 4 cycles, respectively. At 3 MPa P−, hemolysis and ICD evolved as functions of time and Optison concentration; ultimate levels of hemolysis and ICD depended strongly on initial Optison concentration, but initial rates of change did not. Within experimental series, hemolysis was significantly correlated with ICD; across series, the correlation was significant at p < 0.001.

9:20
5aBBa6. Ultrasound mediated gene transfection. Rene G. Williamson, Robert E. Apfel (Dept. of Mech. Eng., Yale Univ., New Haven, CT 06520-8286, [email protected]), and Janet L. Brandsma (Yale Univ. School of Medicine, Cedar St., New Haven, CT 06520)

Gene therapy is a promising modality for the treatment of a variety of human diseases, both inherited and acquired, such as cystic fibrosis and cancer. The lack of an effective, safe method for the delivery of foreign genes into cells, a process known as transfection, limits this effort.


Ultrasound mediated gene transfection is an attractive method for gene delivery: it is noninvasive, introduces no viral particles into the host, and can offer very good temporal and spatial control. Previous investigators have shown that sonication increases transfection efficiency with and without ultrasound contrast agents. The mechanism is believed to be a cavitation process in which collapsing bubble nuclei permeabilize the cell membrane, leading to increased DNA transfer. This research focuses on the use of pulsed, high-frequency, focused ultrasound to transfect DNA into mammalian cells in vitro and in vivo. A better understanding of the mechanism behind the transfection process is also sought. A summary of in vitro results to date will be presented, including the design of a sonication chamber that allows the in vivo case to be modeled more accurately.

9:35
5aBBa7. Lung damage from exposure to low-frequency underwater sound. Diane Dalecki, Sally Z. Child, and Carol H. Raeman (Dept. of Biomed. Eng. and the Rochester Ctr. for Biomed. Ultrasound, Univ. of Rochester, Rochester, NY 14627)

The effects of low-frequency (~100–2500 Hz) underwater sound are most pronounced in and near tissues that contain resonant gas bodies. The response of gas bodies in vivo (such as the lung and intestine) to low-frequency underwater sound was characterized through a series of investigations. A specially designed acoustic exposure system, capable of generating maximum acoustic fields of ~200 dB re 1 μPa over the 100–2500 Hz frequency range, was implemented for these investigations. Acoustic scattering techniques were used to characterize the response of gas bodies to underwater sound exposure and to determine the resonance frequency of murine lungs. Lung damage was observed in mice exposed to underwater sound at the resonance frequency of their lungs. The extent of tissue damage to the lung (and surrounding tissues such as the liver) increased with increasing pressure amplitude. Damage to lung tissue correlated with acoustic pressure amplitude and not with acoustic particle velocity. Similar investigations were performed with murine intestinal gas in vivo.


FRIDAY MORNING, 7 JUNE 2002

LE BATEAU ROOM, 10:25 A.M. TO 12:30 P.M.

Session 5aBBb

Biomedical Ultrasound/Bioresponse to Vibration: Scattering Theory Applications for Biomedical Media

J. Brian Fowlkes, Chair
Department of Radiology, University of Michigan Medical Center, 200 Zina Pitcher Place, Ann Arbor, Michigan 48109-0553

Chair’s Introduction—10:25

Invited Papers

10:30
5aBBb1. Linking theoretical predictions of backscatter from biological media with experimental estimates. James G. Miller, Rebecca L. Trousil, Scott M. Handley, and Mark R. Holland (Dept. of Phys., Box 1105, Washington Univ., 1 Brookings Dr., Saint Louis, MO 63130)

Medical ultrasonic imaging is based on scattering processes that arise because of the inhomogeneous nature of biological media. In laboratory-based investigations, reduction of experimental rf data to the true backscatter coefficient is accomplished by compensating for measurement system, attenuation, and diffraction effects. In clinical imaging, image brightness is qualitatively related to local values of the backscatter coefficient after operator-adjusted compensation for attenuation (Time Gain Compensation). Clinical backscatter-based approaches to tissue characterization typically provide semi-quantitative data in regions of interest somewhat larger than the resolution cell of the image. This spatial averaging can improve the stability of the backscatter estimates. In addition, averaging over a range of frequencies can also improve the signal-to-noise ratio, as in the case of Integrated Backscatter. The sophisticated post-processing of backscattered data to form images poses an additional layer of complexity in compensating clinical data for imaging-system-dependent effects. The objective of this talk is to address some approaches, and the corresponding approximations and compromises required, for comparing laboratory and clinical estimates of backscatter with theoretical predictions of the backscatter coefficient (i.e., the differential scattering cross section per unit volume at 180 degrees). [Work supported in part by R37HL40302.]
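The reduction described above (cancel system and diffraction terms against a reference measurement, then compensate attenuation) can be sketched schematically. This is only a toy of the general reference/substitution idea: the function name, windowing, and compensation form are assumptions, and real analyses, including the authors', involve considerably more care with gating and diffraction.

```python
import numpy as np

def bsc_estimate(rf_roi, rf_ref, fs, depth_cm, alpha_s, alpha_r, bsc_ref):
    """Toy reference-based backscatter-coefficient estimate.

    rf_roi, rf_ref : gated rf segments from the sample and from a reference
                     phantom recorded with identical system settings
    alpha_s, alpha_r : attenuation coefficients, dB/(cm MHz), assumed known
    bsc_ref : callable giving the reference's known backscatter coefficient
    """
    w = np.hanning(len(rf_roi))
    f = np.fft.rfftfreq(len(rf_roi), 1.0/fs)                 # Hz
    Ps = np.abs(np.fft.rfft(rf_roi*w))**2
    Pr = np.abs(np.fft.rfft(rf_ref*w))**2
    # The spectral ratio cancels system and diffraction terms, since both
    # echoes passed through the same electronics and beam geometry.
    ratio = Ps / np.maximum(Pr, 1e-30)
    # Compensate the round-trip attenuation difference (4*alpha*depth in dB).
    comp_db = 4.0*(alpha_s - alpha_r)*depth_cm*(f/1e6)
    return f, bsc_ref(f)*ratio*10**(comp_db/10)

# Sanity check: same attenuation, sample echoes 2x the reference amplitude,
# so the estimated coefficient is 4x the reference at every frequency.
rf = np.random.default_rng(3).normal(size=1024)
f, bsc = bsc_estimate(2*rf, rf, 40e6, 2.0, 0.5, 0.5, lambda f: np.ones_like(f))
```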

11:00
5aBBb2. Statistical modeling of scattering from biological media. P. M. Shankar (ECE Dept., Drexel Univ., 3141 Chestnut St., Philadelphia, PA 19104)

The statistics of the backscattered ultrasonic echo from tissue can provide information on its characteristics. Such information is useful in the classification of tissues in biomedicine. For example, some tissue properties may point to malignancies in certain lesions in the liver, breast, or kidneys. The models employed in describing the backscattered echo are therefore crucial to the success of these classification methods. These models must take into account the number density of scatterers, their cross sections, the variation in those cross sections, and any alignment (periodic, quasiperiodic, or purely random) of the scatterers. Parameters reflecting these features can be extracted from the backscattered echo using these models. They can be directly related to properties of the tissue, such as the presence of an abnormal growth, and to further classification of the growth as benign or malignant. They may also be used to form parametric images to assist clinicians in making a medical diagnosis. A number of models, including the Rayleigh, Poisson, K, Weibull, and Nakagami distributions, will be discussed, along with the relevance of their parameters and their utility in biomedicine. Specific applications to the classification of breast lesions in ultrasonic B-scans will be described. [Work supported by NIH-NCI No. 52823.]
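One of the parameter extractions such models enable can be sketched concretely: a moment-based ("inverse normalized variance") estimate of the Nakagami shape parameter m from envelope samples. The simulation below is a generic illustration under an assumed fully developed speckle model, not the authors' data or method.

```python
import numpy as np

def nakagami_m(envelope):
    """Moment-based ('inverse normalized variance') estimator of the
    Nakagami shape parameter m from echo-envelope samples."""
    e2 = np.asarray(envelope, dtype=float)**2
    return np.mean(e2)**2 / np.var(e2)

# Fully developed speckle (many identical, randomly placed scatterers per
# resolution cell) has a Rayleigh envelope, for which m = 1; pre-Rayleigh
# conditions (few or structured scatterers) push m below 1.
rng = np.random.default_rng(0)
iq = rng.normal(size=100_000) + 1j*rng.normal(size=100_000)
print(round(nakagami_m(np.abs(iq)), 2))   # close to 1
```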

11:30
5aBBb3. High frequency ultrasonic scattering by biological tissues. K. Kirk Shung (Dept. of Bioengineering, 231 Hallowell Bldg., Penn State Univ., University Park, PA 16802) and Subha Maruvada (Brigham and Women’s Hospital, Dept. of Radiol., Focused Ultrasound Group, 221 Longwood Ave., Boston, MA 02115)

High frequency (HF) diagnostic ultrasonic imaging devices at frequencies higher than 20 MHz have found applications in ophthalmology, dermatology, and vascular surgery. To interpret these images and to further the development of these devices, a better understanding of ultrasonic scattering in biological tissues such as blood, liver, and myocardium in the high frequency range is crucial. This work has previously been hampered by the lack of suitable transducers. With the availability of HF transducers reaching 90 MHz, attenuation and backscatter experiments have been made on porcine red blood cell (RBC) suspensions, for which much data on attenuation and backscatter can be found in the literature at lower frequencies, for frequencies from 30 to 90 MHz, and on bovine tissues for frequencies from 10 to 30 MHz, using a modified substitution method that allows the utilization of focused transducers. These results will be reviewed in this talk along with relevant theoretical models that could be applied to interpreting them. The relevance of the backscattering coefficient, the parameter frequently used in the biomedical ultrasound literature to describe backscattering, will be critically examined.


Contributed Papers

12:00
5aBBb4. Multiparameter classification of masses in ultrasonic mammography. Vishruta Dumane, Mohana Shankar, Reid John (ECE Dept., Drexel Univ., 3141 Chestnut St., Philadelphia, PA 19104), Catherine Piccoli, Flemming Forsberg, and Barry Goldberg (Thomas Jefferson Univ., Philadelphia, PA 19107)

Ultrasonic characterization of breast masses for the detection of cancer can be performed based on various features of the mass. Statistical techniques that utilize the parameter of the Nakagami distribution after spatial diversity and compounding have been used in the past to perform the classification. The parameter demonstrated a reasonable capability for characterization of the tissue. However, the performance needs improvement to reach clinically acceptable standards. The work described here combines the Nakagami parameter after spatial compounding at the site, the skewness parameter at the site, the signal-to-noise ratio of the envelope at the site of the mass, and a margin index parameter that describes the sharpness of the boundary. The improvement in performance for characterization of the masses will be demonstrated through ROC analyses. [Work supported by NIH-NCI Grant No. CA52823.]

12:15
5aBBb5. The effect of hemolysis on acoustic scattering from blood. Constantin-C. Coussios and Shon E. Ffowcs Williams (Eng. Dept., Cambridge Univ., Trumpington St., Cambridge CB2 1PZ, UK)

In an attempt to develop a direct method for measuring the extent of red cell damage in vitro, the effect of the degree of hemolysis on ultrasonic scattering from blood was investigated. Starting with a suspension of 30% hematocrit, a series of suspensions containing different relative concentrations of healthy and damaged red cells in saline was prepared, with the total number of cells present in any one suspension held constant. For each sample, a suspension with an equal concentration of healthy cells, but no lysed cells, was also produced. Using a specially designed container, all samples were exposed to 15 MHz ultrasound in pulse-echo mode and measurements of backscattering were obtained. At high hematocrits, the samples containing damaged cells were found to scatter substantially more than the suspensions containing exclusively healthy cells. This indicates that damaged cells contribute significantly to the overall backscattered intensity. Below a concentration of 13% by volume of healthy cells, scattering levels from healthy and hemolyzed suspensions were comparable. A theoretical model, which treats healthy cells as weak-scattering spheres and damaged cells as hard thin disks, is proposed to interpret the observed scattering behavior.

FRIDAY MORNING, 7 JUNE 2002

STERLINGS ROOMS 2 AND 3, 8:00 TO 10:15 A.M.

Session 5aPA

Physical Acoustics: Materials Characterization, Bubbles and Drops

David B. Thiessen, Chair
Department of Physics, Washington State University, Pullman, Washington 99164-2814

Contributed Papers

8:00
5aPA1. Forgotten acousticians. Robert T. Beyer (Dept. of Phys., Brown Univ., Providence, RI 02912, [email protected])

In the French edition of his book on acoustics (1807), Ernst Chladni cited the work of many acousticians in or before his time. The names of Bernoulli, Biot, D’Alembert, Euler, and Laplace are familiar to us all, but Kircher, Lambert, Monro, Perolle, Riccati, Scarpa, and Matthew Young are far less so. This paper will recall the work of these individuals and others, and the contributions they made to early work on sound.

8:15
5aPA2. UV spectroscopic studies of SBSL bubbles in lithium halides. Anthony Khong, Ning Xu, Elizabeth Doschek, and Robert Apfel (Dept. of Mech. Eng., Yale Univ., 9 Hillhouse Ave., New Haven, CT 06520)

As was reported previously, stably levitated bubbles were observed in LiCl and LiBr solutions under SBSL conditions. Stable bubbles were recorded for LiCl concentrations ranging from 0.47 to 1.4 M. Beyond 1.4 M, no SL was detected. In contrast, stable SBSL could be detected over a larger range of LiBr concentrations, from 0.56 to 2.5 M. At 3.0 M, unstable, short-lived transient bubbles were noticed. A striking feature common to both salt solutions is the pronounced decrease in SL light intensity, measured with a PMT with peak detection sensitivity at 400 nm, as the salt concentration increases. Light intensities were close to one order of magnitude less than in pure water under similar conditions. The focus of the current study is geared toward resolving the observed reduction in light intensity with respect to chemical processes occurring in the bubble. UV spectroscopy will be key to procuring vital information relating to these processes based on the absorption patterns and the spectral ranges of peak absorptions. Similar salt solutions in D2O, aimed at revealing differences in the chemical processes in the case of heavy water, will also be investigated. [Work supported by a generous grant from University of Washington.]

8:30
5aPA3. Effects of mixing He and Ar on single bubble sonoluminescence (SBSL). Julio da Graça and Harry Kojima (Dept. of Phys. and Astron., Rutgers Univ., 136 Frelinghuysen Rd., Piscataway, NJ 08854-8019)

When a large pressure gradient is imposed in a gas mixture, a segregation of different species is expected to occur. Preliminary results of our search for segregation effects on the SBSL in ensonified water containing mixtures of 4He (partial pressure pHe), Ar (pAr), and N2 (pN) gases will be presented. Deionized and initially degassed water was mixed with pHe + pAr = 1.5 Torr and pN = 148.5 Torr. The emitted light intensity I and the bubble radius dynamics (via Mie scattering) were measured as a function of the imposed 17.3 kHz acoustic drive amplitude for mixtures with varied x = pHe/(pAr + pHe). The acoustic pressure amplitude (Pa) at the bubble and the ambient bubble radius were extracted by fitting the Mie scattering data using the Rayleigh–Plesset equation. In the range where SL occurs [Plo(x) < Pa < Phi(x)], the measured I increased linearly with Pa for x = 0, 0.5, 0.75, and 1. The slope dI/dPa decreases with x. The results will be compared with expectations of homogeneous mixing of 4He and Ar and segregation in the SL bubbles.
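The forward model used in such fits can be sketched: integrate a Rayleigh–Plesset equation for trial values of the ambient radius R0 and drive amplitude Pa, producing an R(t) curve that a least-squares loop would compare with the Mie-scattering radius data. The specific equation form (adiabatic gas, surface tension, viscosity) and all parameter values below are illustrative assumptions, not the authors' fit settings.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Water properties and drive; values are illustrative assumptions (SI units).
rho, P0, sigma, mu = 998.0, 101.3e3, 0.0725, 1.0e-3
gamma, fdrive = 1.4, 17.3e3            # adiabatic exponent, drive frequency
w = 2*np.pi*fdrive

def rp_rhs(t, y, R0, Pa):
    """Rayleigh-Plesset RHS: R*R'' + 1.5*R'^2 =
    (p_gas - 2*sigma/R - 4*mu*R'/R - P0 - Pa*sin(w*t)) / rho."""
    R, Rdot = y
    p_gas = (P0 + 2*sigma/R0) * (R0/R)**(3*gamma)
    Rddot = ((p_gas - 2*sigma/R - 4*mu*Rdot/R - P0 - Pa*np.sin(w*t))/rho
             - 1.5*Rdot**2) / R
    return [Rdot, Rddot]

def radius_curve(R0, Pa, t):
    """R(t) for trial parameters (R0, Pa); a least-squares loop would
    compare this curve with the radius data inferred from Mie scattering."""
    sol = solve_ivp(rp_rhs, (t[0], t[-1]), [R0, 0.0], t_eval=t,
                    args=(R0, Pa), method='LSODA', rtol=1e-8, atol=1e-12)
    return sol.y[0]

t = np.linspace(0, 2/fdrive, 800)      # two acoustic periods
R = radius_curve(4e-6, 50e3, t)        # trial: R0 = 4 um, Pa = 50 kPa
```

At this modest drive the bubble oscillates mildly about R0; at SBSL-level drives the same model produces the large expansion and violent collapse that make the fit sensitive to (R0, Pa).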


8:45
5aPA4. Optical observations of single droplets during acoustic vaporization. Oliver D. Kripfgans, Paul L. Carson, and J. Brian Fowlkes (Univ. of Michigan Health Systems, Dept. of Radiol., Ann Arbor, MI 48109, [email protected])

It has been shown previously that Acoustic Droplet Vaporization (ADV) causes bubbles to form from small droplets. This paper shows that prior to vaporization the droplets undergo translatory (dipole) oscillations when exposed to tone bursts. A high-speed video system was used to monitor droplets in a flow tube when single-element focused transducers (3.5 and 10 MHz) were used for ADV on individual droplets. Single sinusoidal tone bursts (<3.3 μs) were sufficient for ADV of droplets of 5 to 25 μm diameter. Dipole oscillations of 1.3 μm, independent of diameter, were found. Variations in the droplet diameter of up to 15% were observed during the onset of acoustic irradiation. The pressure threshold for ADV decreased with increasing droplet diameter. The onset of vaporization was seen either as a localized event within, or homogeneously throughout, the imaged droplet (possibly due to the temporal resolution of the imaging system). Localized nucleation was observed solely along the direction of dipole motion of the droplet, which is the same as the direction of the propagating acoustic wave. Typically, sites on the ‘‘north and/or south poles’’ were observed. [Research supported by PHS Grant No. R01HL54201 and US Army Grant No. DAMD17-00-1-0344.]

9:00
5aPA5. Active damping of capillary oscillations on liquid columns. David B. Thiessen, Wei Wei, and Philip L. Marston (Dept. of Phys., Washington State Univ., Pullman, WA 99164-2814)

Active control of acoustic radiation pressure and of electrostatic stresses on liquid columns has been demonstrated to overcome the Rayleigh–Plateau instability that normally causes long liquid columns to break [M. J. Marr-Lyon et al., J. Fluid Mech. 351, 345 (1997); Phys. Fluids 12, 986–995 (2000)]. Though originally demonstrated for liquid–liquid systems in Plateau tanks, the electrostatic method also works on columns in air in reduced gravity [D. B. Thiessen, M. J. Marr-Lyon, and P. L. Marston, ‘‘Active electrostatic stabilization of liquid bridges in low gravity,’’ J. Fluid Mech. (in press)]. In new research, the electrostatic stresses are applied in proportion to the velocity of the surface of the column so as to actively dampen capillary oscillations of the surface. The mode amplitude is optically sensed and the rate of change is electronically determined. Plateau tank measurements and theory both show that the change in damping rate is proportional to the feedback gain. The results suggest that either active control of electrostatic stresses or of acoustic radiation stresses can be used to suppress the response of interfaces to vibration. [Work supported by NASA.]

9:15
5aPA6. Inferring pore size distributions in cast metals from ultrasonic attenuation. George Mozurkewich, Bita Ghaffari, Larry A. Godlewski, and Jacob W. Zindel (Ford Motor Co., MD3083/SRL, P.O. Box 2053, Dearborn, MI 48121-2053, [email protected])

The frequency dependence of ultrasonic attenuation in cast metals has previously been shown to contain information about the volume fraction and size of pores. While direct metallographic examination shows that the actual size distribution can be quite broad, previous investigations have analyzed ultrasonic attenuation data on the assumption that all pores have the same, or nearly the same, size. That restriction can be eliminated by analyzing the data using the concept of maximum entropy. Pore sizes are divided into discrete bins, each containing a fraction fi of all the pores, and the entropy, obtained by summing −fi ln(fi) over all bins, is maximized subject to the constraints of the frequency-dependent attenuation data. The resulting pore size distributions for cast aluminum samples containing various levels of porosity are often found to be approximately log-normal. Volume fractions of pores deduced from these distributions are in excellent agreement with determinations using the Archimedes method.

9:30
5aPA7. Cuts with negative Poisson’s ratio in alloys and rocks. Svetlana P. Tokmakova (Andreev Acoust. Inst., Shvernika 4, Moscow 117036, Russia, [email protected])

Stretching a material with negative Poisson’s ratio leads to an unexpected transverse expansion. Stretching an anisotropic material can cause expansion in one direction and contraction in another. For example, it is possible for Poisson’s ratio in crystals to be negative in one direction and highly positive in another. In this paper a set of hexagonal, monoclinic, and cubic crystals of rocks, minerals, and alloys was investigated. The stereographic projections of Poisson’s ratio were computed for each crystal. From these stereographic projections, the Poisson’s ratio for any direction of stretch and lateral strain in the crystal was calculated, and the orientations of stretch and lateral strain with extreme values of Poisson’s ratio were obtained. By analysis of these results, cuts with negative values of Poisson’s ratio were revealed in zinc, molybdenum sulfide, polypropylene, graphite, carbon, the natural precious minerals labradorite and augite, complex silicate, and in copper, zinc, iron, nickel, indium, silver, and gold alloys. For these crystals the angular dependence of stretched orientations corresponding to negative Poisson’s ratio was determined. Elastic moduli of crystals from the Landolt–Börnstein handbook were used in the calculations. Some simple models with negative Poisson’s ratio are considered, and some possible applications of materials with negative Poisson’s ratio, based on their unusual acoustic properties, are discussed.

9:45
5aPA8. Acoustic measurements for nanoscale test instruments. Antanas Daugela (Hysitron, Inc., 5251 W. 73rd St., Minneapolis, MN 55439)

Acoustic methods have been used successfully for microscale contact nondestructive evaluation but, until very recently, were not available at the nanometer scale. Noncontact submicrometer-resolution ultrasonic microscopy and scanning probe microscopy (SPM) operate at higher acoustic modes and have great potential to be qualitative, but face data interpretation difficulties. Nanomechanical test instruments offer nanometer-scale quantitative characterization. These tools, coupled with in situ SPM-type imaging and simultaneous acoustic response monitoring, open new instrumentation horizons for micro/nano fracture mechanics. Both active and passive acoustic methods can be utilized to characterize a large variety of substrates and coatings. Examples of AE monitoring of nanoindentation/scratch on data storage media will be presented. Plastic deformation induced events can be separated from contact friction events by identifying AE signatures. Active ultrasonic methods can be utilized for nanoscale characterization of tribological surfaces. A friction coefficient reduction of 20% was observed on an ultrasonically excited surface during nanoscratch testing and was further investigated using post-scratch SPM-type imaging. A synergy of localized ultrasonic monitoring and nanoindentation techniques can lead to the development of new and promising instrumentation for characterizing in vivo/in vitro biological tissues at the molecular level. Examples of evaluating ultrasonically transmitted signals through biological samples will be discussed.

10:00
5aPA9. Piezoelectric control of sculptured thin films. Fei Wang, Akhlesh Lakhtakia, and Russell Messier (Dept. of Eng. Sci. and Mech., Penn State Univ., PA 16802-6812, [email protected])

It is shown that the Bragg center wavelength of a polymeric chiral sculptured thin film (STF) can be shifted by the axial tension generated in a co-bonded piezoelectric disk by a dc voltage. This attractive possibility can be exploited for tunable optical filters as well as lasers made of chiral STFs, and can be extended to other types of STFs.
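The maximum-entropy reconstruction described in 5aPA6 (maximize −Σ fi ln(fi) over binned pore fractions subject to the attenuation data) can be sketched as a small constrained optimization. The scattering kernel below is a made-up placeholder with a Rayleigh-to-geometric style saturation, not the physical kernel the authors used, and the bin and frequency ranges are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

radii = np.linspace(10e-6, 200e-6, 12)     # pore-radius bins (m), assumed
freqs = np.linspace(2e6, 15e6, 8)          # measurement frequencies (Hz)

# Placeholder attenuation kernel K[j, i]: contribution of a pore in bin i
# at frequency j (NOT the authors' physical kernel).
x = np.outer(freqs, radii) / 300.0
K = x**4 / (1 + x**4)

# Synthetic "measured" attenuation from a hidden log-normal distribution.
f_true = np.exp(-0.5 * (np.log(radii/60e-6) / 0.4)**2)
f_true /= f_true.sum()
atten = K @ f_true

def neg_entropy(f):
    f = np.clip(f, 1e-12, None)
    return np.sum(f * np.log(f))           # minimizing this maximizes entropy

cons = [{'type': 'eq', 'fun': lambda f: f.sum() - 1.0},
        {'type': 'eq', 'fun': lambda f: K @ f - atten}]
res = minimize(neg_entropy, np.full(radii.size, 1/radii.size),
               bounds=[(0.0, 1.0)]*radii.size, constraints=cons,
               method='SLSQP')
f_est = res.x                              # maximum-entropy pore fractions
```

Among all distributions consistent with the attenuation data, this picks the flattest one, so any structure in `f_est` is forced by the measurements rather than by the analyst's prior.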

FRIDAY MORNING, 7 JUNE 2002

GRAND BALLROOM 4, 8:00 A.M. TO 12:00 NOON

Session 5aPPa

Psychological and Physiological Acoustics: Binaural Hearing, Monaural Phase Effects and Loudness

Douglas S. Brungart, Chair
Air Force Research Laboratory, Wright–Patterson Air Force Base, Ohio 45433-7022

Contributed Papers

8:00

5aPPa1. A counterexample of spatial unmasking in multitalker speech perception. Douglas S. Brungart (Air Force Res. Lab., 2610 Seventh St., Wright–Patterson AFB, OH 45433) and Brian D. Simpson (Veridian, 5200 Springfield Pike, Ste. 200, Dayton, OH 45431)

In listening tasks that involve more than one competing talker, substantial improvements in performance can usually be obtained by moving the target speech signal to a different location than the interfering speech signal. However, recent results in our laboratory suggest a possible listening configuration in which informational and energetic masking effects might cause performance to decrease when an interfering speech signal is moved from the same ear as the target speech to the opposite ear. In this experiment, listeners were asked to respond to a color and number coordinate in a target phrase that was presented in two different listening configurations. In the first configuration, a high-level interfering talker and a low-level interfering talker were presented in the same ear as the target speech; in the second, the high-level interfering talker was presented in the same ear as the target speech and the low-level interfering talker in the ear opposite the target speech. The results confirm that the low-level interfering talker sometimes produced more interference when presented in a different ear from the target speech than when presented in the same ear.

8:15

5aPPa2. Transposed stimuli improve sensitivity to envelope-based interaural timing information for stimuli having center frequencies of up to 10 kHz. Leslie R. Bernstein and Constantine Trahiotis (Dept. of Neurosci. and Dept. of Surgery (Otolaryngol.), Univ. of Connecticut Health Ctr., Farmington, CT 06030, [email protected])

Threshold interaural temporal disparities (ITDs) at high frequencies are larger than threshold ITDs obtained at low frequencies. Colburn and Esquissaud [J. Acoust. Soc. Am. Suppl. 1 59, S23 (1976)] hypothesized that this reflects differences in peripheral processing of the stimuli rather than in the binaural mechanisms that mediate performance. Previously [L. R. Bernstein and C. Trahiotis, J. Acoust. Soc. Am. 109, 2485 (2001)] this hypothesis was supported in ITD-discrimination experiments employing high-frequency "transposed stimuli" centered at 4 kHz that were designed to provide high-frequency channels with envelope-based information mimicking that normally available only in low-frequency channels. Here, we report new results using stimuli centered at 4, 6, and 10 kHz. It was found that (1) transposed stimuli can yield relatively small threshold ITDs, even at 6 and 10 kHz, and (2) the data could be well accounted for in terms of a constant-criterion change in normalized interaural correlation computed subsequent to bandpass filtering, compression, rectification, and low-pass filtering. In addition, it was found necessary to incorporate a specific limitation to capture the inability of the auditory system to follow envelope fluctuations at rates greater than 150 Hz. [Work supported by NIH DC 04147.]

8:30

5aPPa3. On the ability of human listeners to detect dispersion in head-related transfer functions. Zachary A. Constan and William M. Hartmann (Dept. of Phys. and Astron., Michigan State Univ., East Lansing, MI 48824, [email protected])

Because of dispersion around the head, the interaural time difference (ITD) depends on frequency. However, virtual reality experiments have shown that human listeners cannot distinguish between veridical head-related transfer functions (HRTFs) and HRTFs with a carefully chosen constant ITD. A reasonable explanation for this result is that listeners are insensitive to the kind of dispersion created by the head. This explanation was tested in headphone experiments using incident angles and particular noise bands chosen to give listeners the best opportunity to detect dispersion. The dispersive ITD was modeled using Kuhn's equations for the pressure on a spherical surface due to an incident plane wave [J. Acoust. Soc. Am. 62, 157–167 (1977)]. Listeners were required to distinguish between noise with dispersive ITD and noise with constant ITD. Experiment 1 varied the ITD and found a pronounced minimum in the percentage of correct responses at an optimal value of ITD. Experiment 2 used optimal ITDs only, in order to remove lateralization cues. Experiment 3 reversed the sign of the dispersion, causing the ITD to increase with increasing frequency rather than decrease. All experiments led to the same conclusion: listeners cannot detect head-related and similar dispersion. [Work supported by NIDCD.]
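The head-related dispersion at issue in 5aPPa3 can be made concrete with Kuhn's spherical-head asymptotes: the ITD of a rigid sphere tends to 3(a/c) sin θ at low frequencies and 2(a/c) sin θ at high frequencies. A minimal sketch, assuming a head radius a = 8.75 cm and an illustrative function name (neither is specified in the abstract):

```python
import numpy as np

def kuhn_itd(azimuth_deg, a=0.0875, c=343.0, low_freq=True):
    """Asymptotic ITD of a rigid sphere (Kuhn, 1977).

    Low-frequency limit (ka << 1):  ITD = 3 (a/c) sin(theta)
    High-frequency limit (ka >> 1): ITD = 2 (a/c) sin(theta)
    """
    theta = np.radians(azimuth_deg)
    factor = 3.0 if low_freq else 2.0
    return factor * (a / c) * np.sin(theta)

# The 3:2 ratio between the two limits is the dispersion listeners were
# asked to detect: for a source at 90 deg azimuth the ITD shrinks from
# roughly 765 us at low frequencies to roughly 510 us at high ones.
```

Experiment 3's sign-reversed dispersion corresponds to swapping which asymptote applies at which end of the frequency axis.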

J. Acoust. Soc. Am., Vol. 111, No. 5, Pt. 2, May 2002

8:45

5aPPa4. Auditory feature detection for stimuli presented from different directions. Adelbert W. Bronkhorst (TNO Human Factors, P.O. Box 23, 3769 ZG Soesterberg, The Netherlands, [email protected]) and James T. Townsend (Dept. of Psych., Indiana Univ., Bloomington, IN 47405)

The processing of stimulus features by the visual system is commonly studied by measuring response times (RTs) for detection of targets presented with a varying number of distracters. In audition, however, interpretation of the interaction between target and distracters is complicated by effects such as masking, binaural gain, and fusion. In the present paradigm, which uses stimuli presented from different directions in the horizontal plane, these confounding effects were either minimized or kept constant. Targets were synthetic vowels, and RTs were measured for detection of three features: F0, spectrum (type of vowel), and direction. Both single features and conjunctions were studied. Independent variables were the type of distracters (synthetic vowels or spectrally shaped noise) and their number. Results show that the effect of the number of distracters on the RT is smallest for the spectrum feature; RTs for the detection of F0 are largest and vary considerably across subjects. For conjunctions of features, the RTs lie, as expected, between the RTs for the single features. The effect of the type of distracter on the RT is small; this indicates that the processing involved in segregating vowels from noise is no more efficient than that used for segregating vowels from other vowels.

143rd Meeting: Acoustical Society of America


9:00

5aPPa5. Discrimination of interaural envelope correlation for high- and low-standard correlations. Mark A. Stellmack and Neal F. Viemeister (Dept. of Psych., Univ. of Minnesota, 75 E. River Rd., Minneapolis, MN 55455)

This experiment examined the cues that listeners use in discriminating interaural envelope correlation. Thresholds for discriminating normalized interaural envelope correlation [r; see L. R. Bernstein and C. Trahiotis, J. Acoust. Soc. Am. 100, 1754–1763 (1996)] were measured for a sinusoidally amplitude-modulated 4-kHz tone. The parameter r was manipulated by varying the interaural phase difference (IPD) of the signal envelope. Using a method of constant stimuli that measured percent correct, threshold Δr (ΔIPD) was estimated for standard r = 1 (IPD = 0 deg) and r = 0.4235 (IPD = 180 deg), with modulation frequencies (fm) of 4–256 Hz and modulation index = 0.9. For low fm, thresholds expressed as Δr were 10–60 times larger for the small standard correlation than for r = 1. However, when expressed in terms of the difference in peak interaural level difference (ILD), thresholds were within several dB across standards. Furthermore, thresholds were comparable to those for discrimination of fixed ILDs. For high fm, discrimination thresholds were smaller than those at low fm. The difference at high fm may be due to processing of the interaural envelope differences as fixed interaural time differences rather than as dynamically varying ILDs. [Work supported by NIDCD DC00683.]

9:15

5aPPa6. Using a combined localization/detection model to simulate human localization performance near the masked detection threshold level. Jonas Braasch (Institut für Kommunikationsakustik, Ruhr-Universität Bochum, 44780 Bochum, Germany)

Recently, the perceptual lateralization of a partly masked target has been successfully simulated using the interaural cross-correlation difference (ICCD) model [Braasch, J. Acoust. Soc. Am. 108, 2597 (2000)]. However, in the accompanying listening tests, the target level was often found to be below the masked detection threshold level. To improve the model performance, a detection threshold model has been implemented in the localization model, so that the localization process is triggered only when the target level is above a threshold. Otherwise, the position of the sound source is determined according to a behavioral pattern. The detection stage is based on an on- and offset detection algorithm that analyzes the derivative in time. For the simulation of the binaural conditions, the equalization-cancellation algorithm of Durlach [J. Acoust. Soc. Am. 35, 1206–1218 (1963)] has been included in the model. It has been shown that, for target levels above the masked detection threshold, the on-/offset detection algorithm is accurate enough to trigger the subtraction process of the ICCD algorithm. Furthermore, the improved model has resulted in better simulations of human localization patterns in the presence of a distracting sound.

9:30

5aPPa7. Phase-locked onset detectors for monaural sound grouping and binaural direction finding. Leslie Smith (Dept. of Computing Sci. and Mathematics, Univ. of Stirling, Stirling FK9 4LA, Scotland, [email protected])

Locating sound sources is an important task for animals. IIDs and ITDs are normally used to provide information about the instantaneous direction of sound received at the ears. In a reverberating environment, this may differ from the direction of the sound source. However, the IID and ITD always provide information about sound source direction at onset, since onsets always arrive from the shortest, direct path. Binaural recordings were filtered using a gammatone filterbank, converted to a phase-locked spike code, and passed to a leaky integrate-and-fire neuron through a rapidly depressing synapse. This provides a phase-locked onset detector in each bandpassed channel. Nearly coincident onsets from different channels in each ear were grouped. IIDs and ITDs were computed when grouped onsets in both ears occurred at almost the same time. ITDs were converted to azimuth geometrically; IIDs were converted using the impulse response at each ear. The results show that, even in a reverberating environment, sound direction can be found from a single onset. Wideband and long sounds provide better results. Multiple sound sources can be accommodated. The system exhibits the precedence effect, since a second onset (without an intermediate offset) will be ignored because the depressing synapses will not have recovered.

9:45

5aPPa8. Discrimination of approaching and receding sounds using three auditory motion cues. Mark Ericson (Air Force Res. Lab., AFRL/HECB Bldg. 441, Wright–Patterson AFB, OH 45433, [email protected])

The ability to determine whether a simulated moving sound source was approaching or receding was measured in several experiments using a 2-alternative, forced-choice task. Three auditory motion cues (intensity changes, Doppler frequency shifts, and interaural time delays) were encoded onto pure-tone stimuli and presented over headphones. Six subjects were instructed to discriminate simulated approaching versus receding sounds. The cues were presented for several velocities, durations, and simulated motion paths parallel to the median-sagittal and frontal planes. As found in previous studies (Ryffert et al., 1979; Altman, 1999), approaching sounds encoded with monaural cues of frequency and intensity changes congruent with the frontal and median-sagittal planes were detected at slower velocities than receding sounds. Subjects were more sensitive to ITD changes for paths parallel to the median-sagittal plane than for those parallel to the frontal plane (Mills, 1958; Hartmann and Rakerd, 1985; Jenison and Wightman, 1995). Approaching-versus-receding discrimination thresholds were lower with the combined cues than with the three cues presented separately. The subjects were able to combine the monaural frequency and intensity changes and the binaural interaural time delays to improve their judgments of approaching and receding sounds.

10:00–10:15

Break
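The normalized envelope correlation manipulated in 5aPPa5 can be computed directly: for a SAM envelope with modulation index m = 0.9, an envelope IPD of 180 deg gives r = (1 − m²/2)/(1 + m²/2) ≈ 0.4235, the low-standard value quoted in the abstract. A sketch, assuming the DC-inclusive definition cited there (sampling parameters and the function name are illustrative assumptions):

```python
import numpy as np

def envelope_correlation(ipd_rad, m=0.9, fm=64.0, fs=48000, dur=1.0):
    """Normalized correlation of left/right SAM envelopes (DC included),
    after Bernstein and Trahiotis (1996). dur should span an integer
    number of modulation cycles for the closed-form value to hold."""
    t = np.arange(int(fs * dur)) / fs
    left = 1 + m * np.cos(2 * np.pi * fm * t)
    right = 1 + m * np.cos(2 * np.pi * fm * t - ipd_rad)
    return left @ right / np.sqrt((left @ left) * (right @ right))

# An envelope IPD of 180 deg reproduces the abstract's low standard,
# r = 0.4235; an IPD of 0 deg gives r = 1.
```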

10:15

5aPPa9. A new account of monaural phase sensitivity. Robert P. Carlyon (MRC Cognition & Brain Sci. Unit, 15 Chaucer Rd., Cambridge, UK, [email protected]) and Shihab Shamma (Univ. of Maryland, College Park, MD 20742)

Phase differences between partials of complex tones are detectable only when they interact within an auditory filter. This is consistent with auditory models whereby across-channel timing information is explicitly discarded. However, these models (e.g., the autocorrelogram) fail to account for listeners' ability to detect across-channel phase differences for some sounds. For example, when two groups of unresolved harmonics are filtered into different frequency regions and presented concurrently, listeners can detect a 1- to 2-ms "pitch pulse asynchrony" between the peaks of the waveforms of the two groups. We attribute phase insensitivity for resolved partials to the rapid phase transition near the peak of the traveling wave, which causes auditory neurons responding to each partial to do so at a variety of phases. In contrast, a group of unresolved harmonics does not produce a sharp traveling-wave peak, and all responding neurons do so synchronously. We propose a model whereby "auditory spectrograms" are processed by cortical filters tuned to characteristic frequency, modulation rate, and spectral "scale." The model accounts quantitatively for detection of envelope phase disparities between AM tones, discrimination of AM from QFM, and discrimination of Huffman sequences. We argue that auditory models should not discard all across-channel phase information.

10:30

5aPPa10. Phase effects in simultaneous and forward masking. Elizabeth A. Lerner, Daniel L. Weber, and Brian J. Harward (Dept. of Psych., Wright State Univ., Dayton, OH 45435)

We estimated masked thresholds for a 20-ms (two 10-ms ramps), 1-kHz sinusoid as a function of its temporal relation to a 400-ms, 1-kHz sinusoid in the transition from simultaneous to forward masking conditions (signal center minus masker offset times of −200, −20, −10, −3.75, −2.40, −1.25, 0.00, 1.25, 2.40, 3.75, 10, and 20 ms). In simultaneous masking conditions, thresholds for signals presented in phase with the masker were lower than for signals added in quadrature, but there was no such phase effect in forward masking conditions. When there was partial overlap of signal and masker, the phase effect was apparent until the +3.75-ms center-offset time, where the phase effect disappeared and thresholds decreased. Thresholds for just the portions of the 20-ms signal that occurred during the masker (simultaneous partial signals) and for the portions after the masker (forward partial signals) were generally consistent with results for whole (20-ms) signals: when thresholds for the whole signal appeared to be determined by simultaneous masking, thresholds for the simultaneous partial signals matched those of the whole signal and also showed a phase effect, whereas thresholds for the forward partial signals were higher and showed no phase effect.
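The in-phase versus quadrature manipulation in 5aPPa10 has a simple phasor interpretation: two tones of the same frequency add as vectors, so an in-phase signal raises the masker-plus-signal level far more than a quadrature signal of the same level. A sketch (the levels and the function name are illustrative, not from the paper):

```python
import numpy as np

def combined_level_db(masker_db, signal_db, phase_deg):
    """Level of the sum of two equal-frequency tones (phasor addition)."""
    A = 10 ** (masker_db / 20.0)          # masker amplitude
    a = 10 ** (signal_db / 20.0)          # signal amplitude
    phi = np.radians(phase_deg)
    amp = np.sqrt(A**2 + a**2 + 2 * A * a * np.cos(phi))
    return 20 * np.log10(amp)

# A signal 20 dB below the masker adds about 0.8 dB in phase but only
# about 0.04 dB in quadrature, which is one reason in-phase signals are
# easier to detect in simultaneous masking.
```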

10:45

5aPPa11. Discriminating change in the envelope phase spectrum. Stanley Sheft and William A. Yost (Parmly Hearing Inst., Loyola Univ., Chicago, 6525 N. Sheridan Rd., Chicago, IL 60626)

Models of a modulation filterbank retain envelope phase information for only the lowest rates. To evaluate model predictions, the ability to discriminate change in the envelope phase spectrum of modulated wideband noise was measured. Modulators were narrow-band noises with bandwidths ranging from 5 to 160 Hz. The contrasting envelopes were generated by manipulating the modulator phase spectrum while leaving the amplitude spectrum unchanged. The modulator phase spectrum was varied by reversing the modulator waveform in time, randomizing or zeroing the phase arguments in the modulator spectrum, or generating a minimum-phase reconstruction through cepstral analysis. For time reversal and randomization, discrimination ability decreased as modulator bandwidth increased, indicating both a loss of phase information with increasing rate and masking of the low-rate information by higher-rate modulation. For zero- and minimum-phase modulators, at some point performance improved with bandwidth. This result implies a sliding temporal window that preserves power fluctuations in the filterbank output. Comparable performance was obtained when equal-bandwidth zero-phase regions were restricted to either a low- or a higher-rate region. This result may reflect a nonlinearity in modulation processing whereby intermodulation of high-rate components introduces perceptible low-rate modulation. [Work supported by NIH.]
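The manipulations in 5aPPa11 all share one constraint: the modulator amplitude spectrum stays fixed while only the phase spectrum changes. The randomization case can be sketched as follows (function name and signal length are illustrative assumptions):

```python
import numpy as np

def randomize_phase(x, rng):
    """Return a signal with the same amplitude spectrum as x but a
    random phase spectrum. DC and Nyquist bins are kept real so the
    inverse transform is real-valued."""
    spectrum = np.fft.rfft(x)
    phase = rng.uniform(0.0, 2.0 * np.pi, spectrum.size)
    phase[0] = 0.0                      # DC bin stays real
    if x.size % 2 == 0:
        phase[-1] = 0.0                 # Nyquist bin stays real
    return np.fft.irfft(np.abs(spectrum) * np.exp(1j * phase), n=x.size)

# Time reversal and zero-/minimum-phase reconstruction likewise alter
# only the phases, leaving the modulation amplitude spectrum intact.
```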

11:00

5aPPa12. FM phase and the continuity illusion. Robert P. Carlyon, Christophe Micheyl, John M. Deeks (MRC Cognition & Brain Sci. Unit, 15 Chaucer Rd., Cambridge CB2 2EF, UK, [email protected]), and Brian C. J. Moore (Univ. of Cambridge, Cambridge CB2 3EB, UK)

In experiment 1, subjects discriminated between a "regular" 1-kHz FM tone and one in which the FM phase reversed midway through, at a zero crossing of the modulator waveform. The zero-to-peak FM depth was 100 Hz. Performance was near-perfect when the modulator frequency was 2.5 Hz, but declined sharply as it increased above 10 Hz. This is consistent with instantaneous frequency being smoothed over a finite time window, with the phase reversal producing a "bump" in the window output that decreases for faster modulations. In experiment 2, listeners discriminated between an FM tone containing a phase reversal and one in which the FM depth of one-half modulator cycle was increased by various amounts ("df"), thereby also producing a bump in the window output. Performance reached a minimum (at df of approximately 5%), consistent with the theory. Finally, a 200-ms gap was inserted at the phase reversal point in a 5-Hz FM stimulus, removing subjects' ability to detect the phase reversal. Filling the gap with noise introduced a sensation of continuity, but subjects still could not detect the phase reversal, even though this would have been easy in a physically continuous stimulus. The results suggest that FM phase is not encoded explicitly in the auditory system.
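The experiment-1 stimulus in 5aPPa12 can be synthesized by integrating an instantaneous-frequency trajectory whose modulator is sign-inverted at the midpoint, placed on a modulator zero crossing. Carrier, depth, and modulator rate follow the abstract; the sample rate, duration, and function name are assumptions:

```python
import numpy as np

def fm_with_phase_reversal(fc=1000.0, fmod=2.5, df=100.0, fs=48000, dur=2.0):
    """1-kHz tone with sinusoidal FM (zero-to-peak depth df in Hz).
    The modulator sign is flipped at the midpoint, which coincides with
    a modulator zero crossing when fmod * dur / 2 is a multiple of 0.5
    (here 2.5 cycles, i.e., 5 half-cycles)."""
    n = int(fs * dur)
    t = np.arange(n) / fs
    mod = np.sin(2 * np.pi * fmod * t)
    mod[n // 2:] *= -1.0                      # FM phase reversal
    inst_freq = fc + df * mod                 # instantaneous frequency (Hz)
    phase = 2 * np.pi * np.cumsum(inst_freq) / fs
    return np.sin(phase)
```

Because only the modulator phase flips, the waveform itself stays continuous; the discriminable cue is the "bump" this reversal leaves in a smoothed instantaneous-frequency trace.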


11:15

5aPPa13. Plasticity of loudness perception. Craig Formby, LaGuinn P. Sherlock, and Susan L. Gold (Univ. of Maryland, Baltimore, MD 21201)

Evidence from the management of tinnitus and hyperacusis suggests that loudness perception is plastic and adaptable. We have undertaken a study to evaluate this idea. The motivation followed from clinical observations suggesting that the magnitude of perceived loudness and, in turn, the rate of loudness growth can be manipulated either upward or downward by prolonged reduction or enhancement of the level of background sound to which a listener is exposed. Accordingly, volunteers were fitted bilaterally with in-the-ear noise instruments (NI treatment) or sound-attenuating earplugs (EP treatment). Both treatments produced audibility threshold shifts, mainly above 1000 Hz. The effects of each treatment were evaluated after 2 weeks of continuous use relative to pretreatment loudness response data obtained using the Contour Test of loudness perception. The resulting loudness data for warble tones revealed opposite patterns of plasticity for the two treatments, with steeper and shallower (than pretreatment) loudness growth functions measured, respectively, for the EP and NI treatments. These effects were significantly different for loudness response categories judged to be comfortably loud or louder at both 500 and 2000 Hz. Possible mechanisms for this apparent plasticity of loudness will be discussed. [Research supported by NIDCD.]

11:30

5aPPa14. Least-mean-square estimation of new equal-loudness level contours from recent data based on a loudness perception model. Yôiti Suzuki (Res. Inst. of Elec. Commun., Tohoku Univ., 2-1-1, Katahira, Aoba-ku, Sendai 980-8577, Japan) and Hisashi Takeshima (Sendai Natl. Coll. Tech., 1, Sendai 989-3124, Japan)

Since probable large errors were pointed out in 1985 in the equal-loudness level contours of Robinson and Dadson [Br. J. Appl. Phys. 7, 166–181 (1956)], which were standardized as ISO 226, a considerable amount of data on the equal-loudness relation has been accumulated. Most of the data consistently show a large discrepancy, up to more than 20 phons, from those contours, especially below 1 kHz. To obtain reliable contours from these new data, given sporadically for some specific frequencies and phons, a model function representing the equal-loudness relation was derived from a loudness function modified by the two-stage loudness perception model. Values of the parameters of the model function were obtained by fitting the function to the experimental data. Equal-loudness level contours could then be drawn by using the model function with the parameter values interpolated along the frequency axis. The resulting equal-loudness level contours showed clear differences from those of Robinson and Dadson for all loudness levels over the whole frequency range, particularly below 1 kHz. In contrast, the contours rather resemble those given by Fletcher and Munson in 1933 and by Churcher and King in 1937 in the midfrequency range at relatively low loudness levels. [Work supported by NEDO.]

11:45

5aPPa15. Loudness for tones underwater. Edward Cudahy and Derek Schwaller (Naval Submarine Medical Res. Lab., Box 900, Groton, CT 06349-5900, [email protected])

The loudness of pure tones was measured by loudness matching for 1-s pure tones from 100 to 50 000 Hz. The standard tone was 1000 Hz. Subjects were instructed to match the loudness of the comparison tone at one of the test frequencies to the loudness of the standard tone. The standard was presented at one of five sound pressure levels (SPLs) for each set of frequencies. The standard SPL was varied randomly across test series. The subjects were bareheaded US Navy divers tested at a depth of 3 m. All subjects had normal hearing. The tones were presented to the right side of the subject from an array of underwater sound projectors. The SPL was calibrated at the location of the subject's head with the subject absent. Loudness increased more rapidly as a function of standard SPL at midfrequencies than at either high or low frequencies. The most compact loudness contours (least SPL change across the range of standard SPLs) were at 50 000 Hz. The underwater loudness contours across frequency are significantly different from in-air measurements and have a minimum in the 1000-Hz region rather than the 2–4-kHz region observed for in-air measurements. [Work supported by ONR.]

FRIDAY MORNING, 7 JUNE 2002

GRAND BALLROOM 2, 8:30 A.M. TO 1:00 P.M.

Session 5aPPb

Psychological and Physiological Acoustics: Potpourri (Poster Session)

Christine R. Mason, Chair
Department of Communication Disorders, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215

Contributed Papers

All posters will be on display from 8:30 a.m. to 1:00 p.m. To allow contributors an opportunity to see other posters, contributors of odd-numbered papers will be at their posters from 8:30 a.m. to 10:45 a.m., and contributors of even-numbered papers will be at their posters from 10:45 a.m. to 1:00 p.m.

5aPPb16. Second-order temporal modulation transfer functions (TMTFs) in normal-hearing, hearing-impaired, and cochlear implant listeners. Christian Lorenzi, Jerome Sibellas (LPE, UMR CNRS 8581, Inst. de Psychologie, Univ. Paris 5, 71 Av. Vaillant, 92774 Boulogne-Billancourt, France, [email protected]), Stephane Garnier (Groupement d'audioprothesistes ENTENDRE, 78760 Jouars-Pontchartrain, France), and Stephane Gallego (Laboratoire MXM, 06224 Vallauris Cedex, France)

"Second-order" TMTFs are obtained by measuring detection thresholds for second-order modulation (that is, sinusoidal modulation applied to the modulation depth of a sinusoidally amplitude-modulated tone or noise carrier) as a function of fm′, the rate of second-order modulation. Here, the modulated tone or noise acts as a carrier stimulus of rate fm. Such TMTFs may be viewed as descriptions of the attenuation characteristics of the envelope beat component produced by second-order modulation at rate fm′. Second-order TMTFs assessed in normal-hearing listeners, listeners with moderately severe cochlear damage, and cochlear implantees are similar (i.e., low-pass) in shape. Overall, detection thresholds are higher in hearing-impaired than in normal-hearing listeners. However, the differences in thresholds are small (5 dB). In cochlear implantees, detection thresholds are generally improved when amplitude compression is applied to the stimuli. However, the increase in sensitivity produced by compression is modest (4 dB). Taken together, these results may be accounted for in terms of modulation filters, under the following assumptions: (i) a weak distortion product is generated by the peripheral auditory system at the envelope beat rate fm′, and (ii) a salient envelope beat cue appears at the output of modulation filters tuned near the carrier rate fm.
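The second-order stimulus described in 5aPPb16 (sinusoidal modulation applied to the modulation depth) can be generated directly; the envelope spectrum then shows exactly the components at fm and fm ± fm′ that a modulation-filter analysis operates on. A sketch in which the parameter defaults (m = 0.8, fm2 = 16 Hz) are illustrative assumptions:

```python
import numpy as np

def second_order_sam_envelope(fm=256.0, fm2=16.0, m=0.8, fs=48000, dur=1.0):
    """Envelope whose modulation depth is itself sinusoidally modulated
    at rate fm2 (second-order SAM). Multiply by a tone or noise carrier
    to obtain the stimulus."""
    t = np.arange(int(fs * dur)) / fs
    depth = m * (1.0 + np.cos(2.0 * np.pi * fm2 * t)) / 2.0   # varies 0..m
    return 1.0 + depth * np.cos(2.0 * np.pi * fm * t)

# Expanding the product gives
#   1 + (m/2) cos(2 pi fm t)
#     + (m/4) [cos(2 pi (fm + fm2) t) + cos(2 pi (fm - fm2) t)],
# i.e., sidebands at fm +/- fm2 whose beat at fm2 is the slow envelope cue.
```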

5aPPb1. Modulation rate-discrimination and frequency modulation detection thresholds in the amplitude modulation domain. Christian Füllgrabe, Christian Lorenzi (LPE-UMR CNRS 8581, Univ. Rene Descartes, Paris 5, 71 Av. Vaillant, 92774 Boulogne-Billancourt, France), and Laurent Demany (Univ. Victor Segalen, 33076 Bordeaux, France)

The present study tested the existence of modulation filters by transposing the concept of excitation pattern to the amplitude modulation (AM) domain. In the first experiment, AM rate-discrimination thresholds (DLFs) for white-noise carriers with AM reference rates of 4, 16, and 64 Hz were measured in five normal-hearing listeners. AM depth was either (i) fixed at 0.2, 0.6, or 1.0, or (ii) varied randomly between 0.2 and 1.0. Randomization of the modulation depth occurred either between trials or between the two intervals of the same trial. In a second experiment, detection thresholds for slow (1 or 4 Hz) frequency modulation (FMDLs) applied to the AM carrier (at the same rates as in the previous experiment) were measured, with and without the presence of a slow sinusoidal modulation applied to the depth of the AM carrier. Overall, the data show that DLFs are less affected than FMDLs by variations in the depth of AM. The results are discussed in light of current models of AM perception.

5aPPb2. Detection and discrimination of second-order sinusoidal amplitude modulation (SAM). Christian Füllgrabe and Christian Lorenzi (LPE-UMR CNRS 8581, Univ. Rene Descartes-Paris V, 71 Av. Vaillant, 92774 Boulogne-Billancourt, France, [email protected])

The perception of complex temporal envelopes has recently been studied using second-order SAM [Lorenzi et al., J. Acoust. Soc. Am. 110, 1030 (2001)]. In these stimuli, the modulation depth of a SAM signal (of rate fm) is sinusoidally amplitude-modulated at a rate fm′, thereby generating two additional components at fm ± fm′ in the modulation spectrum and a slow beat at fm′ in the temporal envelope. The present study investigates the respective contributions of these two cues to the perception of second-order SAM. In four normal-hearing listeners, second-order SAM detection and rate-discrimination abilities are measured at a high "carrier" rate (fm = 256 Hz) but low beat rates (fm′ ≤ 128 Hz), as a function of stimulus duration (250 ms to 2 s). The data are compared to first-order SAM detection and rate-discrimination thresholds measured in similar conditions at 1 ≤ fm ≤ 128 Hz. At common modulation rates (fm = fm′), the results show that (i) first- and second-order thresholds increase similarly when stimulus duration decreases, and (ii) first- and second-order SAM rate-discrimination thresholds are basically identical. The data therefore indicate that detection and discrimination of second-order SAM are mainly based on the slow temporal envelope beat cue.

5aPPb3. The influence of practice on the detectability of auditory sinusoidal amplitude modulation. Matthew B. Fitzgerald and Beverly A. Wright (Dept. of Commun. Sci. and Disord. and the Inst. for Neurosci., Northwestern Univ., 2299 N. Campus Dr., Evanston, IL 60208-3550)

The capacity to detect fluctuations in sound amplitude influences the perception of many everyday sounds, including speech. Here, the influence of practice on this ability was investigated. Between two testing sessions, one group of nine listeners who were tested on the detection of sinusoidal amplitude modulation (SAM) improved by about 0.7 dB on each of five conditions (300 trials/condition). Nine other listeners participated in these same sessions, but between them practiced 4320 trials detecting the presence of 80-Hz SAM with a 3- to 4-kHz narrow-band carrier. Only three of these listeners, who had among the highest initial detection thresholds on the trained condition, improved during this training phase. These learners subsequently improved at untrained modulation rates (30 and 150 Hz) with the trained carrier, but not at the trained modulation rate with untrained carriers (0.5–1.5 kHz and 0–5 kHz). These data suggest that (1) at some stage, modulation processing is more closely linked to the carrier spectrum than to the modulation rate, and (2) while most normal-hearing listeners reach their best modulation-detection performance with minimal experience, listeners with high initial thresholds benefit from extended practice. Thus, training may aid populations that have difficulty detecting amplitude modulation. [Work supported by NIDCD.]

5aPPb4. Signal–masker similarity and perceptual segregation in informational masking: Some examples. Christine R. Mason, Gerald Kidd, Jr., Nathaniel I. Durlach, Tanya L. Arbogast (Hearing Res. Ctr., Boston Univ., 635 Commonwealth Ave., Boston, MA 02215, [email protected]), Barbara Shinn-Cunningham, and H. Steven Colburn (Hearing Res. Ctr., Boston Univ., 635 Commonwealth Ave., Boston, MA 02215)

In a companion paper at this meeting (Durlach et al.), it was suggested that changes in the similarity and/or perceptual segregation of a signal and masker could affect the amount of informational masking in ways that would not be predicted from the amount of uncertainty. In that paper, several simple stimulus manipulations are used as illustrative examples of the predicted changes. Although these were offered as gedanken (thought) experiments, data are presented here confirming the suggestions. Perceptual segregation of the signal was accomplished by varying the degree of similarity or coherence between signal and masker along a relevant stimulus dimension. For example, when the masker consisted of a set of randomly drawn upward glides, listeners perceptually segregated a downward-gliding signal. This reduced the amount of informational masking relative to the condition in which both signal and masker were similar upward glides. The same was true for asynchrony of onsets, differences in perceived interaural location, and variation/coherence in spectrotemporal pattern. The purpose of these experiments was to provide data that will be useful in the development of a general theory of informational masking and multisource listening that takes into account perceptual grouping/segregation and perceived similarity. [Work supported by NIH/NIDCD.]

J. Acoust. Soc. Am., Vol. 111, No. 5, Pt. 2, May 2002

5aPPb5. Internal noise invariance across two informational masking tasks. Zhongzhou Tang and Virginia M. Richards (Dept. of Psych., Univ. of Pennsylvania, Philadelphia, PA 19104, [email protected])

The detectability of a 1000-Hz signal tone added to a six-tone masker was measured using a two-interval, forced-choice task. The frequencies of the masker components were randomly drawn either within a trial (different maskers in the two intervals; within condition) or between trials (same maskers across the two intervals; between condition). Using a linear channel model with channel variances that are unaltered across conditions, weights for different frequency regions were estimated using individual responses in the within condition. Additionally, estimates of total decision variance were obtained using psychometric functions. For two observers, thresholds in the within and between conditions were approximately the same, indicating a failure of the linear model. For a third observer the data were sufficiently variable that the psychometric functions and the linear model were poorly fitted. For the remaining five observers, thresholds in the within condition were on average 7 dB higher than in the between condition. However, for these observers estimates of internal noise did not reliably differ across conditions. Provided the modeling assumptions are reasonable, this result suggests that for these five observers the change in threshold can be accounted for by the change in stimulus uncertainty.
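The weight-estimation step in 5aPPb5 can be sketched with a correlational analysis of trial-by-trial responses; the observer weights, component counts, and level statistics below are illustrative assumptions, not the authors' procedure or data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_comp = 20000, 6

# Assumed "true" observer weights across the six masker-frequency channels.
w_true = np.array([0.05, 0.15, 0.40, 0.25, 0.10, 0.05])

# Component levels (dB) drawn independently per interval: a "within" condition.
lev1 = rng.normal(60.0, 5.0, (n_trials, n_comp))
lev2 = rng.normal(60.0, 5.0, (n_trials, n_comp))

# Linear-channel observer: choose the interval with the larger weighted sum
# of channel levels plus internal noise.
internal = rng.normal(0.0, 2.0, n_trials)
chose_2 = (lev2 - lev1) @ w_true + internal > 0

# Correlational weight estimate: correlate each channel's level difference
# with the binary decision, then normalize to unit sum.
dlev = lev2 - lev1
w_hat = np.array([np.corrcoef(dlev[:, k], chose_2)[0, 1] for k in range(n_comp)])
w_hat /= w_hat.sum()
print(np.round(w_hat, 2))
```

With enough trials, the normalized correlations recover the relative channel weighting, which is the quantity the abstract's linear channel model requires.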

5aPPb6. Sensitivity to changes in amplitude envelope. Erick Gallun, Ervin R. Hafter, and Anne-Marie Bonnel (Dept. of Psych., Univ. of California, Berkeley, CA 94720)

Detection of a brief increment in a tonal pedestal is less well predicted by energy detection (e.g., Macmillan, 1973; Bonnel and Hafter, 1997) than by sensitivity to changes in the stimulus envelope. As this implies a mechanism similar to an envelope extractor (Viemeister, 1979), sinusoidal amplitude modulation was used to mask a single ramped increment (10, 45, or 70 ms) added to a 1000-ms pedestal with carrier frequency (cf) = 477 Hz. As in informational masking (Neff, 1994) and "modulation-detection interference" (Yost and Sheft, 1989), interference occurred with masker cfs of 477 and 2013 Hz. While slight masking was found with modulation frequencies (mfs) from 16 to 96 Hz, masking grew inversely with still lower mfs, being greatest for mf = 4 Hz. This division is reminiscent of that said to separate sensations of "roughness" and "beats," respectively (Terhardt, 1974), with the latter also being related to durations associated with auditory groupings in music and speech. Importantly, this result held for all of the signal durations and onset–offset ramps tested, suggesting that an increment on a pedestal is treated as a single auditory object whose detection is most difficult in the presence of other objects (in this case, "beats").

5aPPb7. Informational masking without maskers. Robert A. Lutfi and Joshua M. Alexander (Waisman Ctr. and Dept. of Communicative Disord., Univ. of Wisconsin, Madison, WI 53706)

Informational masking is often interpreted as a failure of listeners to "perceptually segregate" the signal from the masker based on their different spectral-temporal properties. Though popular, such interpretations are difficult to test, as they make few specific predictions. There is, however, one prediction clearly implied by perceptual segregation: informational masking should be eliminated on trials in which the masker is absent. Without a masker there can be no failure of perceptual segregation and so no masking. We report results inconsistent with this prediction. On masker-absent trials randomly interleaved with masker-present trials we show elevations in signal threshold in excess of 20 dB for some listeners. These results imply a process of perceptual summation rather than segregation. In particular, they are predicted by a model in which the decision variable is a weighted sum of signal and masker levels on each trial [Lutfi, J. Acoust. Soc. Am. 94, 748–758 (1993)]. Elevations in signal threshold on masker-absent trials are possible according to this model because the weighted sum of levels is similar on signal-alone and masker-alone trials. [Work supported by NIDCD.]
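The weighted-sum decision variable from Lutfi (1993) invoked above can be illustrated with a toy calculation; the weights and level statistics are hypothetical choices for illustration, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 50000

# Hypothetical weights: the listener sums levels over the signal band and the
# masker band instead of monitoring the signal band alone.
w_sig, w_mask = 0.5, 0.5

sig_level = 60.0                                # dB, fixed signal level
mask_level = rng.normal(60.0, 8.0, n_trials)    # random masker level per trial

# Decision variable on signal-alone (masker-absent) trials vs.
# masker-alone (signal-absent) trials.
d_sig_alone = w_sig * sig_level
d_mask_alone = w_mask * mask_level

# Fraction of masker-alone trials whose decision variable exceeds the
# signal-alone value: when the weighted sums are this similar, "signal"
# responses cannot distinguish the two trial types, so thresholds rise
# even on trials with no masker present.
p_confusable = float(np.mean(d_mask_alone > d_sig_alone))
print(p_confusable)
```

The large overlap between the two decision-variable distributions is the model's explanation for masking on masker-absent trials.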

5aPPb8. Amplitude modulation perception for people with normal hearing. Jan Koopman, Niels Plasmans, and Wouter Dreschler (Dept. of Clinical & Exp. Audiol., Academic Medical Ctr., P.O. Box 22660, Meibergdreef 9, 1100 DD, Amsterdam, The Netherlands, [email protected])

Our experiments focus on the perception of amplitude modulation. In total, four experiments were carried out. In the first experiment, the sensitivity for amplitude modulation was determined. In the other three tests, amplitude modulation was matched for signals that differ with respect to one of the following parameters: bandwidth, center frequency, and sensation level. The equally perceived modulation depth (EPMD) was approximately equal to the reference depth for signals with a 15 dB difference in sensation level. For the bandwidth and center frequency, we generally found an interference of the internal fluctuations on the perceived modulation depth. Data can be described reasonably well by determining the differences in sensation depth (i.e., the amplitude modulation depth [in dB] above the threshold for modulation detection). An alternative way to model the results of this study is to determine the standard deviation of energy values for a sliding temporal integrator. Both models can be improved significantly by taking into account the differences in growth of loudness functions for the two conditions.

5aPPb9. Effects of signal duration and temporal position on physiological measures of the growth of masking. Jiayun Liu and Ann Clock Eddins (Dept. of Commun. Disord. and Sci., Univ. at Buffalo, Buffalo, NY 14214)

Psychoacoustic studies have shown that the growth of masking for pure-tone signals in broadband noise is linear when the signal onset is delayed relative to the masker but is nonlinear when signal and masker onsets are synchronous [e.g., Strickland (2001)]. Curiously, for very short duration signals (10 ms), nonlinear effects have been observed for some delayed-onset conditions (Oxenham et al., 1997). While nonlinear effects may result from peripheral compression, it is unclear if other physiological mechanisms might be responsible for the influence of signal temporal parameters on the growth of masking. To address this issue, the growth of masking was studied using evoked-potential responses recorded from the chinchilla inferior colliculus (IC). Quiet and masked thresholds were obtained for 1.0-, 4.0-, and 6.5-kHz signals as a function of signal duration (2–100 ms) and temporal position (delayed or nondelayed) within a 400-ms broadband noise masker. Masker spectrum levels ranged from −10 to 20 dB. Consistent with Strickland's data, physiological growth of masking was linear under delayed conditions and nonlinear under nondelayed conditions. Specifically, for nondelayed conditions, the degree of nonlinearity was greatest for high-frequency signals but did not vary with signal duration. [Work supported by NSF IBN-9996379.]

5aPPb10. Analytical predictions of auditory-nerve response to noise-modulated pulsatile electrical stimulation. William D. Ferguson, Yifang Xu (Dept. of Elec. and Computer Eng., Duke Univ., P.O. Box 90291, Durham, NC 27708, [email protected]), Roger L. Miller (Duke Univ. Medical Ctr., Durham, NC 27710), and Leslie M. Collins (Duke Univ., Durham, NC 27708)

One factor that may impede speech recognition by cochlear implant subjects is that electrically stimulated auditory nerves respond with a much higher level of synchrony than is normally observed in acoustically stimulated nerves. Thus, the response patterns received by higher processing centers are likely to be substantially different from those generated under normal acoustic stimulation. These differences may form the basis for a degradation of speech understanding since the patterns generated under electrical stimulation may be interpreted incorrectly by higher processing centers. Based on recent findings, the implant research community has suggested several techniques to mitigate this synchrony, which may in turn provide some implanted individuals with improved speech recognition. In this work, the inter-stimulus interval histogram (ISIH) is utilized to compare the response of an electrically stimulated auditory nerve with that of an acoustically stimulated nerve of a cat. The ISIH data were generated from a stochastic model of the cochlea with noise-modulated pulsatile stimulation. Simulated ISIHs are presented along with corroborating analytical predictions of the results. When compared to an acoustically generated ISIH, these data indicate that the addition of noise may provide more natural neural responses to electrical stimuli. [Work supported by NSF.]

5aPPb11. Evaluation of interval-based and beat-based timing mechanisms for duration discrimination. Melody S. Berens and Richard E. Pastore (Dept. of Psych., Binghamton Univ., PO Box 6000, Binghamton, NY 13901, [email protected])

An important topic in music perception is how people track time or judge duration [E. W. Large and M. R. Jones, Psychol. Rev. 106(1), 119–159 (1999)]. Two typically proposed duration judgment timing mechanisms are interval-based timing (judgment of discrete events) and beat-based timing (judgments of beat regularity) [S. W. Keele, N. Nicoletti, R. I. Ivry, and R. A. Pokorny, Psychol. Res. 50, 251–256 (1989)]. Models were developed for several different versions of these hypothetical timing mechanisms. Predictions from these models were then evaluated in experiments that estimated thresholds for judging differences in interval duration. Musicianship of participants was also evaluated. Each trial began with an initial training sequence of intervals that enhanced the stored interval or established the internal rhythm or beat. The training interval was followed by a specific delay that varied across conditions. Each trial ended with a target and a comparison interval. Participants judged whether the comparison interval was equal to or longer in duration than the target interval. Results were systematic, but not consistent with simple, straightforward versions of either timing mechanism.

5aPPb12. A system for improving the communication of emotion in music performance by feedback learning. Erwin Schoonderwaldt, Anders Friberg, Roberto Bresin (Royal Inst. of Technol., Speech Music and Hearing, Drottning Kristinasv. 31, 100 44, Stockholm, Sweden, [email protected]), and Patrik Juslin (Uppsala Univ., 751 42, Uppsala, Sweden)

Expressivity is one of the most important aspects of music performance. However, in music education, expressivity is often overlooked in favor of technical abilities. This could possibly depend on the difficulty of describing expressivity, which makes it problematic to provide the student with specific feedback. The aim of this project is to develop a computer program which will improve the students' ability to communicate emotion in music performance. The expressive intention of a performer can be coded in terms of performance parameters (cues), such as tempo, sound level, timbre, and articulation. Listeners' judgments can be analyzed in the same terms. An algorithm was developed for automatic cue extraction from audio signals. Using note onset–offset detection, the algorithm yields values of sound level, articulation, IOI, and onset velocity for each note. In previous research, Juslin has developed a method for quantitative evaluation of performer–listener communication. This framework forms the basis of the present program. Multiple regression analysis on performances of the same musical fragment, played with different intentions, determines the relative importance of each cue and the consistency of cue utilization. Comparison with built-in listener models, simulating perceived expression using a regression equation, provides detailed feedback regarding the performers' cue utilization.

5aPPb13. Modeling sound transmission and reflection in the pulmonary system and chest with application to diagnosis of a collapsed lung. Thomas J. Royston, Xiangling Zhang (Univ. of Illinois at Chicago, 842 West Taylor St., MC 251, Chicago, IL 60607, [email protected]), Hussein A. Mansy, and Richard H. Sandler (Rush Medical College, Chicago, IL 60612)

Experimental studies have shown that a pneumothorax (collapsed lung) substantially alters the propagation of sound introduced at the mouth of an intubated subject and measured at the chest surface. Thus, it is hypothesized that an inexpensive diagnostic procedure could be developed for detection of a pneumothorax based on a simple acoustic test. In the present study, theoretical models of sound transmission through the pulmonary system and chest region are reviewed in the context of their ability to predict acoustic changes caused by a pneumothorax, as well as other pathologic conditions. Such models could aid in parametric design studies to develop acoustic means of diagnosing pneumothorax and other lung pathologies. Extensions of previously developed simple models of the authors are presented that are in more quantitative agreement with experimental results and that simulate both transmission from the bronchial airways to the chest wall, as well as reflection in the bronchial airways. [Research supported by NIH NCRR Grant No. 14250 and NIH NHLBI Grant No. 61108.]
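The inter-stimulus interval histogram (ISIH) used in 5aPPb10 above reduces to a histogram of the intervals between successive events; a minimal sketch on synthetic spike times (the 200-Hz toy spike train is an assumption for illustration, not the cat data):

```python
import numpy as np

def isih(spike_times_s, bin_width_s=0.0005, max_interval_s=0.02):
    """First-order interval histogram: intervals between successive spikes."""
    intervals = np.diff(np.sort(np.asarray(spike_times_s)))
    edges = np.arange(0.0, max_interval_s + bin_width_s, bin_width_s)
    counts, _ = np.histogram(intervals, bins=edges)
    return counts, edges

# Toy spike train locked to a 200-Hz stimulus: intervals cluster near one or
# two stimulus periods (5 or 10 ms) with a little timing jitter.
rng = np.random.default_rng(2)
t = np.cumsum(rng.choice([0.005, 0.010], size=500)) + rng.normal(0.0, 0.0002, 500)
counts, edges = isih(t)
mode_s = edges[int(np.argmax(counts))]
print(mode_s)  # left edge of the most-populated interval bin
```

A strongly phase-locked (highly synchronous) response concentrates the histogram at multiples of the stimulus period, which is the contrast the abstract draws between electric and acoustic stimulation.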

5aPPb14. Differential auditory signal processing in an animal model. Dukhwan Lim, Chongsun Kim, and Sun O. Chang (Dept. of Otolaryngol., Seoul Natl. Univ., 28 Yungundong, Chongnogu, Seoul 110-744, Korea)

Auditory evoked responses were collected in male zebra finches (Poephila guttata) to objectively determine differential frequency selectivity. First, the mating call of the animal was recorded and analyzed for its frequency components through a customized program. Then, auditory brainstem responses and cortical responses of each anesthetized animal were routinely recorded in response to tone bursts of 1–8 kHz derived from the corresponding mating call spectrum. From the results, most mating calls showed relatively consistent spectral structures. The upper limit of the spectrum was well under 10 kHz. The peak energy bands were concentrated in the region below 5 kHz. The assessment of auditory brainstem responses and cortical evoked potentials showed differential selectivity with a series of characteristic scales. This system appears to be an excellent model to investigate complex sound processing and related language behaviors. These data could also be used in designing effective signal processing strategies in auditory rehabilitation devices such as hearing aids and cochlear implants. [Work supported by Brain Science & Engineering Program from Korean Ministry of Science and Technology.]

5aPPb15. Measuring the performance of personal hearing protectors for high-noise environments. William A. Ahroon, Dale A. Ostler (U.S. Army Aeromedical Res. Lab., P.O. Box 620577, Fort Rucker, AL 36362-0577, [email protected]), and Martin B. Robinette (Lyster Army Hospital, Fort Rucker, AL 36362)

The American National Standards Institute currently identifies two real-ear attenuation at threshold (REAT) methods for measuring the performance of hearing protective devices (HPDs). The experimenter-supervised-fit method (Method A), employed in earlier standards, permits the subject nearly unlimited assistance from the experimenter in fitting an HPD under test. Conversely, the subject-fit method (Method B) allows no experimenter assistance, and the only fitting instructions available to the subject are those published by the manufacturer of the HPD and included in the HPD packaging. It is assumed that the Method A procedure provides a best-case estimate of HPD performance, while the Method B procedure provides a better estimate of the performance that can reasonably be obtained in real-world industrial settings. While measurements using the subject-fit method are preferred, there are questions as to whether this method is appropriate for measuring HPD performance in very high-noise environments such as those experienced in some military vehicles (i.e., rotary-wing aircraft and tracked vehicles). Comparisons of REAT measurements using different fitting methods are presented from studies of single- and double-protection hearing protection strategies. The preference of the ANSI S12.6-1997 subject-fit method for measuring the performance of HPDs in high-level noise environments is questioned.
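Whatever the fitting method, the REAT measure itself is the occluded-ear threshold minus the open-ear threshold at each test frequency; a sketch with invented threshold values (not data from this study):

```python
import numpy as np

# Hypothetical one-subject audiogram (dB HL) at standard REAT test
# frequencies, measured with ears open and with the protector fitted.
freqs_hz = np.array([125, 250, 500, 1000, 2000, 4000, 8000])
open_db = np.array([5.0, 5.0, 10.0, 10.0, 5.0, 10.0, 15.0])
occluded_db = np.array([25.0, 28.0, 32.0, 35.0, 38.0, 45.0, 40.0])

# Real-ear attenuation at threshold: the threshold shift produced by the HPD.
reat_db = occluded_db - open_db
for f, a in zip(freqs_hz, reat_db):
    print(f"{f:5d} Hz: {a:4.1f} dB")
```

Comparing Method A and Method B amounts to comparing these per-frequency attenuation values (typically averaged over subjects and trials) across the two fitting procedures.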

FRIDAY MORNING, 7 JUNE 2002

KINGS GARDEN NORTH, 8:00 TO 10:30 A.M. Session 5aSAa

Structural Acoustics and Vibration: Modeling

Joseph M. Cuschieri, Chair
Department of Ocean Engineering, Florida Atlantic University, Center for Acoustics and Vibration, 777 Glades Road, Boca Raton, Florida 33431

Contributed Papers

8:00

5aSAa1. Dynamic stability of a beam excited by moving masses. Seroj Mackertich (The Penn State Univ., Harrisburg, Middletown, PA 17057)

The dynamic stability of an elastically supported Timoshenko beam excited by constant-velocity, equally spaced traveling masses has been investigated. The regions of dynamic stability are determined for different values of the elastic foundation stiffness. Floquet theory is utilized to study the parametric regions of stability and instability, which are displayed in graphical form. Since the occurrence of this dynamic instability reduces the axial buckling load of the beam, the result is important for the study of buckling of a continuous beam.

8:15

5aSAa2. Auto-bicoherence and cross-bicoherence of the transverse vibration signals of cracked Bernoulli–Euler beams. Daniel Linehan (Appl. Res. Labs., Univ. of Texas, Austin, Austin, TX 78758)

The transverse vibration of a cracked Bernoulli–Euler beam is nonlinear if the crack does not remain open. Because this nonlinearity is even-ordered, an examination of the third-order frequency-domain statistics (auto-bicoherence and cross-bicoherence) of the forcing input and acceleration output time histories should reveal quadratic nonlinearities in the form of modal interactions. To this end, the transverse vibration behavior of unconstrained, cracked Bernoulli–Euler beams was studied at the University of Texas at Austin Applied Research Laboratories. The beams were made of an aluminum 6061 alloy and were cracked with varying depths at either the midpoint or off-center. The auto-bicoherence and cross-bicoherence of both a finite-element model (FEM) and experimental beams will be discussed. The excitation was both band-limited noise and narrow-band forcing at one end of the beams. While the FEM results show a crude relationship between crack depth and modal interaction when excited with band-limited noise, the experimental results only show such a relationship when the excitation is narrow-band. The results do not demonstrate a clear method of determining the crack location. Further studies should be conducted with more discrete crack depths and different narrow-band force excitation. [Work supported by Univ. of Texas, Austin Applied Research Laboratories IR&D Program.]

8:30

5aSAa3. Mode excitation and imaging by the radiation force of ultrasound. Mostafa Fatemi and Mohammad R. Zeraati (Mayo Foundation/Clinic, Rochester, MN 55905, [email protected])

A method for noncontact excitation and display of vibration mode shapes in solids is introduced. This method utilizes the radiation force of amplitude-modulated focused ultrasound to remotely excite a resonant mode in the object. The ultrasound transducer is driven by a sinusoidally modulated continuous-wave signal to produce an oscillatory radiation force on the object and drive it into one of its natural resonance modes. The acoustic field resulting from object vibration is detected by a hydrophone. A theoretical model has been developed that describes the relationship between the force, mode shape, and the acoustic field. It is shown that the acoustic field amplitude is proportional to the value of the mode shape at the focal point of the ultrasound beam. By scanning the ultrasound beam across the object it is possible to map the acoustic data into an image that represents the mode shape of the object. Experiments have been conducted on small steel, aluminum, and glass beams. The first five resonant modes were excited and imaged by the present method. Experimental results have shown remarkable agreement with the theory and computer simulation. Beam deflections on the order of tens of nanometers can be detected by this method.

8:45

5aSAa4. Local/global homogenization (LGH) applied to sound reflection from a flexible barrier with impedance discontinuities. Donald Bliss, Linda Franzoni, and Pavel Danilov (Dept. of Mech. Eng., Duke Univ., Durham, NC 27708-0300)

A new homogenization method for complex structures has been developed. The method utilizes a local/global decomposition to separate the low and high parts of the wave number spectrum. The low wave number global problem has an infinite-order structural operator, and structural discontinuities are replaced by an equivalent distributed suspension. The rapidly varying local problem, which provides transfer function information for the global problem, is solved separately. Once formulated for a specific structure, the self-contained global problem is solved first, and the local solution can be reconstructed afterwards. The LGH reformulation, which applies over the entire frequency range, allows the global problem to be solved at much lower resolution than the length of flexural waves on the original structure. To demonstrate the approach, the problem of sound reflection from a flexible barrier with impedance discontinuities in a channel is described. The effects of radiating acoustic modes are transferred entirely to the smooth global problem, whereas evanescent acoustic modes are contained within the global structural operator. Sample calculations are presented comparing the method with the exact solution.

9:00

5aSAa5. Density of axisymmetric modes of closed prolate spheroidal shells. Courtney B. Burroughs (Appl. Res. Lab., The Penn State Univ., State College, PA 16804)

The frequencies of resonance of prolate spheroidal shells may be estimated either by numerical methods or analytically via variational methods. With either approach, the computational effort increases as the order of the mode increases, making it difficult to obtain accurate estimates at frequencies where the modal density is high. However, at high frequencies, it is often necessary to obtain estimates only for the modal density, since modal overlap will reduce the effects of individual modes on the response of the shell. In this paper, approximations for the modal density of the axisymmetric modes of closed prolate spheroidal shells are derived based on results obtained using analytic variational methods.

9:15–9:30

Break

9:30

5aSAa6. Structural and acoustic intensities of an infinite, point-excited fluid-loaded elastic plate. Jungyun Won and Sabih Hayek (212 EES Bldg., Penn State Univ., University Park, PA 16802-6812)

In this paper, the active vibrational structural intensity (VSI) in, and the radiated acoustic intensity (AI) from, an infinite elastic plate in contact with a heavy fluid are modeled by the Mindlin plate theory. This theory includes shear deformation and rotatory inertia in addition to flexure. The plate is excited by a point force, which generates a vector active VSI field in the plate. The active VSI has two components: one depends on the shear force, and the other depends on the moment. The resulting acoustic radiation generates an active AI in the fluid medium. First, the Green's functions for the plate with and without fluid loading were developed. These were then used to develop expressions for the VSI and AI vector fields. The displacement, shear deformation, VSI vector map, radiated acoustic pressure, and AI vector map are computed for frequencies below and above the coincidence frequency. Below coincidence, a significant portion of the point-force input power is trapped in the plate in the form of VSI. Above coincidence, a significant portion of the input source power is leaked to the fluid in the form of AI, with a small portion propagating to the far-field VSI.

9:45

5aSAa7. Vibration modeling of beam reinforced plates using a direct image method. Joseph Cuschieri, Alexandre Sarda (Florida Atlantic Univ., 101 N. Beach Rd., Dania, FL 33004-3023), and Arcanjo Lenzi (Universidade Federal de Santa Catarina, Florianopolis, SC, Brazil)

Plates reinforced by beams are the main components of offshore structures as used in the oil prospecting and production industry. The vibration generated by machinery on these platforms propagates through the structure and can generate high noise levels in the accommodation areas. To determine the power flow through the beam reinforced plates, models that include the effect of the web and flanges' resonance in the beam and in-plane waves in all the structural components are necessary, since both of these are important, especially when considering relatively high frequencies for such large structures. This work presents a model to determine the response and power flow through beam reinforced plates using a direct image method to obtain the component mobility functions for arbitrary boundary conditions. This approach is very efficient computationally and can generate accurate results up to relatively high frequencies. The results obtained using this method are compared with an exact solution using 3-D stress equations for a free rectangular parallelepiped.

10:00

5aSAa8. Experimental validation of RESOUND mid-frequency acoustic radiation models. Bryce K. Gardner and Philip J. Shorter (Vibro-Acoust. Sci., 12555 High Bluff Dr., Ste. 310, San Diego, CA 92130)

RESOUND is a full-spectrum structural acoustic analysis method that uses finite element analysis in the low-frequency region, statistical energy analysis in the high-frequency region, and a hybrid approach in the mid-frequency region. In this paper, acoustic radiation from a frame-stiffened panel into a large acoustic space will be investigated. Over the frequency range of interest, the frame has relatively few modes and exhibits long-wavelength global behavior, while the panel has a large number of modes and exhibits short-wavelength local behavior. Numerical and experimental results will be presented which illustrate how the frame and panel interact to give rise to the radiated sound field.

10:15

5aSAa9. Specific features of thickness resonance in a finite elastic plate. Victor T. Grinchenko (Hydroacoustic Dept., Inst. of Hydromechanics of NAS of Ukraine, 8/4 Zhelyabov St., 03680 Kiev, Ukraine)

The notion of the thickness resonance in the theory of oscillation of elastic elements was formed within the scope of the two-dimensional model of an infinite elastic plate. It is presumed that the corresponding frequency is an eigenfrequency of the finite plate and that motion in the corresponding natural mode is pistonlike. The natural modes of a finite plate are formed as a result of interaction of wave motion in plane and in thickness. Contrary to the case of an ideal compressible fluid, these types of motion in an elastic body are tightly coupled. The interaction of the two types of motion results in essential complication of the eigenforms and of the spectrum of eigenfrequencies in a vicinity of the thickness resonance frequency. One interesting phenomenon is that some eigenforms have frequencies that grow with an increase of the size of the plate. The quantitative characteristics of the eigenforms and eigenfrequency spectrum in the relatively high-frequency domain are presented for the case of the circular plate and finite cylinder. The distributions of displacement on the surfaces of the plate for various values of Poisson's ratio are presented. Comparison of experimental data with results of numerical calculation is given.
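The auto-bicoherence statistic used in 5aSAa2 above can be estimated from segment-averaged FFTs. The sketch below substitutes synthetic quadratically coupled sinusoids for the beam measurements; segment length, test frequencies, and coupling strength are illustrative assumptions:

```python
import numpy as np

def auto_bicoherence(x, nfft):
    """Segment-averaged auto-bicoherence b^2(f1, f2) of a real signal."""
    segs = np.array([x[i:i + nfft] for i in range(0, len(x) - nfft + 1, nfft)])
    X = np.fft.rfft(segs * np.hanning(nfft), axis=1)
    m = X.shape[1] // 2  # keep f1 + f2 inside the computed band
    b2 = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            t = X[:, i] * X[:, j]
            bispec = np.mean(t * np.conj(X[:, i + j]))
            norm = np.mean(np.abs(t) ** 2) * np.mean(np.abs(X[:, i + j]) ** 2)
            b2[i, j] = np.abs(bispec) ** 2 / (norm + 1e-30)
    return b2

# Synthetic record: two tones plus a phase-coupled sum tone (the signature of
# a quadratic nonlinearity), with phases re-drawn for each segment.
fs, nfft, nseg = 1024, 256, 64
tvec = np.arange(nfft) / fs
f1, f2 = 64.0, 96.0
rng = np.random.default_rng(3)
segs = []
for _ in range(nseg):
    p1, p2 = rng.uniform(0.0, 2.0 * np.pi, 2)
    segs.append(np.cos(2 * np.pi * f1 * tvec + p1)
                + np.cos(2 * np.pi * f2 * tvec + p2)
                + 0.5 * np.cos(2 * np.pi * (f1 + f2) * tvec + p1 + p2)
                + 0.1 * rng.standard_normal(nfft))
x = np.concatenate(segs)
b2 = auto_bicoherence(x, nfft)
k1, k2 = int(f1 * nfft / fs), int(f2 * nfft / fs)
print(b2[k1, k2])
```

Phase-coupled bin pairs give bicoherence near unity while uncoupled bins average toward zero, which is how the bicoherence isolates the crack-induced modal interactions from ordinary linear response.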

FRIDAY MORNING, 7 JUNE 2002

KINGS GARDEN NORTH, 10:45 A.M. TO 12:15 P.M. Session 5aSAb

Structural Acoustics and Vibration: Radiation John B. Fahnline, Chair Pennsylvania State University, 16 Applied Science Building, University Park, Pennsylvania 16804 Contributed Papers 10:45 5aSAb1. Reconstruction of transient acoustic radiation from a thin disk. Manjit Bajwa and Sean Wu 共Dept. of Mech. Eng., Wayne State Univ., 5050 Anthony Wayne Dr., Detroit, MI 48202兲 The HELS method 关Wu, J. Acoust. Soc. Am. 107, 2511–2522 共2000兲兴 is extended to reconstruction of transient acoustic radiation from a highly nonspherical structure. The test object is a thin disk subject to an impulsive acceleration in an unbounded fluid medium. Since the HELS method allows piecewise reconstruction of acoustic quantities on the source surface, it is possible to focus on one side of the disk at a time. Also, since the origin of coordinates is arbitrary, one can set the spherical coordinates in such a way that the spherical surface looks almost flat locally. This treatment legitimizes the Rayleigh hypothesis and facilitates reconstruction of the normal surface velocity on the disk front surface. Reconstruction of normal surface velocity on the opposite side of the disk can be done in a similar manner. The input acoustic pressure signals are collected using an array of microphones in front of the disk and reconstructed acoustic quantities are compared with the analytic results 关Wu, J. Acoust. Soc. Am. 94, 542–553 共1993兲兴. Results show that the accuracy of reconstruction depends on that of input signals, and convergence of the reconstructed normal surface velocity improves with an increase in the cutoff frequency of input data. 关Work supported by NSF.兴 11:00 5aSAb2. Optimizing attenuation of sound radiation from a plate with passively shunted nonlinear piezoceramic patches. M. Bulent Ozer and Thomas J. Royston 共Univ. of Illinois at Chicago, 842 W. 
Taylor St., MC 251, Chicago, IL 60607, [email protected]兲 Passive control of multi-mode sound radiation from a simply supported plate using shunted piezoceramic PZT patches is investigated. Two methods are introduced and compared for calculation of the optimal inductance and resistance values of the shunt circuits: 共1兲 the first based on adapting Den Hartogs damped vibration absorber principle, and 共2兲 the second based on the Shermann–Morrison matrix inversion method. Both linear and nonlinear system responses are considered. At higher disturbance levels, hysteretic nonlinearity in PZT devices may degrade the passive damping performance. The modeling of this nonlinearity using an Ishlinskii hysteresis model and the determination of the optimal shunt circuit values taking hysteresis into account is discussed. Also, in this context, two different types of piezoceramic shunt circuit configurations are considered to achieve multi-mode attenuation: 共1兲 a single PZT patch with a multiple branched shunt circuit, and 共2兲 multiple PZT patches with single branch shunt circuits. 关Research supported by NSF Grant No. 9733565 and ONR Grant No. N00014-99-1-0342.兴 11:15 5aSAb3. New calculation strategy for acoustic radiation from a thick annular disk. Hyeongill Lee and Rajendra Singh 共Acoust. and Dynam. Lab., Dept. of Mech. Eng. and The Ctr. for Automotive Res., The Ohio State Univ., Columbus, OH 43210-1107兲 This article proposes a new semianalytical procedure for the calculation of sound radiation from a thick annular disk when it is excited by arbitrary harmonic forces. As the first step, structural eigensolutions for 2474

J. Acoust. Soc. Am., Vol. 111, No. 5, Pt. 2, May 2002

both in-plane 共radial兲 and out-of-plane 共flexural兲 modes are calculated using analytical methods. These are examined by the finite element analyses as well as the experimental investigations. The far-field sound pressure distributions 共including directivity兲 due to selected modes of the disk are obtained from numerically obtained surface velocities using Rayleigh integral solutions based on cylindrical or circular plate radiator formulations. Such formulations define the modal radiation solutions corresponding to the structural eigensolutions of a thick disk. The boundary element analyses and vibro-acoustic experiments validate analytical predictions. Surface velocity and far-field sound pressure due to an arbitrary harmonic force excitation are then obtained from the structural and acoustic normal mode expansions. Based on the far-field sound pressure, acoustic power and radiation efficiency spectra are obtained. The method is also confirmed by comparing analytical results with those from finite and boundary element analyses. Finally, the effect of coupling between modes is also investigated by the proposed procedure. 11:30 5aSAb4. Computing multipole expansions numerically. John B. Fahnline 共Appl. Res. Lab., University Park, PA 16802, [email protected]兲 Multipole expansions have long been used to classify and better understand sound fields. They are easily computed up through the quadrupole term using standard formulas found in basic acoustic texts. Here, the formulas are adapted for numerical computations of radiated acoustic power from vibrating structures. To perform the calculations, the normal surface velocity and pressure must be known in advance and are computed using a boundary element analysis. Several examples are given to demonstrate the utility of the calculations, including a baffled circular plate clamped around its periphery and a bass-reflex loudspeaker. 
The results show how the expansion is useful as a way of identifying radiation mechanisms and separating out the radiating component of the surface velocity profile from the nonradiating component.

11:45

5aSAb5. Prediction of acoustic radiation based on particle velocity measurements. Qiang Hu, Zhi Ni, Huancai Lu, Sean Wu (Dept. of Mech. Eng., Wayne State Univ., 5050 Anthony Wayne Dr., Detroit, MI 48202), and Yang Zhao (Dept. of Elec. and Computer Eng., Wayne State Univ., 5050 Anthony Wayne Dr., Detroit, MI 48202)

It has been shown [Wu and Hu, J. Acoust. Soc. Am. 103, 1763–1774 (1998); 104, 3251–3258 (1998)] that the radiated acoustic pressure can be determined directly once the particle velocity distribution over an imaginary surface enclosing the object under consideration is obtained. This alternative formulation is advantageous over the classical Helmholtz integral theory, which requires the surface acoustic quantities to be completely specified before the field acoustic pressure can be calculated. The difficulty with this approach is the measurement of the fluctuating part of the particle velocity in the fluid medium. This paper describes an attempt to measure particle velocities using a laser anemometer. To facilitate measurements, fine particles are sprayed in the air by a fog generator. These particles oscillate in an insonified field at the excitation frequency. Both the amplitudes of particle velocities in the normal and tangential directions and the phases are measured. These data are used to predict the field acoustic

143rd Meeting: Acoustical Society of America


pressures, which are validated by measurements taken at the same locations. Since the field acoustic pressures are calculated directly, the nonuniqueness difficulties inherent in the Helmholtz integral formulation no longer exist and the efficiency of numerical computations is significantly enhanced. [Work supported by NSF.]

12:00

5aSAb6. An efficient method to calculate the radiated pressure from a vibrating structure. Sunghoon Choi and Yang-Hann Kim (Dept. of Mech. Eng., KAIST, Sci. Town, Taejon 305-701, Republic of Korea)

An alternative formulation of the Helmholtz integral equation, derived by Wu et al. [J. Acoust. Soc. Am. 103, 1763–1774 (1998)], expresses the

FRIDAY MORNING, 7 JUNE 2002

pressure field explicitly in terms of the velocity vector of a radiating surface. This formulation, derived for arbitrary sources, is similar in form to Rayleigh's formula for planar sources. Because the pressure field is expressed explicitly as a surface integral of the particle velocity, which can be implemented numerically using standard Gaussian quadratures, there is no need to use the boundary element method to solve a set of simultaneous equations for the surface pressure at the discretized nodes. Furthermore, the nonuniqueness problem inherent in methods based on the Helmholtz integral equation is avoided. Validation of this formulation is demonstrated first for some simple geometries. The method is also applied to general vibro-acoustic problems in which both the surface pressure and velocity components are unknown. [Work sponsored by the Ministry of Education, Korean Government, under the BK21 program and the Ministry of Science and Tech., Korean Government, under the National Research Lab. program.]
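The quadrature idea in 5aSAb6 can be illustrated in the planar special case, where the explicit velocity-based formulation reduces to Rayleigh's formula. The sketch below is not the paper's implementation; the function names and the baffled-piston test case are my own. It evaluates the Rayleigh integral for a uniformly vibrating circular piston with Gauss–Legendre quadrature and can be checked against the closed-form on-axis solution:

```python
import numpy as np

def rayleigh_on_axis(a, k, z, v0=1.0, rho_c=415.0, n_r=64):
    """Pressure of a baffled circular piston (radius a, uniform normal
    velocity v0) at an on-axis field point z, via the Rayleigh integral
    p = (i*k*rho*c / 2*pi) * Int v_n * exp(i*k*R)/R dS.
    The radial integral is done by Gauss-Legendre quadrature; on axis the
    angular integral is trivially 2*pi."""
    x, w = np.polynomial.legendre.leggauss(n_r)   # nodes/weights on [-1, 1]
    r = 0.5 * a * (x + 1.0)                       # map nodes to [0, a]
    wr = 0.5 * a * w                              # rescaled weights
    R = np.sqrt(z**2 + r**2)                      # source-to-field distance
    radial = np.sum(np.exp(1j * k * R) / R * r * wr)
    return 1j * k * rho_c * v0 / (2.0 * np.pi) * (2.0 * np.pi * radial)

def piston_on_axis_exact(a, k, z, v0=1.0, rho_c=415.0):
    """Closed-form on-axis magnitude:
    |p| = 2*rho*c*v0*|sin(0.5*k*(sqrt(z^2 + a^2) - z))|."""
    return 2.0 * rho_c * v0 * abs(np.sin(0.5 * k * (np.sqrt(z**2 + a**2) - z)))
```

Because the integrand is smooth, a few dozen Gauss–Legendre nodes reproduce the analytic result to machine precision, which is the practical appeal of the quadrature formulation over a boundary-element solve.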

GRAND BALLROOM 2, 8:30 A.M. TO 1:00 P.M. Session 5aSC

Speech Communication: Speech Potpourri: Production and Signal Processing (Poster Session)

Fredericka Bell-Berti, Chair
Department of Speech Communication Sciences and Theatre, St. John's University, 8000 Utopia Parkway, Jamaica, New York 11439

Contributed Papers

All posters will be on display from 8:30 a.m. to 1:00 p.m. To allow contributors an opportunity to see other posters, contributors of odd-numbered papers will be at their posters from 8:30 a.m. to 10:45 a.m. and contributors of even-numbered papers will be at their posters from 10:45 a.m. to 1:00 p.m.

In an attempt to more clearly understand the neural control of voice, a reaction time study was designed to investigate how rapidly normal subjects, i.e., nontrained singers, can voluntarily increase or decrease their voice fundamental frequency (F0) during sustained vocalizations when cued with a 1000-Hz auditory tone stimulus. Results revealed that overall reaction times (RTs) (F = 21.9, df = 2,150, p = 0.01) for upward F0 modulations were faster (range: 138–176 ms) than for downward responses (range: 196–234 ms). In contrast to the reaction time findings, slightly higher peak velocities were observed for downward responses than for upward responses. The shorter RTs observed for F0 elevation are therefore possibly related to central mechanisms involved in planning or executing the direction in which F0 is to be modulated, rather than to muscle biomechanics. The fastest RTs obtained in the present study (138 ms) are slightly longer than the reflex latencies of the initial pitch-shift reflex response (100–130 ms) [Burnett, J. Acoust. Soc. Am. 103 (1998)], and provide additional evidence that subjects normally respond to inadvertent changes in their voice F0 with a fast but limited reflex, followed by a secondary voluntary response. [Research supported by NIH Grant No. DC07264.]


5aSC2. The influence of phonotactics and phonological similarity on speech production. Michael Vitevitch, Duncan Eshelman, and Jonna Armbruster (Dept. of Psych., Univ. of Kansas, 1415 Jayhawk Blvd., Lawrence, KS 66047, [email protected])

Phonotactic probability refers to the frequency with which segments and sequences of segments appear in a word or syllable. Neighborhood density refers to the number of words that are phonologically similar to a target word. These variables have been shown to influence word recognition, but little work has examined how they influence speech production. Although the two variables are positively correlated in English, words that varied orthogonally on these characteristics were selected and presented in a picture-naming task to assess the speed and accuracy of lexical retrieval during speech production. The results suggest that both facilitative and competitive processes operate during lexical retrieval in speech production. The implications for models of speech production are discussed. [Work funded by NIH-NIDCD R03 DC 04259.]
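The neighborhood-density metric defined in this abstract is easy to make concrete. The sketch below is a minimal illustration, not the authors' procedure: the one-character segment strings and toy lexicon are hypothetical, whereas real studies compute density over a phonemically transcribed lexicon.

```python
def phonological_neighbors(word, lexicon):
    """Neighborhood density sketch: collect lexicon entries that differ
    from `word` by exactly one segment (substitution, deletion, or
    addition). Words are strings of one-character segment symbols."""
    def one_segment_apart(a, b):
        if len(a) == len(b):                      # one substitution
            return sum(x != y for x, y in zip(a, b)) == 1
        if abs(len(a) - len(b)) != 1:
            return False
        short, long_ = (a, b) if len(a) < len(b) else (b, a)
        # one deletion/addition: dropping some segment of the longer word
        return any(long_[:i] + long_[i + 1:] == short
                   for i in range(len(long_)))
    return [w for w in lexicon if w != word and one_segment_apart(word, w)]
```

For a toy lexicon ["kat", "bat", "kot", "kast", "at", "dog"], the target "kat" has density 4 (substitutions "bat" and "kot", addition "kast", deletion "at").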

5aSC3. A physically informed glottis model for real glottal flow wave form reproduction. Carlo Drioli (TMH, Dept. of Speech, Music and Hearing, Royal Inst. of Technol., Drottning Kristinas v. 31, Stockholm SE 10044, Sweden, [email protected])

A physically informed model of the glottal source is proposed. The model relies on a lumped mechanoaerodynamic scheme based on the mass-spring paradigm. The vocal folds are represented by a mechanical



5aSC1. Reaction time of voluntary modulations in voice F0 during sustained pitch vocalizations. Jay J. Bauer, Charles R. Larson (Dept. of Commun. Sci. and Disord., Northwestern Univ., 2299 N. Campus Dr., Evanston, IL 60208), and Kathryn C. Eckstein (Univ. of Tennessee, Memphis, TN 38163)

resonator plus a delay line that takes into account the vertical phase differences. The vocal fold displacement is coupled to the glottal flow by means of a nonlinear subsystem based on a general parametric nonlinear model. The principal characteristics of the flow-induced oscillations are retained, and the overall model is suited to an identification approach in which real (inverse-filtered) glottal flow signals are to be reproduced. A data-driven identification procedure is outlined in which the parameters of the model are tuned to accurately match the target wave form. A nonlinear regression algorithm is used to train the nonlinear part. A set of inverse-filtered glottal flow wave forms with different pitch, open-phase/closed-phase ratio, and shape of the glottal wave form period is used to test the effectiveness of the approach. The results demonstrate that the model can reproduce a wide range of target wave forms. Moreover, the flow wave forms generated by the trained model are characterized by spectral richness and are perceived as natural. [Work supported by the EU.]

5aSC4. Stress clash: Frequency and strategies of resolution. Sandra Levey (Dept. of Speech-Lang.-Hearing Sci., Lehman College, 250 Bedford Park Blvd., Bronx, NY 10468, [email protected]) and Lawrence J. Raphael (Adelphi Univ., Garden City, NY 11530)

The hypothesis that speakers establish a strong–weak stress pattern to resolve stress clash (the adjacency of two primary stressed syllables, e.g., racCOON COAT) was investigated through perceptual and acoustic analysis. Three hypotheses were tested: that primary stress is relocated to an earlier syllable of the sequence (e.g., RACcoon COAT); that stress is reduced on the final syllable that bears primary stress in the first word of the sequence; and that stress clash is avoided by pitch accent assignment to an early and to a late-occurring stressable syllable in a sentence. Ten speakers produced iambic target words in stress clash and non-clash contexts, with target words placed in early and late sentence position. Stress clash was resolved in less than 30% of the utterances but also occurred in non-clash contexts. Pitch accent was assigned to an early syllable for words placed in an early sentence position, increasing the frequency of stress shift judgments in both clash and non-clash contexts. Acoustic analysis showed that the information most likely to underlie shifts in stress location was located in the first syllable of the target words and that fundamental frequency was the most probable of the potential cues.

5aSC5. Effects of subglottal and supraglottal acoustic loading on voice production. Zhaoyan Zhang, Luc Mongeau, and Steven Frankel (School of Mech. Eng., Purdue Univ., West Lafayette, IN 47907)

Speech production involves sound generation by confined jets through an orifice (the glottis) with a time-varying area. Predictive models are usually based on the quasi-steady assumption. This assumption allows the complex unsteady flows to be treated as steady flows, which are more effectively modeled computationally. Because of the reflective properties of the human lungs, trachea, and vocal tract, subglottal and supraglottal resonance and other acoustic effects occur in speech, which might affect glottal impedance, especially in the regime of unsteady flow separation. Changes in the flow structure, or flow regurgitation due to a transient negative transglottal pressure, could also occur. These phenomena may affect the quasi-steady behavior of speech production. To investigate the possible effects of the subglottal and supraglottal acoustic loadings, a dynamic mechanical model of the larynx was designed and built. The subglottal and supraglottal acoustic loadings are simulated using an expansion in the tube upstream of the glottis and a finite-length tube downstream, respectively. The acoustic pressures of waves radiated upstream and downstream of the orifice were measured and compared to those predicted using a model based on the quasi-steady assumption. Good agreement between the experimental data and the predictions was obtained for different operating frequencies, flow rates, and orifice shapes. This supports the validity of the quasi-steady assumption for various subglottal and supraglottal acoustic loadings.

5aSC6. Kinematics of normal lingual diadochokinesis. Kevin P. Flanagan and James S. Dembowski (Commun. Disord. Dept., SUNY at New Paltz, 75 S. Manheim Blvd., New Paltz, NY 12561, [email protected])

Speech-language clinicians use diadochokinetic (DDK) tasks as a behavioral measure of the status of the speech production system. The articulator kinematics of these repetitive syllable productions has received relatively little study (most studies of speech DDK have been acoustic). As a result, many clinicians misunderstand the relationship between syllable rate and movement parameters such as rate of articulator movement and range of articulator movement. For example, clinicians assume that because "kuh" syllable repetitions are relatively slow, tongue dorsum movements must be slow. However, Westbury and Dembowski [Ann. Bull. RILP No. 27 (1993)] showed that the tongue dorsum may produce larger and faster movements than other tongue points (such as the tongue tip). Therefore, syllable repetition rates may not reflect movement speeds for individual articulator points. This study replicates and extends these findings, using a larger speaker sample encompassing a wider age range. The goal of this study is to further clarify the relationship between articulator speed, range of articulator motion, and syllable rate. That is, it seeks to examine whether syllable rate is related to how quickly the tongue moves or how far the tongue moves. Data are derived from the University of Wisconsin X-ray Microbeam Database.

5aSC7. Aeroacoustic mechanisms of voiced sound production. Michael Krane (CAIP Ctr., Rutgers Univ., Piscataway, NJ 08854-8088, [email protected])

The focus of this study is to quantify the order of magnitude of the direct effects of (1) vocal-fold wall motion and (2) glottal flow separation point movement on the production of voiced speech sounds. A solution for the sound-pressure field shows three source mechanisms: (1) a volume source due to unsteady glottal air flow; (2) a quadrupole source representing interaction of the glottal jet with the pharynx walls; and (3) an octupole due to direct sound radiation by the glottal jet itself. A relation is derived expressing glottal volume flow in terms of the transglottal pressure difference, vocal-fold wall motion, and separation point motion. Using scaling analysis, the transglottal pressure difference is shown to be the dominant effect on glottal volume flow, while vocal-fold wall motion is shown to have a negligible effect. However, separation point motion is shown to have a measurable effect during the closure phase of the vibration cycle. Using these results, the acoustic effect of separation point motion is shown to be measurable, while the effect of vocal-fold wall vibration is shown to be negligible. Relative contributions of these effects across age, gender, and degree of glottal closure are discussed.

5aSC8. Exploring the effects of gravity on tongue motion using ultrasound image sequences. Maureen Stone, Ulla Crouse, and Marty Sutton (Univ. of Maryland Dental School, Dept. of OCBS, 666 W. Baltimore St., Baltimore, MD 21201)

Our goal in this research was to explore the effect of gravity on the vocal-tract system using ultrasound data collected in the upright and supine positions. All potential subjects were given an ultrasound pretest to determine whether they could repeat a series of 3–4 words precisely enough to allow an accurate series of images to be collected. Of these potential subjects, approximately 5–7 were eventually used in the research. The method of collecting ultrasound data required immobilizing the patient's neck in a custom-fitted neck restraint. The neck restraint held an ultrasound transducer positioned at a critical angle underneath the patient's lower jawbone, which served to


5aSC9. Quantitative analysis of vocal fold vibration during register change by a high-speed digital imaging system. Masanobu Kumada (Natl. Rehabilitation Ctr. for the Disabled, Saitama, Japan), Noriko Kobayashi, Hajime Hirose (Kitazato Univ., Kanagawa, Japan), Niro Tayama, Hiroshi Imagawa (Univ. of Tokyo, Tokyo, Japan), Ken-Ichi Sakakibara (NTT Commun. Sci. Labs., Kanagawa, Japan), Takaharu Nito, Shin'ichi Kakurai, Chieko Kumada (Univ. of Tokyo, Tokyo, Japan), Mamiko Wada (Tokyo Metropolitan Rehabilitation Hospital, Tokyo, Japan), and Seiji Niimi (Intl. Univ. for Health and Welfare, Tochigi, Japan)

The physiological study of prosody is indispensable not only for its physiological interest but also for the evaluation and treatment of pathological cases of prosody. In free talk, changes of vocal fold vibration are found frequently, and these phenomena are very important prosodic events. To quantitatively analyze vocal fold vibration at the register change, as a model prosodic event, our high-speed digital imaging system was used at a rate of 4500 images of 256 × 256 pixels per second. Four healthy Japanese adults (2 males and 2 females) served as subjects. Tasks were sustained phonations containing register changes. Two major categories (Categories A and B) were found in the ways vocal fold vibration changes at the register change. In Category A, changes were very smooth in terms of the vocal fold vibration. In Category B, changes were not so smooth, with some additional events at the register change, such as an anterior–posterior phase difference of the vibration, abduction of the vocal folds, or interruption of the phonation. The number of subtypes of Category B is expected to increase if more subjects with a wider range of variety are analyzed. For the study of prosody, our high-speed digital imaging system is a very powerful tool by which physiological information can be obtained.

5aSC10. Lexical and phonotactic effects on the perception of rate-induced resyllabification. Kenneth de Jong, Kyoko Nagao, Byung-jin Lim, and Kyoko Okamura (Dept. of Linguist., Indiana Univ., Bloomington, IN 47405)

Stetson (1951) noted that, when repeated, singleton coda consonants (VC) appear to modulate into onset consonants (CV) as the rate of repetition increases. de Jong et al. (2001) found that naïve listeners robustly perceive such resyllabifications with labial consonants and, in a later study, that such perceptions broadly corresponded to changes in glottal timing. In the current study, stimuli included labial, coronal, and velar stops, creating mixtures of real words (such as "eat") and nonwords (such as "ead"). A comparison of the perception of real words and nonwords reveals no robust effect of lexical status. In addition, vowels in the corpus were either tense or lax, so that the CV combination is phonotactically illegal in half of the corpus. Resyllabification is also perceived with these lax vowels, though only for voiced coronal and labial stops. Other stops did not exhibit resyllabification. Analyses of glottal and acoustic recordings are currently underway. [Work supported by NIDCD and NSF.]


5aSC11. An integrated approach to improving noisy speech perception. Serguei Koval, Mikhail Stolbov, Natalia Smirnova, and Mikhail Khitrov (STC, Krasutskogo Str. 4, St. Petersburg 196084, Russia, [email protected])

For a number of practical purposes and tasks, experts have to decode speech recordings of very poor quality. A combination of techniques is proposed to improve the intelligibility and quality of distorted speech messages and thus facilitate their comprehension. Along with the application of noise cancellation and speech signal enhancement techniques that remove and/or reduce various kinds of distortion and interference (primarily unmasking and normalization in the time and frequency domains), the approach incorporates optimal listener expert tactics based on selective listening, nonstandard binaural listening, accounting for short-term and long-term human ear adaptation to noisy speech, as well as some methods of speech signal enhancement that support speech decoding during listening. The approach integrating the suggested techniques ensures high-quality ultimate results and has been applied successfully by Speech Technology Center experts and by numerous other users, mainly forensic institutions, to decode noisy speech recordings for courts, law enforcement and emergency services, accident investigation bodies, etc.

5aSC12. Stress shift in rhythmical speech. Hugo Quené and Robert Port (Dept. of Linguist., Indiana Univ., Memorial Hall 322, 1021 E. Third St., Bloomington, IN 47405)

In phrases like thirteen men, stress in thirteen is often shifted forward from its canonical final position. Presumably, the occurrence of this optional stress shift may be partly controlled by the rhythm of speech. Work on rhythmic speech production has demonstrated that, given a repetition cycle T, its harmonic fractions such as T/2 attract stressed vowel onsets. Comparing phrases like ceMENT thirTEEN and GALaxy thirTEEN, which differ in the number of weak syllables between strong ones, it was predicted that, during rhythmic production, the harmonic locations would attract shifted stress. Since shifting stress results in a more even distribution of syllables through the cycle, we expected that faster repetition rates would also result in more stress shift. Dependent variables were the relative stress in the second word of each pair and the location of the onset of the nuclear vowel of the stressed syllable. Results confirmed the predictions: first, with more intermediate unstressed syllables, stress was shifted forward more often (thereby locating the stressed vowel onset closer to T/2); and, second, stress shifted forward more often at faster speaking rates. [Work supported by the Fulbright Visiting Scholar program and by Utrecht University, The Netherlands.]

5aSC13. Speaker adaptation of HMMs using evolutionary strategy-based linear regression. Sid-Ahmed Selouani and Douglas O'Shaughnessy (INRS-Telecommunications, 900 de la Gauchetiere West, Box 644, Montreal, QC H5A 1C6, Canada)

A new framework for speaker adaptation of continuous-density hidden Markov models (HMMs) is introduced. It aims to improve the robustness of speech recognizers by adapting HMM parameters to new conditions (e.g., from new speakers). It describes an optimization technique using an evolutionary strategy for linear regression-based spectral transformation. In classical iterative maximum likelihood linear regression (MLLR), a global transform matrix is estimated to make a general model better match particular target conditions. To permit adaptation on a small amount of data, a regression tree classification is performed. However, an important drawback of MLLR is that the number of regression classes is fixed. The new approach allows the degrees of freedom of the global transform to be implicitly variable, as the evolutionary optimization permits the survival of only active classes. The fitness function is evaluated by the phoneme correctness through the evolution steps. The implementation requirements, such as chromosome representation, selection function, genetic operators, and evaluation function, have been chosen to lend more reliability



reduce errors and increase image resolution. To accurately analyze the series of images collected from ultrasound imaging, the surfaces of the tongue were digitized and tongue motion was time-aligned across the upright and supine sequences. Comparisons between the upright and supine data were then made by using L2 norms to determine averages and differences regarding the behavior between the two positions. Curves and locations of the maximum and minimum differences will be discussed.

to the global transformation matrix. Triphone experiments used the TIMIT and ARPA-RM1 databases. For new speakers, the new technique achieves 8 percent fewer word errors than the basic MLLR method.
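The selection scheme described in 5aSC13 can be sketched generically. The code below is a minimal (1+lambda) evolution strategy, not the authors' implementation: the quadratic fitness is a stand-in (the paper evaluates phoneme correctness), and the real chromosome would encode an MLLR-style transform with variable regression classes.

```python
import numpy as np

def one_plus_lambda_es(fitness, w0, lam=10, sigma=0.1, iters=200, seed=0):
    """Minimal (1+lambda) evolution strategy: perturb the current transform
    matrix with Gaussian noise, evaluate lam offspring, and keep the best
    one only if it improves the fitness (elitist survival)."""
    rng = np.random.default_rng(seed)
    w = np.array(w0, dtype=float)
    f = fitness(w)
    for _ in range(iters):
        offspring = [w + sigma * rng.standard_normal(w.shape)
                     for _ in range(lam)]
        scores = [fitness(o) for o in offspring]
        best = int(np.argmax(scores))
        if scores[best] > f:              # keep the parent unless beaten
            w, f = offspring[best], scores[best]
    return w, f
```

With a toy fitness of minus the squared distance to a target 2x2 matrix, the strategy climbs from a fitness of -30 at the zero matrix to near zero, without any gradient information, which is the property that lets the paper use a non-differentiable criterion such as phoneme correctness.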

5aSC14. The use of functional data analysis to study variability in children's speech: Further data. Laura L. Koenig (Dept. of Speech-Lang. Pathol. and Audiol., New York Univ. and Haskins Labs., 270 Crown St., New Haven, CT 06511) and Jorge C. Lucero (Univ. of Brasilia)

Much previous research has reported increased token-to-token variability in children relative to adults, but the sources and implications of this variability remain matters of debate. Recently, functional data analysis (FDA) has been used as a tool to gain greater insight into the nature of variability in children's and adults' speech data. In FDA, signals are time-normalized using a smooth function of time. The magnitude of the time-warping function provides an index of phasing (temporal) variability, and a separate index of amplitude variability is calculated from the time-normalized signal. Here, oral airflow data are analyzed from 5-year-olds, 10-year-olds, and adult women producing laryngeal and oral fricatives (/h, s, z/). The preliminary FDA results show that children generally have higher temporal and amplitude indices than adults, suggesting greater variability in both gestural timing and magnitude. However, individual patterns are evident in the relative magnitude of the two indices and in which consonants show the highest values. The time-varying patterns of flow variability in /s/ are also explored as a method of inferring relative variability among laryngeal and oral gestures. [Work supported by NIH and CNPq, Brazil.]
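The two indices in 5aSC14 can be illustrated with a deliberately crude stand-in: linear time-normalization in place of FDA's smooth monotone warping functions. The function name and the index definitions below are my own simplifications, meant only to show the separation of timing spread from shape spread.

```python
import numpy as np

def variability_indices(tokens, n=100):
    """Toy FDA-style indices for a set of repeated tokens (1-D arrays of
    possibly different lengths): linearly time-normalize each token to n
    samples, then report
      - a temporal index: relative spread of the warp slopes, i.e. of the
        token durations (stand-in for warping-function magnitude), and
      - an amplitude index: mean pointwise s.d. of the normalized curves."""
    t_common = np.linspace(0.0, 1.0, n)
    warped = []
    for tok in tokens:
        t = np.linspace(0.0, 1.0, len(tok))      # linear "warp" to [0, 1]
        warped.append(np.interp(t_common, t, tok))
    warped = np.asarray(warped)
    durations = np.array([len(tok) for tok in tokens], dtype=float)
    phase_index = durations.std() / durations.mean()   # timing variability
    amp_index = warped.std(axis=0).mean()              # residual shape variability
    return phase_index, amp_index
```

Three tokens with the same shape but durations of 80, 100, and 120 samples yield a clearly nonzero temporal index and a near-zero amplitude index, mirroring the abstract's point that the two kinds of variability can be dissociated.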

5aSC15. Event-synchronous sinusoidal model based on frequency-to-instantaneous frequency mapping. Parham Zolfaghari, Hideki Banno, Fumitada Itakura (CIAIR/CREST, Itakura Lab., Nagoya Univ., Furo-Cho 1, Chikusa-ku, Nagoya, Japan, [email protected]), and Hideki Kawahara (ATR/CREST, Wakayama Univ., Japan)

We describe a glottal-event-synchronous sinusoidal model for speech analysis and synthesis. The sinusoidal components are estimated event-synchronously using a mapping from linearly spaced filter center frequencies to the instantaneous frequencies of the filter outputs. Frequency-domain fixed points of this mapping correspond to the constituent sinusoidal components of the input signal. A robust technique based on a wavelet representation of this fixed-point model is used for fundamental frequency extraction, as in STRAIGHT [Kawahara et al., IEICE (1999)]. The method for event detection and characterization is based on group delay and a similar fixed-point analysis. This method enables the detection of the precise timing and spread of speech events such as vocal fold closure. A trajectory continuation scheme is also applied to the extracted sinusoidal components. The proposed model is capable of high-quality speech synthesis using the overlap–add synthesis method and is also applicable to other sound sources. System evaluation results using spectral distortion measures and mean opinion scores will be reported. A comparison with fixed frame-rate sinusoidal models will be given.
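The frequency-to-instantaneous-frequency mapping can be sketched with a toy Gabor filter bank; this is my own simplification, not the paper's (or STRAIGHT's wavelet-based) implementation. Each center frequency is mapped to the instantaneous frequency of its filter output, and sinusoidal components sit where the map has fixed points, IF(fc) = fc.

```python
import numpy as np

def if_of_centers(x, fs, centers, s=20):
    """For each filter center frequency fc, return the instantaneous
    frequency of that filter's output: demodulate the signal at fc,
    window with a Gaussian (std s samples), and read the IF from the
    phase increment between two window positions one sample apart."""
    n = np.arange(len(x))
    mid = len(x) // 2
    out = []
    for fc in centers:
        demod = x * np.exp(-2j * np.pi * fc * n / fs)
        c0 = np.dot(demod, np.exp(-0.5 * ((n - mid) / s) ** 2))
        c1 = np.dot(demod, np.exp(-0.5 * ((n - mid - 1) / s) ** 2))
        dphi = np.angle(c1 * np.conj(c0))          # phase advance per sample
        out.append(fc + dphi * fs / (2 * np.pi))   # IF of the filter output
    return np.array(out)
```

For a signal containing a 1000-Hz component, centers at 950, 1000, and 1050 Hz all map to 1000 Hz, so the map crosses the identity there: a fixed point marking a sinusoidal component.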

5aSC16. Transconsonantal coarticulatory patterns in VCV utterances: Effects of a bite block. Yana Yunusova and Gary Weismer (Waisman Ctr. and Dept. of Communicative Disord., 1500 Highland Ave., Madison, WI 53705-2280)

The concept of coordinative structures, or articulatory synergies, envisions a collection of articulators organized to achieve a specific articulatory or acoustic goal. The mandible figures prominently in concepts of articulatory synergies because of its potential interaction with labial and lingual shaping of the vocal tract. What happens to these hypothesized synergies when one component is taken out of the collective? Bite block articulation is a common experimental approach to eliminating the jaw from its synergistic role in articulation, and studies have shown that the speech mechanism is able to reorganize its target configurations almost immediately when speaking with a fixed jaw. In the current study we examine vowel-to-vowel, transconsonantal effects with and without a bite block. Although there is one study [Sussman, Fruchter, and Cable, J. Acoust. Soc. Am. (1995)] showing stability of coarticulatory effects in bite-block conditions, that conclusion was based primarily on locus equations. In the current study, we hypothesize that more traditional acoustic measures of right-to-left and left-to-right coarticulation will show that reducing an articulatory synergy by holding one of its components constant results in different-from-typical coarticulatory behaviors. [Work supported by NIDCD Award No. DC 000319.]

5aSC17. Dynamic synapse neural networks with a Gauss–Newton learning method for speech processing. Hassan Heidari Namarvar, Alireza Afshordi Dibazar, and Theodore W. Berger (Dept. of Biomed. Eng., Univ. of Southern California, 3650 McClintock St., OHE-500, Los Angeles, CA 90089-145, [email protected])

A continuous implementation of the biologically based dynamic synapse neural network (DSNN) (H. H. Namarvar et al., 2001) is created by replacing the discrete nonlinear function of the synaptic cleft mechanism, which represents the neurotransmitter release in the discrete DSNN, with a continuous nonlinear function. A Gauss–Newton learning algorithm is introduced and is shown to efficiently determine the optimal parameters of a continuous DSNN applied to a nonlinear problem in nonstationary speech processing. The continuous DSNN incorporates a new feedback architecture to model biological inhibitory mechanisms. Optimality is determined by an objective error function on the continuous DSNN output. This network has been successfully applied to the task of phoneme recognition in continuous speech. Preliminary results demonstrate that a phoneme recognizer utilizing a continuous DSNN may be successfully used as a phone recognition module in future automatic speech recognition systems. [Work supported by DARPA CBS, NASA, and ONR.]
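The Gauss–Newton step named in 5aSC17 is generic and worth seeing in isolation. The sketch below applies it to a small synthetic curve fit, not to a DSNN; the saturating model and all names are illustrative only. A simple backtracking line search is added for robustness, a common safeguard not claimed by the paper.

```python
import numpy as np

def gauss_newton(residual, jacobian, theta0, iters=50):
    """Gauss-Newton for nonlinear least squares: at each step solve the
    linearized problem J*delta = -r in the least-squares sense, then
    update theta <- theta + step*delta, halving the step until the
    residual norm actually decreases."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iters):
        r = residual(theta)
        J = jacobian(theta)
        delta, *_ = np.linalg.lstsq(J, -r, rcond=None)
        step = 1.0
        while step > 1e-6:
            trial = theta + step * delta
            if np.linalg.norm(residual(trial)) < np.linalg.norm(r):
                theta = trial
                break
            step *= 0.5
        else:
            break        # no improving step left: converged (or stalled)
    return theta
```

Fitting the saturating model y = a*(1 - exp(-b*x)) to noiseless synthetic data recovers the generating parameters to high precision in a handful of iterations, reflecting the method's fast local convergence on small-residual problems.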

5aSC18. The relation between first-graders' reading level and vowel production variability and presentation format: A temporal analysis. Kandice Baker (Dept. of Speech, Commun. Sci., & Theatre, St. John's Univ., Jamaica, NY 11439), Anne Fowler (Haskins Labs., New Haven, CT), and Fredericka Bell-Berti (St. John's Univ., Jamaica, NY 11439)

The purpose of this research is to determine whether children with reading difficulties produce vowels with greater variability than children with normal reading ability. The vowels chosen for this study are /(/, /}/, and /,/, occurring in real and nonsense monosyllabic words. Our past research, examining spectral variability in vowels produced by first-grade students as a function of whether they were reading words presented individually in random or blocked format, revealed no systematic effect of presentation format on variability [K. Baker, A. Fowler, and F. Bell-Berti, J. Acoust. Soc. Am. 110, 2704 (2001)]. The purpose of the present study is to determine whether good and poor readers differ in vowel duration variability. [Work supported by U.S. Dept. of Education, McNair Scholars Program.]

5aSC19. Applying and evaluating computer-animated tutors. Dominic W. Massaro (Dept. of Psychol., Univ. of California, Santa Cruz, Santa Cruz, CA 95064, [email protected]), Alexis Bosseler (Univ. of California, Santa Cruz, Santa Cruz, CA 95064, [email protected]), Patrick S. Stone (Tucker-Maxon Oral School, Portland, OR 97202, [email protected]), and Pamela Connors (Tucker-Maxon Oral School, Portland, OR 97202, [email protected])

We have developed computer-assisted speech and language tutors for deaf, hard of hearing, and autistic children. Our language-training program utilizes our computer-animated talking head, Baldi, as the conversational agent, who guides students through a variety of exercises designed to


5aSC20. Spectral variability of /s/ in sV and sCV sequences produced by adults and children. Benjamin Munson (Dept. of Commun. Disord., Univ. of Minnesota, 115 Shevlin Hall, 164 Pillsbury Dr., SE, Minneapolis, MN 55455, [email protected])

neurotolerance-based chemical impairment for alcohol, drugs, and medicine will be presented, and shown not to fully support the NIDA-SAMHSA drug and alcohol thresholds used in the drug-testing domain.

5aSC22. Voice quality variations in English sentences. Melissa Epstein 共Linguist. Dept., UCLA, 3125 Campbell Hall, Los Angeles, CA 90095-1543兲 This study examines the predictability of changes in voice quality at the sentence level in English. Sentence-level effects can only be isolated once the effects of linguistic factors 共e.g., glottalization before a glottalized consonant兲, social or dialectal, and individual factors have been eliminated. In this study, these effects were controlled by obtaining a baseline value for each measurement for each word of the corpus. Voice quality variations were tracked using quantitative measurements derived from the LF model of the glottal source, and also qualitative descriptions of the waveforms. Preliminary results indicate that there are consistent voice quality differences at the sentence level and that pitch contours and sentence accent also produce predictable effects on voice quality.

5aSC23. Characteristics of diadochokinesis in Parkinson’s Disease and Multiple Sclerosis. Kris Tjaden and Elizabeth Watling 共Dept. of Communicative Disord. & Sci., SUNY at Buffalo, 3435 Main St., Buffalo, NY 14214-3005兲

Previous research has demonstrated that both children and adults produce /s/ with greater spectral variability in /sp/ sequences than /st/ sequences, when these sequences are embedded in the medial position of CVCCVC nonwords 关B. Munson, J. Acoust. Soc. Am. 110, 1203–1206 共2001兲兴. The current study examined whether this result could be replicated when /s/ is embedded in syllable-onset clusters, with a variety of following consonants and vowels. Adults and children aged 3–7 were recorded producing multiple tokens of sV and sCV nonwords, where the vowel was either /i/, /Ä/, or /u/, and the consonant was either /p/, /t/, /w/, or /l/. For each token, the spectral mean of non-overlapping 10-ms windows of frication noise was calculated. Nonlinear regressions of the form y ⫽ae bx were used to predict the spectral mean of each portion of frication noise from its position in the fricative. The resulting measure of model fit, R 2 , was used as an index of within-speaker variability. For each participant, separate R 2 values were calculated for /s/ in each of the 15 phonetic contexts. Analyses will address the influence of age, consonant context, and vowel context on spectral variability.

The current study applies a quantitative, acoustic analysis procedure for the study of rapid syllable productions outlined by Kent and colleagues 关R. D. Kent et al., J. Med. Sp. Lang. Path. 7, 83–90 共1999兲兴 to syllables produced by speakers with Parkinson’s Disease and Multiple Sclerosis. Neurologically healthy talkers will be studied for comparison purposes. Acoustic measures will be reported for syllable repetitions of /p./, /t./, and /k./. Temporal measures will include syllable duration, syllable rate, and stop gap duration. The energy envelope of syllable repetitions will be quantified using measures of rms amplitude minima and maxima. Acoustic measures will be contrasted to determine the extent to which acoustic profiles of diadochokinesis distinguish hypokinetic dysarthria associated with Parkinson’s Disease, ataxic dysarthria secondary to Multiple Sclerosis, and spastic dysarthria secondary to Multiple Sclerosis. It also is of interest to determine whether speakers with Parkinson’s Disease and Multiple Sclerosis judged to be nondysarthric via perceptual analyses also demonstrate objective, acoustic profiles of diadochokinesis that are within normal limits. 关Work supported by NIH.兴

5aSC21. Speech and neurology-chemical impairment correlates. Harb S. Hayre 共Chemical Fitness Screening, P.O. Box 19756, Houston, TX 77224-9756兲

5aSC24. High speed MRI of laryngeal gestures during speech production. Jon Nissenbaum, Robert E. Hillman, James B. Kobler 共Voice and Speech Lab., Massachusetts Eye and Ear Infirmary, 243 Charles St., Boston, MA 02114, jon_nissenbaummeei.harvard.edu兲, Hugh D. Curtin 共Massachusetts Eye and Ear Infirmary, Boston, MA 02114兲, Morris Halle 共MIT, Cambridge, MA 02139兲, and John E. Kirsch 共Siemens Medical Systems, Massachusetts General Hospital, NMR Ctr., Charlestown, MA 02129兲

Speech correlates of alcohol/drug impairment and its neurological basis is presented with suggestion for further research in impairment from poly drug/medicine/inhalent/chew use/abuse, and prediagnosis of many neuro- and endocrin-related disorders. Nerve cells all over the body detect chemical entry by smoking, injection, drinking, chewing, or skin absorption, and transmit neurosignals to their corresponding cerebral subsystems, which in turn affect speech centers-Broca’s and Wernick’s area, and motor cortex. For instance, gustatory cells in the mouth, cranial and spinal nerve cells in the skin, and cilia/olfactory neurons in the nose are the intake sensing nerve cells. Alcohol depression, and brain cell damage were detected from telephone speech using IMPAIRLYZER-TM, and the results of these studies were presented at 1996 ASA meeting in Indianapolis, and 2001 German Acoustical Society-DEGA conference in Hamburg, Germany respectively. Speech based chemical Impairment measure results were presented at the 2001 meeting of ASA in Chicago. New data on 2479

J. Acoust. Soc. Am., Vol. 111, No. 5, Pt. 2, May 2002

Dynamic sequences of magnetic resonance images 共MRI兲 of the vocal tract were obtained with a frame rate of 144 frames/second. Changes in vertical position and length of the vocal folds, both observable in the mid-sagittal plane, have been argued to play a role in consonant production in addition to their primary function in the control of vocal fundamental frequency (F0) 关W. G. Ewan and R. Krones, J. Phonet. 2, 327– 335 共1974兲; A. Lofqvist et al., Haskins Lab. Status Report Speech Res., SR-97/98, pp. 25– 40, 1989兴, but temporal resolution of available techniques has hindered direct imaging of these articulations. A novel data acquisition sequence was used to circumvent the imaging time imposed by standard MRI 共typically 100–500 ms兲. Images were constructed by having subjects rhythmically repeat short utterances 256 times using the same F0 contour. Sixty-four lines of MR data were sampled during each repetition, at 7 millisecond increments, yielding partial raw data sets for 64 time 143rd Meeting: Acoustical Society of America

2479

5a FRI. AM

teach vocabulary and grammer, to improve speech articulation, and to develop linguistic and phonological awareness. Baldi is an accurate threedimensional animated talking head appropriately aligned with either synthesized or natural speech. Baldi has a tongue and palate, which can be displayed by making his skin transparent. Two specific language-training programs have been evaluated to determine if they improve word learning and speech articulation. The results indicate that the programs are effective in teaching receptive and productive language. Advantages of utilizing a computer-animated agent as a language tutor are the popularity of computers and embodied conversational agents with autistic kids, the perpetual availability of the program, and individualized instruction. Students enjoy working with Baldi because he offers extreme patience, he doesn’t become angry, tired, or bored, and he is in effect a perpetual teaching machine. The results indicate that the psychology and technology of Baldi holds great promise in language learning and speech therapy. 关Work supported by NSF Grant Nos. CDA-9726363 and BCS-9905176 and Public Health Service Grant No. PHS R01 DC00236.兴

points. After all repetitions were completed, one frame per time point was constructed by combining raw data from the corresponding time point during every repetition. Preliminary results indicate vocal fold shortening and lowering only during voiced consonants and in production of lower F0.
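The curve-fitting procedure described in 5aSC20 (an exponential trend y = ae^(bx) fitted to successive spectral means, with R² read as an index of within-speaker variability) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the log-linear least-squares shortcut and all variable names are assumptions.

```python
import numpy as np

def fit_exponential(positions, spectral_means):
    """Fit y = a * exp(b * x) by linear least squares on log(y).

    Returns (a, b, r_squared). A high R-squared means the spectral-mean
    trajectory follows a smooth exponential trend; a low R-squared
    indicates high within-speaker variability (one plausible reading of
    the abstract, not necessarily the authors' exact procedure).
    """
    x = np.asarray(positions, dtype=float)
    y = np.asarray(spectral_means, dtype=float)
    # Fitting log(y) = log(a) + b*x turns the problem into a line fit.
    b, log_a = np.polyfit(x, np.log(y), 1)
    a = np.exp(log_a)
    pred = a * np.exp(b * x)
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return a, b, 1.0 - ss_res / ss_tot
```

Note that the log-space fit minimizes relative rather than absolute error; a direct nonlinear fit would weight the residuals differently.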

5aSC25. The acoustic features of human laughter. Jo-Anne Bachorowski (Dept. of Psych., Wilson Hall, Vanderbilt Univ., Nashville, TN 37203, [email protected]) and Michael J. Owren (Cornell Univ., Ithaca, NY 14853)

Remarkably little is known about the acoustic features of laughter, despite laughter's ubiquitous role in human vocal communication. Outcomes are described for 1024 naturally produced laugh bouts recorded from 97 young adults. Acoustic analysis focused on temporal characteristics, production modes, source- and filter-related effects, and indexical cues to laugher sex and individual identity. The results indicate that laughter is a remarkably complex vocal signal, with evident diversity in both production modes and fundamental frequency characteristics. Also of interest was a consistent lack of articulation effects in supralaryngeal filtering. Outcomes are compared to previously advanced hypotheses and conjectures about this species-typical vocal signal.

5aSC26. The distribution of phonation type index k. Hansang Park (Cal 501, Univ. of Texas at Austin, Austin, TX 78712)

A phonation type index k was proposed to account for variation in phonation type (Park, 2001). In this study we investigated the distribution of the phonation type index k to determine the modality of phonation type. The distribution of the index k is expected to be multimodal if a phonation type is sustained through the entire vowel, but unimodal if the phonation type difference is not significant or if distinct phonation types are not maintained through the entire vowel and occur only in its earlier part. An experiment was conducted with Standard Korean data, which have three linguistically distinct phonation types: aspirated, lenis, and fortis. The results showed that the distribution of the index k was unique to each speaker: close to bimodal for one speaker and unimodal for the other two. The distribution also showed differences in mean, standard deviation, skewness, and kurtosis across speakers.

5aSC27. Spectral moments analysis of stops in tracheoesophageal speakers. Kimberly Rosenbauer, Kerrie Obert, and Robert Allen Fox (Dept. of Speech and Hearing Sci., The Ohio State Univ., 1070 Carmack Rd., Columbus, OH 43210-1002, [email protected])

Optimal speech intelligibility is naturally of primary concern for individuals who have had their larynxes removed due to cancer and now use tracheoesophageal (TE) speech as their primary mode of communication. The current study examines the acoustic characteristics associated with the oral stops /p b t d k g/ produced by TE speakers as compared to normal speakers. Of particular interest are the acoustic differences between these two sets of speakers in oral stop bursts and in the aspiration frication of the voiceless stops. A set of utterances in which these six stops occur in both initial position (CV) and intervocalic position (VCV) before a wide range of English vowels was recorded for each set of speakers. Appropriate acoustic measurements were then made for each stop, including the spectral moments of the burst and aspiration, VOT, closure duration (for intervocalic stops), and the relative and normalized amplitude levels of the burst and aspiration. Acoustic differences obtained will be discussed as a function of speaker type, phonetic context, and, in the case of the TE speakers, experience with the device.

5aSC28. Nonlinear viscoelastic response of vocal-fold tissues. Roger W. Chan (Vocal Fold Physiol. and Biomechanics Lab., Audiol. and Speech Sci., Purdue Univ., West Lafayette, IN 47907, [email protected])

Previous rheological measurements of the viscoelastic shear properties of vocal-fold tissues have focused on the linear regime, in the small-strain region, typically with γ0 ≤ 1.0%. This imposed limit was necessary for the theory of linear viscoelasticity to be valid, yielding dynamic shear data applicable to the biomechanical modeling of small-amplitude vocal-fold oscillation. Nonetheless, as the physiological range of phonation involves more than small-amplitude oscillation, the large-strain viscoelastic behaviors of vocal-fold tissues are equally important and remain to be quantified. This paper reports preliminary measurements of some of these viscoelastic behaviors in large-strain shear. Excised sheep vocal-fold mucosal tissues were subjected to stress relaxation, constant stress, and constant strain rate tests in a controlled-strain torsional rheometer. Results showed that vocal-fold tissues demonstrate nonlinear viscoelastic response in shear, including stress relaxation that is dependent on strain and creep that is dependent on stress. These findings cannot be adequately described by Y. C. Fung's quasilinear viscoelasticity formulation, which assumes strain dependence and time dependence to be separable. A more general constitutive model is being developed to better characterize the observed nonlinear response.

5aSC29. Temporal characteristics of the speech of typical and lexically precocious two-year-old children: preliminary observations. Bruce L. Smith and Karla K. McGregor (Dept. of Commun. Sci. and Disord., Northwestern Univ., 2299 N. Campus Dr., Evanston, IL 60208-3570, [email protected])

To examine the extent to which temporal properties of speech might be affected by children's lexical knowledge as opposed to their age and general development, productions by a group of two-year-olds with average-sized vocabularies were compared with those of a group of age-matched, lexically precocious children. It was hypothesized that, because of their additional lexical knowledge and experience, the lexically precocious children would manifest shorter durations and/or less temporal variability in their speech. Multiple repetitions of several different target words were obtained from children with vocabularies at about the 50th percentile (ca. 300 words) versus the 90th percentile (ca. 600 words) on the MacArthur Communicative Development Inventory. In general, acoustic measurements indicated that there were no significant differences between the groups in terms of their segmental durations or temporal variability. Thus, the additional linguistic knowledge and experience the precocious talkers had gained from having learned to produce many more words did not appear to have influenced temporal properties of their speech. This suggests that the children's age and/or other aspects of their development had a greater impact on temporal aspects of their speech than did their level of lexical knowledge and experience.

5aSC30. Harmonics to noise ratio in vocal professional voices. Luka Bonetti, Ana Bonetti, and Natalija Bolfan Stosic (Dept. of Logoped., Faculty of Special Education and Rehabilitation, Univ. of Zagreb, Kuslanova 59a, 10000 Zagreb, Croatia)

The importance of voice, especially for vocal professionals, is beyond dispute. The question is what best characterizes normal or pathological voice in relation to working life. According to findings from the field of voice disorders, the harmonics-to-noise ratio is the most representative measure for differentiating normal from pathological voice. In this research, significant differences in harmonics-to-noise ratio were found in relation to the length of the working careers of 29 teachers of primary schools in Zagreb. Teachers with the longest careers (40 yr) showed the most distorted voices. The best voice quality, with a high ratio of harmonics to noise, was found in the group of teachers with 10 years of professional work. Acoustical analyses were made with EZVOICEPLUS version 2.0 and Gram 2.3. Significant statistical differences were established by t tests in Statistica for Windows, version 4.5. [Work supported by the Ministry of Science and Technology of the Republic of Croatia.]

5aSC31. Relative timing of the three gestures of North American English /r/. Bryan Gick (Dept. of Linguist., Univ. of BC, E270-1866 Main Mall, Vancouver, BC V6T 1Z1, Canada and Haskins Labs., [email protected]) and Louis Goldstein (Yale Univ. and Haskins Labs., 270 Crown St., New Haven, CT 06511)

Interarticulator timing of liquid consonants has been shown in recent years to be important for understanding syllable-based allophonic variation and related synchronic, historical, and developmental phonological phenomena. In this and other respects, the unusual complexity of North American English /r/ in particular has attracted much attention. Nevertheless, because of the difficulty of collecting simultaneous dynamic measurements of the lips, tongue body, and tongue root, the articulatory dynamics of /r/ remain a subject of conjecture. A simultaneous ultrasound and video study of the relative timing of these three gestures of /r/ will be presented. Preliminary results from three native North American English speakers show that, for the prevocalic allophone, the lip gesture reaches its target first, followed by the tongue body (TB) gesture, with the tongue root (TR) gesture last. In the postvocalic allophone, the lip and TR constrictions are reduced in magnitude, and the TR gesture tends to precede the other gestures in both gestural onset and peak. These results support a view of the lip gesture of /r/ as a consonantal component and present a pattern for /r/ analogous to that previously observed for /l/. [Research supported by NSERC and NIH.]

5aSC32. Speaker recognition using dynamic synapse-based neural networks with wavelet preprocessing. Sageev George and Theodore Berger (Dept. of Biomed. Eng., Univ. of Southern California, 3650 McClintock Ave., OHE 500, MC 1451, Los Angeles, CA 90089-1451)

Two problems in the field of speaker recognition are noise robustness and low interspeaker variability. This project involved the design of a system capable of speaker verification on a closed set of speakers, using a wavelet processing technique that allows for speaker-dependent feature set extraction. Verification is accomplished using a dynamic synapse-based neural network with noise-resistance properties, trained using a genetic algorithm. Using these techniques, the system was able to perform speaker verification without being adversely affected by normal levels of noise, and to perform verification despite low variability between speakers.

5aSC33. Talker intelligibility: Child and adult listener performance. Duncan Markham and Valerie Hazan (Dept. of Phonet. and Linguist., Univ. College London, London, UK, [email protected])

In a study of talker intelligibility, 45 voices (adults and 11–12-year-old children) were presented to 135 listeners (adults, 11–12 year olds, and 7–8 year olds). Word materials were presented in a "single-word" condition and in a "triplet" condition, in which a "normalizing" precursor sentence preceded three keywords. In both conditions, voices were randomized, with no consecutive presentations from the same speaker. The specially designed word set consisted of 124 words chosen to maximize consonant confusions. Adult female speakers were significantly more intelligible than the other groups, as predicted by previous research, but the difference was small. The error rates for 7–8 year olds were slightly but significantly higher than those for the older children and adults. The effect of presentation condition, however, was not significant for any listener group. Across all listener groups, rankings of speakers by error rates were strikingly consistent, with a distinct cluster of eight low-intelligibility speakers common to all listener groups. This suggests that speaker intelligibility is little influenced by listener-related factors. In terms of their perception of speaker characteristics, children aged seven and above show patterns of behavior similar to adults, even though the younger children showed marginally higher error rates. [Work funded by the Wellcome Trust.]

5aSC34. Adaptive interface for spoken dialog. Sorin Dusan and James Flanagan (Ctr. for Adv. Information Processing, Rutgers Univ., 96 Frelinghuysen Rd., Piscataway, NJ 08854, [email protected])

Speech has become increasingly important in human–computer interaction. Spoken dialog interfaces rely on automatic speech recognition, speech synthesis, language understanding, and dialog management. A main issue in dialog systems is that they typically are limited to preprogrammed vocabularies and sets of sentences. The research reported here focuses on developing an adaptive spoken dialog interface capable of acquiring new linguistic units and their corresponding semantics during human–computer interaction. The adaptive interface identifies unknown words and phrases in the user's utterances and asks the user for the corresponding semantics. The user can provide the meaning or the semantic representation of the new linguistic units through multiple modalities, including speaking, typing, pointing, touching, or showing. The interface then stores the new linguistic units in a semantic grammar and creates new objects defining the corresponding semantic representation. This process takes place during natural interaction between user and computer; thus the interface does not have to be rewritten and compiled to incorporate the newly acquired language. Users can personalize the adaptive spoken interface for different domain applications or according to their personal preferences. [Work supported by NSF.]

5aSC35. Tongue position and orientation for front vowels in the X-Ray Microbeam Speech Production Database. Richard S. McGowan (CReSS LLC, 1 Seaborn Pl., Lexington, MA 02420)

The positions and orientations of the secant lines between pellets will be examined for front vowels of four talkers in the X-Ray Microbeam Speech Production Database. The effects of vowel height and palate shape will be examined, as will the contextual effects of neighboring stop, fricative, and nasal segments. This work is part of a project to describe tongue motion in terms of secant-line kinematics. Preliminary results suggest that tongue blade orientation for high front vowels is determined largely by palate shape.

5aSC36. Isolated word recognition using dynamic synapse neural networks. Alireza A. Dibazar, Hassan H. Namarvar, and Theodore W. Berger (Dept. of Biomed. Eng., Univ. of Southern California, OHE 500, University Park, Los Angeles, CA 90089-1451, [email protected])

In this paper we propose a new method for using dynamic synapse neural networks (DSNNs) to accomplish isolated word recognition. The DSNNs developed by Liaw and Berger (1996) provide explicit analytic computational frameworks for the solution of nonlinear differential equations. Our method employs quasilinearization of a nonlinear differential equation to train a DSNN, using an iterative algorithm that converges monotonically to the extremal solutions of the nonlinear differential equation. The utility of the method was explored by training a simple DSNN to perform a speech recognition task on unprocessed, noisy raw waveforms of words spoken by multiple speakers. The simulation results showed that this training method converges much faster than existing methods. [Work supported by ONR and DARPA.]
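The harmonics-to-noise ratio underlying 5aSC30 can be estimated in several ways; the sketch below uses a common autocorrelation-based definition, HNR = 10*log10(r/(1 - r)), where r is the peak normalized autocorrelation within the plausible pitch-period range. This is a generic illustration, not the algorithm of the EZVOICEPLUS software named in the abstract, and the pitch bounds are assumed defaults.

```python
import numpy as np

def harmonics_to_noise_ratio(signal, fs, f0_min=75.0, f0_max=500.0):
    """Estimate HNR in dB from the peak of the normalized
    autocorrelation searched over plausible pitch periods.

    r approximates the fraction of energy that is periodic, so
    10*log10(r / (1 - r)) is the periodic-to-noise energy ratio.
    """
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]
    ac = ac / ac[0]  # normalize so lag 0 equals 1
    lo = int(fs / f0_max)  # shortest candidate pitch period (samples)
    hi = int(fs / f0_min)  # longest candidate pitch period (samples)
    r = np.clip(ac[lo:hi].max(), 1e-6, 1.0 - 1e-6)
    return 10.0 * np.log10(r / (1.0 - r))
```

The finite-length (biased) autocorrelation slightly understates r, so this estimate saturates well below the HNR of a truly noiseless signal; it is adequate for the relative comparisons the abstract describes.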

5aSC37. Vowels in clear and conversational speech: Talker differences in acoustic characteristics and intelligibility for normal-hearing listeners. Sarah Hargus Ferguson and Diane Kewley-Port (Dept. of Speech and Hearing Sci., Indiana Univ., 200 S. Jordan Ave., Bloomington, IN 47405)

Several studies have shown that when a talker is instructed to speak as though talking to a hearing-impaired person, the resulting "clear" speech is significantly more intelligible than typical conversational speech. Recent work in this lab suggests that talkers vary in how much their intelligibility improves when they are instructed to speak clearly. The few studies examining acoustic characteristics of clear and conversational speech suggest that these differing clear speech effects result from different acoustic strategies on the part of individual talkers. However, only two studies to date have directly examined differences among talkers producing clear versus conversational speech, and neither included acoustic analysis. In this project, clear and conversational speech was recorded from 41 male and female talkers aged 18–45 years. A listening experiment demonstrated that for normal-hearing listeners in noise, vowel intelligibility varied widely among the 41 talkers for both speaking styles, as did the magnitude of the speaking style effect. Acoustic analyses using stimuli from a subgroup of talkers shown to have a range of speaking style effects will be used to assess specific acoustic correlates of vowel intelligibility in clear and conversational speech. [Work supported by NIHDCD-02229.]
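One acoustic correlate commonly examined in clear- versus conversational-speech work of the kind described in 5aSC37 is expansion of the vowel space. A minimal sketch, assuming corner-vowel (F1, F2) measurements and computing the polygon area with the shoelace formula; the abstract does not specify this measure, and any formant values used with it here are hypothetical.

```python
def vowel_space_area(formants):
    """Area (in Hz^2) of the polygon whose vertices are (F1, F2)
    pairs of corner vowels, via the shoelace formula.

    Vertices must be given in consistent (clockwise or
    counterclockwise) order around the polygon.
    """
    n = len(formants)
    area = 0.0
    for i in range(n):
        x1, y1 = formants[i]
        x2, y2 = formants[(i + 1) % n]  # wrap back to first vertex
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0
```

A larger area for a talker's clear tokens than for conversational tokens would indicate vowel-space expansion for that talker.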

5aSC38. The relation between first-graders' reading level and vowel production variability in real and nonsense words: A temporal analysis. Kimberly Lydtin (Dept. of Speech, Commun. Sci., & Theatre, St. John's Univ., Jamaica, NY 11439), Anne Fowler (Haskins Labs., New Haven, CT), and Fredericka Bell-Berti (St. John's Univ., Jamaica, NY 11439)

The focus of this study is to determine if children who are poor readers produce vowels with greater variability than children with normal reading ability, since earlier research has indicated possible links between phonological difficulty, speech production variation, and reading problems. In continuation of our past research [K. Lydtin, A. Fowler, and F. Bell-Berti, J. Acoust. Soc. Am. 110, 2704 (2001)], where we looked at the spectral aspects of vowel production, we will report the results of our study of vowel duration and its variability in poor and good readers. The vowels chosen for this study are /(/, /}/, and /,/ in real and nonsense words occurring in both blocked and random presentation. [Work supported by U.S. Dept. of Education, McNair Scholars Program.]

5aSC39. Specifying voicing differences in children's productions of syllable-final stops: Knowledge versus skill. Susan Nittrouer (Boys Town Natl. Res. Hospital, 555 N. 30th St., Omaha, NE 68131)

Among the acoustic correlates of phonetic identity considered to be universal is the length of vocalic segments preceding syllable-final stops, which is a correlate of the voicing of those stops. However, findings reported earlier [S. Nittrouer et al., J. Acoust. Soc. Am. 109, 2312(A) (2001)] showed that the commonly described length effect (i.e., shorter segments before voiceless than before voiced stops) is attenuated in adults' samples from continuous discourse, and that listeners of all ages fail to make much use of this effect in perceptual decisions, preferring instead to base voicing judgments on dynamic spectral information. Subsequent to that study, acoustic measures (duration of the preceding vocalic segment and frequency of the first formant at voicing offset) of children's (5 and 7 years of age) productions of words differing in the voicing of syllable-final stops showed that by 5 years of age children's productions generally had the same acoustic structure as those of adults, but within-speaker variability on both measures was roughly twice as great in children's as in adults' productions. Thus, children were trying to coordinate the vocal-tract closing and glottal abduction gestures as adults do, but were not skilled enough to do so reliably. [Work supported by NIDCD.]
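Within-speaker variability of the kind compared across children and adults in 5aSC39 is often summarized as a coefficient of variation over repeated tokens. A minimal sketch of one such measure (illustrative; the abstract does not state which variability statistic was used):

```python
import numpy as np

def coefficient_of_variation(measurements):
    """Within-speaker variability of repeated productions of the same
    target, expressed as sample SD divided by the mean (dimensionless,
    so speakers with different average durations can be compared)."""
    m = np.asarray(measurements, dtype=float)
    return m.std(ddof=1) / m.mean()
```

On this measure, "roughly twice as great" variability in children's productions would show up as a child-to-adult CV ratio near 2 for matched tokens.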

FRIDAY AFTERNOON, 7 JUNE 2002

LE BATEAU ROOM, 2:00 TO 4:20 P.M. Session 5pBB

Biomedical Ultrasound/Bioresponse to Vibration: Ultrasonic Field Characterization Techniques and Novel Instrument Applications

Thomas J. Matula, Chair
Applied Physics Laboratory, University of Washington, 1013 Northeast 40th Street, Seattle, Washington 98105-6698

Chair's Introduction—2:00

Contributed Papers

2:05

5pBB1. Wave phase conjugation of the second harmonic in a focused ultrasonic beam. A. P. Brysev, F. V. Bunkin, R. V. Klopotov, L. M. Krutyansky (Wave Res. Ctr. of the General Phys. Inst. RAS, 38 Vavilov St., 119991 Moscow GSP-1, Russia), X. Yan, and M. F. Hamilton (Univ. of Texas, Austin, TX 78712-1063)

Wave phase conjugation of the second-harmonic component generated nonlinearly in a focused beam of ultrasound is investigated experimentally and theoretically. The incident field in this case is radiated from an extended volume of the fluid between the acoustic source and the phase conjugation system. A tone burst of frequency f = 3 MHz was radiated into water and focused at a point midway between the source and the conjugator. Phase conjugation of the second harmonic 2f was performed inside a magnetostrictive ceramic modulated by a magnetic pump field at frequency 4f. The conjugate beam at frequency 2f reproduces quite accurately the incident second-harmonic beam everywhere between the focal plane and the conjugator. The agreement deteriorates somewhat between the focal plane and the acoustic source, because it is mainly in this region that second-harmonic generation occurs. Experimental observations are supported by analytical and numerical results. Phase conjugation using the nonlinearly generated second harmonic possesses some advantages over conventional phase conjugation of the sound beam at the source frequency. The obtained results may provide a basis for applications employing phase conjugation of harmonics in acoustic imaging and nondestructive evaluation, such as second-harmonic imaging in tissue. [Work supported by RFBR, CRDF, and ONR.]

2:20

5pBB2. Mapping high power ultrasonic fields using a scanned scatterer. Bryan Cunitz and Peter Kaczkowski (Ctr. for Industrial and Medical Ultrasound, Appl. Phys. Lab., Univ. of Washington, Seattle, WA 98105)

The conventional method used to map the field of high intensity focused ultrasound (HIFU) transducers displaces a hydrophone over a grid of points in the zone of interest and provides a direct measure of the ultrasound pressure at each point. The approach has several major limitations: (1) the hydrophone is likely to be damaged while repeatedly measuring high intensity fields, (2) the resolution of the field map is limited to the size of the active area of the hydrophone, which is typically on the order of 0.5 mm and large compared to some wavelengths of interest, and (3) cavitation can limit the accuracy of measurements. By placing a small scatterer in the HIFU field and measuring the scattered wave with a sensitive hydrophone from a safe distance, the field can be measured at full power without harm. We have used this technique to acquire single-frequency field maps of HIFU transducers at high intensities without any damage to the hydrophone. We have also been able to improve the spatial resolution of the field map by an order of magnitude. In addition, this technique permits measurement of some nonlinear behavior (e.g., harmonic content) at the focus at high intensities.

2:35

5pBB3. A study of angular spectrum and limited diffraction beams for calculation of field of array transducers. Jiqi Cheng and Jian-yu Lu (Ultrasound Lab., Dept. of Bioengineering, The Univ. of Toledo, Toledo, OH 43606, [email protected])

The angular spectrum is one of the most powerful tools for field calculation.
It is based on linear system theory and the Fourier transform and is used for the calculation of propagating sound fields at different distances. In this report, the generalization and interpretation of the angular spectrum and its intrinsic relationship with limited diffraction beams are studied. With an angular spectrum, the field at the surface of a transducer is decomposed into limited diffraction beams. For an array transducer, a linear relationship between the quantized fields at the surface of elements of the array and the propagating field at any point in space can be established. For an annular array, the field is decomposed into limited diffraction Bessel beams [P. D. Fox and S. Holm, IEEE Trans. Ultrason. Ferroelectr. Freq. Control 49, 85–93 (2002)], while for a two-dimensional (2-D) array the field is decomposed into limited diffraction array beams [J-y. Lu and J. Cheng, J. Acoust. Soc. Am. 109, 2397–2398 (2001)]. The angular spectrum reveals the intrinsic link between these decompositions. [Work supported in part by Grant 5RO1 HL60301 from NIH.]
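The angular spectrum method named in 5pBB3 propagates a sampled source field by decomposing it into plane waves with an FFT, applying the exact plane-wave propagator, and transforming back. A minimal one-dimensional monochromatic sketch of the textbook method (illustrative only, not the authors' limited-diffraction-beam decomposition):

```python
import numpy as np

def angular_spectrum_propagate(p0, dx, wavelength, z):
    """Propagate a monochromatic pressure field p0, sampled on a line
    with spacing dx, forward a distance z:
    FFT -> multiply by exp(i * kz * z) -> inverse FFT.
    """
    k = 2 * np.pi / wavelength
    kx = 2 * np.pi * np.fft.fftfreq(p0.size, d=dx)
    # kz is real for propagating components; for |kx| > k the square
    # root is imaginary and the exponential decays (evanescent waves).
    kz = np.sqrt(np.asarray(k**2 - kx**2, dtype=complex))
    spectrum = np.fft.fft(p0)
    return np.fft.ifft(spectrum * np.exp(1j * kz * z))
```

Because each plane-wave component is advanced with its exact propagator, the method is free of paraxial approximation error; the usual practical caveats are aliasing of the sampled aperture and wrap-around from the periodic FFT grid.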

2:50

5pBB4. Reconstruction of the normal velocity distribution at the face of an ultrasound source in liquid on the basis of acoustic waveform measurements along a surface in front of the source. Oleg A. Sapozhnikov, Yuriy A. Pishchalnikov, and Andrey V. Morozov (Dept. of Acoust., Phys. Faculty, M. V. Lomonosov Moscow State Univ., Moscow 119899, Russia, [email protected])

The normal velocity distribution along a vibrating surface is an important characteristic of any acoustic source. When it is known, the acoustic pressure field can be predicted using the Rayleigh integral or a similar approach. Until now, however, there have been no reliable methods for measuring the velocity distribution in liquids or solids. Because of the strong acousto-optic interaction in condensed media, well-developed laser vibrometers can be employed only when the transducer faces vacuum or gas. In this work a novel method is developed and tested for evaluating the velocity distribution along the vibrating surface of a piezoceramic transducer in liquid. The technique consists of measuring the acoustic wave amplitude and phase along a surface surrounding the source, changing the sign of the phase, and theoretically backpropagating the field to the source using the Rayleigh integral. The method was studied numerically and tested experimentally. The acoustic field of the ultrasound source was recorded with a needle hydrophone scanned along a plane surface in front of the transducer. It is shown that the proposed approach enables accurate reconstruction of the normal velocity. The method can be used for a wide variety of acoustically radiating structures. [Work supported by CRDF, NIH-Fogarty, and RFBR.]

3:05

5pBB5. Cyclic and radial variation of the echogenicity from human carotid artery and porcine blood. Dong-Guk Paeng, Pei-Jie Cao, K. Kirk Shung (Penn State Univ., 205 Hallowell Bldg., University Park, PA 16802, [email protected]), and Richard Y. Chiao (GE Medical Systems, EA-54, 4855 W. Electric Ave., Milwaukee, WI 53219)

The cyclic and radial variation of the echogenicity from human blood and porcine blood was investigated using a linear M12L transducer with a GE LOGIQ 700 Expert system. The bright collapsing ring phenomenon, a bright echogenic ring in cross-sectional B-mode images that converges from the tube wall to the center and eventually collapses during a pulsatile cycle, was observed from porcine blood in a mock flow loop 0.95 cm in diameter under certain flow conditions. The bright ring phenomenon from porcine blood grew stronger as the peak speed increased from 19 to 40 cm/s, while the mean echogenicity decreased. As the stroke rate increased from 20 to 60 beats/minute, the phenomenon weakened. As the hematocrit increased from 12% to 45%, the phenomenon became more pronounced. The black hole phenomenon was also observed under certain flow conditions. The well-known nonlinear dependence of echogenicity on hematocrit was observed near the wall, but the dependence pattern changed at the center of the tube. A similar bright ring phenomenon was also observed in harmonic images in vivo in 10 human carotid arteries. Aggregation driven by shear rate and flow acceleration is thought to explain these phenomena.

3:20

5pBB6. Detection of thrombosis and restenosis in an endovascular stent. Junru Wu (Dept. of Phys., Univ. of Vermont, Burlington, VT 05405, [email protected]) and Eric Weissman (Noveon, Inc., Brecksville, OH 44141)

Endovascular stents implanted in an artery are often used in the interventional treatment of coronary artery disease. Their widespread application is, however, limited by the development of subacute thrombosis (clot formation inside the stent). Ex vivo experiments with pigs have shown that broadband A-mode ultrasound is quite effective in detecting thrombosis and restenosis in an endovascular stent. [Work supported by BFGoodrich and Noveon, Inc.]

3:35

5pBB7. A robust roughness quantification technique using a standard imaging array transducer. Stanley Samuel (Univ. of Michigan Medical Ctr., 200 Zina Pitcher Pl., Rm. 3315, Kresge III, Ann Arbor, MI 48109-0553, [email protected]), Ronald Adler (Hospital for Special Surgery, 535 E. 70th St., New York, NY 10021), and Charles Meyer (Univ. of Michigan Medical Ctr., 200 Zina Pitcher Pl., Rm. 3315, Kresge III, Ann Arbor, MI 48109-0553)

Our goal is to measure cartilage roughness using intra-articular ultrasound imaging, thus providing a useful diagnostic tool for the early detection of osteoarthritis. Measuring the effectiveness of possible chondroprotective pharmacological or mechanical interventions depends on the availability of such a device. We have developed an empirical model of roughness using sandpaper at angles ranging from 20 to 60 degrees and at distances ranging from 25 mm to 80 mm. Roughness quantification is achieved using a scattering replacement normalization technique. An ultrasound imaging system employing a broadband 7-MHz multielement transducer was used to insonify the flat sandpaper surface.

J. Acoust. Soc. Am., Vol. 111, No. 5, Pt. 2, May 2002

143rd Meeting: Acoustical Society of America

Dynamic focusing was performed at all distances and angles. The broadband transducer facilitates the selection of specific bandwidths during analysis, which is beneficial for studying surfaces of varying roughness scales. Sandpapers of 150, 400, and 600 grit were examined in this study. The average backscattered power in the 7-8 MHz frequency band, normalized to the 150-grit surface, provides well-behaved roughness characteristics. Student's t test showed that the backscattering results for the 150- and 400-grit surfaces are significantly different, with 0.025 < p < 0.05. We are extending this work to in vitro cartilage and will report those results as well. [Research supported by NIH R01-AR42667-01A2.]

3:50

5pBB8. Design and construction of a high frame rate imaging system. Jing Wang, John L. Waugaman, Anjun Liu, and Jian-yu Lu (Ultrasound Lab., Dept. of Bioengineering, The Univ. of Toledo, Toledo, OH 43606, [email protected])

A new high frame rate imaging method has been developed recently [Jian-yu Lu, "2D and 3D high frame rate imaging with limited diffraction beams," IEEE Trans. Ultrason. Ferroelectr. Freq. Control 44, 839-856 (1997)]. This method may have clinical applications in imaging fast-moving objects such as the human heart, in velocity vector imaging, and in low-speckle imaging. To implement the method, an imaging system has been designed. The system consists of one main printed circuit board (PCB) and 16 channel boards (each channel board contains 8 channels), in addition to a set-top box for connections to a personal computer (PC), a front panel board for user control and message display, and a power control and distribution board. The main board contains a field programmable gate array (FPGA) and controls all channels (each channel also has its own FPGA). We


will report the analog and digital circuit design and simulations, the multilayer PCB designs produced with commercial software (Protel 99), PCB signal integrity testing and system RFI/EMI shielding, and the assembly and construction of the entire system. [Work supported in part by Grant 5RO1 HL60301 from NIH.]

4:05

5pBB9. Logic design and implementation of FPGA for a high frame rate ultrasound imaging system. Anjun Liu, Jing Wang, and Jian-yu Lu (Ultrasound Lab, Dept. of Bioengineering, The Univ. of Toledo, Toledo, OH 43606, [email protected])

Recently, a method has been developed for high frame rate medical imaging [Jian-yu Lu, "2D and 3D high frame rate imaging with limited diffraction beams," IEEE Trans. Ultrason. Ferroelectr. Freq. Control 44(4), 839-856 (1997)]. To realize this method, a complicated system is designed [multiple-channel simultaneous data acquisition, large memory in each channel for storing up to 16 seconds of data at 40 MHz and 12-bit resolution, time-variable-gain (TGC) control, Doppler imaging, harmonic imaging, as well as coded transmissions]. Due to the complexity of the system, a field programmable gate array (FPGA) (Xilinx Spartan II) is used. In this presentation, the design and implementation of the FPGA for the system will be reported. This includes the synchronous dynamic random access memory (SDRAM) controller and other system controllers, time sharing for auto-refresh of the SDRAMs to reduce peak power, transmission and imaging modality selections, ECG data acquisition and synchronization, a 160-MHz delay-locked loop (DLL) for accurate timing, and data transfer via either a parallel port or a PCI bus for post-image processing. [Work supported in part by Grant 5RO1 HL60301 from NIH.]
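The per-channel memory figure quoted above (up to 16 s of 12-bit data at 40 MHz) is easy to sanity-check with a few lines of arithmetic. The sketch below is only that check, not system documentation; the 128-channel count is inferred from the companion abstract 5pBB8 (16 boards of 8 channels each).

```python
def channel_memory_bytes(duration_s=16.0, fs_hz=40e6, bits_per_sample=12):
    """Raw storage for one continuously acquired channel:
    samples = duration x sampling rate; bytes = samples x bits / 8."""
    return duration_s * fs_hz * bits_per_sample / 8.0

def aggregate_rate_bytes_per_s(n_channels=128, fs_hz=40e6, bits_per_sample=12):
    """Combined raw acquisition rate across all channels."""
    return n_channels * fs_hz * bits_per_sample / 8.0
```

With the abstract's numbers, each channel needs on the order of 0.96 GB of raw sample storage, and the full array streams several GB/s in aggregate, which helps explain the emphasis on SDRAM controllers and auto-refresh scheduling.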
