Abstract

Sound visualization techniques have played a key role in the development of acoustics throughout history. The development of measurement apparatus and techniques for displaying sound and vibration phenomena has provided excellent tools for building understanding of specific problems. Traditional methods, such as step-by-step measurements or simultaneous multichannel systems, involve a strong tradeoff among time requirements, flexibility, and cost. However, if the sound field can be assumed time stationary, scanning methods allow variations across space to be assessed with a single transducer, as long as the position of the sensor is known. The proposed technique, Scan and Paint, is based on the acquisition of sound pressure and particle velocity by manually moving a P-U probe (a combined pressure and particle velocity sensor) across a sound field whilst filming the event with a camera. The sensor position is extracted by applying automatic color tracking to each frame of the recorded video. It is then possible to visualize sound variations across the space in terms of sound pressure, particle velocity, or acoustic intensity. In this paper, not only the theoretical foundations of the method but also its practical applications are explored, such as scanning transfer path analysis, source radiation characterization, operational deflection shapes, virtual phased arrays, material characterization, and acoustic intensity vector field mapping.

1. Introduction

In the development of acoustics, visual representations of sound have long been regarded as a key aid to understanding. The need to represent sound and vibration information visually has triggered many investigations with a common goal: to create tools that build intuition and understanding of specific problems.

Many alternative methods and apparatus have been proposed over time [1], as is addressed in the following section. Nevertheless, the current measurement procedures for characterizing sound fields can be classified into three major categories, regardless of the postprocessing techniques applied: step-by-step, simultaneous, and scanning measurements. Each of these techniques can be evaluated using three main features: measurement time, flexibility, and total cost of the equipment.

Step-by-step measurement is the most common technique for creating spatial representations of stationary sound fields. It is based upon the acquisition of data at a set of discrete positions. The flexibility of this method is one of its main advantages, since the number of transducers and their spatial distribution are completely customizable. The number of sensors used is directly related to the cost of the equipment but inversely proportional to the time needed to undertake the experiments. If all positions are to be characterized at the same time, it is necessary to use a large multichannel system, that is, to perform simultaneous measurements.

Measurement solutions based upon sensor arrays conventionally imply a large cost and low flexibility derived from their complexity. An intermediate solution, able to reduce the measurement time without increasing the equipment cost, can be found in scanning methods. Scan-based techniques differ fundamentally from the previously cited procedures: data is no longer acquired at discrete spatial positions, since the sensor, or set of sensors, is moved continuously during the acquisition stage. This implies that the recorded acoustic signal has an associated tracking path. Thus, the time intervals during which the sensor passes over the area of interest must be evaluated in order to estimate the spectral content at any point.

Far too little attention has been paid to scan-based measurement techniques despite the capabilities they offer. Several attempts have been made to incorporate scanning microphone arrays in combination with acoustic holography algorithms [26]. However, the high cost of the tracking systems used and the complex setup needed to carry out the experiments have limited the popularity of this powerful measurement approach. The scan-based method introduced in this paper, Scan and Paint, is rather different from the previously proposed methods: instead of a moving frame array, a single probe is used; furthermore, the standard complex tracking system is replaced by simple manual movement of the sensor, tracked using video processing. These two key features maximize flexibility whilst minimizing the cost of the measurement procedure.

This paper presents the historical and theoretical foundations of the novel scanning measurement technique Scan and Paint. In addition, an overview is given of the main practical scenarios that make the method an efficient sound visualization tool for a wide range of applications.

2. Historical Perspective

Before going into detail on the proposed scanning measurement technique, it is worth highlighting the importance of developing devices for displaying sound phenomena and how such devices have evolved thus far. This will allow us to place the proposed method within the current state of the art and among previous techniques.

One of the first methods focused upon the visualization of sound and vibration phenomena was introduced by Chladni at the end of the 18th century [13]. The method used sand sprinkled on vibrating plates to show the dynamic behavior of a vibrating body: he generated the so-called Chladni patterns by strewing sand on a plate excited with a violin bow, causing the sand to collect along the nodal lines.

During the second half of the 19th century, Toepler, between 1859 and 1864, realized that a probing wave of pulsed light should be able to freeze an expanding spherical sound wave, by amplifying small differences in the optical refraction index of the medium [14]. Thus, he invented a technique to see travelling waves: the schlieren method. The impressive results obtained by Toepler encouraged other scientists to research this area and also to extend the method to other fields of science. The Dutch scientist Zernike presented a new perspective on the schlieren method, which earned him the Nobel Prize in 1953. He refined the method, albeit constrained to microscopy, by introducing the phase contrast microscope [15].

Optics-based sound visualization techniques (mainly schlieren methods and the later derived shadowgraphy [16]) were extensively used by many scientists throughout history such as Sabine. He built models of concert halls, fired sparks, and sent weak shock waves reverberating around them [7]. These weak shocks, almost sound waves, revealed themselves in direct shadowgrams. Two example pictures of the emitted waves at different stages in their propagation through the room can be seen in Figure 1. He brought forth the first real understanding of sound in auditoriums at the beginning of the 20th century [17], for which he is now considered the father of modern architectural acoustics.

The first scanning technique for displaying sound was presented by Kock in 1965 [18]. He worked extensively on improving his apparatus, which led him to later publish the book Seeing Sound [8]. Kock’s method was not directly based upon visual observation like Toepler’s technique; instead, he used indirect visual observation: the electric signal of the microphone is made visible by having it light an electric neon bulb. The brightness of the lamp at a particular spot is then indicative of the loudness of the sound at that spot. In order to record the brightness pattern photographically, a camera is set to a long exposure time and aimed at the area of interest. Consequently, as the microphone-light device scans the area at a fixed speed, the camera records the light intensity variations from spot to spot. In addition, he also developed a subtraction technique for visualizing the wave patterns across a sound field. A combination of the microphone signal and the excitation signal results in a coherent summation of both waves, which reinforces the light output when the two signals are in phase, whereas the brightness is very low when they are in opposite phase. A picture of the device is displayed in Figure 2 along with a couple of loudness patterns.

Another alternative measurement technique emerged during the same period: the use of a laser to produce holographic visualizations of a vibrating body. Holographic interferometry was discovered independently by a number of researchers in 1965 [19]. It very soon became popular since it was the first nonintrusive solution for characterizing the dynamic behavior of vibrating structures with high spatial resolution. As an example, Figure 3 presents one of the earliest case studies investigated using holographic interferometry, on the operational deflection shapes of a violin front plate. The Laser Doppler Vibrometer (LDV) and the Scanning Laser Doppler Vibrometer (SLDV) are the most popular current devices used to undertake such measurements (whereas most “scanning” methods or apparatus acquire data along a continuous path, the SLDV differs from this principle, since data is measured at a set of discrete positions, forming a step-by-step approach). Nonetheless, despite the potential of this technology, the high price of the measurement systems and the setup complexity of current commercial solutions limit the use of LDV for most common applications.

During the 1970s, multichannel microphone arrays were first applied to sound source localization, although the idea of developing such a device had first been proposed during World War I [20]. The microphone antenna, or so-called “acoustic telescope”, was invented by Billingsley in 1974 [21]. Since then, the use of multichannel products has grown substantially with the improvement of data acquisition systems, computing hardware, and localization algorithms. Since 1999, a group of apparatus catalogued as “acoustic cameras” has been presented as a solution for detecting and localizing sound sources in a sound field, able to display, nowadays in real time, where the main excitation sound sources are located [22, 23]. The application of analytical models to the data acquired with a multichannel system classifies this solution as an indirect method for sound visualization, which assumes that the implemented analytical algorithms perfectly fit the nature of the problem assessed.

Theoretical and numerical means to visualize sound fields have been attempted via near-field acoustic holography (NAH), adapted from optics [24–26] to acoustics primarily during the 1980s [27–30]. Maynard et al. justified the potential of the proposed technique stating that “the great utility of holography arises from its high information content, since data recorded on a two-dimensional surface (the hologram) may be used to reconstruct an entire three-dimensional wave field” [29]. The limitation of an initial planar geometry was removed in 1989, when Veronesi and Maynard introduced the inverse boundary element method (IBEM) [31]. There is a large volume of published studies proposing techniques and methods to improve the initial performance of NAH, but there is a common requirement for most approaches: the use of a large multichannel system to acquire data in the vicinity of the sound sources. This strong practical limitation constrains the range of applications where NAH is able to provide meaningful and accurate results.

As has been shown, there is notable interest in developing tools to assess the behavior of sound in both qualitative and quantitative terms. In acoustics, it is often necessary to describe not only the location and nature of the sound sources but also the behavior of the sound field that they generate. Consequently, the introduction of a measurement technique which permits the acquisition of such information in an efficient way, without raising the cost or complexity of the measurement setup, has high potential for a wide range of applications.

3. Scan and Paint

The proposed alternative sound visualization technique is called “Scan and Paint” [32, 33]. The acoustic signals of the sound field are acquired by manually moving a single transducer across a measurement plane whilst filming the event with a camera. In the postprocessing stage, the sensor position is extracted by applying automatic color detection to each frame of the video. It is then possible to split the long recording into multiple segments by applying a spatial grid algorithm [34]. Each fragment of the signal is linked to a grid cell, depending upon the position of the probe during the measurement. Spectral variations across the space are computed by analyzing the signal segments of each grid section. A sketch of the spatial discretization process is illustrated in Figure 4.
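As an illustration of the color detection step, the following minimal sketch locates a colored marker attached to the probe in every video frame using OpenCV. The library calls are standard, but the marker color, the HSV thresholds, and the function names are illustrative assumptions rather than part of the published method.

```python
# Minimal sketch of the colour-detection step, assuming OpenCV (cv2) and a probe
# fitted with a red marker. HSV thresholds, file handling and names are illustrative.
import cv2
import numpy as np

def track_probe(video_path, hsv_low=(0, 120, 120), hsv_high=(10, 255, 255)):
    """Return one (time_s, x_px, y_px) tuple per frame in which the marker is visible."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    positions, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array(hsv_low), np.array(hsv_high))
        m = cv2.moments(mask)
        if m["m00"] > 0:  # marker found: centroid of the thresholded pixels
            positions.append((frame_idx / fps, m["m10"] / m["m00"], m["m01"] / m["m00"]))
        frame_idx += 1
    cap.release()
    return positions
```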

Only the 2D location of the probe relative to the background image is computed at this point; it is therefore necessary to relate the 2D pixel coordinates to 3D coordinates, that is, to establish the relationship between pixels and meters in the measured plane. The camera should be placed perpendicular to the measurement area so as to avoid errors caused by the camera projection.
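A simple way to establish the pixel-to-meter relationship, assuming a perpendicular camera and a reference object of known physical length visible in the image, could look as follows (the calibration strategy and all names are assumptions for illustration):

```python
# Hedged sketch: converting tracked pixel coordinates to metres in the scanned plane,
# assuming a perpendicular camera and a reference length visible in the image.
def pixels_to_metres(track_px, ref_length_px, ref_length_m, origin_px=(0.0, 0.0)):
    scale = ref_length_m / ref_length_px                     # metres per pixel
    return [(t, (x - origin_px[0]) * scale, (y - origin_px[1]) * scale)
            for t, x, y in track_px]
```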

Additionally, a fixed reference pressure microphone can be used to preserve the relative phase information across the sound field at the different grid positions. This common reference has to be placed at a position which has a high degree of linearity with the measurement area covered, hence yielding high coherence values.
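Whether a candidate reference position is suitable can be checked, for instance, with the magnitude-squared coherence between the reference and probe signals; the sketch below uses SciPy's `signal.coherence`, with the sample rate and segment length as assumed parameters.

```python
# Hedged sketch: checking a candidate reference microphone position via the
# magnitude-squared coherence with the probe signal (values near 1 are desirable).
from scipy import signal

def reference_coherence(p_ref, p_probe, fs=48000, nperseg=4096):
    f, coh = signal.coherence(p_ref, p_probe, fs=fs, nperseg=nperseg)
    return f, coh
```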

4. Mathematical Formulation

4.1. Spatial Grid Method

We begin by defining a continuous two-dimensional spatial domain $\Omega$ with an additional time dimension associated to it. If we discretize the spatial domain and define the limits of time between 0 and a certain time length $T$, then

$$(x, y, t) \in \Omega \times [0, T], \tag{1}$$

where $\Omega$ is the union of $I$ by $J$ nonoverlapping subspaces; that is,

$$\Omega = \bigcup_{i=1}^{I} \bigcup_{j=1}^{J} \Omega_{i,j}, \tag{2}$$

where $\bigcup$ denotes the union operator of the elements $\Omega_{1,1}$ to $\Omega_{I,J}$. The area covered by each block can be delimited using a regular spatial grid of cell size $\Delta x$ by $\Delta y$. We can then describe the center of each grid cell $(x_i, y_j)$ depending on the starting point of the discretization grid $(x_0, y_0)$, the cell size, and their row and column index $(i, j)$ with

$$x_i = x_0 + \left(i - \tfrac{1}{2}\right)\Delta x, \qquad y_j = y_0 + \left(j - \tfrac{1}{2}\right)\Delta y, \tag{3}$$

where $I$ and $J$ are the total number of rows and columns of the spatial grid. Therefore, each cell will be defined spatially such as

$$\Omega_{i,j} = \left\{ (x, y) : |x - x_i| \le \tfrac{\Delta x}{2},\ |y - y_j| \le \tfrac{\Delta y}{2} \right\}. \tag{4}$$

Figure 5 illustrates the studied scenario.
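Consistent with the grid definition above, a small sketch of how cell centers and the cell index of a measured position could be computed is given below (a 1-based index convention and the variable names are assumptions):

```python
# Sketch consistent with (3)-(4): cell centres x_i = x0 + (i - 1/2) dx and the
# 1-based cell index of a measured position. Names and values are examples.
import numpy as np

def cell_centres(x0, y0, dx, dy, I, J):
    xi = x0 + (np.arange(1, I + 1) - 0.5) * dx
    yj = y0 + (np.arange(1, J + 1) - 0.5) * dy
    return xi, yj

def cell_index(x, y, x0, y0, dx, dy):
    i = int(np.floor((x - x0) / dx)) + 1
    j = int(np.floor((y - y0) / dy)) + 1
    return i, j
```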

Once the spatial domain of interest has been discretized, we can then establish a link between measurement data acquired with a moving transducer and the grid defined previously. Denoting the path followed by the sensor by $\mathbf{r}(t)$, it is possible to fragment the continuous route into several segments, using the grid structure contained in $\Omega$ as a tool to divide the original signal and associate each segment with a spatial position. As a result, each grid cell $\Omega_{i,j}$ will have an associated list of segments of uneven length, such as

$$\mathcal{T}_{i,j} = \left\{ T_{i,j,1},\ T_{i,j,2},\ \ldots,\ T_{i,j,S_{i,j}} \right\}, \tag{5}$$

where $T_{i,j,s}$ is a time interval which links a section of the original time signal to a certain spatial area of the grid and $S_{i,j}$ is the number of sweeps within the cell.
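The fragmentation of the tracked route into per-cell time intervals $T_{i,j,s}$ can be sketched as follows; the input is assumed to be a list of time-stamped positions in meters, and a new sweep is started whenever the probe enters a different cell (names and data layout are illustrative):

```python
# Hedged sketch: fragmenting the tracked route into per-cell intervals T_{i,j,s}.
# 'track' is a list of (t, x, y) in metres; a new sweep starts when the probe
# enters a different cell. Indexing follows the 1-based convention used above.
def segment_path(track, x0, y0, dx, dy):
    intervals = {}                                   # {(i, j): [(t_in, t_out), ...]}
    current, t_in, t_prev = None, None, None
    for t, x, y in track:
        cell = (int((x - x0) // dx) + 1, int((y - y0) // dy) + 1)
        if cell != current:
            if current is not None:
                intervals.setdefault(current, []).append((t_in, t_prev))
            current, t_in = cell, t
        t_prev = t
    if current is not None:                          # close the last sweep
        intervals.setdefault(current, []).append((t_in, t_prev))
    return intervals
```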

Each of these time intervals within a cell is delimited by certain boundaries $t^{\mathrm{in}}_{i,j,s}$ and $t^{\mathrm{out}}_{i,j,s}$; that is,

$$T_{i,j,s} = \left[\, t^{\mathrm{in}}_{i,j,s},\ t^{\mathrm{out}}_{i,j,s} \,\right]. \tag{6}$$

By (6), one grid cell can have multiple associated sections of the original signal if the sensor passes across the same area several times.

The use of a sound probe which combines a sound pressure microphone along with a particle velocity sensor (a P-U probe) enables the gathering of information about both acoustic quantities across the space. Consequently, if a P-U probe is moved along a measurement plane while its instantaneous position is recorded, applying the proposed grid method will associate the measured time data with the different grid cells. As a result, we can define an array of sound pressure signals and another of particle velocity such as

$$p_{i,j,s}(t) = p(t), \qquad u_{i,j,s}(t) = u(t), \qquad t \in T_{i,j,s}, \tag{7}$$

where $p(t)$ and $u(t)$ are the recorded sound pressure and particle velocity signals, respectively. The route followed by the probe will determine which grid cells have data assigned to them and which, if any, will be empty. Averaging will therefore be needed when multiple time signals are associated with a single cell. Since data is acquired asynchronously at the different cells, the averaging process must be applied in the frequency domain; thus

$$\bar{P}_{i,j}(\omega) = \frac{1}{S_{i,j}} \sum_{s=1}^{S_{i,j}} P_{i,j,s}(\omega), \qquad \bar{U}_{i,j}(\omega) = \frac{1}{S_{i,j}} \sum_{s=1}^{S_{i,j}} U_{i,j,s}(\omega), \tag{8}$$

where each $P_{i,j,s}(\omega)$ and $U_{i,j,s}(\omega)$ denote the power spectral density estimation of a given segment of the sound pressure and particle velocity signals, respectively; that is,

$$P_{i,j,s}(\omega) = \lim_{T \to \infty} \frac{1}{T\,\Delta\omega}\, E\!\left[ \int_{0}^{T} \left( p^{\,\omega,\Delta\omega}_{i,j,s}(t) \right)^{2} \mathrm{d}t \right], \tag{9}$$

where $p^{\,\omega,\Delta\omega}_{i,j,s}(t)$ is a portion of $p_{i,j,s}(t)$ passed by an ideal rectangular band-pass filter with a bandwidth of $\Delta\omega$ centered at frequency $\omega$ [35].
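A minimal sketch of the per-cell spectral averaging is shown below. A Welch estimate (SciPy's `signal.welch`) stands in for the ideal band-pass definition of (9); the sample rate, segment length, and the interval format produced by the previous sketch are assumptions:

```python
# Hedged sketch of (8): cut each interval out of the recorded signal, estimate a
# Welch PSD per sweep, and average the sweeps belonging to one cell.
import numpy as np
from scipy import signal

def cell_psd(p, fs, cell_intervals, nperseg=2048):
    f, psds = None, []
    for t_in, t_out in cell_intervals:
        seg = p[int(t_in * fs):int(t_out * fs)]
        if len(seg) >= nperseg:                      # skip sweeps that are too short
            f, Pxx = signal.welch(seg, fs=fs, nperseg=nperseg)
            psds.append(Pxx)
    if not psds:
        return None, None                            # cell never visited long enough
    return f, np.mean(psds, axis=0)                  # average over the S_{i,j} sweeps
```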

4.2. The Doppler Effect

A nonlinear effect, the Doppler effect, is introduced in the presence of relative motion between the emitting sound source and the receiving transducer. The acquired signal presents a shift in frequency with respect to the original, directly dependent upon the wavelength. A sketch of the problem can be seen in Figure 6, where $\mathbf{v}$ denotes the velocity of the probe and $c$ is the sound propagation speed.

The frequency shift recorded by the moving transducer can be defined as

$$\Delta f = f\,\frac{|\mathbf{v}|\cos\theta}{c}, \tag{10}$$

where $f$ is the emitted frequency and $\theta$ is the angle between the scanning velocity and the direction of sound propagation.

The maximum frequency shift occurs when the projection of the scanning velocity onto the propagation direction is maximized, hence when both vectors are parallel. In order to provide a quantitative assessment of the impact of the Doppler effect, several scanning speeds have been tested. Figure 7 shows the frequency shift introduced into the recorded signal for different angles of incidence and frequencies. As is shown, nonlinear effects are insignificant (the average frequency shift is less than 1 Hz) when evaluating the spectrum up to 10 kHz, provided the scanning speed is lower than 5 centimeters per second (0.05 m/s).
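Under the parallel-vector worst case of (10) with $\cos\theta = 1$ and an assumed sound speed of 343 m/s, the shift can be checked numerically; at 10 kHz and 0.05 m/s it is roughly 1.5 Hz, consistent with the sub-1 Hz average quoted above:

```python
# Worst-case Doppler shift from (10) with cos(theta) = 1; c = 343 m/s is assumed.
def max_doppler_shift(f_hz, scan_speed_ms, c=343.0):
    return f_hz * scan_speed_ms / c

print(max_doppler_shift(10_000, 0.05))   # ~1.5 Hz at 10 kHz and 0.05 m/s
```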

5. Practical Applications

The use of scanning methods incorporating pressure and particle velocity sensors together within a single probe allows for a complete study of a sound field. If the excitation is also known, it is then possible to investigate several acoustic properties of a material such as impedance, reflection, or absorption. In this section, a brief overview of several practical applications and advanced implementations of the scanning measurement procedure is given.

5.1. Near-Field Source Localization

One of the main challenges in acoustics is the localization of noise sources within the near field. When the conditions are fairly simple, such as monopole sources in free field conditions, there are many different tools capable of providing an accurate answer. Nevertheless, for complex scenarios, and especially when the wavelength is large or the measurement room is reverberant, the price and complexity of conventional methods increase, whilst accuracy is not necessarily ensured. In this situation, particle velocity mapping has been shown to be very useful, since the signal to noise ratio in reverberant conditions is, by definition, higher than for pressure mapping [37].

Figure 8 shows an experimental example of leakage detection in a construction element using a broadband mapping of sound pressure, particle velocity, and acoustic intensity. It can be seen that, while the pressure roughly indicates where the noise is coming from, particle velocity mapping reveals even the weak points along the door profile, demonstrating the high spatial resolution of the measurement method. Sound intensity mapping produces an intermediate result, which fulfills the theoretical expectations, since intensity is defined as the product of pressure and velocity.

In addition, Figure 9 shows an example of the very high spatial resolution achievable with Scan and Paint when using a miniature version of a P-U probe, called a P-U match. In this case, it is possible to distinguish between 0.6 mm holes with 4 mm spacing between them.

5.2. Operational Deflection Shapes Visualization

Understanding the dynamic behavior of a component, machine, or structure is a key factor for controlling noise, vibration, fatigue, or wear problems. Conventionally, analytical modal analysis is used to characterize resonant vibration in machinery and structures from a theoretical point of view. However, it is often required to study a structure under a number of different operating conditions. For particular scenarios, it has been proven that direct measurements are faster, simpler, and more accurate than analytical predictions [38]. Experimental modal analysis can be performed by measuring operational deflection shapes (ODSs) and then interpreting or postprocessing them in a specific manner to define mode shapes [39, 40]. Figure 10 shows the ODSs at the first four resonant frequencies which clearly coincide with the horizontal natural modes of the vibrating panel.

Particle velocity plots are presented with relative phase information referred to a fixed accelerometer attached to the surface (ODS FRF). These results show very clear ODSs of the structure, supporting the potential of using scanning P-U intensity probes for vibroacoustic applications.

5.3. Nonstationary Directivity Patterns

Characterizing directivity patterns of musical instruments implicitly requires measuring nonstationary sound fields. A priori, this implies using multichannel methods to assess any temporal change at all positions of interest. However, it is possible to incorporate a static reference sensor close to the musical instrument to track any variations in the excitation during the measurement, allowing scanning techniques to be used. The method is based on taking transfer functions between the scanning transducer and the fixed reference sensor. The number of valid measurement positions is then limited, for each frequency independently, by the dynamic range acquired. Experimental results, such as Figure 11, demonstrate that the violin directivity patterns can be captured by using scanning measurement methods [11].
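The transfer-function step can be sketched as an H1 estimate between the fixed reference and a probe signal segment, built from SciPy cross- and auto-spectra; the sample rate and segment length are assumed values, and both signals are taken over the same time interval:

```python
# Hedged sketch: H1 transfer-function estimate between the fixed reference and a
# scanning probe segment covering the same time interval.
from scipy import signal

def h1_transfer_function(ref, probe_segment, fs=48000, nperseg=4096):
    f, Sxy = signal.csd(ref, probe_segment, fs=fs, nperseg=nperseg)
    _, Sxx = signal.welch(ref, fs=fs, nperseg=nperseg)
    return f, Sxy / Sxx          # complex H1, keeping magnitude and relative phase
```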

5.4. Scan and Paint Transfer Path Analysis

Direct sound field visualization is not always the best way to assess complex noise problems. Maps of sound pressure, particle velocity, or sound intensity in the vicinity of a cabin panel might not be directly related to the pressure contribution at a certain position. Transfer path analysis (TPA) has been employed for many years to address this type of problem.

The most common measurement procedures require the use of large microphone arrays, implying high cost, long measurement times, and frequency limitations. The Scan and Paint measurement method can be adapted to apply TPA techniques to scanning measurement data, provided the sound field is stationary. A two-step measurement approach is implemented: first, the cabin interior is scanned under operational conditions, and then the process is repeated while exciting the sound field with a monopole source. Figure 12 shows the measurement setup in a helicopter cabin together with a pressure contribution map at low frequency. In this case, narrowband mapping reveals the location of dominant noise sources even at low frequencies, extending the conventional frequency limits of pressure-based techniques.
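Under simple airborne-TPA assumptions, the contribution synthesis behind such a two-step procedure can be sketched as follows; here `H` denotes the transfer functions obtained with the monopole excitation, `u_op` the operational panel velocities, and `patch_area` the areas of the scanned patches, all of which are illustrative names rather than the exact quantities of the original study:

```python
# Hedged sketch of an airborne-TPA style contribution synthesis. H and u_op are
# complex arrays of shape (n_patches, n_freq); patch_area has shape (n_patches,).
# All names and the summation model are illustrative assumptions.
import numpy as np

def pressure_contributions(H, u_op, patch_area):
    q_op = u_op * patch_area[:, None]     # operational volume velocity per patch
    contrib = H * q_op                    # per-patch pressure contribution at receiver
    return contrib, contrib.sum(axis=0)   # individual contributions and their sum
```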

5.5. Impedance and Absorption Mappings

The characterization of acoustic properties of different materials via in situ methods is a topic of interest since it provides a nondestructive approach which can be performed under realistic mounting conditions. There are many references in the literature addressing the issue of impedance deduction [41–44], using techniques that calculate the surface impedance from measurements of pressure and/or particle velocity above a surface. Scanning methods can also be applied together with impedance and absorption estimation algorithms so as to extract the local variation of these properties across the surface of the material [45]. The generated incoming sound field is known if a sound source that moves along with the probe is used. This allows us to distinguish between the acoustic energy which impinges on the assessed material and the energy that is reflected back. The acquired acoustic signals can then be introduced into an analytical model to compute the local acoustic properties of the material. An experimental example of absorption measurements on an airplane seat is presented in Figure 13.
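As a minimal illustration of the arithmetic involved, a plane-wave, normal-incidence estimate of the absorption coefficient from the measured pressure and particle velocity at the surface could read as follows; practical implementations rely on the more elaborate models of [41–45], so this is only a sketch under strong simplifying assumptions:

```python
# Hedged sketch: plane-wave, normal-incidence absorption from surface pressure and
# normal particle velocity spectra. rho and c are assumed ambient values.
import numpy as np

def absorption_normal_incidence(P_surf, U_surf, rho=1.21, c=343.0):
    Z = P_surf / U_surf                   # specific acoustic impedance (complex)
    R = (Z - rho * c) / (Z + rho * c)     # plane-wave reflection coefficient
    return 1.0 - np.abs(R) ** 2           # absorption coefficient per frequency
```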

5.6. Intensity Vector Field Mapping

Energy distribution images of acoustic fields, connected with the graphical presentation of the energy flow (derived from direct measurements), are not very common in acoustic metrology [46]. In traditional acoustic measurements, the analysis of acoustic fields concerns only the distribution of pressure levels (a scalar variable).

The possibilities opened up by sound intensity measurements have greatly changed the approach to examining many acoustic phenomena. This measurement technique has been applied to various studies in theoretical and applied acoustics, greatly simplifying the research methods.

Visualization of sound intensity may involve depicting various acoustic phenomena, depending on the area of interest. In sound engineering, it may be the acoustic wave power density distribution in space, wave dissipation, the evaluation of its motion within the medium, spatial diffusion, and frequency irregularities of sound velocity. For technical acoustics, the directional characteristics of industrial sources and the quantities connected with reflection, scattering, and diffraction on obstacles could prove interesting; these are used to draw noise level maps and to evaluate the effectiveness of antinoise screens in industrial premises.

The scanning methodology has been adapted to use a three-dimensional sound probe which incorporates three orthogonal particle velocity sensors along with a pressure microphone. Direct measurements of the intensity vector field of the evaluated scenario can be performed if the orientation of the probe is maintained during the acquisition process. Two experimental examples are presented: the front radiation of a sports car (Figure 14) and the high-frequency radiation pattern of a loudspeaker in a conventional room (Figure 15). The results clearly show how the acoustic energy flow propagates away from the excitation points. Both color maps overlaid behind the vectors were computed by calculating the total acoustic intensity at each grid cell and interpolating between cells.
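A sketch of how the active intensity vector per frequency can be obtained from the pressure channel and the three orthogonal particle velocity channels is given below, using the real part of the cross-spectra; scaling and conjugation conventions vary between references, and the sample rate and segment length are assumptions:

```python
# Hedged sketch: active intensity vector per frequency from the cross-spectra
# between pressure and the three orthogonal velocity channels, I_k(f) = Re{S_pu_k(f)}.
import numpy as np
from scipy import signal

def intensity_vector(p, u_xyz, fs=48000, nperseg=4096):
    """p: (n,) pressure; u_xyz: (3, n) particle velocity -> f, I of shape (3, n_f)."""
    I = []
    for u in u_xyz:
        f, Spu = signal.csd(p, u, fs=fs, nperseg=nperseg)
        I.append(np.real(Spu))
    return f, np.array(I)
```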

5.7. Virtual Phased Arrays

Sound localization problems are usually tackled by acquiring data from phased microphone arrays and then applying acoustic holography or beamforming algorithms. However, the number of sensors required to achieve reliable results is often prohibitive, particularly if the frequency range of interest is wide. Previous studies [36] have shown that the number of sensors required can be reduced dramatically provided the sound field is time stationary. Several frequency-domain beamforming techniques can be adapted to use only the relative phase between a fixed and a moving transducer, acquired with Scan and Paint. Therefore, the results traditionally obtained using large arrays can be emulated by applying beamforming algorithms to data acquired with just two sensors. Figure 16 displays an experimental example of outdoor measurements where the dominant noise source (a burner pipe) is clearly localized.
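A minimal sketch of a conventional frequency-domain (delay-and-sum) beamformer applied to such virtual-array data is given below; `H` holds the complex transfer functions from the fixed reference to the probe at each virtual position, and the geometry, sign convention, and names are illustrative assumptions:

```python
# Hedged sketch: conventional (delay-and-sum) beamforming on virtual-array data.
# H: (M,) complex transfer functions at frequency f; mic_pos: (M, 3) virtual probe
# positions; grid_pts: (G, 3) focus points. An exp(-jkr)/r propagation model is assumed.
import numpy as np

def delay_and_sum_map(H, mic_pos, grid_pts, f, c=343.0):
    k = 2 * np.pi * f / c
    d = np.linalg.norm(grid_pts[:, None, :] - mic_pos[None, :, :], axis=2)  # (G, M)
    steering = np.exp(1j * k * d) / d     # back-propagate each virtual sensor signal
    output = steering @ H / H.size        # focused beamformer output per grid point
    return np.abs(output) ** 2
```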

6. Conclusions

The historical and theoretical foundations of the novel scanning measurement technique Scan and Paint have been introduced. It has been shown that the technique is practically unaffected by the Doppler effect as long as the path covered in one second is shorter than 5 centimeters. The high flexibility, high resolution, and low cost of the proposed measurement methodology, along with its low time requirements, define Scan and Paint as an efficient sound visualization technique, especially for stationary sound fields. A wide range of specialized applications has been reviewed, showing that the measurement technique is suitable not only for near-field source localization but also for vibroacoustic evaluation, material characterization, source radiation assessment, intensity vector field mapping, and far-field localization.