About sound. How to choose a sound card for your computer, and why do you need one at all? The frequency spectrum of sound and frequency response

February 18, 2016

The world of home entertainment is quite diverse: watching a movie on a good home theater system, exciting gameplay, or listening to music. As a rule, everyone finds something of their own in this area, or combines everything at once. But whatever a person's goals in organizing their leisure time, and however far they take them, all of these pursuits are firmly connected by one simple and familiar word: "sound". Indeed, in all of these cases we are led along by the soundtrack. But the question is not so simple or trivial, especially when the goal is to achieve high-quality sound in a room or any other conditions. Reaching it does not always require buying expensive hi-fi or hi-end components (although they certainly help); a good grasp of the physical theory is often sufficient, and it can eliminate most of the problems that arise for anyone who sets out to obtain high-quality sound reproduction.

Below, the theory of sound and acoustics is considered from the point of view of physics. I will try to make it as accessible as possible to anyone who may be far from physical laws and formulas but nevertheless dreams of building a perfect speaker system. I do not claim that achieving good results in this area at home (or in a car, for example) requires knowing these theories thoroughly, but understanding the basics will help you avoid many foolish and absurd mistakes, and will allow you to get the maximum sound quality from a system of any level.

General sound theory and musical terminology

What is sound? Sound is the sensation perceived by the organ of hearing, the ear, when the eardrum is excited by a sound wave (the phenomenon itself exists without the participation of an "ear" in the process, but this framing makes it easier to understand). The ear acts as a "receiver" of sound waves of different frequencies.
A sound wave is essentially a sequential series of compressions and rarefactions of the medium (most often air, under normal conditions) at various frequencies. Sound waves are oscillatory in nature, caused and produced by the vibration of some body. The appearance and propagation of a classical sound wave is possible in three elastic media: gaseous, liquid, and solid. When a sound wave arises in one of these media, some changes inevitably occur in the medium itself: for example, a change in air density or pressure, movement of particles of the air mass, and so on.

Since a sound wave is oscillatory in nature, it has a frequency. Frequency is measured in hertz (in honor of the German physicist Heinrich Rudolf Hertz) and denotes the number of oscillations in one second. For example, a frequency of 20 Hz means 20 oscillation cycles per second. The subjective pitch of a sound also depends on its frequency: the more vibrations per second, the "higher" the sound seems. A sound wave has one more important characteristic: the wavelength. The wavelength is the distance the wave travels during one period of oscillation, that is, the speed of sound divided by the frequency. For example, the wavelength of the lowest audible sound for a human, at 20 Hz, is about 16.5 meters, while the wavelength of the highest, at 20,000 Hz, is about 1.7 centimeters.
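The frequency-wavelength relationship above can be sketched in a few lines of Python (the helper name `wavelength` is my own; the ~330 m/s figure is the one implied by the article's 16.5 m example):

```python
# Wavelength of a sound wave: lambda = v / f,
# where v is the speed of sound in the medium and f is the frequency in Hz.
def wavelength(frequency_hz, speed_m_s=343.0):
    """Return the wavelength in metres for a given frequency."""
    return speed_m_s / frequency_hz

# The audible extremes (using ~330 m/s, which matches the article's figures):
print(round(wavelength(20, 330.0), 1))      # lowest audible tone -> 16.5 m
print(round(wavelength(20_000, 330.0), 4))  # highest audible tone -> 0.0165 m
```

With the more common 343 m/s value for air at 20 °C the figures shift slightly (17.2 m and 1.7 cm), which is why such numbers are always quoted approximately.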

The human ear is designed to perceive waves only in a limited range, approximately 20 Hz - 20,000 Hz (depending on the individual, some hear a bit more, some less). This does not mean that sounds below or above these frequencies do not exist; they are simply not perceived by the human ear, falling outside the audible range. Sound above the audible range is called ultrasound; sound below it is called infrasound. Some animals can perceive ultrasound and infrasound, and some even use these ranges for orientation in space (bats, dolphins). If sound passes through a medium that is not in direct contact with the human hearing organ, it may not be heard at all, or may arrive greatly weakened.

In musical terminology there are such important designations as octave, tone, and overtone. An octave is an interval in which the frequency ratio between sounds is 1 to 2. An octave is usually very easy to hear, and sounds within this interval can be very similar to one another. An octave can also be described as a sound that vibrates twice as often as another sound in the same time period. For example, 800 Hz is nothing more than the higher octave of 400 Hz, and 400 Hz in turn is the next octave up from 200 Hz. An octave, in turn, consists of tones and overtones. Oscillations in a harmonic sound wave of a single frequency are perceived by the human ear as a musical tone. High-frequency vibrations are interpreted as high-pitched sounds, low-frequency vibrations as low-pitched sounds. The human ear can clearly distinguish sounds that differ by one tone (in the range up to about 4000 Hz). Despite this, music uses an extremely small number of tones. This is explained by considerations of harmonic consonance: everything is built on the principle of octaves.
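The octave doubling described above is easy to verify numerically; a minimal sketch (the function name `octaves` is my own):

```python
# Each octave doubles the frequency: f_n = f0 * 2**n.
def octaves(base_hz, count):
    """Return the base frequency followed by its next `count` octaves."""
    return [base_hz * 2**n for n in range(count + 1)]

# The article's example: 200 Hz -> 400 Hz -> 800 Hz.
print(octaves(200, 2))  # [200, 400, 800]
```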

Consider the theory of musical tones using the example of a string stretched in a certain way. Depending on the tension force, such a string will be "tuned" to one specific frequency. When this string is acted on with a certain force, causing it to vibrate, one specific tone is steadily produced, and we hear the desired tuning frequency. This sound is called the fundamental tone. The frequency of the note "A" of the first octave, 440 Hz, is officially accepted as the reference tone in music. However, most musical instruments never reproduce pure fundamental tones alone; they are inevitably accompanied by additional tones called overtones. Here it is appropriate to recall an important definition from musical acoustics, the concept of timbre. Timbre is the feature of musical sounds that gives musical instruments and voices their unique, recognizable character, even when comparing sounds of the same pitch and volume. The timbre of each musical instrument depends on how the sound energy is distributed among the overtones at the moment the sound appears.

Overtones form a specific coloration of the fundamental tone, by which we can easily identify and recognize a specific instrument and clearly distinguish its sound from that of another instrument. Overtones are of two kinds: harmonic and inharmonic. Harmonic overtones are by definition whole multiples of the fundamental frequency. If, on the contrary, the overtones are not multiples and deviate noticeably from those values, they are called inharmonic. In music, non-multiple overtones are practically never used, so the term is reduced to the concept of "overtone", meaning the harmonic kind. In some instruments, such as the piano, the fundamental tone does not even have time to form fully: over a short period the sound energy of the overtones rises and then decays just as rapidly. Many instruments create a so-called "transition tone" effect, where the energy of certain overtones is maximal at a certain moment, usually at the very beginning, and then changes abruptly, shifting to other overtones. The frequency range of each instrument can be considered separately and is usually limited by the fundamental frequencies that the particular instrument can reproduce.

In the theory of sound there is also the concept of noise. Noise is any sound created by a combination of sources that are not consistent with one another. Everyone is familiar with the noise of tree foliage swaying in the wind, and so on.

What determines loudness? Obviously, it directly depends on the amount of energy carried by the sound wave. To quantify loudness, there is the concept of sound intensity. Sound intensity is defined as the flow of energy passing through some area of space (for example, a square centimeter) per unit of time (for example, per second). In normal conversation, the intensity is approximately 10⁻⁹ to 10⁻¹⁰ W/cm². The human ear can perceive sounds over a fairly wide range of sensitivity, but its frequency response is not uniform across the audio spectrum. Best perceived of all is the frequency range of 1000 Hz - 4000 Hz, which covers most of human speech.

Since sounds vary so greatly in intensity, it is more convenient to treat intensity as a logarithmic quantity and measure it in decibels (after the Scottish-born scientist Alexander Graham Bell). The lower threshold of human hearing sensitivity is 0 dB; the upper is 120 dB, also called the "pain threshold". The upper limit of sensitivity, too, is not perceived uniformly by the human ear but depends on the specific frequency. Low-frequency sounds must be much more intense than high-frequency ones to reach the pain threshold. For example, at a low frequency of 31.5 Hz the pain threshold occurs at a sound level of 135 dB, while at 2000 Hz pain already appears at 112 dB. There is also the concept of sound pressure, which in effect extends the usual explanation of sound wave propagation in air. Sound pressure is the alternating excess pressure arising in an elastic medium as a sound wave passes through it.
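The decibel scale mentioned above is just a base-10 logarithm of the intensity ratio; a minimal sketch using the standard hearing-threshold reference of 10⁻¹² W/m² (the function name is my own):

```python
import math

I0 = 1e-12  # reference intensity (threshold of hearing), W/m^2

def intensity_to_db(intensity_w_m2):
    """Sound intensity level in decibels relative to the hearing threshold."""
    return 10 * math.log10(intensity_w_m2 / I0)

print(intensity_to_db(1e-12))  # 0 dB  -- threshold of hearing
print(intensity_to_db(1.0))    # 120 dB -- the article's "pain threshold"
```

Each factor-of-10 change in intensity adds or removes exactly 10 dB, which is why such a huge physical range fits into the convenient 0-120 dB scale.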

The wave nature of sound

To better understand how a sound wave is generated, imagine a classic loudspeaker in a tube filled with air. If the speaker makes a sharp forward movement, the air in the immediate vicinity of the diaphragm is momentarily compressed. The air then expands, pushing the region of compressed air along the pipe.
This wave movement subsequently becomes sound when it reaches the hearing organ and "excites" the eardrum. When a sound wave arises in a gas, excess pressure and excess density are created, and the particles oscillate about their rest positions. The important thing to remember about sound waves is that matter does not travel along with the wave; only a temporary disturbance of the air masses does.

Imagine a piston suspended in free space on a spring, performing repetitive "back and forth" movements; such oscillations are called harmonic or sinusoidal (if we plot the wave as a graph, we get a pure sinusoid with repeating rises and falls). If we imagine a speaker in a pipe (as in the example above) performing harmonic oscillations, then when the speaker moves "forward" we get the familiar effect of air compression, and when it moves "back" the opposite effect of rarefaction. A wave of alternating compression and rarefaction then propagates through the pipe. The distance along the pipe between adjacent maxima or minima (points of identical phase) is called the wavelength. If the particles oscillate parallel to the direction of wave propagation, the wave is called longitudinal; if they oscillate perpendicular to the direction of propagation, it is called transverse. Sound waves in gases and liquids are usually longitudinal, while in solids waves of both types can arise. Transverse waves in solids arise from resistance to changes of shape. The main difference between the two types is that a transverse wave has the property of polarization (the oscillations occur in a particular plane), while a longitudinal wave does not.

Sound speed

The speed of sound directly depends on the characteristics of the medium in which it propagates. It is determined by two properties of the medium: the elasticity and the density of the material. The speed of sound in solids accordingly depends on the type of material and its properties. In gaseous media, the speed depends on only one type of deformation of the medium: compression-rarefaction. The pressure change in a sound wave occurs without heat exchange with the surrounding particles and is called adiabatic.
The speed of sound in a gas depends mainly on temperature: it increases with increasing temperature and decreases with decreasing temperature. The speed of sound in a gaseous medium also depends on the size and mass of the gas molecules themselves: the smaller the mass and size of the particles, the greater the "conductivity" of the wave, and the higher the speed.

In liquid and solid media, the principle of propagation is similar to that in air: compression and rarefaction. But in these media, besides the same dependence on temperature, the density and the composition/structure of the medium matter a great deal. The lower the density of a substance (for a given elasticity), the higher the speed of sound, and vice versa. The dependence on the composition of the medium is more complicated and is determined in each specific case, taking into account the arrangement and interaction of the molecules/atoms.

Speed of sound in air at 20 °C: 343 m/s
Speed of sound in distilled water at 20 °C: 1481 m/s
Speed of sound in steel at 20 °C: 5000 m/s
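The temperature dependence for air described above is often approximated as v ≈ 331.3·√(1 + t/273.15) m/s, where t is the temperature in °C; a small sketch of that approximation (the function name is my own):

```python
import math

def speed_in_air(temp_celsius):
    """Approximate speed of sound in dry air, m/s (common textbook formula)."""
    return 331.3 * math.sqrt(1 + temp_celsius / 273.15)

print(round(speed_in_air(20), 1))  # ~343 m/s, matching the table above
print(round(speed_in_air(0), 1))   # ~331 m/s at the freezing point
```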

Standing waves and interference

When a speaker creates sound waves in a confined space, the waves inevitably bounce off the boundaries. As a result, an interference effect most often arises: two or more sound waves are superimposed on each other. Special cases of interference are the formation of (1) beats or (2) standing waves. Beats occur when waves of close frequencies and amplitudes are added. The beat pattern: when two waves of similar frequency are superimposed, at some moments the amplitude peaks coincide ("in phase"), and at other moments the peaks and troughs cancel each other ("out of phase"). This is what characterizes sound beats. It is important to remember that, unlike standing waves, the coincidences of the peaks do not occur constantly but at certain time intervals. By ear, such a beat pattern is quite distinct and is heard as a periodic rise and fall in volume. The mechanism of this effect is extremely simple: when the peaks coincide, the volume increases; when they cancel, the volume decreases.
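The rate of that periodic rise and fall is simply the difference of the two frequencies; a minimal sketch (the function name `beat_frequency` is my own):

```python
import math

def beat_frequency(f1_hz, f2_hz):
    """Rate of the audible loudness fluctuation of two close tones, in Hz."""
    return abs(f1_hz - f2_hz)

# Two tones at 440 Hz and 444 Hz swell and fade 4 times per second:
print(beat_frequency(440, 444))  # 4

# The summed signal itself is sin(2*pi*f1*t) + sin(2*pi*f2*t);
# its envelope rises and falls at the beat frequency.
def summed_signal(t, f1=440, f2=444):
    return math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)
```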

Standing waves arise from the superposition of two waves of the same amplitude, phase, and frequency when, as these waves "meet", one is moving in the forward direction and the other in the opposite direction. In that region of space (where the standing wave has formed), a pattern of two superimposed amplitudes arises, with alternating maxima (so-called antinodes) and minima (so-called nodes). When this phenomenon occurs, the frequency, phase, and attenuation of the wave at the point of reflection are extremely important. Unlike traveling waves, a standing wave transfers no energy, because the forward and backward waves that form it carry energy in equal amounts in both directions. To visualize the occurrence of a standing wave, consider an example from home acoustics. Say we have floor-standing speakers in some limited space (a room). Having them play a song with a lot of bass, let's try changing the listener's position in the room. A listener who ends up in a zone of minimum (cancellation) of the standing wave will feel that the bass has become very weak, while a listener who ends up in a zone of maximum (addition) gets the opposite effect of a significant boost in the bass region. The effect is observed at multiples of the fundamental frequency. For example, if the fundamental frequency is 440 Hz, the "addition" or "cancellation" phenomenon will also be observed at 880 Hz, 1760 Hz, 3520 Hz, and so on.
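For a room, the standing waves between two parallel walls (the axial modes) occur where half a wavelength fits an integer number of times into the room dimension, i.e. at f = n·v/(2L). A minimal sketch under that assumption (the function name is my own):

```python
def axial_modes(room_length_m, count=3, speed=343.0):
    """Frequencies (Hz) of the first axial standing waves
    between two parallel walls a distance room_length_m apart."""
    return [round(n * speed / (2 * room_length_m), 1)
            for n in range(1, count + 1)]

# In a 5-metre-long room the bass "adds" or "cancels" near these frequencies:
print(axial_modes(5.0))  # [34.3, 68.6, 102.9]
```

This is why moving a listener or a speaker by even half a metre can noticeably change the perceived bass.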

The phenomenon of resonance

Most solid bodies have a resonant frequency of their own. This effect is easy to understand with the example of an ordinary pipe that is open at one end only. Imagine that a speaker is attached to the other end of the pipe and can play one constant frequency, which can also be changed later. Now, the pipe has its own resonant frequency; in simple terms, this is the frequency at which the pipe "resonates" or emits its own sound. If the frequency of the speaker (as a result of adjustment) coincides with the resonant frequency of the pipe, the volume increases several times over. This is because the loudspeaker excites vibrations of the air column in the pipe with significant amplitude once that very "resonant frequency" is found and the addition effect occurs. The resulting phenomenon can be described as follows: in this example the pipe "helps" the speaker by resonating at a specific frequency; their efforts add up and "pour out" into an audibly loud effect. With musical instruments this phenomenon is easy to trace, since the design of most of them contains elements called resonators. It is not hard to guess what purpose they serve: amplifying a certain frequency or musical tone. For example: the guitar body, with its sound hole mated to the body's volume; the design of the flute tube (and of all pipes in general); the cylindrical shape of the drum body, which is itself a resonator of a certain frequency.
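For the pipe in this example, closed at one end and driven at the other, the classic quarter-wave formula f = v/(4L) gives the fundamental resonance; a minimal sketch under that textbook assumption (the function name is my own):

```python
def closed_pipe_resonance(length_m, speed=343.0):
    """Fundamental resonant frequency of a pipe closed at one end: f = v / (4L)."""
    return speed / (4 * length_m)

# A pipe about half a metre long resonates near 171.5 Hz:
print(round(closed_pipe_resonance(0.5), 1))
```

A pipe open at both ends would resonate an octave higher (f = v/(2L)), which is one reason flutes and organ pipes of the same length sound different.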

Frequency spectrum of sound and frequency response

Since in practice there are virtually no waves of a single frequency, it becomes necessary to decompose the entire audible spectrum into overtones, or harmonics. For this purpose there are graphs that display the dependence of the relative energy of sound vibrations on frequency. Such a graph is called a frequency spectrum graph. The frequency spectrum of sound comes in two types: discrete and continuous. A discrete spectrum graph displays individual frequencies separated by blank spaces; in a continuous spectrum, all sound frequencies are present at once.
In the case of music or acoustics, the most commonly used graph is the frequency response (amplitude-frequency characteristic). It shows the dependence of the amplitude of sound vibrations on frequency throughout the entire frequency spectrum (20 Hz - 20 kHz). Looking at such a graph it is easy to see, for example, the strengths or weaknesses of a particular driver or of the speaker system as a whole, the regions of strongest energy output, frequency dips and rises, and attenuation, and to trace the steepness of the roll-off.
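The decomposition into frequency components described above is exactly what the discrete Fourier transform computes; a tiny stdlib-only sketch (a naive O(n²) DFT for illustration, not a production FFT; the names are my own):

```python
import cmath
import math

def dft_magnitudes(samples):
    """Naive discrete Fourier transform: relative energy at each frequency bin
    (only the first half of the bins, since the input is real-valued)."""
    n = len(samples)
    return [abs(sum(samples[k] * cmath.exp(-2j * math.pi * i * k / n)
                    for k in range(n))) / n
            for i in range(n // 2)]

# A pure sine completing 4 cycles over 32 samples shows a single spectral line:
n = 32
wave = [math.sin(2 * math.pi * 4 * k / n) for k in range(n)]
mags = dft_magnitudes(wave)
peak_bin = max(range(len(mags)), key=mags.__getitem__)
print(peak_bin)  # 4 -- all the energy sits in the bin of the input frequency
```

A real instrument's note would instead show a line at the fundamental plus smaller lines at its overtones, which is what a spectrum graph of a musical sound depicts.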

Sound wave propagation, phase and antiphase

Sound waves propagate in all directions from the source. The simplest example for understanding this phenomenon is a pebble thrown into water.
From the place where the stone fell, waves spread along the surface of the water in all directions. Now, however, imagine a speaker in a certain volume, say a closed box, connected to an amplifier and reproducing some kind of musical signal. It is easy to notice (especially if you feed it a powerful low-frequency signal, such as a bass drum) that the speaker makes a rapid forward movement, and then the same rapid movement backward. It remains to understand that when the speaker moves forward, it emits a sound wave, which we then hear. But what happens when the speaker moves backward? Paradoxically, the same thing: the speaker makes the same sound, only it propagates, in our example, entirely within the volume of the box, without leaving it (the box is closed). In general, in the above example one can observe quite a few interesting physical phenomena, the most significant of which is the concept of phase.

The sound wave that the speaker emits toward the listener is "in phase"; the backward wave that goes into the volume of the box is correspondingly in antiphase. It remains only to understand what these concepts mean. The phase of a signal is the state of the sound pressure at the current moment of time at some point in space. Phase is easiest to understand with the example of reproducing music through an ordinary floor-standing stereo pair of home speakers. Imagine that two such floor-standing speakers are installed in a certain room and are playing. Both acoustic systems then reproduce a synchronous signal of alternating sound pressure, and the sound pressure of one speaker adds to the sound pressure of the other. This effect occurs because the signal reproduction of the left and right speakers is synchronized; in other words, the peaks and troughs of the waves emitted by the left and right speakers coincide.

Now imagine that the sound pressures still vary in the same way (unchanged), but are now opposite to each other. This can happen if one of the two speakers is connected in reverse polarity (the "+" cable from the amplifier to the "-" terminal of the speaker, and the "-" cable from the amplifier to the "+" terminal). In this case, the inverted signal produces a pressure difference, which can be represented in numbers as follows: the left speaker creates a pressure of "1 Pa" while the right speaker creates a pressure of "minus 1 Pa". As a result, the total sound volume at the listening position will be zero. This phenomenon is called antiphase. Examining the example in more detail, it turns out that two speakers playing "in phase" create identical regions of air compression and rarefaction, and thereby actually help each other. In the case of idealized antiphase, the region of compressed air created by one speaker is accompanied by a region of rarefied air created by the second speaker. This looks like a phenomenon of mutual, synchronous cancellation of the waves. In practice, however, the volume does not drop to zero, and we hear a highly distorted and attenuated sound.
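The idealized cancellation described above is easy to show arithmetically: an inverted copy of a sine wave sums with the original to exactly zero at every instant. A minimal sketch (names are my own):

```python
import math

def combined_pressure(t, f=100.0, inverted=False):
    """Sum of two equal tones; `inverted` flips one speaker's polarity."""
    a = math.sin(2 * math.pi * f * t)   # first speaker
    b = -a if inverted else a           # second speaker, possibly wired backwards
    return a + b

t = 0.0025  # an arbitrary instant (here the 100 Hz tone is at its peak)
print(combined_pressure(t))                 # in phase: doubled amplitude
print(combined_pressure(t, inverted=True))  # antiphase: 0.0, total cancellation
```

Real rooms never achieve the perfect cancellation of the second line because the two paths to the listener are never exactly equal, which is why one hears a weakened, distorted sound rather than silence.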

In the most accessible terms, this phenomenon can be described as two signals with the same oscillations (frequency) but shifted in time. It is convenient to picture this shift using ordinary round analog clocks. Imagine that several identical round clocks are hanging on the wall. When the second hands of these clocks run synchronously, 30 seconds on one and 30 on another, that is an example of a signal that is in phase. If the second hands run with an offset but at the same speed, for example 30 seconds on one clock and 24 seconds on another, that is a classic example of a phase shift. In the same way, phase is measured in degrees, within a virtual circle. When the signals are shifted relative to each other by 180 degrees (half the period), classical antiphase is obtained. In practice, slight phase shifts often occur; they, too, can be measured in degrees and successfully eliminated.

Waves are plane or spherical. A plane wavefront propagates in only one direction and is rarely encountered in practice. A spherical wavefront is a simple type of wave that emanates from a single point and travels in all directions. Sound waves have the property of diffraction, i.e. the ability to bend around obstacles and objects. The degree of bending depends on the ratio of the sound wavelength to the size of the obstacle or opening. Diffraction also occurs when there is an obstacle in the path of the sound. In this case, two scenarios are possible: (1) if the dimensions of the obstacle are much larger than the wavelength, the sound is reflected or absorbed (depending on the absorptivity of the material, the thickness of the obstacle, etc.), and a zone of "acoustic shadow" forms behind the obstacle; (2) if the dimensions of the obstacle are comparable to the wavelength or even smaller, the sound diffracts to some extent in all directions. If a sound wave moving in one medium strikes an interface with another medium (for example, air meeting a solid), three scenarios can arise: (1) the wave is reflected from the interface; (2) the wave passes into the other medium without changing direction; (3) the wave passes into the other medium with a change of direction at the boundary, which is called "refraction of the wave".

The ratio of the excess pressure of a sound wave to the oscillatory volume velocity is called the wave (characteristic) impedance. In simple terms, the characteristic impedance of a medium can be described as its ability to absorb sound waves or to "resist" them. The reflection and transmission coefficients depend directly on the ratio of the characteristic impedances of the two media. The characteristic impedance of a gas is much lower than that of water or solids. Therefore, if a sound wave in air strikes a solid object or the surface of deep water, the sound is either reflected from the surface or absorbed to a large extent. This depends on the thickness of the surface (water or solid) on which the sound wave falls. When the solid or liquid medium is thin, sound waves almost completely "pass through"; conversely, when the medium is thick, the waves are more often reflected. Reflection of sound waves obeys the well-known physical law: "the angle of incidence equals the angle of reflection". When a wave from a medium of lower density strikes the boundary with a medium of higher density, the phenomenon of refraction occurs. It consists in the bending (refraction) of the sound wave after "meeting" the obstacle and is necessarily accompanied by a change in speed. Refraction also depends on the temperature of the medium in which the reflection occurs.
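The reflection at a boundary can be quantified with the standard normal-incidence formula R = (Z₂ − Z₁)/(Z₂ + Z₁), where Z = ρ·c is the characteristic impedance of each medium; a minimal sketch using round approximate figures (all names and values are illustrative):

```python
def reflection_coefficient(z1, z2):
    """Pressure reflection coefficient at a boundary between two media,
    for a wave at normal incidence travelling from medium 1 into medium 2."""
    return (z2 - z1) / (z2 + z1)

# Characteristic impedance Z = rho * c (rough round figures):
z_air = 1.2 * 343        # ~415 kg/(m^2*s)
z_water = 1000 * 1481    # ~1.5e6 kg/(m^2*s)

r = reflection_coefficient(z_air, z_water)
print(round(r, 4))  # close to 1: almost all airborne sound reflects off deep water
```

The near-unity result illustrates the article's point: the huge impedance mismatch between air and water (or solids) is why very little airborne sound penetrates such surfaces.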

As sound waves propagate through space, their intensity inevitably decreases: the waves attenuate and the sound fades. In practice it is quite easy to encounter this effect: if two people stand in a field at some close distance (a meter or less) and begin saying something to each other, and then increase the distance between them (start moving apart), the same conversational volume will become less and less audible. This example clearly demonstrates the decrease in the intensity of sound waves. Why does this happen? The reason lies in various processes of heat exchange, molecular interaction, and internal friction in sound waves. In practice, the conversion of sound energy into heat occurs most often. Such processes inevitably arise in any of the three sound propagation media and can be characterized as absorption of sound waves.

The intensity and degree of absorption of sound waves depend on many factors, such as the pressure and temperature of the medium. Absorption also depends on the specific frequency of the sound. When a sound wave propagates in liquids or gases, friction arises between the different particles, which is called viscosity. As a result of this friction at the molecular level, the wave energy is converted from sound into heat. Losses due to the thermal conductivity of the medium contribute to absorption in the same way. Sound absorption in gaseous media also depends on pressure (atmospheric pressure changes with altitude above sea level). As for the dependence of the degree of absorption on the frequency of the sound, taking into account the viscosity and thermal conductivity mentioned above, the higher the frequency, the higher the absorption. For example, at normal temperature and pressure, absorption in air of a wave with a frequency of 5000 Hz is about 3 dB/km, while absorption of a wave with a frequency of 50,000 Hz is already about 300 dB/km.
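Because absorption is quoted in dB per unit distance, the level drop over a path is a simple linear subtraction on the decibel scale; a minimal sketch using the article's figure for air (the function name is my own):

```python
def attenuated_level(level_db, alpha_db_per_km, distance_km):
    """Sound level after absorption of alpha dB per kilometre over a distance."""
    return level_db - alpha_db_per_km * distance_km

# Using the article's figure of ~3 dB/km for a 5 kHz tone in air:
print(attenuated_level(100, 3, 2))  # a 100 dB tone drops to 94 dB after 2 km

# At 50 kHz (~300 dB/km) the same tone is effectively gone within metres,
# which is why ultrasound does not carry far in air.
```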

In solid media, all of the above dependences (thermal conductivity and viscosity) persist, but several more conditions are added. They are associated with the molecular structure of solid materials, which can vary and have its own inhomogeneities. Depending on this internal molecular structure, the absorption of sound waves can differ and depends on the specific material. When sound passes through a solid, the wave undergoes a series of transformations and distortions, which most often leads to scattering and absorption of sound energy. At the molecular level, a dislocation effect can occur, where a sound wave displaces atomic planes, which then return to their original position. Alternatively, the movement of dislocations leads to collisions with dislocations perpendicular to them, or with defects of the crystal structure, which slows them down and, as a consequence, causes some absorption of the sound wave. However, the sound wave can also resonate with these defects, which distorts the original wave. At the moment of interaction with elements of the material's molecular structure, the energy of the sound wave is dissipated through processes of internal friction.

Later, I will try to examine the features of human auditory perception and some of the subtleties and peculiarities of sound propagation.

Sounds belong to the field of phonetics. The study of sounds is included in every school curriculum for the Russian language. Acquaintance with sounds and their main characteristics occurs in the lower grades; a more detailed study of sounds, with complex examples and nuances, takes place in middle and high school. This page gives only the basic knowledge about the sounds of the Russian language, in compressed form. If you need to study the structure of the speech apparatus, the tonality of sounds, articulation, acoustic components, and other aspects beyond the scope of the modern school curriculum, refer to specialized manuals and textbooks on phonetics.

What is sound?

A sound, like a word or a sentence, is a basic unit of language. However, a sound does not express any meaning by itself; it reflects how a word sounds. Thanks to this, we distinguish words from one another. Words may differ in the number of sounds (port - sport, crow - funnel), in the set of sounds (lemon - estuary, cat - mouse), in the sequence of sounds (nose - sleep, bush - knock), up to a complete mismatch of sounds (boat - launch, forest - park). (These examples are literal translations of Russian word pairs and work only in the original language.)

What sounds are there?

In Russian, sounds are divided into vowels and consonants. Russian has 33 letters and 42 sounds: 6 vowels and 36 consonants, while 2 letters (ь, ъ) do not denote any sound. The discrepancy between the number of letters and sounds (not counting ь and ъ) is explained by the fact that the 10 vowel letters correspond to 6 vowel sounds, and the 21 consonant letters to 36 consonant sounds (counting all voiced/voiceless and soft/hard combinations of consonants). In writing, a sound is indicated in square brackets.
There are no such sounds as: [е], [ё], [ю], [я], [ь], [ъ], [ж'], [ш'], [ц'], [й], [ч], [щ].

Scheme 1. Letters and sounds of the Russian language.

How are sounds pronounced?

We produce sounds while exhaling (only in the case of the interjection "ah-ah" expressing fear is the sound pronounced while inhaling). The division of sounds into vowels and consonants is related to how a person pronounces them. Vowel sounds are produced by the voice: exhaled air passes through tensed vocal cords and exits freely through the mouth. Consonant sounds consist of noise, or of a combination of voice and noise, because the exhaled air meets an obstruction in its path in the form of a closure or the teeth. Vowel sounds are pronounced loudly; consonants are muffled. A person can sing vowel sounds with the voice (on the exhaled air), raising or lowering the pitch. Consonant sounds cannot be sung; they are pronounced equally muffled. The hard and soft signs do not denote sounds and cannot be pronounced as independent sounds. When a word is pronounced, they affect the consonant before them, making it soft or hard.

Word transcription

The transcription of a word is a record of its sounds, that is, of how the word is actually pronounced. Sounds are enclosed in square brackets. Compare: а is a letter, [а] is a sound. The softness of a consonant is marked with an apostrophe: п is a letter, [п] is the hard sound, [п'] the soft sound. Voiced and voiceless consonants are not distinguished in writing by any special mark. The whole transcription is written in square brackets. Examples: дверь ("door") → [дв'эр'], колючка ("thorn") → [кал'уч'ка]. Sometimes stress is also shown in the transcription, with an accent mark before the stressed vowel.

There is no one-to-one correspondence between letters and sounds. Russian has many cases where vowel sounds change depending on the position of stress, and where consonants are replaced or dropped in certain combinations. When compiling a transcription, the rules of phonetics are taken into account.

Color scheme

In phonetic analysis, words are sometimes shown as color schemes: the letters are painted different colors depending on which sound they denote. The colors reflect the phonetic characteristics of the sounds and help you visualize how a word is pronounced and which sounds it consists of.

All vowels (stressed and unstressed) are marked with a red background. Iotated vowels are marked green and red: green stands for the soft consonant sound [й'], red for the vowel sound that follows it. Consonants denoting hard sounds are colored blue, consonants denoting soft sounds green. The soft and hard signs are painted gray or left unpainted.

Legend: red - vowel; green + red - iotated vowel; blue - hard consonant; green - soft consonant; blue-green - a consonant that can be either soft or hard.

Note. Blue-green color is not used in phonetic parsing schemes, since a consonant sound cannot be soft and hard at the same time. The blue-green color in the table above is only used to demonstrate that the sound can be either soft or hard.

We are figuring out whether it is worth buying discrete or external sound cards, for both Mac and Windows platforms.

We often write about quality sound in its portable incarnation, but keep passing over desktop interfaces. Why?

Stationary home audio is the subject of fierce flame wars, especially when a computer is used as the sound source.

Most PC users consider a discrete or external audio card a guarantee of high-quality sound. "Conscientious" marketing is to blame, persistently convincing us that we need to buy an extra device.

What is used in the PC to output the audio stream


The built-in audio of modern motherboards and laptops vastly outperforms the auditory resolution of the average mentally healthy, technically competent listener. The platform does not matter.

Some motherboards do offer fairly high-quality integrated sound, yet it is built from the same parts as on budget boards. The improvement is achieved by isolating the audio section from the other components and by using a higher-grade element base.


And yet most boards use the same codec from Realtek. Apple desktops are no exception: at least a fair share of them are equipped with a Realtek A8xx.

This codec (a block of logic embedded in a chip) and its modifications are found on almost all motherboards designed for Intel processors. Marketers call it Intel HD Audio.

Measurements of the quality of the Realtek audio path


The implementation of the audio interface depends heavily on the motherboard manufacturer, and good examples show very decent numbers. Here, for example, is the RMAA test of the audio path of a Gigabyte G33M-DS2R:

Frequency response (40 Hz to 15 kHz), dB: +0.01, -0.09
Noise level, dB (A): -92.5
Dynamic range, dB (A): 91.8
Total harmonic distortion, %: 0.0022
Intermodulation distortion + noise, %: 0.012
Stereo crosstalk, dB: -91.9
Intermodulation at 10 kHz, %: 0.0075

All of the figures earn ratings of "Very good" and "Excellent". Not every external card can show such results.
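To put numbers like these in perspective, a decibel figure can be converted to a plain amplitude ratio with the standard formula ratio = 10^(dB/20). A minimal Python sketch (not part of the RMAA suite, just an illustration; the function name is mine):

```python
import math

def db_to_amplitude_ratio(db: float) -> float:
    """Convert a level in decibels to a linear amplitude ratio."""
    return 10 ** (db / 20)

# Crosstalk of -91.9 dB means the signal leaking into the other channel
# has roughly 0.0000254 of the original amplitude, about 25 millionths.
leak = db_to_amplitude_ratio(-91.9)
print(f"{leak:.3e}")
```

In other words, at -92 dB the leakage is some four orders of magnitude below anything the ear could pick out of a musical signal.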

Comparison test results


Unfortunately, time and equipment do not allow us to conduct our own comparative testing of various embedded and external solutions.

Therefore, let's take what has already been done for us. On the web you can, for example, find data on the double internal resampling performed by the most popular discrete cards, the Creative X-Fi series. Since verifying this requires circuit-level analysis, we leave that check to the reader.

The materials published by one large hardware project, however, are quite revealing. Their testing of several systems, from a $2 built-in codec to a $2000 audiophile solution, produced very interesting results.

It turned out that the Realtek ALC889 does not show the most even frequency response, with a noticeable deviation of 1.4 dB at 100 Hz. In practice, however, this figure is not critical.


And in some implementations (that is, motherboard models) the dip is absent altogether - see the figure above. It can only be noticed when listening to a single test tone. In a musical composition, after correct equalizer setup, even an avid audiophile will not be able to tell a discrete card from an embedded solution.

Expert opinion

In all of our blind tests, we were unable to distinguish between 44.1 and 176.4 kHz, or between 16-bit and 24-bit recordings. In our experience, 16 bit / 44.1 kHz already provides the best sound quality you can perceive. Formats above that just eat up space and money.

Downsampling a track from 176.4 kHz to 44.1 kHz with a high-quality resampler causes no audible loss of detail. If such a recording falls into your hands, convert it to 44.1 kHz and enjoy.
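The claim is easy to try yourself. Below is a minimal, self-contained downsampling sketch in Python/NumPy: a windowed-sinc low-pass filter followed by decimation. Real resamplers such as SSRC use more elaborate polyphase designs; the function name and tap count are my own illustrative choices.

```python
import numpy as np

def downsample_176k_to_44k(x: np.ndarray, num_taps: int = 257) -> np.ndarray:
    """Downsample a 176.4 kHz signal to 44.1 kHz (an exact factor of 4).

    A windowed-sinc low-pass removes everything above the new Nyquist
    frequency (22.05 kHz) so it cannot alias, then every 4th sample is kept.
    """
    fc = 0.125  # 22 050 Hz as a fraction of the 176 400 Hz input rate
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = 2 * fc * np.sinc(2 * fc * n) * np.hamming(num_taps)
    h /= h.sum()  # unity gain at DC
    return np.convolve(x, h, mode="same")[::4]

# A 1 kHz tone survives the conversion unchanged; a 60 kHz tone (inaudible
# anyway, and above the new Nyquist limit) is filtered out almost entirely.
t = np.arange(176400) / 176400.0                 # one second of signal
tone = downsample_176k_to_44k(np.sin(2 * np.pi * 1000 * t))
ultra = downsample_176k_to_44k(np.sin(2 * np.pi * 60000 * t))
print(len(tone))                                 # 44100
print(np.abs(ultra[100:-100]).max() < 0.05)      # True (away from the edges)
```

Everything the high sample rate carried above 22 kHz was, by definition, outside the audible band, which is why nothing is lost to the listener.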

The main advantage of 24-bit over 16-bit is its greater dynamic range (144 dB versus 98 dB), but in practice it does not matter much. Many modern tracks are casualties of the loudness war, in which the dynamic range is artificially compressed to the equivalent of 8-10 bits already at the production stage.
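The figures quoted above follow directly from the word length: each extra bit doubles the number of amplitude levels, adding about 6 dB of dynamic range (20·log10(2) ≈ 6.02 dB per bit; the common "98 dB for 16 bit" figure additionally includes the +1.76 dB quantization-noise term for a full-scale sine). A quick check in Python (the function is my own sketch):

```python
import math

def dynamic_range_db(bits: int, full_scale_sine: bool = False) -> float:
    """Theoretical dynamic range of an ideal N-bit quantizer.

    20*log10(2**N) ~= 6.02*N dB; for a full-scale sine wave the
    signal-to-quantization-noise ratio is 6.02*N + 1.76 dB.
    """
    dr = 20 * math.log10(2 ** bits)
    return dr + 1.76 if full_scale_sine else dr

print(round(dynamic_range_db(16)))                        # 96
print(round(dynamic_range_db(16, full_scale_sine=True)))  # 98
print(round(dynamic_range_db(24)))                        # 144
```

A track squashed to an effective 8-10 bits spans only about 50-60 dB, so the extra headroom of 24-bit delivery is simply never used.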

My card sounds bad. What to do?


All this is very convincing. Over my years of working with hardware I have managed to test plenty of devices, desktop and portable. Despite that, as my home player I use a computer with a built-in Realtek chip.

What if the sound has artifacts and problems? Follow these steps:

1) Turn off all effects in the driver control panel and set the line-out (the green jack) to "2 channels (stereo)" mode.

2) In the OS mixer, mute all unused inputs and push the volume sliders to maximum. Adjust loudness only with the control on the speakers or amplifier.

3) Install a proper player. For Windows: foobar2000.

4) In the player, select an exclusive output mode: "WASAPI Output" or "Kernel Streaming Output" (each requires an additional plugin), 24 bits, with software resampling (via PPHS or SSRC) to 48 kHz. Turn off the player's volume control.

Everything else is the job of your audio system (speakers or headphones). After all, a sound card is first of all a DAC.

What's the bottom line?


The reality is that, in the general case, a discrete card does not noticeably improve music playback quality, to put it mildly. Its advantages lie only in convenience, functionality and, perhaps, stability.

Why, then, do all the publications keep recommending expensive solutions? Simple psychology: people believe that to change the quality of a computer system you have to buy something advanced and expensive. In reality you just need to use your head, and the result can be amazing.

There was a time when the need for a sound card was not questioned at all. If you wanted computer sound even slightly better than the grunting of the case speaker, you bought a sound card. If not, you didn't. The cards were quite expensive, though, especially back when they were made for the prehistoric ISA bus.

With the transition to PCI, it became possible to offload part of the processing to the central processor and to use system RAM for storing music samples (in those ancient times this mattered not only to professional musicians but to ordinary people too, because 20 years ago the most popular music format on computers was MIDI). Entry-level sound cards soon became much cheaper, and then built-in sound appeared on high-end motherboards. Bad sound, of course, but free. And that dealt a heavy blow to the sound card manufacturers.

Today, all motherboards have built-in sound, and on expensive ones it is even positioned as high quality. Real Hi-Fi, supposedly. In reality, unfortunately, this is far from the case. Last year I was assembling a new computer and chose one of the most expensive and objectively best motherboards. Naturally, it promised high-quality sound on discrete chips, even with gold-plated connectors. The marketing copy was so delicious that I decided not to install a sound card and make do with the built-in audio. And I did. For about a week. Then I opened the case, installed the card, and stopped doing silly things.

Why is the embedded audio not very good?

First, there is the question of price. A decent sound card costs 5-6 thousand rubles, and not because of manufacturers' greed: the components are not cheap, and the demands on build quality are high. A serious motherboard costs 15-20 thousand rubles. Is the manufacturer ready to add at least another three thousand on top? What if the buyer balks at the price before ever getting to appreciate the sound quality? Better not to risk it. And they don't.

Secondly, for really high-quality sound, free of extraneous noise, interference and distortion, the components must be kept at a certain distance from one another. If you look at a sound card, you will notice how unusually spacious its layout is. On a motherboard, by contrast, space is barely sufficient and everything has to be packed very tightly. So, alas, there is simply no room to do it really well.

Twenty years ago, consumer sound cards could cost more than the rest of the computer and carried memory slots (!) for storing music samples. Pictured is the dream of every mid-nineties computer enthusiast, the Sound Blaster AWE32. The 32 is not a bit depth but the maximum number of simultaneously played MIDI voices.

Therefore, integrated audio is always a compromise. I have seen boards with a sort of built-in sound that in fact rose above the board as a separate daughter module, connected to the "mother" only by a connector. And yes, it sounded good. But can such sound be called integrated? I am not sure.

A reader who has never tried a discrete sound solution may ask: what does "good sound in a computer" actually mean?

1) It is simply louder. Even budget sound cards have a built-in amplifier that can drive large speakers or high-impedance headphones. Many are surprised that the speakers stop wheezing and choking at maximum volume; that too is a side effect of a proper amplifier.

2) The frequencies complement each other instead of mixing into a mush. A proper digital-to-analog converter (DAC) renders bass, midrange and treble solidly and lets you fine-tune them in software to your liking. Listening to music, you suddenly hear each instrument separately, and films delight you with a sense of presence. The overall impression is as if the speakers had been covered with a thick blanket that has now been pulled away.

3) The difference is felt especially clearly in games. You will be surprised that the noise of wind and dripping water no longer drowns out the quiet footsteps of opponents around the corner, and that even inexpensive headphones let you tell who is moving where and at what distance. This directly affects performance: sneaking or driving up on you unnoticed simply stops working.

What sound cards are there?

Since this type of component now interests only connoisseurs of good sound, of whom there are unfortunately few, very few manufacturers remain. Only two: Asus and Creative. The latter is the mastodon of the market, the company that created it and set all the standards. Asus entered relatively late, but it still has not left.

New models come out extremely rarely, and old ones stay on sale for a long time, 5-6 years. The fact is that in terms of sound there is nothing left to improve without a radical increase in price, and few people are ready to pay for audiophile excesses in a computer. I would say no one is. The quality bar is already set too high.

The first difference is the interface. Some cards are intended only for desktop computers and are installed into the motherboard via the PCI-Express interface. Others connect over USB and can be used both with desktops and with laptops. In laptops, by the way, the sound is awful in 90% of cases, and an upgrade certainly won't hurt.

The second difference is the price. Among internal cards, models selling for 2-2.5 thousand rubles are almost identical to built-in sound. They are usually bought when the connector on the motherboard has died (a phenomenon that is, alas, common). An unpleasant trait of cheap cards is their poor resistance to interference: put one close to the video card and the background noise will be very annoying.

The golden mean for internal cards is 5-6 thousand rubles. At that price you already get everything a normal person needs: interference shielding, quality components and flexible software.

For 8-10 thousand you get the latest models, capable of reproducing 32-bit sound at up to 384 kHz. This is the very top of the line. If you know where to get files and games in this quality, by all means buy one :)

Even more expensive sound cards differ little in core hardware from the options already mentioned, but they come with extra accessories: external connection modules, companion boards with outputs for professional recording, and so on. Whether that matters depends on the user's real needs. Personally, the extras never came in handy, although in the store they seemed essential.

The price range of USB cards is about the same: around 2 thousand for an alternative to built-in sound, 5-7 thousand for solid mid-range models, 8-10 for the high end, and above that the same hardware but with richer accessories.

Personally, I stop hearing the difference somewhere in the middle of the range, simply because the cooler solutions demand high-end speakers and headphones, and, to be honest, I see little point in playing World of Tanks in thousand-dollar headphones. Each problem, probably, deserves its own scale of solution.

Several good options

Several sound cards and adapters that I tried and liked.

PCI-Express interface

Creative Sound Blaster Z. It has been on sale for 6 years, costs about the same everywhere, and is still a delight. The CS4398 DAC used in it is no longer new, but audiophiles compare it to $500 CD players. The average price is 5500 rubles.

Asus Strix Soar. While Creative's product is unabashedly aimed at gamers, Asus has also taken care of music lovers. The ESS SABRE9006A DAC is comparable in sound to the CS4398, but Asus offers finer tuning for those who like to listen to Pink Floyd in HD quality on their computer. The price is comparable, about 5500 rubles.

USB interface

Asus Xonar U3 is a small box that, once plugged into a laptop port, takes its sound quality to the next level. Despite the compact dimensions, there was even room for a digital output, and the software is surprisingly flexible. An interesting way to find out, inexpensively, why you need a sound card at all. The price is 2000 rubles.

Creative Sound BlasterX G5. A device the size of a pack of cigarettes (smoking is evil) that is almost indistinguishable from the internal Sound Blaster Z, except that you don't have to open the case: just plug it into a USB port. You immediately get seven-channel sound of impeccable quality, all sorts of extras for music and games, and a built-in USB pass-through port in case you run short of them. The extra space allowed an additional headphone amplifier, and once you have heard it in action it is hard to give up. The main software functions are duplicated by hardware buttons. The price is 10 thousand rubles.

Play and listen to music with pleasure! Life's pleasures are not so many.

If we talk about objective, measurable parameters of quality, then of course not: recording to vinyl or cassette always adds extra distortion and noise. But the point is that such distortion and noise do not subjectively spoil the impression of the music, and often do the opposite. Our hearing and sound-analysis system work in quite complex ways; what matters for our perception and what counts as quality from a technical standpoint are slightly different things.

MP3 is a completely separate topic: it is an unambiguous degradation of quality in exchange for a smaller file size. MP3 encoding discards quieter spectral components and smears transients, which means loss of detail and a "blurring" of the sound.

The ideal option in terms of quality and honest reproduction of everything that happens is an uncompressed digital recording, and CD quality (16 bits, 44,100 Hz) has long ceased to be the limit: you can increase both the bit depth (24, 32 bits) and the sample rate (48,000, 88,200, 96,000, 192,000 Hz). Bit depth determines the dynamic range, and the sample rate determines the frequency range. Although the human ear hears at best up to 20,000 Hz and, by the Nyquist theorem, a 44,100 Hz sample rate should therefore be enough, in practice a higher rate can help reproduce complex short sounds such as drum hits more accurately. A larger dynamic range likewise lets quieter sounds be recorded without distortion. In reality, though, the further these two parameters grow, the less change you will notice.
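The Nyquist limit mentioned above is easy to demonstrate numerically: a tone above half the sample rate does not simply vanish, it folds back ("aliases") into the audible band, which is exactly why everything above the limit must be filtered out before sampling. A small NumPy sketch (my own illustration, not from the article):

```python
import numpy as np

fs = 44100                          # CD sample rate; Nyquist limit = 22 050 Hz
t = np.arange(fs) / fs              # one second of samples
x = np.sin(2 * np.pi * 25000 * t)   # a 25 kHz tone, above the Nyquist limit

# The spectrum peaks not at 25 000 Hz but at the alias 44 100 - 25 000 Hz.
spectrum = np.abs(np.fft.rfft(x))
peak_hz = np.argmax(spectrum) * fs / len(x)
print(peak_hz)  # 19100.0
```

With one second of signal each FFT bin is exactly 1 Hz wide, so the aliased tone lands cleanly on the 19,100 Hz bin.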

At the same time, you will only be able to appreciate all the delights of high-quality digital sound if you have a good sound card. What is built into most PCs is generally awful; Macs with built-in audio are better, but it is still preferable to have something external. And the question remains, of course, where you will get digital recordings of higher-than-CD quality :) Although even the sloppiest MP3 will sound much better on a good sound card.

Returning to analog formats: people keep using them not because they are actually better or more accurate, but because a high-quality, accurate recording without distortion is usually not the desired result. Digital distortion, which can arise from poor processing algorithms, low bit depth or sample rate, or digital clipping, certainly sounds much nastier than analog distortion, but it can be avoided. And it turns out that a really high-quality, accurate digital recording sounds too sterile and lacks saturation. If you record drums to tape, for example, that saturation appears and survives even if the recording is then digitized. Vinyl, too, sounds cooler, even when the tracks were produced entirely on a computer. And of course external attributes and associations play their part: the way it all looks, the emotions of the people involved. It is easy to understand the desire to hold a record in your hands or to listen to a cassette on an old tape deck rather than to files from a computer, or to understand those who still use multitrack tape recorders in studios, although that is far more complicated and costly. It has its own particular fun.