As the mobile phone continuously absorbs functions such as camera, gaming, data, and video, it has transformed into a multimedia application platform; it can be said to be evolving into a compact, refined portable mini computer. In terms of positioning, such a device differs from the existing pure voice phone (voice phone) or a phone with a limited set of added functions (feature phone): it should be regarded as a smart phone.

In addition to strong data editing and management capabilities, smart phones can provide multimedia application services such as audio, video, and games, and can handle multiple tasks at the same time. Looking further, their functions cover communication, information, and multimedia, namely:

1. Communication functions: voice, messaging, authentication, billing, and other communication processing functions;

2. Information functions: email, calendar, personal information management, synchronization (Sync), security, and other information processing functions;

3. Multimedia functions: video, camera, games, TV, streaming, music, DRM, and other multimedia application functions.

Setting the information functions aside, audio is a necessary processing task in both communication and multimedia applications. In the past, a mobile phone only had to process the pure voice call signal, but today's smart phones must handle heavy audio workloads. Besides polyphonic ringtones and MP3 music, there may also be FM radio and game sound effects, and instead of a simple mono output, users now expect a stereo listening experience.
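To make that workload concrete, the following is a minimal sketch (not taken from any handset implementation) of the kind of mixing an audio subsystem must do continuously: summing a mono voice stream into a stereo music stream, assuming 16-bit signed PCM samples and plain Python lists standing in for DMA buffers.

```python
# Illustrative sketch: mix a mono voice stream into a stereo music stream.
# Sample values are assumed to be 16-bit signed PCM.

def mix_mono_into_stereo(voice, music_l, music_r, voice_gain=0.5, music_gain=0.5):
    """Mix a mono voice buffer into a stereo music buffer, sample by sample."""
    out_l, out_r = [], []
    for v, l, r in zip(voice, music_l, music_r):
        # Duplicate the mono voice sample into both channels, then sum with music.
        left = voice_gain * v + music_gain * l
        right = voice_gain * v + music_gain * r
        # Clamp to the 16-bit signed range to avoid wrap-around distortion.
        out_l.append(max(-32768, min(32767, int(left))))
        out_r.append(max(-32768, min(32767, int(right))))
    return out_l, out_r

if __name__ == "__main__":
    voice = [1000, -2000, 3000, -4000]     # mono voice samples
    music_l = [500, 500, 500, 500]         # left music channel
    music_r = [-500, -500, -500, -500]     # right music channel
    print(mix_mono_into_stereo(voice, music_l, music_r))
```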

In the past, the digital audio world was completely dichotomous: on one side was the Hi-Fi world, on the other the speech world. Generally speaking, Hi-Fi means 16-bit stereo audio sampled at 44.1kHz, the specification of CD music, while telephone speech is 8-bit, 8kHz mono, low-quality audio. In the smart phone era, however, the two audio worlds have begun to collide, and how to integrate the audio subsystem with the application and communication processing platform has become a key challenge for portable equipment engineers developing new products.
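The gap between the two worlds is easy to quantify. The short calculation below, using only the figures quoted above, compares the raw PCM data rates of a CD-quality stream and a telephone speech stream.

```python
# Back-of-the-envelope comparison of the two audio worlds described above:
# CD-quality Hi-Fi (44.1 kHz / 16-bit / stereo) vs. telephone speech
# (8 kHz / 8-bit / mono).

def raw_bitrate(sample_rate_hz, bits_per_sample, channels):
    """Uncompressed PCM bit rate in kilobits per second."""
    return sample_rate_hz * bits_per_sample * channels / 1000.0

hifi = raw_bitrate(44_100, 16, 2)   # CD music
voice = raw_bitrate(8_000, 8, 1)    # telephone speech

print(f"Hi-Fi stream : {hifi:.1f} kbps")   # ~1411.2 kbps
print(f"Voice stream : {voice:.1f} kbps")  # 64.0 kbps
print(f"Ratio        : {hifi / voice:.1f}x")
```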

Audio encoding format and interface

Before discussing the system architecture, let's look at the current state of audio coding. There are many audio encoding formats today. Encodings for general sound include PCM, ADPCM, DM, PWM, WMA, OGG, AMR, AAC, MP3Pro, and MP3; for human speech there are LPC, CELP, and ACELP; and for audiovisual programs there are formats such as MPEG-4, H.264, and VC-1.

Here are three commonly used audio formats:

AMR format

AMR stands for Adaptive Multi-Rate speech codec. The initial version was a speech codec standard developed by the European Telecommunications Standards Institute (ETSI) for the GSM system. It comes in two variants: AMR-NB (AMR Narrowband) and AMR-WB (AMR Wideband). Nokia, the largest brand in the market, supports both of these audio formats on most of its mobile phones.
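For reference, the sketch below lists the nominal sampling rates and bit-rate modes of the two variants. These figures come from the AMR standards themselves rather than from this article, so treat them as a reference summary.

```python
# Reference sketch: nominal parameters of the two AMR variants
# (values taken from the AMR-NB and AMR-WB standards, not from the article).

AMR_VARIANTS = {
    "AMR-NB": {
        "sample_rate_hz": 8_000,
        "bitrates_kbps": [4.75, 5.15, 5.90, 6.70, 7.40, 7.95, 10.2, 12.2],
    },
    "AMR-WB": {
        "sample_rate_hz": 16_000,
        "bitrates_kbps": [6.6, 8.85, 12.65, 14.25, 15.85, 18.25, 19.85, 23.05, 23.85],
    },
}

for name, spec in AMR_VARIANTS.items():
    rates = spec["bitrates_kbps"]
    print(f"{name}: {spec['sample_rate_hz']} Hz sampling, "
          f"{len(rates)} modes from {rates[0]} to {rates[-1]} kbps")
```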

MP3 format

MP3 is short for MPEG Audio Layer 3, an audio compression technology. Its encoding achieves a high compression ratio of about 10:1 to 12:1 by keeping the low-frequency part essentially undistorted while sacrificing the 12kHz to 16kHz high-frequency portion of the audio to reduce file size; an ".mp3" file is generally only about 10% the size of the equivalent ".wav" file. In addition, one reason MP3 became so popular is that the technology was not locked up by proprietary restrictions, so anyone could use it.

Music compressed in MP3 format can use a wide range of bit rates: encoding at 64kbps or lower saves space, while 320kbps delivers very high quality for a compressed format. In terms of encoding, MP3 is divided into CBR (constant bit rate) and VBR (variable bit rate) techniques. Some mobile phones cannot play downloaded music simply because they lack support for VBR-encoded MP3 files.
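A rough calculation shows what these bit rates mean in practice. The sketch below estimates the size of a four-minute track at several CBR settings and compares it with uncompressed CD audio; headers and metadata are ignored.

```python
# Simple illustration of MP3 bit rates: estimated file size of a
# four-minute track at several constant bit rates, compared with
# uncompressed CD audio (headers and metadata ignored).

def file_size_mb(bitrate_kbps, seconds):
    """Approximate file size in megabytes for a constant-bit-rate stream."""
    return bitrate_kbps * 1000 * seconds / 8 / 1_000_000

duration = 4 * 60  # four minutes
cd_wav = file_size_mb(1411.2, duration)   # 44.1 kHz, 16-bit, stereo PCM

for rate in (64, 128, 320):
    mp3 = file_size_mb(rate, duration)
    print(f"{rate:>3} kbps MP3: {mp3:5.1f} MB  (~{mp3 / cd_wav:.0%} of WAV)")
print(f"Uncompressed WAV: {cd_wav:.1f} MB")
```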

AAC format

AAC stands for Advanced Audio Coding and uses a different algorithm from MP3. AAC can support up to 48 audio tracks and 15 low-frequency tracks, a wider range of sampling rates and bit rates, compatibility with multiple languages, and higher decoding efficiency. In short, AAC can deliver better sound quality in files roughly 30% smaller than MP3, and its fidelity is closer to the original sound, so the mobile phone industry regards it as the best audio encoding format. AAC is a large family, divided into nine specifications to meet the needs of different occasions:

(1) MPEG-2 AAC LC low complexity specification (Low Complexity)

(2) MPEG-2 AAC Main specification

(3) MPEG-2 AAC SSR scalable sample rate specification (Scalable Sample Rate)

(4) MPEG-4 AAC LC low complexity specification (Low Complexity); the audio portion of the MP4 files now common on mobile phones generally uses this specification

(5) MPEG-4 AAC Main specification

(6) MPEG-4 AAC SSR scalable sample rate specification (Scalable Sample Rate)

(7) MPEG-4 AAC LTP long term prediction specification (Long Term Prediction)

(8) MPEG-4 AAC LD low delay specification (Low Delay)

(9) MPEG-4 AAC HE high efficiency specification (High Efficiency)

Among the above specifications, the Main specification includes every tool except gain control and delivers the best sound quality, while the low complexity (LC) specification is simpler: it omits gain control but improves coding efficiency. The SSR specification is roughly the same as LC but adds gain control. LTP, LD, and HE are intended for encoding at low bit rates; HE is supported by encoders such as NeroAAC and has recently become a commonly used low-bit-rate option. In practice, however, the sound quality of the Main and LC specifications differs little, so given that mobile phone memory is still limited, the AAC specification used today is LC.
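The selection logic just described can be summarized in a small sketch. The profile table and the choose_profile helper below are purely illustrative (the function and its rules are this article's simplification, not any real codec API).

```python
# Illustrative sketch of the trade-off described above: a hypothetical
# helper that picks an AAC profile for a handset, preferring LC when
# memory is tight, as the text says current phones do.

AAC_PROFILES = {
    "Main": "all tools except gain control; best quality, highest complexity",
    "LC":   "low complexity, no gain control; the profile phones usually use",
    "SSR":  "scalable sample rate; like LC plus gain control",
    "LTP":  "long term prediction; aimed at low bit rates",
    "LD":   "low delay; aimed at low bit rates",
    "HE":   "high efficiency; low bit rates, supported by encoders such as NeroAAC",
}

def choose_profile(memory_limited: bool, low_bitrate: bool) -> str:
    """Pick a profile under the simplified rules sketched in the article."""
    if low_bitrate:
        return "HE"
    return "LC" if memory_limited else "Main"

print(choose_profile(memory_limited=True, low_bitrate=False))   # -> LC
```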

Audio interface

The audio interface is an important issue for smart phone designers. Digital voice generally uses a PCM (Pulse Code Modulation) interface, while Hi-Fi stereo uses the serial I2S (Inter-IC Sound) interface or the AC97 interface. I2S is a bus standard developed by Philips for transmitting audio data between digital audio devices and is commonly used in consumer audio products; AC97 is a specification formulated by Intel in 1997 to improve the sound performance of personal computers and reduce noise, hence the name AC97.
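One practical consequence of choosing a serial interface such as I2S is clocking: the bit clock must carry every bit of every channel in real time. The sketch below shows the usual back-of-the-envelope calculation (BCLK = sample rate x bits per sample x channels).

```python
# Quick sketch of how a serial I2S link is clocked: the bit clock must
# be at least sample_rate * bits_per_sample * channels.

def i2s_bit_clock_hz(sample_rate_hz, bits_per_sample, channels=2):
    """Minimum I2S bit-clock frequency for a given PCM format."""
    return sample_rate_hz * bits_per_sample * channels

print(i2s_bit_clock_hz(44_100, 16))  # CD audio      -> 1,411,200 Hz
print(i2s_bit_clock_hz(48_000, 16))  # 48 kHz stereo -> 1,536,000 Hz
```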

Given these varied requirements, it is ideal to tailor an integrated solution for the specific application. Under the SoC trend, some manufacturers have integrated a stereo digital-to-analog converter (DAC) or codec (CODEC) into ICs with specific functions. However, some functions are suitable for integration, while integrating others may be counterproductive.

For example, when manufacturers integrate power management and audio processing functions, they usually have to compromise on sound quality, because noise from the power regulator interferes with the nearby audio path. Integrating audio functions into a digital IC is also difficult: Hi-Fi components need roughly a 0.35µm process to optimize mixed-signal performance, while digital logic has already moved to 0.18µm and smaller processes. With either integration strategy, if the two different circuits coexist on one chip, the final die size may also become unacceptably large.

In addition, loudspeaker amplifiers are particularly difficult to integrate. The heat they generate must be dissipated, so a separate speaker driver IC is often required. Another common problem with integration is that, in the effort to make the IC as small as possible, there may not be enough analog input or output pins.

A dedicated audio IC avoids these problems, and there are several ways to approach audio integration. Sharing the ADC and DAC reduces hardware cost, but then two audio streams cannot be played or recorded at the same time. Giving each function its own dedicated converter solves this, but increases chip cost. The compromise is to share only the ADC while providing independent DACs. With this arrangement, the phone can still play other audio (such as the ringtone for a second incoming call, or music) while a call is in progress, although it cannot record two sources simultaneously during the call. ADC power consumption can also be kept down by switching off unused functions and running at a lower sampling rate.
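This compromise architecture can be pictured as a small resource model. The class below is purely conceptual (not a real driver): one shared ADC plus dedicated voice and media DACs, so playback can overlap a call while a second simultaneous recording is refused.

```python
# Conceptual sketch of the shared-ADC / dedicated-DAC compromise
# described above. Not a real driver; names are illustrative.

class AudioSubsystem:
    def __init__(self):
        self.adc_owner = None                        # single shared ADC
        self.dacs = {"voice": None, "media": None}   # dedicated DACs

    def start_record(self, source):
        if self.adc_owner is not None:
            raise RuntimeError(f"ADC busy recording {self.adc_owner}")
        self.adc_owner = source

    def start_playback(self, path, stream):
        if self.dacs[path] is not None:
            raise RuntimeError(f"{path} DAC busy")
        self.dacs[path] = stream

sub = AudioSubsystem()
sub.start_record("call uplink")            # the call owns the ADC
sub.start_playback("voice", "call downlink")
sub.start_playback("media", "mp3 music")   # allowed: dedicated media DAC
# sub.start_record("voice memo")           # would fail: ADC already in use
```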

The audio requirements of computers are basically similar to those of the consumer market, but to play music files recorded at different sampling rates (8kHz, 44.1kHz, 48kHz), a more efficient and cheaper solution is needed, and AC97 has these characteristics. In the broader handheld device market, each of the three interfaces has its supporters: CD, MD, and MP3 players use the I2S interface; mobile phones use the PCM interface; and PDAs with audio functions use the same AC97 format as PCs.
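Handling those mixed sampling rates is itself a processing burden, since every stream must be converted to a common rate before mixing or output. The sketch below shows a simple linear-interpolation resampler for illustration; real designs typically use polyphase filters for better quality.

```python
# Minimal sketch of sample-rate conversion: every stream (8 kHz, 44.1 kHz,
# 48 kHz) must be brought to a common rate before mixing. Linear
# interpolation is shown here purely for illustration.

def resample_linear(samples, src_rate, dst_rate):
    """Resample a PCM buffer from src_rate to dst_rate by linear interpolation."""
    if not samples:
        return []
    ratio = src_rate / dst_rate
    out_len = int(len(samples) * dst_rate / src_rate)
    out = []
    for n in range(out_len):
        pos = n * ratio
        i = int(pos)
        frac = pos - i
        nxt = samples[i + 1] if i + 1 < len(samples) else samples[i]
        out.append((1 - frac) * samples[i] + frac * nxt)
    return out

voice_8k = [0, 100, 200, 300, 400, 500, 600, 700]
print(len(resample_linear(voice_8k, 8_000, 48_000)))   # 48 samples
```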
