
    Underwater Acoustic Signal Noise Reduction Based on a Fully Convolutional Encoder-Decoder Neural Network

Journal of Ocean University of China, 2023, Issue 6



    SONG Yongqiang1), 2), CHU Qian3),LIU Feng1), *, WANG Tao1), and SHEN Tongsheng1)

1) 100089; 2) 100071; 3) 264000

Noise reduction analysis of signals is essential for modern underwater acoustic detection systems. Traditional noise reduction techniques gradually lose efficacy because the target signal is masked by biological and natural noise in the marine environment. Feature extraction combining time-frequency spectrograms and deep learning can effectively separate noise from target signals. A fully convolutional encoder-decoder neural network (FCEDN) is proposed to address the issue of noise reduction in underwater acoustic signals. During denoising, the time-domain waveform of the underwater acoustic signal is converted into a wavelet low-frequency analysis recording spectrogram to preserve as many underwater acoustic signal characteristics as possible. The FCEDN is built to learn the spectrogram mapping between noise and target signals at each time level. Transposed convolution transforms are introduced, which convert the spectrogram features of the signals into listenable audio files. Evaluated on the ShipsEar dataset, the proposed method increases SNR and SI-SNR by 10.02 and 9.5 dB, respectively.

    deep learning; convolutional encoder-decoder neural network; wavelet low-frequency analysis recording spectrogram

    1 Introduction

The signals collected from underwater targets such as ships and submarines may contain reverberant noise with complex spectral components due to interference from complex and variable natural sound sources (Stulov and Kartofelev, 2014; Klaerner et al., 2019). These noise disturbances can affect the detection, localization, and identification of underwater acoustic signals. Therefore, raising the signal-to-noise ratio of underwater acoustic signals to the experimental requirements before target source monitoring is necessary.

Underwater acoustic noise reduction methods, after a long development period, can be divided into two main categories: traditional and artificial intelligence methods. Traditional methods usually involve experimentalists using manual processing to achieve underwater acoustic signal noise reduction, which is essentially a data preprocessing step based on equational inference, relying on a blind source separation framework and interpretable assumptions to construct denoising algorithms. Examples of these algorithms include multi-resolution and high-precision decomposition (Huang et al., 2012), bark wavelet analysis (Wang and Zeng, 2014), energy significance (Taroudakis et al., 2017), and empirical mode decomposition. However, as the environment changes, the corresponding biological and marine environmental noise also changes, causing significant dynamic deviations in the noise signal source. Chen et al. (2021) showed that multiple feature extraction methods have an important impact on processing the original dataset. Thus, traditional algorithms fail to learn stable noise features, making breakthroughs in the post-processing signal-to-noise ratio difficult to realize (Vincent et al., 2006). Additionally, these experiments achieve simplified operations and partial assumptions only under certain signal types, particular circumstances, or partial sequences (Le et al., 2020). These traditional methods fail to satisfy the extensive and varied nonlinear feature learning capability required for underwater acoustic signals.

Deep learning-based models (Hao et al., 2016) have demonstrated considerable potential for applications in various disciplines compared with traditional approaches (Wu and Wu, 2022). For example, Wang et al. (2020) proposed a novel stacked convolutional sparse denoising autoencoder model to complete the blind denoising task of underwater heterogeneous information data. Zhou and Yang (2020) designed a convolutional denoising autoencoder to obtain denoising features from multiple images segmented by parallel learning and used it to initialize a parallel classifier. Yang et al. (2021) presented deep convolutional autoencoders to denoise the clicks of the finless porpoise. Russo et al. (2021) proposed a deep network for attitude estimation (DANAE), which works on Kalman filter data integration to reduce noise using an autoencoder. Qiu et al. (2021) presented a reinforcement learning-based underwater acoustic signal processing system. However, complex system design and the selection of critical parameters are required to satisfy the oscillation conditions and maintain the nonlinear balance between signal and noise. Zhou et al. (2022) introduced PF3SACO to accelerate convergence, improve search capability, enhance local search, and avoid premature convergence. The generative network is used to create highly spurious data, and the discriminator is utilized to judge data availability. Yao et al. (2022) expected that the scarcity problem of noisy data in complex marine environments could thereby be solved. However, the unstable marine environment makes such experiments costly in working time and human resources. Xing et al. (2021) used orthogonal matching pursuit and the method of optimal directions (MOD) to eliminate some noise in underwater acoustic signals; signal reconstruction is completed in accordance with the updated dictionary and sparse coefficients. Despite its adaptive capability, a large step in signal-to-noise ratio improvement remains difficult to reach.
These intelligent methods mainly extract signal features manually, which causes a considerable amount of detail in the original signal to be lost (Hinton et al., 2015; Zhao et al., 2021). Moreover, these methods lack a batch-processing denoising function for underwater acoustic signals. In addition, numerous problems, such as changes in the ocean environment and the mixing of multichannel signals, increase the difficulty of obtaining high-quality signals with the above methods. Thus, realizing a breakthrough in the signal-to-noise ratio of collected underwater acoustic signals remains challenging.

Existing mature deep learning methods are difficult to reference and apply directly because underwater acoustic signals differ markedly in character from the signals those methods were designed for. With the Fourier transform, the time-domain signal can be converted to the time-frequency domain for representation. Compared with familiar images, the conventional time-frequency spectrum has no specific visual meaning and lacks texture features; however, specific correlations exist between the two axes of the spectrum.

These correlations must be handled concurrently during state analysis, which poses a significant challenge for feature extraction. In addition, traditional feature extraction methods (such as smoothing noise and removing outliers) frequently extract trait values without model training. Such indiscriminate processing gradually loses some of the detailed information in the feature vector during subsequent module delivery. Therefore, a new wavelet low-frequency analysis recording spectrogram and a fully convolutional encoder-decoder neural network are proposed to reduce the noise of underwater acoustic signals. This paper makes the following three significant contributions.

1) A new feature extraction technique is proposed to replace the usual data preprocessing procedure and effectively extract the spectrogram of underwater acoustic signals. This technique combines wavelet decomposition and low-frequency analysis theories to extract features from time-domain underwater acoustic signal recordings as the input of the denoising model.

2) The encoder-decoder framework is constructed to build the deep network. The fully convolutional encoder can compress underwater acoustic feature vectors of different lengths into high-order nonlinear features of the same dimension and obtain the optimal expression vector by designing different kernel sizes. More importantly, the transposed convolutional decoder can solve the bottleneck of information loss caused by converting long sequences to fixed-length vectors.

3) A mapping-based approach that replaces masking is employed to optimize the fully convolutional encoder-decoder neural network. The fully convolutional mapping layer is introduced, which contributes to extracting the local characteristics of signals and timing correlation information without requiring the features of the pure natural noise signal.

    2 Methodology

An overview of the proposed system is provided, and the two parts of the pipeline are then analyzed: the wavelet low-frequency analysis recording spectrogram extraction and the fully convolutional encoder-decoder neural network structure.

    2.1 System Overview

First, wavelet low-frequency analysis recording spectrum features are extracted to increase the correlation between adjacent frames. Second, a fully convolutional encoder-decoder network model is used for signal noise reduction, which extracts the structural and local information of the spectrum and considers contextual knowledge of the timing signal. A brief description of the method is shown in Fig.1.

    2.2 Wavelet Low-Frequency Analysis Recording Spectrogram Extraction

The wavelet low-frequency analysis recording spectrogram can be used to construct a feature map with desired characteristics by choosing different wavelet functions, without requiring CUDA memory. More importantly, the generation of the feature spectrogram is independent of the sampling interval and signal length. The wavelet low-frequency analysis recording spectrogram extraction is divided into three steps. The first step is to decompose the underwater acoustic signal sequence. The entire decomposition process is shown in Fig.2, and the operation is as follows:

Given an underwater acoustic signal sequence, as shown in Eq. (1), where x(t) is a value in the sequence and t is a time node:

A partial sequence of the sequence X is selected and divided into two parts according to the parity of the samples, as shown in Eqs. (2) and (3), where X_e and X_o are the even- and odd-indexed subsequences, respectively:

    Fig.2 Waveforms of underwater acoustic signals under different states.

Among them, P(·) is the predictor, as shown in Eqs. (6) and (7):

In the process of transformation, the frequency characteristic of X_e is maintained, and the updater U(·) is introduced. Thus, Eq. (8) holds:

    Among them, the update method can be selected from the following two functions, as shown in Eqs. (9) and (10):

The second step is underwater acoustic signal sequence extraction, as shown in Fig.2. This step reflects the temporal state transformation of a segment of the signal (Hu et al., 2007). Different decomposition methods can be established to obtain highly detailed information regarding this signal.
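The split-predict-update procedure of Eqs. (2)–(10) can be sketched as one level of the lifting scheme. The neighbour-averaging predictor and half-weight updater below are common illustrative choices, assumed here rather than taken from the paper:

```python
# One level of lifting-scheme wavelet decomposition: split the sequence
# into even/odd samples, predict the odd samples from the even ones
# (d = X_o - P(X_e)), and update the even samples with the detail
# (a = X_e + U(d)). Edge indices are clamped.

def lifting_decompose(x):
    """Return (approximation, detail) lists, each half the input length."""
    even = x[0::2]                      # X_e: even-indexed samples
    odd = x[1::2]                       # X_o: odd-indexed samples
    n = min(len(even), len(odd))
    even, odd = even[:n], odd[:n]
    # Predict step: P averages the two neighbouring even samples.
    detail = [odd[i] - 0.5 * (even[i] + even[min(i + 1, n - 1)])
              for i in range(n)]
    # Update step: U adds a quarter of the two neighbouring details.
    approx = [even[i] + 0.25 * (detail[max(i - 1, 0)] + detail[i])
              for i in range(n)]
    return approx, detail

# On a linear ramp, the neighbour-average predictor is exact, so the
# interior detail coefficients vanish.
approx, detail = lifting_decompose([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
```

With a good predictor, most of the signal's energy concentrates in the approximation, which is what makes the subsequent thresholding of the detail coefficients effective.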

The threshold λ of the coefficients is determined by Eqs. (11) and (12). The features of the signal sequence can be effectively extracted by setting the threshold λ as follows:

where N is the data length of the detail signal sequence d = {d(t), t = 1, 2, 3, ···, N}, and the threshold processing method is shown in Eq. (13):

where d'(t) is the detail signal after threshold processing, which is then reconstructed by Eqs. (14) and (15) to obtain the denoised signal:
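The threshold processing of Eqs. (11)–(13) can be illustrated with soft thresholding under the universal threshold rule, one of the rules listed later in this section; the noise level σ is estimated from the median absolute deviation, a standard (assumed) choice:

```python
import math

# Soft thresholding of detail coefficients: coefficients below the
# threshold are zeroed, larger ones are shrunk toward zero. The universal
# threshold is lambda = sigma * sqrt(2 ln N), with sigma estimated from
# the median absolute deviation of the detail coefficients.

def soft_threshold(detail):
    n = len(detail)
    sigma = sorted(abs(d) for d in detail)[n // 2] / 0.6745  # MAD estimate
    lam = sigma * math.sqrt(2.0 * math.log(n))
    out = []
    for d in detail:
        if abs(d) <= lam:
            out.append(0.0)                              # noise-dominated
        else:
            out.append(math.copysign(abs(d) - lam, d))   # shrink survivors
    return out, lam

# Small coefficients (noise) are removed; the two large ones survive shrunk.
shrunk, lam = soft_threshold([0.1, -0.05, 3.0, 0.02, -2.5, 0.08])
```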

The third step is to extract the spectrogram features of the signal after the first two steps. Different wavelet bases (Bayes, BlockJS, FDR, Minimax, SURE, Universal Threshold) are selected to extract signal details, and low-frequency analysis recording is then employed to extract the spectrogram features of the signal. Fig.3 shows the waveform features of the input signal; different waveform features can be obtained by setting different wavelet bases (Li et al., 2019). Afterward, the spectrogram features are obtained through low-frequency analysis recording, as shown in Fig.4, and are used as the input to train the fully convolutional encoder-decoder neural network.
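As a rough stand-in for the low-frequency analysis recording step, the following sketch turns a waveform into a 2-D time-frequency feature map with a framed, Hann-windowed DFT; the frame and hop sizes are illustrative, not the paper's settings:

```python
import cmath
import math

# Minimal spectrogram: slice the signal into overlapping frames, apply a
# Hann window, and take the DFT magnitude of the non-negative bins.

def spectrogram(signal, frame=64, hop=32):
    frames = []
    for start in range(0, len(signal) - frame + 1, hop):
        seg = signal[start:start + frame]
        # Hann window reduces spectral leakage at the frame edges.
        win = [s * 0.5 * (1 - math.cos(2 * math.pi * i / (frame - 1)))
               for i, s in enumerate(seg)]
        # Keep only the frame // 2 + 1 non-negative frequency bins.
        mags = [abs(sum(w * cmath.exp(-2j * math.pi * k * i / frame)
                        for i, w in enumerate(win)))
                for k in range(frame // 2 + 1)]
        frames.append(mags)
    return frames  # shape: (num_frames, frame // 2 + 1)

# A sine at exactly 8 cycles per frame peaks in frequency bin 8.
sig = [math.sin(2 * math.pi * 8 * t / 64) for t in range(256)]
spec = spectrogram(sig)
```

In practice an FFT-based routine would be used; the point here is only the shape of the feature map that the network consumes: one row per time frame, one column per frequency bin.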

    Fig.3 Waveform characteristics based on different wavelet bases. (a), Bayes; (b), BlockJS; (c), FDR; (d), Minimax; (e), SURE; (f), Universal threshold.

    2.3 Fully Convolutional Encoder-Decoder Neural Network

A fully convolutional encoder-decoder neural network structure is constructed as the denoising base model. This structure improves denoising performance by altering the network architecture or configuring various hyperparameters, and different network layers play various roles in the denoising process. The convolutional layers can be set with different kernel sizes to extract the local invariant features of the spectrogram, and the encoder-decoder can be introduced to increase the weights of the relevant vectors and aggregate the local features extracted by the network. First, the acquired wavelet low-frequency analysis recording spectrogram features are used as input to the model. The encoding phase extracts the signal's high-order features using the successive one-dimensional convolutional networks defined previously. Afterward, the result is fed into the fully convolutional mapping structure, which learns the high-dimensional mapping relationship between the noise and target signals. Finally, the acquired mapping features are converted through a transposed convolution operation into a time-series vector that can be used to generate audio files. The specific model architecture is shown in Fig.5.

    Fig.4 Spectrogram features based on different wavelet bases. (a), Bayes; (b), BlockJS; (c), FDR; (d), Minimax; (e), SURE; (f), Universal Threshold.

    Fig.5 Fully convolutional encoder-decoder network.

The fully convolutional encoder-decoder neural network has three primary operations. 1) Encoder: the convolutional layers and activation functions reduce the size of the feature map, so the input spectrogram becomes a low-dimensional representation; a normalization method is introduced to prevent gradient disappearance. 2) Network separation module: the intermediate network layers can adapt to any input size because the fully connected layer is removed and replaced with a convolutional layer. 3) Decoder: the transposed operation progressively recovers the spatial dimension. The decoder extracts the fixed-length feature produced by the encoder to restore the input size at the output with the least possible information loss. The different parameters are described in Table 1, where FCEDN3-8 denotes a 3×3 convolution kernel with the convolution operation repeated eight times, FCEDN3-16 a 3×3 kernel repeated 16 times, FCEDN5-8 a 5×5 kernel repeated eight times, and FCEDN5-16 a 5×5 kernel repeated 16 times.
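The size bookkeeping behind the encoder-decoder symmetry can be checked directly. With assumed stride, padding, and output-padding values (not the paper's exact settings), a transposed convolution inverts the length reduction of a strided convolution, which is how the decoder restores the input dimension:

```python
# Output-length formulas for a strided convolution (encoder stage) and a
# transposed convolution (decoder stage) along one dimension.

def conv_out(n, kernel, stride=2, pad=1):
    # Standard convolution output length: floor((n + 2p - k) / s) + 1.
    return (n + 2 * pad - kernel) // stride + 1

def tconv_out(n, kernel, stride=2, pad=1, out_pad=1):
    # Transposed convolution output length; out_pad resolves the ambiguity
    # left by the floor in the forward formula.
    return (n - 1) * stride - 2 * pad + kernel + out_pad

n = 256
sizes = [n]
for _ in range(4):                      # four encoder stages halve the length
    n = conv_out(n, kernel=3)
    sizes.append(n)
for _ in range(4):                      # four mirrored decoder stages restore it
    n = tconv_out(n, kernel=3)
    sizes.append(n)
# sizes traces 256 -> 128 -> 64 -> 32 -> 16 -> 32 -> 64 -> 128 -> 256
```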

    Table 1 Fully convolutional encoder-decoder network (FCEDN) structure

    3 Experiment

The ShipsEar dataset is presented in this section, and the experimental findings of underwater acoustic signal denoising using it as test data are discussed (Santos-Domínguez et al., 2016). Different evaluation metrics are used to represent the effect of the noise reduction experiment (Yaman et al., 2021). Various outcomes from the investigations into underwater acoustic signal noise reduction are shown in ablation experiments.

    3.1 Dataset

The dataset was collected with recordings made by hydrophones deployed from docks to capture different vessel speeds and cavitation noises corresponding to docking or undocking maneuvers. The recordings are of actual vessel sounds captured in a real environment; therefore, anthropogenic and natural background noise and vocalizations of marine mammals are present. The dataset comprises 90 recordings in .wav format with five major classes. Each major class contains one or more subclasses; the duration of each audio segment varies from 15 s to 10 min, and the appearance of different ships is shown in Fig.6.

    Fig.6 Ships.

Each class is divided as shown in Table 2. Class A comprises dredgers, fishing ships, mussel ships, trawlers, and tug ships. Class B comprises motorboats, pilot ships, and sailboats. Class C comprises passenger ferries. Class D comprises ocean liners and RORO vessels. Class E is natural noise, which we mix with the first four classes to construct noise-containing targets; the numbers represent the length of the signal time. A noise-laden dataset containing a mixture of two acoustic signals was constructed to validate the denoising performance of the model effectively. All signals were segmented at a fixed length of 5 s, resulting in a total of 1956 labeled sound samples. Noise-class samples were randomly selected from the data and fused with the target samples, so the signal-to-noise ratio of the fused signals was 0 dB. Afterward, the dataset was divided into validation, testing, and training sets in the ratio of 1:1:8 to verify the denoising performance of the model.
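The 0 dB mixing step can be sketched as rescaling the noise segment so that its power equals the target's power before summation (an assumed but standard construction):

```python
import math

# Mix a target segment with a Class E noise segment at 0 dB SNR: the noise
# is rescaled so that its average power matches the target's, then summed.

def mix_at_0db(target, noise):
    p_t = sum(x * x for x in target) / len(target)   # target power
    p_n = sum(x * x for x in noise) / len(noise)     # noise power
    g = math.sqrt(p_t / p_n)                         # gain equalising the powers
    return [t + g * n for t, n in zip(target, noise)]

# Synthetic stand-ins for a 5 s target segment and a noise segment.
target = [math.sin(0.1 * i) for i in range(1000)]
noise = [0.3 * math.cos(0.37 * i) for i in range(1000)]
mixed = mix_at_0db(target, noise)
```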

    Table 2 Datasets of ShipsEar

    3.2 Configuration

All networks are trained using backpropagation and gradient descent, with batch normalization added after each convolutional layer in the mapping network. The optimizer is the adaptive moment estimation (Adam) algorithm, which combines first- and second-order gradient moments (Wang et al., 2020). Based on experience, the exponential decay rates of the first- and second-order moment estimations are set to 0.9 and 0.999, respectively; these rates typically lie close to 1 for sparse gradients. The sample rate is set to 44100 Hz, and the number of epochs is set to 50. The initial learning rate is 0.0001 and is then reduced by 25% from its initial value. When the experiment overfits, the sampling rate is reduced by 25% and the learning rate by 10%.
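A single update of the adaptive moment estimation algorithm with the stated decay rates (0.9 and 0.999) and learning rate (0.0001) looks as follows; this is the textbook Adam rule, not code from the paper:

```python
import math

# One Adam step: update the first- and second-moment estimates, apply
# bias correction, and scale the parameter update accordingly.

def adam_step(theta, grad, m, v, t, lr=1e-4, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad              # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad * grad       # second-moment (variance) estimate
    m_hat = m / (1 - b1 ** t)                 # bias correction (t = step index >= 1)
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

theta, m, v = 1.0, 0.0, 0.0
theta, m, v = adam_step(theta, grad=2.0, m=m, v=v, t=1)
```

Because both moment estimates start at zero, the bias correction is what keeps the very first steps from being scaled down by the decay rates.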

    3.3 Experimental Evaluation

Training the end-to-end learning framework aims to maximize the signal-to-noise ratio (SNR) and the scale-invariant signal-to-noise ratio (SI-SNR), which are commonly used as evaluation metrics for signal noise reduction. The SNR requires knowledge of the target and enhanced signals: it is an energy ratio, expressed in dB, between the energy of the target signal contained in the enhanced signal and the energy of the errors. Compared with the SNR, the SI-SNR uses a single coefficient to account for scaling discrepancies; scale invariance is ensured by normalizing the signals to zero mean before the calculation, and a large degree of scale invariance is reasonable.
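The two metrics can be computed as follows. These are the standard SNR and SI-SNR definitions, assumed to match the paper's; for SI-SNR, both signals are zero-meaned and the estimate is projected onto the reference, which is what makes the metric invariant to rescaling:

```python
import math

# SNR and SI-SNR between a reference signal s and an estimate s_hat.

def snr_db(s, s_hat):
    # Energy of the reference over the energy of the residual error, in dB.
    noise = [a - b for a, b in zip(s_hat, s)]
    return 10 * math.log10(sum(x * x for x in s) / sum(x * x for x in noise))

def si_snr_db(s, s_hat):
    # Zero-mean both signals, project the estimate onto the reference,
    # and compare the projected (target) energy with the residual energy.
    ms, mh = sum(s) / len(s), sum(s_hat) / len(s_hat)
    s0 = [x - ms for x in s]
    h0 = [x - mh for x in s_hat]
    dot = sum(a * b for a, b in zip(h0, s0))
    s_energy = sum(x * x for x in s0)
    target = [dot / s_energy * x for x in s0]     # projection onto reference
    err = [a - b for a, b in zip(h0, target)]
    return 10 * math.log10(sum(x * x for x in target) / sum(x * x for x in err))
```

Scaling the estimate leaves SI-SNR unchanged (projection and error scale together), whereas plain SNR penalizes the same rescaling as if it were error.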

    3.4 Results

Ablation experiments were performed to compare denoising efficiency across various models and confirm the validity of the proposed model. Tables 3 and 4 show the results achieved when applying the FCEDN3-8 construction to implement noise reduction for different target classes. Table 4 uses Class B as the test set to verify the noise reduction performance of the different base models. The base models comprise interval-dependent denoising (IDD) (Yan et al., 2019), a fully connected network (FCN) (Russo et al., 2021), a convolutional network (CNN), and wavelet denoising (WD) (Huang et al., 2012).

    Table 3 Results of different targets

    Table 4 Results of different noise reduction methods

    The experimental results reveal the following:

1) A random selection of ambient natural sounds as perturbations was used to verify the denoising performance of the model. Table 3 reveals that using FCEDN3-8 to reduce natural environment noise increases the target signal SNR and SI-SNR by an average of 8.3 and 6 dB, respectively. Therefore, FCEDN3-8 significantly improves the target signal after processing various classes of noisy signals.

2) The denoising of underwater acoustic signals can be significantly influenced by network layer depth, layer structure, the number of filters, filter width design, and filtering methods. Furthermore, the characteristic expression of energy transfer is significantly attenuated when the number of layers exceeds a particular range. Therefore, the effects of 3×3 and 5×5 convolution kernels and the number of iterations on experimental outcomes were compared, as shown in Table 4. The best results were achieved with eight iterations of the 3×3 convolutional kernel. The model can extract local features precisely using small convolutional kernels, increasing its generalizability; however, the model may overfit when the number of iterations is too high. Therefore, considering model size and denoising effect, FCEDN3-8 was selected as the model architecture.

3) The fully convolutional layers and the small kernel used to construct the denoising network produce the best performance compared with the other base models. As shown in Fig.7, FCEDN3-8 steadily improves the signal-to-noise ratio during training, reaching convergence after approximately 45 epochs, whereas the FCN converges slowly (Sutskever and Hinton, 2014). The convolutional layer thus requires fewer resources than the fully connected layer when the parameters are trained. Replacing the fully connected operations of other models with successive convolution kernels is therefore a practical and innovative step. Moreover, the continuous convolution kernels deepen the stack of network layers, allowing the parameters to grow linearly rather than exponentially during the forward pass.

    Fig.7 Noise reduction process for different methods.

4) As shown in Figs.8–11, the noise reduction effect of FCEDN3-8 can be confirmed by observing the change in the waveform and spectrogram of the signal. The original signal sampling rate was too high, and the feature information was not readily apparent; thus, the classification network cannot accept the original signal directly, and signal features are usually extracted as spectrograms for classification experiments. The time-frequency analysis method provides joint information in the time and frequency domains, which can describe the relationship between signal frequency and time and thus determine the type of signal. However, spectrogram analysis cannot reliably identify the signal class due to the inherent environmental noise (the top of each figure shows the waveform and spectrogram of the target signal covered by noise). The bottom of each figure shows the waveform and spectrogram of the signal after processing with FCEDN3-8; the different classes of underwater acoustic signals can now be distinguished due to the noise reduction. Afterward, the acoustic generation mechanism and propagation law of ship noise were analyzed in combination with the underwater acoustic channel, and the particular time-frequency distribution differences were used for further signal detection and classification tasks.

    5) The effects of different noise reduction signals on the classification results are verified, as shown in Table 5. In addition, the classification confusion matrix of FCEDN3-8 models before and after noise reduction is presented, as shown in Figs.12 and 13, respectively.

We adopt a statistical sampling method in which the model randomly selects different classes of denoised acoustic signals to validate the effectiveness of the noise reduction method. The validation model is the classical LSTM (Liu et al., 2021), and the confusion matrix is used to describe the classification results. According to Table 5, the accuracy of the LSTM model reaches 76.18% when the signal has been denoised with the FCEDN3-8 method. In the confusion matrices, the horizontal and vertical coordinates represent the predicted and true classes, respectively. The findings indicate that denoising improves accuracy by approximately 8%. In particular, the classification accuracy for Classes D and A increased from 67.9% to 79.4% and from 49.2% to 60.6%, respectively, as shown in Figs.12 and 13.
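Under the stated axis convention (rows predicted, columns true), the overall accuracy reported from a confusion matrix is its trace divided by the total count. The counts below are made up for illustration, not the paper's results:

```python
# Overall accuracy from a confusion matrix: the diagonal holds the
# correctly classified counts, so accuracy = trace / total.

def accuracy(cm):
    correct = sum(cm[i][i] for i in range(len(cm)))
    total = sum(sum(row) for row in cm)
    return correct / total

# Hypothetical 3-class confusion matrix (rows: predicted, cols: true).
cm = [[50, 3, 2],
      [4, 45, 6],
      [1, 2, 47]]
acc = accuracy(cm)
```

Per-class accuracy, as quoted in the text for Classes D and A, is the corresponding diagonal entry divided by its true-class column sum.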

    Fig.8 Class A signal waveform diagram and spectrogram.

    Fig.9 Class B signal waveform diagram and spectrogram.

    Fig.10 Class C signal waveform diagram and spectrogram.

    Fig.11 Class D signal waveform diagram and spectrogram.

    Table 5 Classification results after noise reduction

Fig.12 Classification results before underwater acoustic signal noise reduction.

    4 Conclusions

Noise reduction processing for underwater acoustic signals is implemented in this paper using deep learning techniques, and the FCEDN is proposed. The model is an end-to-end underwater acoustic signal denoising algorithm with a noise-containing signal at the input and a denoised signal at the output. Wavelet decomposition and low-frequency analysis theories are used to extract the features of the underwater acoustic signal. Deep neural networks are employed to create the separation module between the target and noise signals. Meanwhile, the fully convolutional network structure is used to construct the mapping separation module based on an encoder-decoder neural network. This technique can successfully perform robust feature extraction and signal-to-noise separation for noisy underwater acoustic targets. The evaluation results on the ShipsEar dataset show that the method can enhance the SNR and SI-SNR by 10.2 and 9.5 dB, respectively.

    Acknowledgements

The study is supported by the National Natural Science Foundation of China (No. 41906169) and the PLA Academy of Military Sciences.

Chen, H., Miao, F., Chen, Y., Xiong, Y., and Chen, T., 2021. A hyperspectral image classification method using multifeature vectors and optimized KELM., 14: 2781-2795.

Hao, X., Zhang, G., and Ma, S., 2016. Deep learning., 10 (3): 417-439.

Hinton, G., Vinyals, O., and Dean, J., 2015. Distilling the knowledge in a neural network., 14 (7): 38-39.

Hu, Q., He, Z., Zhang, Z., and Zi, Y., 2007. Fault diagnosis of rotating machinery based on improved wavelet package transform and SVMs ensemble., 21 (2): 88-705.

Huang, H. D., Guo, F., Wang, J. B., and Ren, D. Z., 2012. High precision seismic time-frequency spectrum decomposition method and its application., 47 (5): 773-780.

Klaerner, M., Wuehrl, M., Kroll, L., and Marburg, S., 2019. Accuracy of vibro-acoustic computations using non-equidistant frequency spacing., 145: 60-68.

Le, C., Zhang, J., Ding, H., Zhang, P., and Wang, G., 2020. Preliminary design of a submerged support structure for floating wind turbines., 19 (6): 49-66.

Li, H., Zhang, S., Qin, X., Zhang, X., and Zheng, Y., 2019. Enhanced data transmission rate of XCTD profiler based on OFDM., 18 (3): 1-7.

Liu, F., Shen, T., Luo, Z., Zhao, D., and Guo, S., 2021. Underwater target recognition using convolutional recurrent neural networks with 3-D Mel-spectrogram and data augmentation., 178: 107989.

Qiu, Y., Yuan, F., Ji, S., and Cheng, E., 2021. Stochastic resonance with reinforcement learning for underwater acoustic communication signal., 173: 107688.

Russo, P., Di Ciaccio, F., and Troisi, S., 2021. DANAE++: A smart approach for denoising underwater attitude estimation., 21: 1526.

Santos-Domínguez, D., Torres-Guijarro, S., Cardenal-López, A., and Pena-Gimenez, A., 2016. ShipsEar: An underwater vessel noise database., 113: 64-69.

Stulov, A., and Kartofelev, D., 2014. Vibration of strings with nonlinear supports., 76 (1): 223-229.

Sutskever, I., and Hinton, G. E., 2014. Deep, narrow sigmoid belief networks are universal approximators., 20 (11): 2629-2636.

Taroudakis, M., Smaragdakis, C., and Chapman, N. R., 2017. Denoising underwater acoustic signals for applications in acoustical oceanography., 25 (2): 1750015.

Vincent, E., Gribonval, R., and Févotte, C., 2006. Performance measurement in blind audio source separation., 14 (4): 1462-1469.

Wang, S., and Zeng, X., 2014. Robust underwater noise targets classification using auditory inspired time-frequency analysis., 78: 68-76.

Wang, X., Zhao, Y., Teng, X., and Sun, W., 2020. A stacked convolutional sparse denoising autoencoder model for underwater heterogeneous information data., 167: 107391.

Wu, D., and Wu, C., 2022. Research on the time-dependent split delivery green vehicle routing problem for fresh agricultural products with multiple time windows., 12 (6): 793.

Xing, C., Wu, Y., Xie, L., and Zhang, D., 2021. A sparse dictionary learning-based denoising method for underwater acoustic sensors., 180: 108140.

Yaman, O., Tuncer, T., and Tasar, B., 2021. DES-Pat: A novel DES pattern-based propeller recognition method using underwater acoustical sounds., 175: 107859.

Yan, H., Xu, T., Wang, P., Zhang, L., Hu, H., and Bai, Y., 2019. MEMS hydrophone signal denoising and baseline drift removal algorithm based on parameter-optimized variational mode decomposition and correlation coefficient., 19 (21): 4622.

Yang, W., Chang, W., Song, Z., Zhang, Y., and Wang, X., 2021. Transfer learning for denoising the echolocation clicks of finless porpoise using deep convolutional autoencoders., 150 (2): 1243-1250.

Yao, R., Guo, C., Deng, W., and Zhao, H. M., 2022. A novel mathematical morphology spectrum entropy based on scale-adaptive techniques., 126: 691-702.

Zhao, Y. X., Li, Y., and Wu, N., 2021. Data augmentation and its application in distributed acoustic sensing data denoising., 288 (1): 119-133.

Zhou, X., and Yang, K., 2020. A denoising representation framework for underwater acoustic signal recognition., 147 (4): 377-383.

Zhou, X., Ma, H., Gu, J., Chen, H., and Wu, D., 2022. Parameter adaptation-based ant colony optimization with dynamic hybrid mechanism., 114: 105139.

(Received June 30, 2022; revised August 25, 2022; accepted February 15, 2023)

© Ocean University of China, Science Press and Springer-Verlag GmbH Germany 2023

* Corresponding author. E-mail: 1609217323@qq.com

    (Edited by Chen Wenwen)

最近最新中文字幕大全电影3| 午夜福利高清视频| 欧美激情在线99| 在线播放国产精品三级| 国产老妇女一区| 蜜臀久久99精品久久宅男| 五月伊人婷婷丁香| 毛片一级片免费看久久久久| 特大巨黑吊av在线直播| 在线观看66精品国产| av免费在线看不卡| 久久久色成人| 日韩视频在线欧美| 最近中文字幕2019免费版| 国产精品av视频在线免费观看| 欧美xxxx黑人xx丫x性爽| 精品久久久噜噜| 日韩av不卡免费在线播放| 久久久精品欧美日韩精品| 中文字幕久久专区| 亚洲一级一片aⅴ在线观看| 天堂影院成人在线观看| 久久99热这里只有精品18| 久久精品国产亚洲av天美| 亚洲精品日韩av片在线观看| 秋霞在线观看毛片| 国产极品天堂在线| 国产精品一二三区在线看| 午夜日本视频在线| 亚洲成人久久爱视频| av在线蜜桃| 国产精品熟女久久久久浪| 天堂中文最新版在线下载 | 久久精品综合一区二区三区| 天堂av国产一区二区熟女人妻| 一边亲一边摸免费视频| 丝袜喷水一区| 我的女老师完整版在线观看| 内地一区二区视频在线| 国产精品福利在线免费观看| 一级黄片播放器| av免费在线看不卡| 国产av一区在线观看免费| 日本色播在线视频| 国产亚洲一区二区精品| 在线播放无遮挡| 99久久中文字幕三级久久日本| 日韩一区二区三区影片| 中文字幕av成人在线电影| 日本欧美国产在线视频| 3wmmmm亚洲av在线观看| 国产视频内射| 午夜日本视频在线| 一级黄片播放器| 亚洲精品,欧美精品| 久久久久性生活片| 日日啪夜夜撸| 国产黄色视频一区二区在线观看 | 男女边吃奶边做爰视频| 国产色婷婷99| 变态另类丝袜制服| 精品国内亚洲2022精品成人| 美女国产视频在线观看| 别揉我奶头 嗯啊视频| 免费av毛片视频| 纵有疾风起免费观看全集完整版 | 亚洲成人久久爱视频| 中文亚洲av片在线观看爽| 亚洲欧美一区二区三区国产| av免费观看日本| 精品国内亚洲2022精品成人| 色播亚洲综合网| 91午夜精品亚洲一区二区三区| 国产精品久久电影中文字幕| 男女国产视频网站| 精品国产露脸久久av麻豆 | 肉色欧美久久久久久久蜜桃| √禁漫天堂资源中文www| 在线观看一区二区三区激情| 亚洲伊人久久精品综合| 国产深夜福利视频在线观看| 成人国产麻豆网| 精品亚洲成a人片在线观看| 一本色道久久久久久精品综合| 久久精品熟女亚洲av麻豆精品| 激情五月婷婷亚洲| 日韩,欧美,国产一区二区三区| 制服人妻中文乱码| 精品少妇黑人巨大在线播放| 高清不卡的av网站| 日本免费在线观看一区| 9热在线视频观看99| 欧美日韩成人在线一区二区| 久久精品夜色国产| 久久免费观看电影| 亚洲人与动物交配视频| 亚洲精品aⅴ在线观看| 亚洲欧美一区二区三区国产| 国产老妇伦熟女老妇高清| 男女啪啪激烈高潮av片| 国产日韩欧美亚洲二区| 人人妻人人添人人爽欧美一区卜| 纵有疾风起免费观看全集完整版| 日韩制服丝袜自拍偷拍| 丰满迷人的少妇在线观看| 亚洲欧美精品自产自拍| 亚洲国产成人一精品久久久| av一本久久久久| 狠狠婷婷综合久久久久久88av| 欧美日韩亚洲高清精品| 精品一区在线观看国产| 狠狠婷婷综合久久久久久88av| 欧美 日韩 精品 国产| 黑人猛操日本美女一级片| 熟女电影av网| 蜜桃国产av成人99| 多毛熟女@视频| 国产av码专区亚洲av| 香蕉精品网在线| av一本久久久久| 天堂中文最新版在线下载| 中文字幕免费在线视频6| 久久影院123| 日本-黄色视频高清免费观看| 中文字幕亚洲精品专区| 国产精品一区二区在线不卡| 亚洲欧美一区二区三区黑人 | 久热这里只有精品99| 美国免费a级毛片| 中文字幕人妻熟女乱码| 热99久久久久精品小说推荐| 日本欧美国产在线视频|