
    Semantic image annotation based on GMM and random walk model①

    Tian Dongping (田東平)②***

    (*Institute of Computer Software, Baoji University of Arts and Sciences, Baoji 721007, P.R.China) (**Institute of Computational Information Science, Baoji University of Arts and Sciences, Baoji 721007, P.R.China)

    Automatic image annotation has been an active research topic in computer vision and pattern recognition for decades. A two-stage automatic image annotation method based on the Gaussian mixture model (GMM) and a random walk model (abbreviated as GMM-RW) is presented. To start with, a GMM fitted by the rival penalized expectation maximization (RPEM) algorithm is employed to estimate the posterior probability of each annotation keyword. Subsequently, a random walk process over the constructed label similarity graph is implemented to further mine the potential correlations of the candidate annotations so as to obtain the refined results, which plays a crucial role in semantic-based image retrieval. The contributions of this work are threefold. First, GMM is exploited to capture the initial semantic annotations; in particular, the RPEM algorithm is utilized to train the model, which can determine the number of components in the GMM automatically. Second, a label similarity graph is constructed by a weighted linear combination of label similarity and visual similarity of the images associated with the corresponding labels, which can efficiently avoid the phenomena of polysemy and synonymy during the image annotation process. Third, a random walk is implemented over the constructed label graph to further refine the candidate set of annotations generated by the GMM. Experiments conducted on the standard Corel5k dataset demonstrate that GMM-RW is significantly more effective and efficient than several state-of-the-art methods in the task of automatic image annotation.

    semantic image annotation, Gaussian mixture model (GMM), random walk, rival penalized expectation maximization (RPEM), image retrieval

    0 Introduction

    With the advent and popularity of the world wide web, the number of accessible digital images for various purposes is growing at an exponential speed. To make the best use of these resources, people need an efficient and effective tool to manage them. In this context, content-based image retrieval (CBIR) was introduced in the early 1990s; it relies heavily on low-level features to find images relevant to the query concept represented by a query example provided by the user. However, in the field of computer vision and multimedia processing, the semantic gap between low-level visual features and high-level semantic concepts is a major obstacle to CBIR. As a result, automatic image annotation (AIA) has appeared and become an active research topic in computer vision for decades due to its potentially large impact on both image understanding and web image search[1]. To be specific, AIA refers to the process of automatically generating textual words to describe the content of a given image, which plays a crucial role in semantic-based image retrieval. As can be seen from the literature, research on AIA has mainly proceeded along two lines. The first poses image annotation as a supervised classification problem, which treats each semantic keyword or concept as an independent class and assigns each keyword or concept one classifier. More specifically, such approaches predict the annotations for a new image by computing the similarity at the visual level and then propagating the corresponding keywords. Representative work includes automatic linguistic indexing of pictures[2] and the supervised formulation for semantic image annotation and retrieval[3]. In contrast, the second line treats the words and visual tokens in each image as equivalent features in different modalities. Image annotation is then formalized by modeling the joint distribution of visual and textual features on the training data and predicting the missing textual features for a new image. Representative research includes the translation model (TM)[4], the cross-media relevance model (CMRM)[5], the continuous-space relevance model (CRM)[6], the multiple Bernoulli relevance model (MBRM)[7], probabilistic latent semantic analysis (PLSA)[8] and the correlated topic model[9], etc. By comparison, the former approach is relatively direct and natural to understand, but its performance degrades as the number of semantic concepts and the amount of multimedia data on the web increase. The latter, on the other hand, often requires a large number of parameters to be estimated, and its accuracy is strongly affected by the quantity and quality of the available training data.

    The rest of this paper is structured as follows. Section 1 summarizes the related work, particularly GMM applied in the fields of automatic image annotation and retrieval. Section 2 elaborates the proposed GMM-RW model, including its parameter estimation, the label similarity graph and the refinement of annotations based on the random walk. In Section 3, experiments conducted on the standard Corel5k dataset are reported and analyzed. Finally, concluding remarks and potential research directions for GMM are given in Section 4.

    1 Related work

    The Gaussian mixture model (GMM), as a kind of supervised learning method, has been extensively applied in machine learning and pattern recognition. As representative work using GMM for automatic image annotation, Yang et al.[10] formulate AIA as a supervised multi-class labeling problem. They employ color and texture features to form two separate vectors, for which two independent Gaussian mixture models are estimated from the training set as the class densities by means of the EM algorithm in conjunction with a denoising technique. In Ref.[11], an effective visual vocabulary was constructed by applying a hierarchical GMM instead of traditional clustering methods. Meanwhile, PLSA was utilized to explore semantic aspects of visual concepts and to discover topic clusters among documents and visual words, so that every image could be projected onto a lower-dimensional topic space for more efficient annotation. Besides, Wang et al.[12] adapted the conventional GMM into a global one estimated from all patches of the training images, along with an image-specific GMM obtained by adapting the mean vectors of the global GMM while retaining the mixture weights and covariance matrices. Afterwards, GMM is embedded into the max-min posterior pseudo-probability framework for AIA, in which concept-specific visual vocabularies are generated by assuming that the localized features of images with a specific concept satisfy a GMM distribution[13]. It is generally believed that the spatial relation among objects is very important for image understanding and recognition. In more recent work[14], a new method for automatic image annotation based on GMM is proposed that takes this factor into account through region-based color and coordinate matching. To be specific, this method first partitions images into disjoint, connected regions using color features and x-y coordinates, while the training dataset is modeled through GMM to obtain a stable annotation result in the later phase.

    As representative work for CBIR, Sahbi[15] proposed a GMM for clustering and its application to image retrieval. In particular, each cluster of data, modeled as a GMM in an input space, is interpreted as a hyperplane in a high-dimensional mapping space where the underlying coefficients are found by solving a quadratic programming problem. In Ref.[16], GMM was leveraged to work on color histograms built with weights delivered by a bilateral filter scheme, which enabled the retrieval system not only to consider the global distribution of the color image pixels but also to take into account their spatial arrangement. In the work of Sayad et al.[17], a new method was introduced using multilayer PLSA for image retrieval, which could effectively eliminate the noisiest words generated by the vocabulary building process. Meanwhile, an edge context descriptor is extracted by GMM, and a spatial weighting scheme based on GMM is constructed to reflect the information about the spatial structure of the images. At the same time, Raju et al.[18] presented a method for CBIR by making use of the generalized GMM. Wan et al.[19] proposed a clustering-based indexing approach called GMM cluster forest to support multi-feature similarity search in high-dimensional spaces. In addition, GMM has also been successfully applied to other multimedia-related tasks[20-24].

    As briefly reviewed above, most of these GMM-related models achieve encouraging performance, which motivates the exploration of better image annotation methods built on their experience. In this paper, a two-stage automatic image annotation method based on the Gaussian mixture model and a random walk is therefore proposed. First, the GMM is learned by the rival penalized expectation maximization algorithm to estimate the posterior probability of each annotation keyword; in other words, the GMM is exploited to capture the initial semantic annotations, which can be seen as the first stage of AIA. Second, a label similarity graph is constructed by a weighted linear combination of label similarity and visual similarity of the images associated with the corresponding labels, which can efficiently avoid the phenomena of polysemy and synonymy. Third, a random walk is implemented over the constructed label graph to further refine the candidate set of annotations generated by the GMM, which can be viewed as the second stage of image annotation. Finally, extensive experiments on the Corel5k dataset validate the effectiveness and efficiency of the proposed model.

    2 Proposed GMM-RW

    In this section, the scheme of the GMM-RW model proposed in this study is first described (as depicted in Fig.1). Subsequently, GMM-RW is elaborated from three aspects of GMM and its parameter estimation, construction of the label similarity graph and refining annotation based on the random walk, respectively.

    Fig.1 Scheme of the proposed GMM-RW model

    2.1 GMM and its parameter estimation

    A Gaussian mixture model is a parametric probability density function represented as a weighted sum of Gaussian component densities. GMM is commonly used as a parametric model of the probability distribution of continuous measurements. More formally, a GMM is a weighted sum of M component Gaussian densities, as given by the following equation:

    p(x|λ) = ∑_{i=1}^{M} w_i g(x|μ_i, Σ_i)        (1)

    where x is a D-dimensional continuous-valued data vector, w_i (i=1,2,…,M) denotes the mixture weights, and g(x|μ_i, Σ_i), i=1,2,…,M, are the component Gaussian densities. Each component density is a D-variate Gaussian function as follows:

    g(x|μ_i, Σ_i) = (1/((2π)^{D/2} |Σ_i|^{1/2})) exp(-(1/2)(x - μ_i)^T Σ_i^{-1} (x - μ_i))        (2)

    with mean vector μ_i and covariance matrix Σ_i. The mixture weights satisfy the constraint ∑_{i=1}^{M} w_i = 1. The complete GMM is parameterized by the mean vectors, covariance matrices and mixture weights of all the component densities, and these parameters can be collectively represented by the notation λ = {w_i, μ_i, Σ_i}, i=1,2,…,M.
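
    For illustration, the following minimal Python/NumPy sketch evaluates Eqs (1) and (2) for a small GMM; the component parameters and the input vector are toy values and all names are illustrative.

```python
import numpy as np

def gaussian_density(x, mu, cov):
    """Eq.(2): D-variate Gaussian density g(x | mu, cov) with full covariance."""
    D = x.shape[0]
    diff = x - mu
    inv = np.linalg.inv(cov)
    norm = 1.0 / ((2 * np.pi) ** (D / 2) * np.sqrt(np.linalg.det(cov)))
    return norm * np.exp(-0.5 * diff @ inv @ diff)

def gmm_density(x, weights, means, covs):
    """Eq.(1): p(x | lambda) as a weighted sum of M component densities."""
    return sum(w * gaussian_density(x, mu, cov)
               for w, mu, cov in zip(weights, means, covs))

# Toy example: a 2-component GMM over 3-dimensional features.
weights = np.array([0.4, 0.6])            # mixture weights, sum to 1
means = [np.zeros(3), np.ones(3)]         # mean vectors mu_i
covs = [np.eye(3), 0.5 * np.eye(3)]       # covariance matrices Sigma_i
x = np.array([0.2, 0.1, 0.3])
print(gmm_density(x, weights, means, covs))
```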

    There are several techniques available for estimating the parameters of a GMM. By far the most popular and well-established method is maximum likelihood (ML) estimation, whose aim is to find the model parameters that maximize the likelihood of the GMM given the training data. In general, the expectation-maximization (EM) algorithm is employed to fit the GMM because direct maximization of the likelihood is infeasible. However, EM imposes no penalty on redundant mixture components, which means that the number of components in a GMM cannot be determined automatically and has to be assigned in advance. To this end, the rival penalized expectation maximization (RPEM) algorithm[25] is leveraged to determine the number of components as well as to estimate the model parameters. Since RPEM introduces unequal weights into the conventional likelihood, the weighted likelihood can be written as below:

    Q(λ; X) = (1/N) ∑_{i=1}^{N} l(x_i; λ)        (3)

    with

    l(x_i; λ) = ∑_{j=1}^{M} g(j|x_i, λ) ln[w_j g(x_i|μ_j, Σ_j)]        (4)

    where h(j|x_i, λ) = w_j g(x_i|μ_j, Σ_j)/p(x_i|λ) is the posterior probability that x_i belongs to the j-th component of the mixture, and g(j|x_i, λ), j=1,2,…,M, are designable weight functions satisfying the following constraints:

    ∑_{j=1}^{M} g(j|x_i, λ) = 1,  and  g(j|x_i, λ) = 0 whenever h(j|x_i, λ) = 0        (5)

    In Ref.[25], they are constructed as follows:

    g(j|x_i, λ) = (1 + ε_i) I(j|x_i, λ) - ε_i h(j|x_i, λ)        (6)

    where I(j|x_i, λ) equals 1 if j = argmax_{1≤k≤M} h(k|x_i, λ) and 0 otherwise, and ε_i is a small positive quantity. The major steps of the RPEM algorithm can be summarized as below:

    Algorithm 1: The RPEM algorithm for GMM modeling
    Input: feature vectors x, number of components M, learning rate η, maximum number of epochs epoch_max; initialize λ as λ(0).
    Process:
    1. epoch_count = 0, m = 0;
    2. while epoch_count ≤ epoch_max do
    3.   for i = 1 to N do
    4.     Given λ(m), calculate h(j|x_i, λ(m)) to obtain g(j|x_i, λ(m)) by Eq.(6);
    5.     λ(m+1) = λ(m) + Δλ = λ(m) + η ∂l(x_i; λ)/∂λ |_{λ=λ(m)};
    6.     m = m + 1;
    7.   end for
    8.   epoch_count = epoch_count + 1;
    9. end while
    Output: the converged λ for the GMM.
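
    The following Python sketch illustrates one epoch of Algorithm 1 in a simplified form: it computes the posterior h(j|x_i, λ) and the rival-penalized weights g(j|x_i, λ) of Eq.(6), and then applies the gradient step of line 5 to the mean vectors only. The updates of the mixture weights and covariances, and the pruning of components whose weights shrink toward zero, are omitted, so this is an illustrative sketch rather than a full RPEM implementation; all names are illustrative.

```python
import numpy as np

def component_pdf(x, mu, cov):
    # D-variate Gaussian density g(x | mu_j, Sigma_j), Eq.(2)
    D = x.shape[0]
    diff = x - mu
    inv = np.linalg.inv(cov)
    norm = 1.0 / ((2 * np.pi) ** (D / 2) * np.sqrt(np.linalg.det(cov)))
    return norm * np.exp(-0.5 * diff @ inv @ diff)

def rpem_epoch_means_only(X, weights, means, covs, eta=0.01, eps=1e-3):
    """One epoch of a simplified RPEM-style update (mean vectors only)."""
    M = len(weights)
    for x in X:
        # h(j | x, lambda): posterior of each component
        p = np.array([w * component_pdf(x, mu, cov)
                      for w, mu, cov in zip(weights, means, covs)])
        h = p / p.sum()
        # g(j | x, lambda) from Eq.(6): the winner is awarded, rivals are penalized
        winner = np.argmax(h)
        I = np.zeros(M); I[winner] = 1.0
        g = (1.0 + eps) * I - eps * h
        # Line 5 of Algorithm 1, restricted to the means:
        # d l / d mu_j = g_j * Sigma_j^{-1} (x - mu_j)
        for j in range(M):
            grad_mu = g[j] * np.linalg.inv(covs[j]) @ (x - means[j])
            means[j] = means[j] + eta * grad_mu
    return means

# Toy usage with random 3-dimensional features and M = 4 initial components.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
weights = np.full(4, 0.25)
means = [rng.normal(size=3) for _ in range(4)]
covs = [np.eye(3) for _ in range(4)]
means = rpem_epoch_means_only(X, weights, means, covs)
```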

    Based on the Gaussian mixture model and the RPEM algorithm described above, the GMM can be trained and utilized to characterize the semantic model of the given concepts by Eq.(1). Assume that each training image is represented by both a visual feature set X = {x_1, x_2, …, x_m} and a keyword list W = {w_1, w_2, …, w_n}, where x_i (i=1,2,…,m) denotes the visual feature of region i and w_j (j=1,2,…,n) is the j-th keyword in the annotation. For a test image I represented by its visual feature vectors X = {x_1, x_2, …, x_m}, according to the Bayesian rule, the posterior probability p(w_i|I) can be calculated from the conditional probability p(I|w_i) and the prior probability p(w_i) as follows:

    p(w_i|I) = p(I|w_i) p(w_i) / p(I)        (7)

    From Eq.(7), the top n keywords can be selected as the initial annotations for the test image.
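
    A minimal sketch of this first annotation stage is given below, assuming that one GMM per keyword has already been trained (scikit-learn's EM-based GaussianMixture is used here as a stand-in for the RPEM-trained model) and that the keyword priors p(w) are estimated from training frequencies; all function and variable names are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_keyword_gmms(features_per_word, n_components=5):
    """Fit one GMM per keyword from the region features of its training images.
    (EM-based GaussianMixture stands in for the RPEM-trained GMM here.)"""
    return {w: GaussianMixture(n_components=n_components, covariance_type='diag',
                               random_state=0).fit(X)
            for w, X in features_per_word.items()}

def annotate(image_regions, gmms, priors, top_n=5):
    """Eq.(7): rank keywords by log p(w) + sum_j log p(x_j | w), keep the top n."""
    scores = {}
    for w, gmm in gmms.items():
        log_lik = gmm.score_samples(image_regions).sum()   # log p(I | w)
        scores[w] = np.log(priors[w]) + log_lik            # unnormalized log posterior
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Toy usage: two keywords with random 36-dimensional block features.
rng = np.random.default_rng(1)
features_per_word = {'sky': rng.normal(0, 1, (200, 36)),
                     'tree': rng.normal(2, 1, (200, 36))}
priors = {'sky': 0.5, 'tree': 0.5}
gmms = train_keyword_gmms(features_per_word)
test_image = rng.normal(0, 1, (20, 36))                    # 20 blocks of a test image
print(annotate(test_image, gmms, priors, top_n=1))
```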

    2.2 Construction of the label similarity graph

    In the process of automatic image annotation, at least three kinds of relations over two different modalities are involved: image-to-image, image-to-word and word-to-word relations. How to reasonably reflect these cross-modal relations between images and words plays a critical role in the task of AIA. Note that the most common approaches include WordNet[26] and the normalized Google distance (NGD)[27]. From their definitions, it can easily be observed that NGD is actually a measure of contextual relation, while WordNet focuses on the semantic meaning of the keyword itself. Moreover, both of them build word correlations only from textual descriptions, and the visual information of the images in the dataset is not considered at all, which can easily lead to different images with the same candidate annotations obtaining the same annotation results after the refinement process. For this reason, an effective pairwise similarity strategy is devised by calculating a weighted linear combination of label similarity and visual similarity of the images associated with the corresponding labels, in which the label similarity between words w_i and w_j is defined as

    s_l(w_i, w_j) = exp(-d(w_i, w_j))        (8)

    where d(w_i, w_j) represents the distance between the two words w_i and w_j, defined similarly to NGD as below:

    d(w_i, w_j) = [max(log f(w_i), log f(w_j)) - log f(w_i, w_j)] / [log G - min(log f(w_i), log f(w_j))]        (9)

    where f(w_i) and f(w_j) denote the numbers of images containing words w_i and w_j respectively, f(w_i, w_j) is the number of images containing both w_i and w_j, and G is the total number of images in the dataset.

    Similar to Ref.[28], for a label w associated with image x, the K nearest neighbors are collected from the images containing w, and these images can be regarded as the exemplars of label w with respect to x. Thus, from the point of view of the labels associated with an image, the visual similarity between labels w_i and w_j is given as follows:

    s_v(w_i, w_j) = (1/(|Γ_{w_i}| |Γ_{w_j}|)) ∑_{x∈Γ_{w_i}} ∑_{y∈Γ_{w_j}} exp(-‖x - y‖² / σ²)        (10)

    where Γ_w is the representative image collection of word w, x and y denote image features from the respective image collections of words w_i and w_j, and σ is a user-defined radius parameter of the Gaussian kernel function. To benefit from both similarities described above, a weighted linear combination of label similarity and visual similarity is defined as below:

    s_ij = s(w_i, w_j) = λ s_l(w_i, w_j) + (1 - λ) s_v(w_i, w_j)        (11)

    where λ ∈ [0,1] is utilized to control the weight of each measurement.
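
    The following sketch assembles the pairwise similarity s_ij of Eq.(11) from keyword co-occurrence counts and exemplar image features, assuming the NGD-like distance and the Gaussian-kernel exemplar similarity take the forms given in Eqs (9) and (10); the counts, features and parameter values are toy examples.

```python
import numpy as np

def label_distance(f_i, f_j, f_ij, G):
    """NGD-like distance of Eq.(9) from image counts f(w_i), f(w_j), f(w_i, w_j)."""
    num = max(np.log(f_i), np.log(f_j)) - np.log(f_ij)
    den = np.log(G) - min(np.log(f_i), np.log(f_j))
    return num / den

def label_similarity(f_i, f_j, f_ij, G):
    """Eq.(8): s_l(w_i, w_j) = exp(-d(w_i, w_j))."""
    return np.exp(-label_distance(f_i, f_j, f_ij, G))

def visual_similarity(exemplars_i, exemplars_j, sigma=1.0):
    """Eq.(10): average Gaussian-kernel similarity between the exemplar features
    of the two labels (the K nearest images containing each label)."""
    diffs = exemplars_i[:, None, :] - exemplars_j[None, :, :]
    sq_dist = (diffs ** 2).sum(axis=-1)
    return np.exp(-sq_dist / sigma ** 2).mean()

def pairwise_similarity(f_i, f_j, f_ij, G, exemplars_i, exemplars_j, lam=0.6):
    """Eq.(11): weighted linear combination of label and visual similarity."""
    return lam * label_similarity(f_i, f_j, f_ij, G) + \
           (1 - lam) * visual_similarity(exemplars_i, exemplars_j)

# Toy usage: two labels that co-occur in 30 of 5000 images, 5 exemplars each.
rng = np.random.default_rng(2)
s = pairwise_similarity(120, 80, 30, 5000,
                        rng.normal(size=(5, 36)), rng.normal(size=(5, 36)))
print(s)
```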

    2.3 Refining annotation based on random walk

    In the following, the refining image annotation stage is elaborated based on the initial annotations generated by the GMM and the random walk model. Given the label graph constructed in subsection 2.2 with n nodes, let r_k(i) denote the relevance score of node i at iteration k, and let P denote an n-by-n transition matrix whose element p_ij indicates the probability of a transition from node i to node j, computed as

    p_ij = s_ij / ∑_{k=1}^{n} s_ik        (12)

    where s_ij is the pairwise label similarity (defined by Eq.(11)) between node i and node j. Then the random walk process can be formulated as

    r_k(j) = α ∑_{i=1}^{n} r_{k-1}(i) p_ij + (1 - α) v_j        (13)

    where α ∈ (0,1) is a weight parameter to be determined and v_j denotes the initial annotation probability score calculated by the GMM. In the process of refining image annotation, the random walk proceeds until it reaches the steady-state probability distribution, and then the top several candidates with the highest probabilities are taken as the final refined image annotation results.
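
    A minimal sketch of this refinement stage is shown below: the similarity matrix is row-normalized into the transition matrix of Eq.(12) and the recursion of Eq.(13) is iterated until the scores stabilize; the similarity matrix and initial scores are toy values.

```python
import numpy as np

def random_walk_refine(S, v, alpha=0.5, tol=1e-6, max_iter=1000):
    """Refine initial GMM scores v over a label graph with similarity matrix S.
    P is the row-normalized transition matrix of Eq.(12); the update follows Eq.(13)."""
    P = S / S.sum(axis=1, keepdims=True)
    r = v.copy()
    for _ in range(max_iter):
        r_next = alpha * (r @ P) + (1 - alpha) * v
        if np.abs(r_next - r).max() < tol:
            break
        r = r_next
    return r

# Toy usage: 4 candidate labels, symmetric similarities and initial GMM scores.
S = np.array([[1.0, 0.8, 0.1, 0.2],
              [0.8, 1.0, 0.3, 0.1],
              [0.1, 0.3, 1.0, 0.6],
              [0.2, 0.1, 0.6, 1.0]])
v = np.array([0.5, 0.2, 0.2, 0.1])     # initial annotation scores from the GMM stage
print(random_walk_refine(S, v))
```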

    3 Experimental results and analysis

    3.1 Dataset and evaluation measures

    The proposed GMM-RW is tested on the Corel5k image dataset obtained from Ref.[4]. Corel5k consists of 5,000 images from 50 Corel Stock Photo CDs. Each CD contains 100 images with a certain theme (e.g. polar bears), of which 90 are designated to the training set and 10 to the test set, resulting in 4,500 training images and a balanced 500-image test collection. For the sake of fair comparison, features similar to those of Ref.[7] are extracted. First of all, images are simply decomposed into a set of 32×32-sized blocks, and a 36-dimensional feature vector is computed for each block, consisting of 24 color features (auto-correlogram computed over 8 quantized colors and 3 Manhattan distances) and 12 texture features (Gabor filters computed over 3 scales and 4 orientations). As a result, each block is represented by a 36-dimensional feature vector, and each image is represented as a bag of such features. These features are subsequently employed to train the GMM with the RPEM algorithm. In addition, the value of λ in Eq.(11) is set to 0.6 and the value of α in Eq.(13) is set to 0.5 by trial and error. Without loss of generality, the commonly used metrics of precision and recall are calculated for every word in the test set, and the means of these values are used to summarize the performance.
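
    For illustration, the block decomposition described above can be sketched as follows; the 36-dimensional auto-correlogram and Gabor descriptor itself is left as a placeholder, since its implementation details are not reproduced here.

```python
import numpy as np

def image_blocks(image, block_size=32):
    """Split an H x W x 3 image array into non-overlapping 32 x 32 blocks."""
    H, W = image.shape[:2]
    blocks = []
    for r in range(0, H - block_size + 1, block_size):
        for c in range(0, W - block_size + 1, block_size):
            blocks.append(image[r:r + block_size, c:c + block_size])
    return blocks

def block_feature(block):
    """Placeholder for the 36-dim descriptor (24 auto-correlogram colour features
    plus 12 Gabor texture features); here a random stub of the right length."""
    return np.random.default_rng().normal(size=36)

image = np.zeros((192, 128, 3))
features = np.array([block_feature(b) for b in image_blocks(image)])
print(features.shape)   # (24, 36): 6 x 4 blocks, 36 dimensions each
```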

    3.2 Results of automatic image annotation

    Matlab 7.0 is used to implement the proposed GMM-RW model. Specifically, the experiments are carried out on a personal computer with a 1.80GHz Intel Core Duo CPU and 2.0GB of memory, running Microsoft Windows XP Professional. To verify the effectiveness of the proposed model, it is compared with several previous approaches[4-8]. Table 1 reports the experimental results on two sets of words: the subset of the 49 best words and the complete set of all 260 words that occur in the training set. From Table 1, it is clear that the proposed model markedly outperforms all the others, especially the first three approaches. Meanwhile, it is also superior to PLSA-WORDS and MBRM by gains of 21 and 4 words with non-zero recall, 30% and 4% in mean per-word recall, and 79% and 4% in mean per-word precision on the set of 260 words, respectively. In addition, compared with MBRM on the set of 49 best words, an improvement in mean per-word precision is obtained even though the mean per-word recall of GMM-RW is the same as that of MBRM.
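
    The evaluation protocol can be made explicit with the following sketch, which computes mean per-word precision, mean per-word recall and the number of words with non-zero recall from predicted and ground-truth annotations, following a common formulation of these Corel5k metrics; the data shown are toy values.

```python
from collections import defaultdict

def per_word_scores(predicted, ground_truth, vocabulary):
    """predicted / ground_truth: dict image_id -> set of keywords."""
    tp = defaultdict(int); pred = defaultdict(int); gt = defaultdict(int)
    for img, truth in ground_truth.items():
        for w in predicted.get(img, set()):
            pred[w] += 1
            if w in truth:
                tp[w] += 1
        for w in truth:
            gt[w] += 1
    precision = {w: tp[w] / pred[w] for w in vocabulary if pred[w] > 0}
    recall = {w: tp[w] / gt[w] for w in vocabulary if gt[w] > 0}
    mean_p = sum(precision.get(w, 0.0) for w in vocabulary) / len(vocabulary)
    mean_r = sum(recall.get(w, 0.0) for w in vocabulary) / len(vocabulary)
    nonzero = sum(1 for w in vocabulary if recall.get(w, 0.0) > 0)
    return mean_p, mean_r, nonzero

# Toy usage over a 3-word vocabulary and two test images.
vocab = {'sky', 'tree', 'water'}
gt = {'img1': {'sky', 'tree'}, 'img2': {'water'}}
pred = {'img1': {'sky', 'water'}, 'img2': {'water'}}
print(per_word_scores(pred, gt, vocab))
```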

    Table 1 Performance comparison on Corel5k dataset

    To further illustrate the effect of the GMM-RW model on automatic image annotation, Fig.2 displays the average annotation precision of the 10 selected words “flowers”, “mountain”, “snow”, “tree”, “building”, “beach”, “water”, “sky”, “bear” and “cat” based on the GMM and GMM-RW models, respectively. As shown in Fig.2, the average precision of the proposed model is clearly higher than that of GMM. The reason is that, in addition to profiting from the calculation strategy for cross-modal relations between images and words, GMM-RW benefits to a large extent from the random walk process, which further mines the correlations among the candidate annotations.

    Fig.2 Average precision based on GMM and GMM-RW

    Alternatively, Table 2 shows some examples of image annotation (only eight cases are listed here due to limited space) produced by PLSA-WORDS and GMM-RW, respectively. It can be clearly observed that the proposed model generates more accurate annotation results compared with both the original annotations and those provided in Ref.[8]. Taking the first image in the first row as an example, there are four tags in the original annotation; after annotation by GMM-RW, it is enriched with the additional keyword “grass”, which is very appropriate and reasonable for describing the visual content of the image. On the other hand, it is important to note that the keyword ranking produced by GMM-RW is more reasonable than that generated by PLSA-WORDS, which plays a crucial role in semantic-based image retrieval. In addition, as for the complexity of GMM-RW, assuming that there are D training images and each image produces R visual feature vectors, the complexity of the model is O(DR), which is similar to that of the classic CRM and MBRM models mentioned in Ref.[3].

    Table 2 Annotation comparison with PLSA-WORDS and GMM-RW

    4 Conclusions and future work

    In this paper, a two-stage automatic image annotation method based on GMM and a random walk model is presented. First, a GMM fitted by rival penalized expectation maximization is applied to estimate the posterior probability of each annotation keyword. A random walk process over the constructed label similarity graph is then implemented to further mine the correlations of the candidate annotations so as to obtain the refined results. In particular, the label similarity graph is constructed by a weighted linear combination of label similarity and visual similarity of the images associated with the corresponding labels, which can efficiently avoid the phenomena of polysemy and synonymy in the course of automatic image annotation. Extensive experiments on the general-purpose Corel5k dataset validate the feasibility and utility of the proposed GMM-RW model.

    As for future work, more powerful GMM-related models for automatic image annotation will be explored from the following aspects. First, since the classic GMM is limited in its modeling ability because all data points of an object are required to be generated from a pool of mixtures with the same set of mixture weights, how to determine the weight factors of GMM more appropriately is well worth exploring. Second, how to speed up GMM estimation with the EM algorithm is also an important issue for large-scale multimedia processing; in other words, the choice of alternative techniques for estimating the GMM parameters could also be very valuable. Third, how to introduce semi-supervised learning into the proposed approach so as to utilize labeled and unlabeled data simultaneously is a worthy research direction. At the same time, work on web image annotation will be continued by refining more relevant semantic information from web pages and building more suitable connections between image content features and the available semantic information. Last but not least, GMM-RW is expected to be applied to a wider range of multimedia-related tasks, such as speech recognition, video recognition and other multimedia event detection tasks.

    [ 1] Tian D P. Exploiting PLSA model and conditional random field for refining image annotation. High Technology Letters, 2015, 21(1):78-84

    [ 2] Li J, Wang J. Automatic linguistic indexing of pictures by a statistical modeling approach. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003, 25(9):1075-1088

    [ 3] Carneiro G, Chan A, Moreno P, et al. Supervised learning of semantic classes for image annotation and retrieval. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(3):394-410

    [ 4] Duygulu P, Barnard K, Freitas N De, et al. Object recognition as machine translation: learning a lexicon for a fixed image vocabulary. In: Proceedings of the 7th European Conference on Computer Vision, Copenhagen, Denmark, 2002. 97-112

    [ 5] Jeon J, Lavrenko V, Manmatha R. Automatic image annotation and retrieval using cross-media relevance models. In: Proceedings of the 26th International ACM SIGIR Conference on Research and Development in Information Retrieval, Toronto, Canada, 2003. 119-126

    [ 6] Lavrenko V, Manmatha R, Jeon J. A model for learning the semantics of pictures. In: Proceedings of the Advances in Neural Information Processing Systems 16, Vancouver, Canada, 2003. 553-560

    [ 7] Feng S, Manmatha R, Lavrenko V. Multiple Bernoulli relevance models for image and video annotation. In: Proceedings of the International Conference on Computer Vision and Pattern Recognition, Washington, USA, 2004. 1002-1009

    [ 8] Monay F, Gatica-Perez D. Modeling semantic aspects for cross-media image indexing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(10):1802-1817

    [ 9] Blei D, Lafferty J. Correlated topic models. Annals of Applied Statistics, 2007, 1(1):17-35

    [10] Yang F, Shi F, Wang Z. An improved GMM-based method for supervised semantic image annotation. In: Proceedings of the International Conference on Intelligent Computing and Intelligent Systems, Shanghai, China, 2009. 506-510

    [11] Wang Z, Yi H, Wang J, et al. Hierarchical Gaussian mixture model for image annotation via PLSA. In: Proceedings of the 5th International Conference on Image and Graphics, Xi’an, China, 2009. 384-389

    [12] Wang C, Yan S, Zhang L, et al. Multi-label sparse coding for automatic image annotation. In: Proceedings of the International Conference on Computer Vision and Pattern Recognition, Miami, USA, 2009. 1643-1650

    [13] Wang Y, Liu X, Jia Y. Automatic image annotation with cooperation of concept-specific and universal visual vocabularies. In: Proceedings of the 16th International Conference on Multimedia Modeling, Chongqing, China, 2010. 262-272

    [14] Luo X, Kita K. Region-based image annotation using Gaussian mixture model. In: Proceedings of the 2nd International Conference on Information Technology and Software Engineering, Beijing, China, 2013. 503-510

    [15] Sahbi H. A particular Gaussian mixture model for clustering and its application to image retrieval. Soft Computing, 2008, 12(7):667-676

    [16] Luszczkiewicz M, Smolka B. Application of bilateral filtering and Gaussian mixture modeling for the retrieval of paintings. In: Proceedings of the 16th International Conference on Image Processing, Cairo, Egypt, 2009. 77-80

    [17] Sayad I, Martinet J, Urruty T, et al. Toward a higher-level visual representation for content-based image retrieval. Multimedia Tools and Applications, 2012, 60(2):455-482

    [18] Raju L, Vasantha K, Srinivas Y. Content based image retrievals based on generalization of GMM. International Journal of Computer Science and Information Technologies, 2012, 3(6):5326-5330

    [19] Wan Y, Liu X, Tong K, et al. GMM-ClusterForest: a novel indexing approach for multi-features based similarity search in high-dimensional spaces. In: Proceedings of the 19th International Conference on Neural Information Processing, Doha, Qatar, 2012. 210-217

    [20] Dixit M, Rasiwasia N, Vasconcelos N. Adapted Gaussian models for image classification. In: Proceedings of the International Conference on Computer Vision and Pattern Recognition, Providence, USA, 2011. 937-943

    [21] Celik T. Image change detection using Gaussian mixture model and genetic algorithm. Journal of Visual Communication and Image Representation, 2010, 21(8):965-974

    [22] Beecks C, Ivanescu A, Kirchhoff S, et al. Modeling image similarity by Gaussian mixture models and the signature quadratic form distance. In: Proceedings of the 13th International Conference on Computer Vision, Barcelona, Spain, 2011. 1754-1761

    [23] Wang Y, Chen W, Zhang J, et al. Efficient volume exploration using the Gaussian mixture model. IEEE Transactions on Visualization and Computer Graphics, 2011, 17(11):1560-1573

    [24] Inoue N, Shinoda K. A fast and accurate video semantic-indexing system using fast MAP adaptation and GMM supervectors. IEEE Transactions on Multimedia, 2012, 14(4):1196-1205

    [25] Cheung Y. Maximum weighted likelihood via rival penalized EM for density mixture clustering with automatic model selection. IEEE Transactions on Knowledge and Data Engineering, 2005, 17(6):750-761

    [26] Fellbaum C. WordNet. Theory and Applications of Ontology: Computer Applications, 2010. 231-243

    [27] Cilibrasi R, Vitanyi P M. The Google similarity distance. IEEE Transactions on Knowledge and Data Engineering, 2007, 19(3):370-383

    [28] Liu D, Hua X, Yang L, et al. Tag ranking. In: Proceedings of the 18th International Conference on World Wide Web, Madrid, Spain, 2009. 351-360

    10.3772/j.issn.1006-6748.2017.02.015

    ①Supported by the National Basic Research Program of China (No.2013CB329502), the National Natural Science Foundation of China (No.61202212), the Special Research Project of the Educational Department of Shaanxi Province of China (No.15JK1038) and the Key Research Project of Baoji University of Arts and Sciences (No.ZK16047).

    ②To whom correspondence should be addressed. E-mail: tdp211@163.com

    Received on May 25, 2016

    Tian Dongping, born in 1981. He received his M.Sc. and Ph.D. degrees in computer science from Shanghai Normal University and the Institute of Computing Technology, Chinese Academy of Sciences in 2007 and 2014, respectively. His research interests include computer vision, machine learning and evolutionary computation.
