
    Semantic image annotation based on GMM and random walk model①

    2017-06-27 08:09:22TianDongping田東平
    High Technology Letters 2017年2期


    Tian Dongping (田東平)②***

    (*Institute of Computer Software, Baoji University of Arts and Sciences, Baoji 721007, P.R.China) (**Institute of Computational Information Science, Baoji University of Arts and Sciences, Baoji 721007, P.R.China)

Automatic image annotation has been an active research topic in computer vision and pattern recognition for decades. A two-stage automatic image annotation method based on the Gaussian mixture model (GMM) and a random walk model (abbreviated as GMM-RW) is presented. To start with, a GMM fitted by the rival penalized expectation maximization (RPEM) algorithm is employed to estimate the posterior probability of each annotation keyword. Subsequently, a random walk process over the constructed label similarity graph is implemented to further mine the potential correlations of the candidate annotations so as to obtain the refined results, which plays a crucial role in semantic based image retrieval. The contributions of this work are multifold. First, GMM is exploited to capture the initial semantic annotations; in particular, the RPEM algorithm is utilized to train the model, which can determine the number of components in GMM automatically. Second, a label similarity graph is constructed by a weighted linear combination of label similarity and visual similarity of the images associated with the corresponding labels, which can efficiently avoid the phenomena of polysemy and synonymy during the image annotation process. Third, a random walk is implemented over the constructed label graph to further refine the candidate set of annotations generated by GMM. Experiments conducted on the standard Corel5k dataset demonstrate that GMM-RW is significantly superior to several state-of-the-art approaches in terms of both effectiveness and efficiency in the task of automatic image annotation.

    semantic image annotation, Gaussian mixture model (GMM), random walk, rival penalized expectation maximization (RPEM), image retrieval

    0 Introduction

With the advent and popularity of the world wide web, the number of accessible digital images for various purposes is growing at an exponential speed. To make the best use of these resources, people need an efficient and effective tool to manage them. In this context, content-based image retrieval (CBIR) was introduced in the early 1990s; it relies heavily on low-level features to find images relevant to the query concept represented by the query example provided by the user. However, in the field of computer vision and multimedia processing, the semantic gap between low-level visual features and high-level semantic concepts is a major obstacle to CBIR. As a result, automatic image annotation (AIA) has appeared and become an active research topic in computer vision for decades due to its potentially large impact on both image understanding and web image search[1]. To be specific, AIA refers to a process that automatically generates textual words to describe the content of a given image, which plays a crucial role in semantic based image retrieval. As can be seen from the literature, research on AIA has mainly proceeded along two lines. The first poses image annotation as a supervised classification problem, which treats each semantic keyword or concept as an independent class and assigns each keyword or concept one classifier. More specifically, such approaches predict the annotations for a new image by computing the similarity at the visual level and subsequently propagating the corresponding keywords. Representative work includes automatic linguistic indexing of pictures[2] and the supervised formulation for semantic image annotation and retrieval[3]. In contrast, the second category treats the words and visual tokens in each image as equivalent features in different modalities. Image annotation is then formalized by modeling the joint distribution of visual and textual features on the training data and predicting the missing textual features for a new image. Representative research includes the translation model (TM)[4], cross-media relevance model (CMRM)[5], continuous space relevance model (CRM)[6], multiple Bernoulli relevance model (MBRM)[7], probabilistic latent semantic analysis (PLSA)[8] and correlated topic model[9], etc. By comparison, the former approach is relatively direct and natural to understand, but its performance is limited as the number of semantic concepts and the amount of multimedia data on the web grow explosively. On the other hand, the latter often requires large-scale parameters to be estimated, and its accuracy is strongly affected by the quantity and quality of the available training data.

    The content of this paper is structured as follows. Section 1 summarizes the related work, particularly GMM applied in the fields of automatic image annotation and retrieval. Section 2 elaborates the proposed GMM-RW model, including its parameter estimation, label similarity graph and refining annotation based on the random walk. In Section 3, conducted experiments are reported and analyzed based on the standard Corel5k dataset. Finally, some concluding remarks and potential research directions of GMM in the future are given in Section 4.

    1 Related work

The Gaussian mixture model (GMM), as another kind of supervised learning method, has been extensively applied in machine learning and pattern recognition. As representative work using GMM for automatic image annotation, Yang et al.[10] formulate AIA as a supervised multi-class labeling problem. They employ color and texture features to form two separate vectors, for which two independent Gaussian mixture models are estimated from the training set as the class densities by means of the EM algorithm in conjunction with a denoising technique. In Ref.[11], an effective visual vocabulary was constructed by applying a hierarchical GMM instead of the traditional clustering methods. Meanwhile, PLSA was utilized to explore semantic aspects of visual concepts and to discover topic clusters among documents and visual words, so that every image could be projected onto a lower dimensional topic space for more efficient annotation. Besides, Wang et al.[12] adapted the conventional GMM to a global one estimated from all patches of the training images, along with an image-specific GMM obtained by adapting the mean vectors of the global GMM while retaining the mixture weights and covariance matrices. Afterwards, GMM was embedded into the max-min posterior pseudo-probabilities framework for AIA, in which the concept-specific visual vocabularies are generated by assuming that the localized features of images with a specific concept satisfy the distribution of GMM[13]. It is generally believed that the spatial relation among objects is very important for image understanding and recognition. In more recent work[14], a new method for automatic image annotation based on GMM using region-based color and coordinate matching was proposed to take this factor into account. To be specific, this method first partitions images into disjoint, connected regions with color features and x-y coordinates, while the training dataset is modeled through GMM to obtain a stable annotation result in the later phase.

As representative work for CBIR, Sahbi[15] proposed a GMM for clustering and its application to image retrieval. In particular, each cluster of data, modeled as a GMM in an input space, is interpreted as a hyperplane in a high dimensional mapping space, where the underlying coefficients are found by solving a quadratic programming problem. In Ref.[16], GMM was leveraged to work on color histograms built with weights delivered by the bilateral filter scheme, which enabled the retrieval system not only to consider the global distribution of the color image pixels but also to take into account their spatial arrangement. In the work of Sayad et al.[17], a new method using multilayer PLSA was introduced for image retrieval, which could effectively eliminate the noisiest words generated by the vocabulary building process. Meanwhile, an edge context descriptor is extracted by GMM, and a spatial weighting scheme based on GMM is constructed to reflect information about the spatial structure of the images. At the same time, Raju et al.[18] presented a method for CBIR by making use of a generalized GMM. Wan et al.[19] proposed a clustering based indexing approach called GMM cluster forest to support multi-feature based similarity search in high-dimensional spaces. In addition, GMM has also been successfully applied in other multimedia related fields[20-24].

As briefly reviewed above, most of these GMM related models achieve encouraging performance, which motivates the exploration of better image annotation methods built on their experience. So in this paper, a two-stage automatic image annotation method is proposed based on the Gaussian mixture model and a random walk. First, a GMM is learned by the rival penalized expectation maximization algorithm to estimate the posterior probability of each annotation keyword. In other words, GMM is exploited to capture the initial semantic annotations, which can be seen as the first stage of AIA. Second, a label similarity graph is constructed by a weighted linear combination of label similarity and visual similarity of the images associated with the corresponding labels, which can efficiently avoid the phenomena of polysemy and synonymy. Third, a random walk is implemented over the constructed label graph to further refine the candidate set of annotations generated by GMM, which can be viewed as the second stage of image annotation. Finally, extensive experiments on the Corel5k dataset validate the effectiveness and efficiency of the proposed model.

    2 Proposed GMM-RW

In this section, the scheme of the GMM-RW model proposed in this study is first described (as depicted in Fig.1). Subsequently, GMM-RW is elaborated from three aspects: GMM and its parameter estimation, construction of the label similarity graph, and refining annotation based on the random walk.

    Fig.1 Scheme of the proposed GMM-RW model

    2.1 GMM and its parameter estimation

A Gaussian mixture model is a parametric probability density function represented as a weighted sum of Gaussian component densities, and it is commonly used as a parametric model of the probability distribution of continuous measurements. More formally, a GMM is a weighted sum of M component Gaussian densities given by the following equation:

p(x|λ) = Σ_{i=1}^{M} w_i g(x|μ_i, Σ_i)    (1)

where x is a D-dimensional continuous-valued data vector, w_i (i=1,2,…,M) denotes the mixture weights, and g(x|μ_i, Σ_i), i=1,2,…,M, are the component Gaussian densities. Each component density is a D-variate Gaussian function:

g(x|μ_i, Σ_i) = 1/((2π)^{D/2} |Σ_i|^{1/2}) exp(-(1/2)(x-μ_i)^T Σ_i^{-1} (x-μ_i))    (2)

with mean vector μ_i and covariance matrix Σ_i. The mixture weights satisfy the constraint Σ_{i=1}^{M} w_i = 1. The complete GMM is parameterized by the mean vectors, covariance matrices and mixture weights of all the component densities, collectively represented by the notation λ = {w_i, μ_i, Σ_i}, i=1,2,…,M.
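To make Eqs.(1) and (2) concrete, here is a minimal sketch of evaluating a GMM density in Python; diagonal covariance matrices are assumed for brevity (the model itself places no such restriction):

```python
import math

def gaussian_density(x, mu, var):
    # D-variate Gaussian g(x | mu, Sigma) with a diagonal Sigma;
    # var holds the diagonal entries (a simplifying assumption).
    d = len(x)
    log_det = sum(math.log(v) for v in var)
    quad = sum((xi - mi) ** 2 / v for xi, mi, v in zip(x, mu, var))
    log_g = -0.5 * (d * math.log(2 * math.pi) + log_det + quad)
    return math.exp(log_g)

def gmm_density(x, weights, means, variances):
    # Eq.(1): p(x | lambda) = sum_i w_i * g(x | mu_i, Sigma_i)
    return sum(w * gaussian_density(x, mu, var)
               for w, mu, var in zip(weights, means, variances))
```

With a single standard-normal component, `gmm_density([0.0], [1.0], [[0.0]], [[1.0]])` evaluates to 1/sqrt(2π).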

There are several techniques available for estimating the parameters of a GMM. By far the most popular and well-established is maximum likelihood (ML) estimation, which aims to find the model parameters that maximize the likelihood of the GMM given the training data. In general, the expectation-maximization (EM) algorithm is employed to fit a GMM, since direct maximization of the likelihood is infeasible. However, EM imposes no penalty on redundant mixture components, which means that the number of components in a GMM cannot be determined automatically and has to be assigned in advance. To this end, the rival penalized expectation maximization (RPEM) algorithm[25] is leveraged to determine the number of components as well as to estimate the model parameters. Since RPEM introduces unequal weights into the conventional likelihood, the weighted likelihood can be written as below:

Q(λ; X) = (1/N) Σ_{i=1}^{N} l(x_i; λ)    (3)

with

l(x_i; λ) = Σ_{j=1}^{M} g(j|x_i, λ) ln[w_j p(x_i|μ_j, Σ_j)]    (4)

where h(j|x_i, λ) = w_j p(x_i|μ_j, Σ_j)/p(x_i|λ) is the posterior probability that x_i belongs to the j-th component of the mixture, and g(j|x_i, λ), j=1,2,…,M, are designable weight functions satisfying the following constraint:

Σ_{j=1}^{M} g(j|x_i, λ) = 1    (5)

In Ref.[25], they are constructed as follows:

g(j|x_i, λ) = (1+ε_i) I(j|x_i, λ) - ε_i h(j|x_i, λ)    (6)

where I(j|x_i, λ) equals 1 if j = argmax_{1≤k≤M} h(k|x_i, λ) and 0 otherwise, and ε_i is a small positive quantity. The major steps of the RPEM algorithm can be summarized as below:

Algorithm 1: The RPEM algorithm for GMM modeling
Input: feature vectors x, M, the learning rate η, the maximum number of epochs epoch_max; initialize λ as λ(0).
Process:
1. epoch_count = 0, m = 0
2. while epoch_count ≤ epoch_max do
3.   for i = 1 to N do
4.     Given λ(m), calculate h(j|x_i, λ(m)) to obtain g(j|x_i, λ(m)) by Eq.(6)
5.     λ(m+1) = λ(m) + Δλ = λ(m) + η ∂l(x_i; λ)/∂λ |_{λ=λ(m)}
6.     m = m + 1
7.   end for
8.   epoch_count = epoch_count + 1
9. end while
Output: the converged λ for GMM
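As a sketch of the rival-penalizing weighting in Eq.(6), the snippet below computes the posteriors h(j|x, λ) for a one-dimensional mixture and derives the weights g(j|x, λ); the 1-D Gaussian and the example parameters are illustrative assumptions only:

```python
import math

def _gauss(x, mu, var):
    # 1-D Gaussian density, enough to illustrate the weighting scheme
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def rpem_weights(x, w, mu, var, eps=0.01):
    # Posteriors h(j|x, lambda) for each mixture component (Bayes rule)
    joint = [wj * _gauss(x, mj, vj) for wj, mj, vj in zip(w, mu, var)]
    total = sum(joint)
    h = [j / total for j in joint]
    # Indicator I(j|x, lambda): 1 for the winning component, 0 otherwise
    win = max(range(len(h)), key=lambda j: h[j])
    # Eq.(6): g(j|x) = (1+eps) I(j|x) - eps h(j|x); the winner is rewarded,
    # rival components receive a small negative (penalizing) weight
    return [(1 + eps) * (1.0 if j == win else 0.0) - eps * h[j]
            for j in range(len(h))]
```

Note that the weights always sum to 1 (matching Eq.(5)) while the rivals' weights are non-positive, which drives redundant components toward zero during training.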

Based on the Gaussian mixture model and the RPEM algorithm described above, GMM can be trained and utilized to characterize the semantic model of the given concepts by Eq.(1). Assume that a training image is represented by both a visual feature set X = {x_1, x_2, …, x_m} and a keyword list W = {w_1, w_2, …, w_n}, where x_i (i=1,2,…,m) denotes the visual feature for region i and w_j (j=1,2,…,n) is the j-th keyword in the annotation. For a test image I represented by its visual feature vectors X = {x_1, x_2, …, x_m}, according to the Bayesian rule, the posterior probability p(w_i|I) can be calculated from the conditional probability p(I|w_i) and the prior probability p(w_i) as follows:

p(w_i|I) = p(I|w_i) p(w_i) / p(I) = p(I|w_i) p(w_i) / Σ_j p(I|w_j) p(w_j)    (7)

From Eq.(7), the top n keywords can be selected as the initial annotations for image I.
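A minimal sketch of the selection step implied by Eq.(7): given precomputed likelihoods p(I|w) (e.g. products of per-region GMM densities, which this sketch assumes are supplied) and word priors, the posterior is normalized and the top-n keywords are returned; the function and variable names are hypothetical:

```python
def annotate(likelihoods, priors, top_n=5):
    # Eq.(7)-style posterior: p(w|I) is proportional to p(I|w) * p(w).
    # likelihoods: dict word -> p(I|w); priors: dict word -> p(w).
    post = {w: likelihoods[w] * priors[w] for w in likelihoods}
    z = sum(post.values())          # normalizing constant p(I)
    post = {w: p / z for w, p in post.items()}
    # Keep the top_n keywords with the highest posterior probability
    return sorted(post, key=post.get, reverse=True)[:top_n]
```

These top-n words form the candidate set that the random walk stage of subsection 2.3 later refines.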

    2.2 Construction of the label similarity graph

In the process of automatic image annotation, at least three kinds of relations are involved over two different modalities, i.e., image-to-image, image-to-word and word-to-word relations. How to reasonably reflect these cross-modal relations between images and words plays a critical role in the task of AIA. The most common approaches are based on WordNet[26] and the normalized Google distance (NGD)[27]. From their definitions it can easily be observed that NGD is actually a measure of contextual relation, while WordNet focuses on the semantic meaning of the keyword itself. Moreover, both of them build word correlations based only on textual descriptions, whereas the visual information of the images in the dataset is not considered at all, which can easily lead to different images with the same candidate annotations obtaining the same annotation results after the refinement process. For this reason, an effective pairwise similarity strategy is devised by calculating a weighted linear combination of label similarity and visual similarity of the images associated with the corresponding labels, in which the label similarity between words w_i and w_j is defined as

s_l(w_i, w_j) = exp(-d(w_i, w_j))    (8)

where d(w_i, w_j) represents the distance between the two words w_i and w_j, defined similarly to NGD as below:

d(w_i, w_j) = [max(log f(w_i), log f(w_j)) - log f(w_i, w_j)] / [log G - min(log f(w_i), log f(w_j))]    (9)

where f(w_i) and f(w_j) denote the numbers of images containing words w_i and w_j respectively, f(w_i, w_j) is the number of images containing both w_i and w_j, and G is the total number of images in the dataset.
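Since the text states that d(w_i, w_j) is defined similarly to NGD, the sketch below computes the distance from co-occurrence counts using the standard NGD form and turns it into the label similarity of Eq.(8); the exact variant used in the paper may differ slightly, so treat this as an assumption:

```python
import math

def label_similarity(f_i, f_j, f_ij, G):
    # Eq.(9) (NGD-style distance from co-occurrence counts), then
    # Eq.(8): s_l = exp(-d). f_i, f_j: number of images containing
    # each word; f_ij: images containing both; G: dataset size.
    num = max(math.log(f_i), math.log(f_j)) - math.log(f_ij)
    den = math.log(G) - min(math.log(f_i), math.log(f_j))
    return math.exp(-num / den)
```

Two words that always co-occur (f_i = f_j = f_ij) get distance 0 and hence similarity 1; rarely co-occurring words decay toward 0.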

Similar to Ref.[28], for a label w associated with image x, the K nearest neighbors are collected from the images containing w, and these images can be regarded as the exemplars of label w with respect to x. Thus, from the point of view of the labels associated with an image, the visual similarity between labels w_i and w_j is given as follows:

s_v(w_i, w_j) = (1/(|Γ_{w_i}| |Γ_{w_j}|)) Σ_{x∈Γ_{w_i}} Σ_{y∈Γ_{w_j}} exp(-‖x-y‖² / σ²)    (10)

where Γ_w is the representative image collection of word w, x and y denote image features from the respective image collections of words w_i and w_j, and σ is a user-defined radius parameter for the Gaussian kernel function. To combine the strengths of the two similarities described above, a weighted linear combination of label similarity and visual similarity is defined as below:
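The visual similarity of Eq.(10) can be sketched as an averaged Gaussian-kernel affinity between the exemplar feature sets Γ_{w_i} and Γ_{w_j}; the exact aggregation used in the paper is not fully recoverable from the text, so the pairwise average here is an assumption:

```python
import math

def visual_similarity(feats_i, feats_j, sigma=1.0):
    # Gaussian-kernel affinity between the exemplar image features of
    # two labels (Gamma_wi, Gamma_wj), averaged over all cross pairs.
    total = 0.0
    for x in feats_i:
        for y in feats_j:
            d2 = sum((a - b) ** 2 for a, b in zip(x, y))
            total += math.exp(-d2 / sigma ** 2)
    return total / (len(feats_i) * len(feats_j))
```

Identical exemplar sets yield similarity 1, and visually distant exemplars push the value toward 0, mirroring the behaviour of s_l.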

s_ij = s(w_i, w_j) = λ s_l(w_i, w_j) + (1-λ) s_v(w_i, w_j)    (11)

where λ ∈ [0,1] is utilized to control the weight of each measurement.

    2.3 Refining annotation based on random walk

In the following, the refining image annotation stage is elaborated based on the initial annotations generated by GMM and the random walk model. Given the label graph constructed in subsection 2.2 with n nodes, let r_k(i) denote the relevance score of node i at iteration k, and let P denote an n-by-n transition matrix whose element p_ij indicates the probability of a transition from node i to node j, computed as

p_ij = s_ij / Σ_{k=1}^{n} s_ik    (12)

where s_ij is the pairwise label similarity (defined by Eq.(11)) between nodes i and j. The random walk process can then be formulated as

r_k(j) = α Σ_i r_{k-1}(i) p_ij + (1-α) v_j    (13)

where α ∈ (0,1) is a weight parameter to be determined and v_j denotes the initial annotation probabilistic score calculated by GMM. In the process of refining image annotation, the random walk proceeds until it reaches the steady-state probability distribution, and the top several candidates with the highest probabilities are taken as the final refined annotation results.
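The refinement loop of Eqs.(12) and (13) can be sketched as a small power iteration: row-normalize the similarities into P, then repeat the interpolated update until the scores stop changing (positive similarities are assumed so that every row sum is non-zero):

```python
def random_walk_refine(S, v, alpha=0.5, tol=1e-9, max_iter=1000):
    # S: pairwise label similarities s_ij (Eq.(11)); v: initial GMM scores.
    # Row-normalize S into the transition matrix P (Eq.(12)).
    n = len(v)
    P = [[S[i][j] / sum(S[i]) for j in range(n)] for i in range(n)]
    r = v[:]
    for _ in range(max_iter):
        # Eq.(13): r_k(j) = alpha * sum_i r_{k-1}(i) p_ij + (1-alpha) v_j
        nxt = [alpha * sum(r[i] * P[i][j] for i in range(n))
               + (1 - alpha) * v[j] for j in range(n)]
        if max(abs(a - b) for a, b in zip(nxt, r)) < tol:
            break
        r = nxt
    return r
```

Because α < 1, the update is a contraction and converges to a unique steady state; the interpolation with v keeps the refined scores anchored to the GMM evidence.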

    3 Experimental results and analysis

    3.1 Dataset and evaluation measures

The proposed GMM-RW is tested on the Corel5k image dataset obtained from Ref.[4]. Corel5k consists of 5,000 images from 50 Corel Stock Photo CDs. Each CD contains 100 images on a certain theme (e.g. polar bears), of which 90 are designated for the training set and 10 for the test set, resulting in 4,500 training images and a balanced 500-image test collection. For the sake of fair comparison, features similar to those of Ref.[7] are extracted. First of all, images are simply decomposed into a set of 32×32-sized blocks, and a 36-dimensional feature vector is then computed for each block, consisting of 24 color features (auto-correlogram computed over 8 quantized colors and 3 Manhattan distances) and 12 texture features (Gabor filters computed over 3 scales and 4 orientations). As a result, each block is represented as a 36-dimensional feature vector, and each image as a bag of such vectors. These features are subsequently employed to train the GMM based on the RPEM algorithm. In addition, the value of λ in Eq.(11) is set to 0.6 and the value of α in Eq.(13) is set to 0.5 by trial and error. Without loss of generality, the commonly used metrics of precision and recall for every word in the test set are calculated, and the mean of these values is utilized to summarize the performance.
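The evaluation protocol above can be sketched as follows; predicted and ground-truth annotations are represented as per-image keyword sets (a hypothetical encoding), and the mean per-word precision, mean per-word recall and the number of words with non-zero recall are returned:

```python
def per_word_metrics(predicted, ground_truth, vocab):
    # predicted / ground_truth: dict image -> set of words. For each word:
    # precision = correct / images predicted with the word,
    # recall    = correct / images truly annotated with the word;
    # the paper reports the mean of each over the word set.
    precisions, recalls, nonzero = [], [], 0
    for w in vocab:
        pred_imgs = {im for im, ws in predicted.items() if w in ws}
        true_imgs = {im for im, ws in ground_truth.items() if w in ws}
        hit = len(pred_imgs & true_imgs)
        precisions.append(hit / len(pred_imgs) if pred_imgs else 0.0)
        recalls.append(hit / len(true_imgs) if true_imgs else 0.0)
        nonzero += 1 if hit else 0
    n = len(vocab)
    return sum(precisions) / n, sum(recalls) / n, nonzero
```

The "words with non-zero recall" count is the same summary statistic used in Table 1 to compare against PLSA-WORDS and MBRM.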

    3.2 Results of automatic image annotation

Matlab 7.0 is applied to implement the proposed GMM-RW model. Specifically, the experiments are carried out on a personal computer with a 1.80GHz Intel Core Duo CPU and 2.0GB of memory running Microsoft Windows XP Professional. To verify the effectiveness of the proposed model, it is compared with several previous approaches[4-8]. Table 1 reports the experimental results based on two sets of words: the subset of the 49 best words and the complete set of all 260 words occurring in the training set. From Table 1 it is clear that the model markedly outperforms all the others, especially the first three approaches. Meanwhile, it is also superior to PLSA-WORDS and MBRM by gains of 21 and 4 words with non-zero recall, 30% and 4% in mean per-word recall, and 79% and 4% in mean per-word precision on the set of 260 words, respectively. In addition, compared to MBRM on the set of 49 best words, an improvement in mean per-word precision is obtained even though the mean per-word recall of GMM-RW is the same as that of MBRM.

    Table 1 Performance comparison on Corel5k dataset

To further illustrate the effect of the GMM-RW model for automatic image annotation, Fig.2 displays the average annotation precision of 10 selected words, "flowers", "mountain", "snow", "tree", "building", "beach", "water", "sky", "bear" and "cat", based on the GMM and GMM-RW models, respectively. As shown in Fig.2, the average precision of the model is obviously higher than that of GMM. The reason is that, in addition to profiting from the calculation strategy for cross-modal relations between images and words, GMM-RW benefits to a large extent from the random walk process, which further mines the correlations of the candidate annotations.

    Fig.2 Average precision based on GMM and GMM-RW

Alternatively, Table 2 shows some examples of image annotations (only eight cases are listed here due to limited space) produced by PLSA-WORDS and GMM-RW, respectively. It is clearly observed that the model is able to generate more accurate annotation results compared with both the original annotations and the ones provided in Ref.[8]. Taking the first image in the first row as an example, there are four tags in the original annotation. After annotation by GMM-RW, however, it is enriched by the additional keyword "grass", which is very appropriate and reasonable for describing the visual content of the image. On the other hand, it is important to note that the annotation ranking of the keywords is more reasonable than that generated by PLSA-WORDS, which plays a crucial role in semantic based image retrieval. In addition, as for the complexity of GMM-RW, assuming that there are D training images and each image produces R visual feature vectors, the complexity of the model is O(DR), which is similar to the classic CRM and MBRM models mentioned in Ref.[3].

    Table 2 Annotation comparison with PLSA-WORDS and GMM-RW

    4 Conclusions and future work

In this paper, a two-stage automatic image annotation method based on GMM and a random walk model is presented. First of all, a GMM fitted by the rival penalized expectation maximization algorithm is applied to estimate the posterior probability of each annotation keyword. Then a random walk process over the constructed label similarity graph is implemented to further mine the correlations of the candidate annotations so as to obtain the refined results. In particular, the label similarity graph is constructed by a weighted linear combination of label similarity and visual similarity of the images associated with the corresponding labels, which can efficiently avoid the phenomena of polysemy and synonymy in the course of automatic image annotation. Extensive experiments on the general-purpose Corel5k dataset validate the feasibility and utility of the proposed GMM-RW model.

As for future work, a plan is made to explore more powerful GMM related models for automatic image annotation from the following aspects. First, the classic GMM is limited in its modeling ability, since all data points of an object are required to be generated from a pool of mixtures with the same set of mixture weights, so how to determine the weight factors of GMM more appropriately is well worth exploring. Second, how to speed up GMM estimation with the EM algorithm is also important for large-scale multimedia processing; in other words, the choice of alternative techniques for estimating the GMM parameters could be very valuable. Third, how to introduce semi-supervised learning into the proposed approach so as to utilize labeled and unlabeled data simultaneously is a worthy research direction. At the same time, work on web image annotation will be continued by refining more relevant semantic information from web pages and building more suitable connections between image content features and the available semantic information. Last but not least, GMM-RW is expected to be applied over a wider range of multimedia related tasks, such as speech recognition, video recognition and other multimedia event detection tasks.

[ 1] Tian D P. Exploiting PLSA model and conditional random field for refining image annotation. High Technology Letters, 2015, 21(1):78-84

[ 2] Li J, Wang J. Automatic linguistic indexing of pictures by a statistical modeling approach. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003, 25(9):1075-1088

[ 3] Carneiro G, Chan A, Moreno P, et al. Supervised learning of semantic classes for image annotation and retrieval. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(3):394-410

[ 4] Duygulu P, Barnard K, de Freitas N, et al. Object recognition as machine translation: learning a lexicon for a fixed image vocabulary. In: Proceedings of the 7th European Conference on Computer Vision, Copenhagen, Denmark, 2002. 97-112

[ 5] Jeon J, Lavrenko V, Manmatha R. Automatic image annotation and retrieval using cross-media relevance models. In: Proceedings of the 26th International ACM SIGIR Conference on Research and Development in Information Retrieval, Toronto, Canada, 2003. 119-126

    [ 6] Lavrenko V, Manmatha R, Jeon J. A model for learning the semantics of pictures. In: Proceedings of the Advances in Neural Information Processing Systems 16, Vancouver, Canada, 2003. 553-560

    [ 7] Feng S, Manmatha R, Lavrenko V. Multiple Bernoulli relevance models for image and video annotation. In: Proceedings of the International Conference on Computer Vision and Pattern Recognition, Washington, USA, 2004. 1002-1009

[ 8] Monay F, Gatica-Perez D. Modeling semantic aspects for cross-media image indexing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(10):1802-1817

[ 9] Blei D, Lafferty J. Correlated topic models. Annals of Applied Statistics, 2007, 1(1):17-35

    [10] Yang F, Shi F, Wang Z. An improved GMM-based method for supervised semantic image annotation. In: Proceedings of the International Conference on Intelligent Computing and Intelligent Systems, Shanghai, China, 2009. 506-510

    [11] Wang Z, Yi H, Wang J, et al. Hierarchical Gaussian mixture model for image annotation via PLSA. In: Proceedings of the 5th International Conference on Image and Graphics, Xi’an, China, 2009. 384-389

    [12] Wang C, Yan S, Zhang L, et al. Multi-label sparse coding for automatic image annotation. In: Proceedings of the International Conference on Computer Vision and Pattern Recognition, Miami, USA, 2009. 1643-1650

    [13] Wang Y, Liu X, Jia Y. Automatic image annotation with cooperation of concept-specific and universal visual vocabularies. In: Proceedings of the 16th International Conference on Multimedia Modeling, Chongqing, China, 2010. 262-272

    [14] Luo X, Kita K. Region-based image annotation using Gaussian mixture model. In: Proceedings of the 2nd International Conference on Information Technology and Software Engineering, Beijing, China, 2013. 503-510

[15] Sahbi H. A particular Gaussian mixture model for clustering and its application to image retrieval. Soft Computing, 2008, 12(7):667-676

    [16] Luszczkiewicz M, Smolka B. Application of bilateral filtering and Gaussian mixture modeling for the retrieval of paintings. In: Proceedings of the 16th International Conference on Image Processing, Cairo, Egypt, 2009. 77-80

[17] Sayad I, Martinet J, Urruty T, et al. Toward a higher-level visual representation for content-based image retrieval. Multimedia Tools and Applications, 2012, 60(2):455-482

[18] Raju L, Vasantha K, Srinivas Y. Content based image retrievals based on generalization of GMM. International Journal of Computer Science and Information Technologies, 2012, 3(6):5326-5330

    [19] Wan Y, Liu X, Tong K, et al. GMM-ClusterForest: a novel indexing approach for multi-features based similarity search in high-dimensional spaces. In: Proceedings of the 19th International Conference on Neural Information Processing, Doha, Qatar, 2012. 210-217

    [20] Dixit M, Rasiwasia N, Vasconcelos N. Adapted Gaussian models for image classification. In: Proceedings of the International Conference on Computer Vision and Pattern Recognition, Providence, USA, 2011. 937-943

[21] Celik T. Image change detection using Gaussian mixture model and genetic algorithm. Journal of Visual Communication and Image Representation, 2010, 21(8):965-974

    [22] Beecks C, Ivanescu A, Kirchhoff S, et al. Modeling image similarity by Gaussian mixture models and the signature quadratic form distance. In: Proceedings of the 13th International Conference on Computer Vision, Barcelona, Spain, 2011. 1754-1761

[23] Wang Y, Chen W, Zhang J, et al. Efficient volume exploration using the Gaussian mixture model. IEEE Transactions on Visualization and Computer Graphics, 2011, 17(11):1560-1573

[24] Inoue N, Shinoda K. A fast and accurate video semantic-indexing system using fast MAP adaptation and GMM super-vectors. IEEE Transactions on Multimedia, 2012, 14(4):1196-1205

[25] Cheung Y. Maximum weighted likelihood via rival penalized EM for density mixture clustering with automatic model selection. IEEE Transactions on Knowledge and Data Engineering, 2005, 17(6):750-761

    [26] Fellbaum C. WordNet. Theory and Applications of Ontology: Computer Applications, 2010. 231-243

[27] Cilibrasi R, Vitanyi P. The Google similarity distance. IEEE Transactions on Knowledge and Data Engineering, 2007, 19(3):370-383

    [28] Liu D, Hua X, Yang L, et al. Tag ranking. In: Proceedings of the 18th International Conference on World Wide Web, Madrid, Spain, 2009. 351-360

    10.3772/j.issn.1006-6748.2017.02.015

    ①Supported by the National Basic Research Program of China (No.2013CB329502), the National Natural Science Foundation of China (No.61202212), the Special Research Project of the Educational Department of Shaanxi Province of China (No.15JK1038) and the Key Research Project of Baoji University of Arts and Sciences (No.ZK16047).

    ②To whom correspondence should be addressed. E-mail: tdp211@163.com

Received on May 25, 2016

Tian Dongping, born in 1981. He received his M.Sc. and Ph.D. degrees in computer science from Shanghai Normal University and the Institute of Computing Technology, Chinese Academy of Sciences in 2007 and 2014, respectively. His research interests include computer vision, machine learning and evolutionary computation.

变态另类丝袜制服| 久久久久久久久久久丰满| 无遮挡黄片免费观看| 熟女人妻精品中文字幕| 内射极品少妇av片p| 丰满人妻一区二区三区视频av| 97超碰精品成人国产| 久久久久九九精品影院| 成年免费大片在线观看| 国内精品宾馆在线| 欧美成人精品欧美一级黄| 国产精品一区二区免费欧美| 日韩大尺度精品在线看网址| 99精品在免费线老司机午夜| 寂寞人妻少妇视频99o| 免费av毛片视频| 性色avwww在线观看| 国产高清三级在线| 夜夜看夜夜爽夜夜摸| 五月玫瑰六月丁香| 亚洲国产精品成人综合色| 桃色一区二区三区在线观看| 两性午夜刺激爽爽歪歪视频在线观看| 亚洲国产精品sss在线观看| 女人十人毛片免费观看3o分钟| 亚洲人与动物交配视频| av在线亚洲专区| 一a级毛片在线观看| 成年免费大片在线观看| 日韩欧美精品v在线| 秋霞在线观看毛片| 国内少妇人妻偷人精品xxx网站| 国产精华一区二区三区| 欧美日韩一区二区视频在线观看视频在线 | 国产免费一级a男人的天堂| 国产黄片美女视频| 国产欧美日韩精品一区二区| 国产探花在线观看一区二区| 高清午夜精品一区二区三区 | 一级黄色大片毛片| 两性午夜刺激爽爽歪歪视频在线观看| 3wmmmm亚洲av在线观看| 欧美最新免费一区二区三区| 亚洲国产日韩欧美精品在线观看| 黄色一级大片看看| 波多野结衣高清无吗| 99热网站在线观看| 午夜福利视频1000在线观看| 啦啦啦观看免费观看视频高清| 身体一侧抽搐| 91麻豆精品激情在线观看国产| 国产精品久久视频播放| 久久精品国产亚洲网站| 成年女人毛片免费观看观看9| av在线老鸭窝| 久久精品91蜜桃| 男女边吃奶边做爰视频| 免费看av在线观看网站| 亚洲色图av天堂| 精品午夜福利在线看| 国产成人aa在线观看| 国产在线男女| 日本色播在线视频| 久久天躁狠狠躁夜夜2o2o| 日韩大尺度精品在线看网址| 最好的美女福利视频网| 99国产精品一区二区蜜桃av| 国产又黄又爽又无遮挡在线| 欧美一区二区精品小视频在线| 舔av片在线| 亚洲精品乱码久久久v下载方式| 永久网站在线| 岛国在线免费视频观看| 久久久a久久爽久久v久久| 此物有八面人人有两片| 两个人视频免费观看高清| 午夜久久久久精精品| 精品人妻偷拍中文字幕| 免费看美女性在线毛片视频| 久久久国产成人精品二区| 一进一出抽搐gif免费好疼| 欧美成人一区二区免费高清观看| 中文亚洲av片在线观看爽| 一边摸一边抽搐一进一小说| 日本黄大片高清| 精品欧美国产一区二区三| 亚洲中文字幕一区二区三区有码在线看| 免费观看的影片在线观看| 欧美三级亚洲精品| 在线免费十八禁| 国产高清视频在线播放一区| 午夜爱爱视频在线播放| av在线观看视频网站免费| 97超级碰碰碰精品色视频在线观看| 午夜福利在线在线| 又爽又黄无遮挡网站| 成人三级黄色视频| 精品久久久久久成人av| 国产精品免费一区二区三区在线| 精品久久久噜噜| 国产欧美日韩精品亚洲av| 99热精品在线国产| 露出奶头的视频| 在线播放无遮挡| 日韩欧美免费精品| 黄色一级大片看看| 亚洲精品国产av成人精品 | 亚洲精品粉嫩美女一区| 晚上一个人看的免费电影| 国产精品99久久久久久久久| 欧美日韩精品成人综合77777| 日韩亚洲欧美综合| 看片在线看免费视频| 村上凉子中文字幕在线| 国产精品av视频在线免费观看| 1000部很黄的大片| 美女黄网站色视频| 夜夜夜夜夜久久久久| 午夜日韩欧美国产| 久久久久久九九精品二区国产| 最新中文字幕久久久久| 久久人人爽人人片av| 九九在线视频观看精品| 超碰av人人做人人爽久久| 免费黄网站久久成人精品| 又爽又黄a免费视频| 成年女人看的毛片在线观看| 国产av不卡久久| 99久国产av精品| 国内少妇人妻偷人精品xxx网站| 亚洲婷婷狠狠爱综合网| 97超碰精品成人国产| 国产亚洲91精品色在线| 最近手机中文字幕大全| 亚洲一区高清亚洲精品| 免费无遮挡裸体视频| 精品午夜福利在线看| 精品99又大又爽又粗少妇毛片| 人妻夜夜爽99麻豆av| 超碰av人人做人人爽久久| 99热这里只有是精品50| 国产男靠女视频免费网站| 日韩av在线大香蕉| 亚洲不卡免费看| 亚洲精品日韩av片在线观看| 热99re8久久精品国产| 国产黄片美女视频| 亚洲av一区综合| 亚洲在线自拍视频| 床上黄色一级片| 成人亚洲精品av一区二区| 亚洲中文字幕一区二区三区有码在线看| 日韩成人av中文字幕在线观看 | 欧美+日韩+精品| 国产一级毛片七仙女欲春2| 欧美在线一区亚洲| 搡老熟女国产l中国老女人| 久久99热这里只有精品18| 
久久精品国产99精品国产亚洲性色| 亚洲一区二区三区色噜噜| 免费搜索国产男女视频| 91在线观看av| 国产精品日韩av在线免费观看| 校园春色视频在线观看| 亚洲综合色惰| 精品人妻熟女av久视频| 网址你懂的国产日韩在线| 欧美性感艳星| 校园春色视频在线观看| 热99re8久久精品国产| 欧美高清性xxxxhd video| 亚洲真实伦在线观看| 亚洲最大成人手机在线| 亚洲中文日韩欧美视频| 丰满的人妻完整版| 别揉我奶头~嗯~啊~动态视频| 国内精品一区二区在线观看| 听说在线观看完整版免费高清| 尤物成人国产欧美一区二区三区| 日韩精品有码人妻一区| 蜜桃亚洲精品一区二区三区| 日产精品乱码卡一卡2卡三| 日韩 亚洲 欧美在线| 日韩欧美 国产精品| 国内精品宾馆在线| 午夜福利在线观看免费完整高清在 | 国产精品99久久久久久久久| 黄色视频,在线免费观看| 97超碰精品成人国产| 亚洲精品国产成人久久av| av专区在线播放| 婷婷六月久久综合丁香| 91精品国产九色| 国产熟女欧美一区二区| 床上黄色一级片| 国产乱人偷精品视频| 悠悠久久av| 六月丁香七月| 一级毛片aaaaaa免费看小| 男人舔女人下体高潮全视频| 成年免费大片在线观看| 我的老师免费观看完整版| 亚洲精品影视一区二区三区av| 国产精品电影一区二区三区| 草草在线视频免费看| 亚洲av不卡在线观看| 久久久午夜欧美精品| 国产成人影院久久av| 波多野结衣巨乳人妻| 少妇的逼好多水| 亚洲美女搞黄在线观看 | 久久精品夜色国产| 黄色配什么色好看| 久久久精品94久久精品| 老师上课跳d突然被开到最大视频| 日本五十路高清| 性插视频无遮挡在线免费观看| 最后的刺客免费高清国语| 国产单亲对白刺激| 1000部很黄的大片| 国产精品嫩草影院av在线观看| 国产精品亚洲一级av第二区| 亚洲欧美日韩卡通动漫| 麻豆一二三区av精品| 久久久久久久久久黄片| 成人美女网站在线观看视频| av免费在线看不卡| 级片在线观看| 久久久久久久久大av| 天堂影院成人在线观看| 日韩欧美三级三区| 成人一区二区视频在线观看| 色综合站精品国产| 国产成人一区二区在线| 欧美日韩乱码在线| 亚洲,欧美,日韩| 午夜亚洲福利在线播放| 人妻制服诱惑在线中文字幕| 女人十人毛片免费观看3o分钟| 久久鲁丝午夜福利片| 亚洲婷婷狠狠爱综合网| 亚洲国产高清在线一区二区三| 麻豆国产av国片精品| 精品一区二区三区人妻视频| 欧美xxxx黑人xx丫x性爽| 国产老妇女一区| 丝袜美腿在线中文| 内射极品少妇av片p| 国国产精品蜜臀av免费| 色尼玛亚洲综合影院| 少妇人妻一区二区三区视频| 桃色一区二区三区在线观看| 波多野结衣高清无吗| 丰满的人妻完整版| 国产一区二区亚洲精品在线观看| 国产精品久久电影中文字幕| 久久国产乱子免费精品| 国产精品精品国产色婷婷| 精品熟女少妇av免费看| 精华霜和精华液先用哪个| 99在线视频只有这里精品首页| 校园春色视频在线观看| 久久中文看片网| 免费观看精品视频网站| 黑人高潮一二区| 欧美国产日韩亚洲一区| 亚洲一级一片aⅴ在线观看| 成人永久免费在线观看视频| 成人漫画全彩无遮挡| 日韩精品中文字幕看吧| 在线免费观看不下载黄p国产| 久久国产乱子免费精品| 最近最新中文字幕大全电影3| 久久久久久大精品| 三级男女做爰猛烈吃奶摸视频| 亚洲婷婷狠狠爱综合网| 亚洲精品在线观看二区| 国产大屁股一区二区在线视频| 久久久精品94久久精品| 一级黄色大片毛片| 亚洲人与动物交配视频| 亚洲美女视频黄频| 国产亚洲精品av在线| 一边摸一边抽搐一进一小说| 免费看a级黄色片| 精品一区二区三区av网在线观看| 99九九线精品视频在线观看视频| 精品一区二区三区视频在线观看免费| 色视频www国产| 亚洲国产欧洲综合997久久,| 黄色欧美视频在线观看| 91久久精品国产一区二区成人| 久久久久久九九精品二区国产| 亚洲图色成人| 日韩强制内射视频|