
Single-Image Super-Resolution Reconstruction Method Based on an Improved Convolutional Neural Network

2019-08-01 01:48:57
计算机应用 (Journal of Computer Applications), 2019, Issue 5

LIU Yuefeng, YANG Hanxi, CAI Shuang, ZHANG Chenrong

Abstract: To address the problem that fitness practitioners, lacking supervision and guidance during exercise, adopt incorrect postures that may even endanger their health, a new method for real-time detection of deep squat posture was proposed. Three-dimensional information of human joints was extracted with a Kinect camera, and the deep squat, the most common fitness movement, was abstracted and modeled, overcoming the difficulty that computer vision techniques have in detecting subtle changes of movement. First, depth images were captured with the Kinect camera to obtain the three-dimensional coordinates of human joints in real time. Then, the deep squat posture was abstracted into torso, hip, knee and ankle angles, which were modeled numerically and recorded frame by frame. Finally, after a squat was completed, a threshold comparison was used to compute the ratio of non-standard frames within a given period: if the ratio exceeded the given threshold, the squat was judged non-standard; otherwise it was judged standard. Experiments on six different types of deep squat posture show that the method can detect the different types of non-standard squat, with an average recognition rate above 90% across the six types, and can thus serve to remind and guide fitness practitioners.

      關(guān)鍵詞:深蹲檢測(cè);姿勢(shì)檢測(cè);Kinect;深度圖像;骨架信息

      中圖分類(lèi)號(hào):TP391.4

      文獻(xiàn)標(biāo)志碼:A

Abstract: Concerning the problem that, in the absence of supervision and guidance during bodybuilding, incorrect postures may even endanger the bodybuilder's health, a new method for real-time detection of deep squat posture was proposed. The most common deep squat behavior in bodybuilding was abstracted and modeled from the three-dimensional information of human joints extracted through a Kinect camera, solving the problem that computer vision techniques have difficulty detecting small movements. Firstly, the Kinect camera was used to capture depth images and obtain the three-dimensional coordinates of human body joints in real time. Then, the deep squat posture was abstracted into torso angle, hip angle, knee angle and ankle angle; these were modeled digitally and the angle changes were recorded frame by frame. Finally, after the deep squat was completed, a threshold comparison method was used to calculate the non-standard frame ratio over a certain period of time. If the calculated ratio was greater than the given threshold, the deep squat was judged as non-standard; otherwise it was judged as standard. Experimental results on six different types of deep squat show that the proposed method can detect the different types of non-standard deep squat with an average recognition rate above 90%, and can thus serve to remind and guide bodybuilders.

      英文關(guān)鍵詞Key words: deep squat detection; posture detection; Kinect; depth image; skeleton information

0 Introduction

      深蹲被稱(chēng)為力量訓(xùn)練之王,是增加腿部和臀部力量的基本練習(xí)動(dòng)作。保持標(biāo)準(zhǔn)的深蹲姿勢(shì)可以訓(xùn)練到臀部、大腿,并有利于下半身的骨骼、韌帶和肌腱的鍛煉。但是,長(zhǎng)期使用不標(biāo)準(zhǔn)的深蹲姿勢(shì)不僅浪費(fèi)健身者的時(shí)間,而且還會(huì)增加韌帶、半月板和膝蓋受傷的風(fēng)險(xiǎn)。標(biāo)準(zhǔn)的深蹲姿勢(shì)對(duì)于很多運(yùn)動(dòng)員都是較難掌握的[1]。人們通常通過(guò)自己的主觀意識(shí)來(lái)判斷深蹲姿勢(shì)是否標(biāo)準(zhǔn),此方法帶有很強(qiáng)的個(gè)人色彩,難以客觀準(zhǔn)確地對(duì)深蹲姿勢(shì)進(jìn)行判斷;同時(shí),使用昂貴的費(fèi)用聘請(qǐng)私人教練也使得健身成本增加,且大部分健身者都沒(méi)有經(jīng)濟(jì)條件聘請(qǐng)私人教練,使得很多人對(duì)健身望而卻步,因此對(duì)深蹲姿勢(shì)進(jìn)行自動(dòng)檢測(cè)具有重要的實(shí)際意義,能夠使得這項(xiàng)最基本的練習(xí)動(dòng)作被更多人掌握,同時(shí)又可減少鍛煉者因長(zhǎng)期使用錯(cuò)誤姿勢(shì)而導(dǎo)致的嚴(yán)重后果。

      深蹲屬于一種行為動(dòng)作,而關(guān)于行為動(dòng)作領(lǐng)域的研究近些年來(lái)越來(lái)越多。有關(guān)領(lǐng)域目前的研究方法通常是基于可穿戴傳感器和計(jì)算機(jī)視覺(jué)技術(shù)[2]。這些研究方法大多用來(lái)完成手勢(shì)識(shí)別[3-6] 、坐姿檢測(cè)[7]、摔倒檢測(cè)[8]、行為分類(lèi)[9-12]等任務(wù),且都能夠取得較好的效果。

      上述兩類(lèi)方法也有著不容忽視的缺點(diǎn):首先,可穿戴傳感器會(huì)給使用者造成不適;此外,由于受到擠壓碰撞等外部因素,可穿戴設(shè)備會(huì)逐漸損壞,導(dǎo)致無(wú)法收集信息。而基于計(jì)算機(jī)視覺(jué)的方法大多需要經(jīng)過(guò)訓(xùn)練,訓(xùn)練過(guò)程是極度耗時(shí)的,并且此類(lèi)方法嚴(yán)重依賴(lài)于訓(xùn)練數(shù)據(jù)集,而且深蹲是一種順時(shí)動(dòng)作,且動(dòng)作變化快,一般的計(jì)算機(jī)視覺(jué)技術(shù)對(duì)于這種細(xì)微動(dòng)作的變化較難檢測(cè),因此很少有研究對(duì)運(yùn)動(dòng)姿勢(shì)(如深蹲)是否標(biāo)準(zhǔn),行為是否準(zhǔn)確提出疑問(wèn)。

      Kinect深度傳感器能夠自動(dòng)捕獲人體的深度圖像,并實(shí)時(shí)跟蹤人體骨架,檢測(cè)到細(xì)微的動(dòng)作變化:一方面,Kinect獲取的深度圖像不同于彩色圖像,可以提供更多的空間信息,同時(shí)又能保護(hù)個(gè)人隱私, 因此,通過(guò)分析深度圖像來(lái)識(shí)別和檢測(cè)姿勢(shì)的方法一直以來(lái)都備受關(guān)注;另一方面, 人體的骨骼特征也為行為識(shí)別、姿勢(shì)檢測(cè)等任務(wù)提供了重要的行為特征。Kinect因上述功能和其具有的精確性與實(shí)用性等特點(diǎn),已經(jīng)使其成為一種多功能組件,進(jìn)而可以集成到日常生活的各種應(yīng)用中[13-17]。

Using the Kinect depth sensor, this paper proposes a method for detecting non-standard deep squat postures based on skeleton information. First, for the deep squat posture, the torso angle, hip angle, knee angle and ankle angle are proposed as four representative features during the squat. Then, the squat process is divided into four stages, and key-frame detection is used to compute and record the angle features of each stage frame by frame. Finally, a threshold comparison method is used to judge the squat posture. The method requires no wearable sensors, causes no inconvenience to the exerciser, needs no training dataset, and achieves real-time, accurate detection.
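Each of the four angle features is defined at a joint between two adjacent body segments and, as the conclusion notes, can be computed from three joint coordinates with the law of cosines. The sketch below illustrates that computation; the helper name and the sample coordinates are assumptions for illustration, not the paper's code.

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by 3-D points a-b-c.

    Uses the law of cosines on the triangle a-b-c, as described for the
    torso, hip, knee and ankle features. Hypothetical helper, not the
    authors' implementation.
    """
    ab, bc, ac = math.dist(a, b), math.dist(b, c), math.dist(a, c)
    # Law of cosines: ac^2 = ab^2 + bc^2 - 2*ab*bc*cos(angle at b)
    cos_b = (ab ** 2 + bc ** 2 - ac ** 2) / (2 * ab * bc)
    cos_b = max(-1.0, min(1.0, cos_b))  # clamp rounding drift before acos
    return math.degrees(math.acos(cos_b))

# Example: knee angle from hypothetical hip/knee/ankle Kinect coordinates (m)
hip, knee, ankle = (0.0, 0.9, 2.0), (0.1, 0.5, 2.1), (0.1, 0.1, 2.0)
knee_angle = joint_angle(hip, knee, ankle)
```

Recording this value for each of the four features on every frame yields the per-stage angle traces the method compares against its thresholds.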

      1 深蹲姿勢(shì)檢測(cè)方法

      1.1 特征定義

      在對(duì)所提方法進(jìn)行建模之前,首先需要建立可用于區(qū)分標(biāo)準(zhǔn)深蹲姿勢(shì)和非標(biāo)準(zhǔn)深蹲姿勢(shì)的界限。本文中Winwood等[18]的研究結(jié)果被用來(lái)建模深蹲姿勢(shì)。表1的數(shù)據(jù)顯示了當(dāng)健康個(gè)體深蹲時(shí)關(guān)節(jié)點(diǎn)應(yīng)保持的角度范圍(其中SD(Standard Deviation)為標(biāo)準(zhǔn)差)。

      3 結(jié)語(yǔ)

This paper proposed a computer-vision-based method for judging non-standard deep squat postures. First, a Kinect depth camera captures depth images and extracts the three-dimensional coordinates of the human skeleton joints. Then, the law of cosines is used to compute the four representative features into which the squat posture is abstracted, namely the torso, hip, knee and ankle angles, and their changes are recorded. Finally, after the squat movement ends, the ratio of non-standard frames is computed and compared against an experimentally derived threshold to judge whether the posture is standard. Experimental results show that the method can quickly and effectively detect different types of non-standard squat posture, with low computational cost, high robustness and good timeliness.

      參考文獻(xiàn) (References)

      [1] CHIU L Z. Sitting back in the squat[J]. Strength and Conditioning Journal, 2009, 31(6):25-27.

[2] YAO L Y, MING W D, CUI H. A new Kinect approach to judge unhealthy sitting posture based on neck angle and torso angle[C]// Proceedings of the 2017 International Conference on Image and Graphics, LNCS 10666. Berlin: Springer-Verlag, 2017:340-350.

[3] FANG B, SUN F C, LIU H P, et al. 3D human gesture capturing and recognition by the IMMU-based data glove[J]. Neurocomputing, 2017, 277:198-207.

[4] FERRONE A, JIANG X, MAIOLO L, et al. A fabric-based wearable band for hand gesture recognition based on filament strain sensors: a preliminary investigation[C]// Proceedings of the 2016 IEEE Healthcare Innovation Point-of-Care Technologies Conference. Piscataway, NJ: IEEE, 2016:113-116.

      [5] WU D, SHAO L. Deep dynamic neural networks for gesture segmentation and recognition[C]// Proceedings of the 2014 European Conference on Computer Vision. Berlin: Springer, 2014:552-571.

[6] LI Y, WANG X G, LIU W Y, et al. Deep attention network for joint hand gesture localization and recognition using static RGB-D images[J]. Information Sciences, 2018, 441:66-78.

      [7] 曾星,孫備, 羅武勝, 等. 基于深度傳感器的坐姿檢測(cè)系統(tǒng)[J]. 計(jì)算機(jī)科學(xué),2018, 45(7):237-242. (ZENG X, SUN B, LUO W S, et al. Sitting posture detection system based on depth sensor[J]. Computer Science, 2018, 45(7):237-242.)

      [8] YAO L Y, MING W D, LU K Q. A new approach to fall detection based on the human torso motion model[J]. Applied Sciences, 2017, 7(10):993.

[9] BACCOUCHE M, MAMALET F, WOLF C, et al. Sequential deep learning for human action recognition[C]// Proceedings of the 2011 International Workshop on Human Behavior Understanding, LNCS 7065. Berlin: Springer-Verlag, 2011:29-39.

      [10] NG J Y, HAUSKNECHT M, VIJAYANARASIMHAN S, et al. Beyond short snippets: deep networks for video classification[C]// Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2015:4694-4702.

      [11] 吳亮, 何毅, 梅雪,等. 基于時(shí)空興趣點(diǎn)和概率潛動(dòng)態(tài)條件隨機(jī)場(chǎng)模型的在線(xiàn)行為識(shí)別方法[J]. 計(jì)算機(jī)應(yīng)用, 2018, 38(6): 1760-1764. (WU L, HE Y, MEI X, et al. Online behavior recognition using spacetime interest points and probabilistic latentdynamic conditional random field model[J]. Journal of Computer Applications, 2018, 38(6): 1760-1764.)

      [12] 姬曉飛, 左鑫孟. 基于關(guān)鍵幀特征庫(kù)統(tǒng)計(jì)特征的雙人交互行為識(shí)別[J]. 計(jì)算機(jī)應(yīng)用, 2016, 36(8):2287-2291. (JI X F, ZUO X M. Human interaction recognition based on statistical features of key frame feature library[J]. Journal of Computer Applications, 2016, 36(8): 2287-2291.)

[13] KALIATAKIS G, STERGIOU A, VIDAKIS N. Conceiving human interaction by visualising depth data of head pose changes and emotion recognition via facial expressions[J]. Computers, 2017, 6(3):25-37.

[14] MAITI S, REDDY S, RAHEJA J L. View invariant real-time gesture recognition[J]. Optik - International Journal for Light and Electron Optics, 2015, 126(23):3737-3742.

      [15] 張全貴, 蔡豐, 李志強(qiáng). 基于耦合多隱馬爾可夫模型和深度圖像數(shù)據(jù)的人體動(dòng)作識(shí)別[J]. 計(jì)算機(jī)應(yīng)用, 2018, 38(2): 454-457. (ZHANG Q G, CAI F, LI Z Q. Human action recognition based on coupled multihidden Markov model and depth image data[J]. Journal of Computer Applications, 2018, 38(2): 454-457.)

      [16] 談家譜, 徐文勝. 基于Kinect的指尖檢測(cè)與手勢(shì)識(shí)別方法[J]. 計(jì)算機(jī)應(yīng)用, 2015, 35(6): 1795-1800. (TAN J P, XU W S. Fingertip detection and gesture recognition method based on Kinect[J]. Journal of Computer Applications, 2015, 35(6): 1795-1800.)

      [17] CHOUBIK Y, MAHMOUDI A. Machine learning for real time poses classification using Kinect skeleton data[C]// Proceedings of the 2016 International Conference on Computer Graphics, Imaging and Visualization. Piscataway, NJ: IEEE, 2016:307-311.

[18] WINWOOD P W, CRONIN J B, BROWN S R, et al. A biomechanical analysis of the heavy sprint-style sled pull and comparison with the back squat[J]. International Journal of Sports Science and Coaching, 2015, 10(5): 851-868.

[19] STEVENS W R Jr, KOKOSZKA A Y, ANDERSON A M, et al. Automated event detection algorithm for two squatting protocols[J]. Gait and Posture, 2018, 59:253-257.
