    Please use this identifier to cite or link to this item: http://ccur.lib.ccu.edu.tw/handle/A095B0000Q/52


    Title: Stereoscopic Video Saliency Detection Using Support Vector Machine Learning
    Authors: 周庭羽;CHOU, TING-YU
    Contributors: 資訊工程研究所 (Graduate Institute of Computer Science and Information Engineering)
    Keywords: stereoscopic video saliency detection; superpixel segmentation; support vector machine; feature extraction
    Date: 2018
    Issue Date: 2019-05-23 10:30:18 (UTC+8)
    Publisher: 資訊工程研究所 (Graduate Institute of Computer Science and Information Engineering)
    Abstract: In recent years, 3D technology has become increasingly widespread, and 3D video visual perception techniques have grown correspondingly important. Within these techniques, saliency detection is a significant issue: its goal is to simulate the human visual system and identify the more important regions within a large amount of visual information. This study proposes a novel stereoscopic video saliency detection approach based on support vector machine learning. First, superpixel segmentation is applied to each frame of the video sequence. Then spatial, temporal, depth, object-related, and spatiotemporal features are extracted to produce feature maps: the spatial features are color contrast and color transform; the temporal features are relative motion and motion boundary; the depth features are depth contrast and boundary flow; the object features, related to human attention, are face, person, vehicle, and animal; and the spatiotemporal features are the histogram of oriented gradients, histogram of oriented flow, motion boundary histogram, and 3D discrete cosine transform. Next, these feature maps are fused by a support vector machine to obtain an initial saliency map. Finally, the final saliency map is obtained by applying center bias, visual sensitivity, and blurring. Experimental results show that the proposed approach outperforms three comparison approaches on the IRCCyN database.
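    The pipeline described in the abstract can be sketched roughly as follows. This is a hypothetical illustration, not the thesis code: it assumes scikit-image SLIC for the superpixel segmentation and a scikit-learn SVR as the fusing SVM, replaces the full feature set with a simple color-contrast stand-in, and uses an assumed Gaussian form for the center-bias and blurring steps. The function names superpixel_features and predict_saliency are illustrative only.

    # Hypothetical sketch of the described pipeline: superpixel segmentation per
    # frame, per-superpixel feature maps, SVM fusion into an initial saliency map,
    # then center bias and blurring. Motion, depth, object, HOG/HOF/MBH, and
    # 3D-DCT cues from the thesis are not reproduced here.
    import numpy as np
    from skimage.segmentation import slic
    from scipy.ndimage import gaussian_filter
    from sklearn.svm import SVR

    def superpixel_features(frame, labels):
        """Mean color plus a color-contrast value per superpixel (a stand-in
        for the much richer feature set used in the thesis)."""
        n = labels.max() + 1
        means = np.array([frame[labels == i].mean(axis=0) for i in range(n)])
        global_mean = frame.reshape(-1, 3).mean(axis=0)
        contrast = np.linalg.norm(means - global_mean, axis=1, keepdims=True)
        return np.hstack([means, contrast])

    def predict_saliency(frame, svm, n_segments=300):
        """Fuse per-superpixel features with a trained SVM regressor,
        back-project to pixels, then apply center bias and blurring."""
        labels = slic(frame, n_segments=n_segments, start_label=0)
        feats = superpixel_features(frame, labels)
        sal = svm.predict(feats)[labels]          # initial pixel-wise saliency

        # Center bias: an assumed Gaussian weight centered on the frame.
        h, w = labels.shape
        ys, xs = np.mgrid[0:h, 0:w]
        bias = np.exp(-(((xs - w / 2) ** 2) / (2 * (w / 3) ** 2)
                        + ((ys - h / 2) ** 2) / (2 * (h / 3) ** 2)))
        sal = gaussian_filter(sal * bias, sigma=5)  # blurring step
        return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)

    # Training against ground-truth fixation data (not included here) would be:
    #   svm = SVR(kernel="rbf").fit(training_features, training_saliency)
    #   saliency_map = predict_saliency(frame, svm)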
    Appears in Collections: [資訊工程學系] 學位論文 (Department of Computer Science and Information Engineering, Theses and Dissertations)

    Files in This Item: index.html (HTML, 0 Kb)

    All items in CCUR are protected by copyright, with all rights reserved.

