    Please use this identifier to cite or link to this item: http://ccur.lib.ccu.edu.tw/handle/A095B0000Q/517


    Title: Design and Implementation of Basic Facial Expression Classification on the Cloud Based on LDA
    Authors: LAI, KUNG-KUAN (賴恭寬)
    Contributors: In-service Master's Program, Department of Computer Science and Information Engineering
    Keywords: Image Processing; Face Recognition; Emotion Recognition; Linear Discriminant Analysis
    Date: 2017
    Issue Date: 2019-07-17
    Publisher: In-service Master's Program, Department of Computer Science and Information Engineering
    Abstract: Earlier face-recognition research relied mainly on image collection and classification, using image features for identity verification and retrieval. Current research has advanced from static images to dynamic video; Advanced Driver Assistance Systems (ADAS), Learning Record Stores (LRS), and pedestrian-tracking systems are applications of this kind. Modern image-recognition techniques therefore aim not only at identity verification but also at extracting further information, such as a person's mental state, emotional changes, and behavior. The goal of this thesis is to recognize human behavior through remote, multi-angle cameras whose data can be recorded continuously for future learning-record applications. To support big-data storage and analysis of video histories, the experiments must achieve three objectives: (1) real-time video transmission from remote cameras; (2) identity verification from video; (3) emotion recognition from video. The study begins with the classical Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) algorithms; deriving their formulas clarifies the essence and origin of the methods, so that later problems such as dimensionality reduction and the choice of projection matrix can be analyzed clearly. In the experiments, we set up a streaming server on a Raspberry Pi, upload the camera video to the cloud as a stream, and let remote or mobile devices fetch the live video from this server, so that a person's identity can be recognized immediately and emotions can be classified from basic facial expressions; this serves as a foundational cloud big-data application for personal identity and behavioral profiling. Finally, this research makes three contributions to learning applications of big data and cloud databases: (1) with real-time streaming, behavior data can be collected from any location, even from multiple cameras simultaneously; (2) under real-time streaming, an average of 112 behavior images can be collected per minute with a recognition rate of 89.4%, and moving-average filtering raises the data-collection accuracy to 94.6%; (3) analyzing faces with LDA and then distinguishing expressions with PCA effectively lowers the computational time complexity and outperforms either LDA or PCA used alone.
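    The thesis code is not reproduced here, but the following minimal sketch illustrates the third contribution (an LDA face projection followed by PCA before a lightweight expression classifier) using scikit-learn; the data loader, 48x48 image size, six expression classes, and nearest-centroid classifier are assumptions for illustration, not the author's implementation.

        # Hedged sketch: LDA projection followed by PCA and a simple classifier,
        # as a stand-in for the abstract's "LDA for faces, then PCA for expressions".
        # The data loader, image size, and class count are assumptions.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import train_test_split
        from sklearn.neighbors import NearestCentroid
        from sklearn.pipeline import make_pipeline

        def load_faces():
            """Placeholder loader: flattened 48x48 face crops and expression labels."""
            rng = np.random.default_rng(0)
            X = rng.random((600, 48 * 48))     # 600 synthetic face vectors
            y = rng.integers(0, 6, size=600)   # 6 basic expression labels
            return X, y

        X, y = load_faces()
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

        # LDA maximizes between-class over within-class scatter and reduces each face
        # to at most (n_classes - 1) dimensions; PCA then decorrelates that projection
        # before a lightweight nearest-centroid classifier.
        clf = make_pipeline(
            LinearDiscriminantAnalysis(n_components=5),
            PCA(n_components=5),
            NearestCentroid(),
        )
        clf.fit(X_tr, y_tr)
        print("expression accuracy:", clf.score(X_te, y_te))

    Chaining the two projections keeps the final classifier working in a very low-dimensional space, which is the time-complexity advantage the abstract claims over using either method alone.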
    Abstract: Most conventional facial-recognition techniques involve data collection, classification, preprocessing, standardization, identification, searching, and so on. Advanced Driver Assistance Systems (ADAS) are a successful and important application built on facial recognition: they detect dynamic, real-time driving situations concerning the driver and the passengers inside or near the car, and then provide guidance for safe driving. In this application, the streaming video must be processed with real-time response, and many other facial-recognition applications now share this requirement. In this research, we propose real-time solutions not only for locating a user's face but also for recognizing the user's characteristics, including expression and mood. We build a streaming server that collects live data captured by a Raspberry Pi device, and we design server-side processing algorithms based on PCA and LDA to realize the proposed facial-recognition tasks. Our results nearly meet the real-time requirement. In future work, we will redesign the structure of the projection space to address the overfitting problem caused by the continuously arriving big data of facial images.
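    As a rough, hedged sketch of the server-side loop the abstract implies (not the thesis code), the snippet below uses OpenCV to pull frames from a network camera stream, labels each frame with a classifier, and smooths the per-frame labels with a sliding-window majority vote in the spirit of the abstract's moving-average filtering; the stream URL, window size, and classify_frame function are placeholders.

        # Hedged sketch: classify frames from a live stream and smooth the labels
        # with a sliding window, mirroring the abstract's moving-average filtering.
        # STREAM_URL, WINDOW, and classify_frame are assumed placeholders.
        from collections import Counter, deque

        import cv2

        STREAM_URL = "http://raspberrypi.local:8080/stream.mjpg"  # assumed endpoint
        WINDOW = 15                                               # assumed window size

        def classify_frame(frame):
            """Placeholder for the PCA/LDA-based identity and expression classifier."""
            return "neutral"

        cap = cv2.VideoCapture(STREAM_URL)
        recent = deque(maxlen=WINDOW)
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            recent.append(classify_frame(frame))
            # A majority vote over the last WINDOW labels damps single-frame errors.
            label, _ = Counter(recent).most_common(1)[0]
            print("smoothed label:", label)
        cap.release()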
    Appears in Collections: [Department of Computer Science and Information Engineering] Theses and Dissertations

    Files in This Item:

    File: index.html (0Kb, HTML)

