    Please use this identifier to cite or link to this item: http://ccur.lib.ccu.edu.tw/handle/A095B0000Q/606


    Title: 基於HADOOP實作動態分析資料表關聯調整資料放置方式; Implementing Correlation Detection Between Tables to Adjust Data Placement Strategy Based on Hadoop
    Authors: 黃偉傑 (HUANG, WEI-CHIEH)
    Contributors: Graduate Institute of Computer Science and Information Engineering (資訊工程研究所)
    Keywords: HDFS; Hadoop; Data Block (資料區塊)
    Date: 2017
    Issue Date: 2019-07-17
    Publisher: Graduate Institute of Computer Science and Information Engineering (資訊工程研究所)
    Abstract: When analyzing Big Data, relational databases can no longer handle such large volumes of information. Hadoop can store massive amounts of data in HDFS and analyze them with MapReduce. Owing to the high fault tolerance and scalability of HDFS, many enterprises have begun migrating their data to Hadoop and processing it in cloud computing environments. However, Hadoop stores data by splitting it into blocks and distributing them randomly and evenly across the cluster, without considering correlations among the data. As a result, related data ends up scattered across different nodes, which raises the cost of exchanging data over the network during analysis tasks and significantly degrades performance. This thesis proposes that, when data is migrated from a relational database to Hadoop, block locations be chosen not at random but by examining the performance of the compute nodes and the reference relationships among tables. This placement strategy keeps correlated blocks together, lowering the cost of locating data during queries and improving overall execution performance.
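    The placement idea described in the abstract — grouping the data of tables that reference each other and assigning each group to a node, preferring better-performing nodes — can be sketched as follows. This is a minimal illustration of the strategy, not the thesis's actual implementation; the function names, input shapes, and scoring scheme are all assumptions introduced here.

    ```python
    # Illustrative sketch (not the thesis code): co-locate tables that are
    # linked by reference (foreign-key) relationships, assigning larger
    # correlated groups to higher-scoring nodes first.

    def plan_placement(references, node_scores):
        """references: dict mapping table -> set of tables it references.
        node_scores: dict mapping node -> relative performance score.
        Returns dict mapping table -> node, co-locating correlated tables."""
        # Union-find to merge tables connected by reference relationships.
        parent = {}

        def find(t):
            parent.setdefault(t, t)
            while parent[t] != t:
                parent[t] = parent[parent[t]]  # path compression
                t = parent[t]
            return t

        def union(a, b):
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[ra] = rb

        for table, refs in references.items():
            find(table)
            for r in refs:
                union(table, r)

        # Collect correlation groups (tables sharing a union-find root).
        groups = {}
        for t in parent:
            groups.setdefault(find(t), []).append(t)

        # Rank nodes by performance score, then hand out groups round-robin,
        # largest groups first, so correlated data lands on one node instead
        # of being scattered at random.
        ranked_nodes = sorted(node_scores, key=node_scores.get, reverse=True)
        placement = {}
        for i, group in enumerate(sorted(groups.values(), key=len, reverse=True)):
            node = ranked_nodes[i % len(ranked_nodes)]
            for t in group:
                placement[t] = node
        return placement

    # Example: "orders" references "customers", so both go to the same node;
    # the unrelated "logs" table is placed independently.
    refs = {"orders": {"customers"}, "customers": set(), "logs": set()}
    scores = {"node1": 0.9, "node2": 0.5}
    print(plan_placement(refs, scores))
    ```

    In a real HDFS deployment the equivalent hook would be a custom block placement policy on the NameNode rather than a standalone function; this sketch only captures the grouping logic.
    
    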
    Appears in Collections: [Department of Computer Science and Information Engineering] Theses and Dissertations (學位論文)


    All items in CCUR are protected by copyright, with all rights reserved.

