Due to the limited depth of field of optical lenses, a common digital camera cannot capture an image in which all objects are in focus, so a single image conveys only part of a scene's information. Only objects lying on the focal plane appear sharp in the captured image, while objects outside it appear blurred. To preserve the sharp objects and detailed information of a natural scene, a set of images of the same scene taken with different focus settings can be fused into a single all-in-focus image. In this study, a superpixel-based multi-focus image fusion approach is proposed. First, each multi-focus source image is segmented into superpixels. Saliency, depth, and difference-image information are then computed and used to classify the superpixels into three types of regions: focused, defocused, and undefined. For the undefined superpixels, the sum-modified-Laplacian (SML) is computed to decide whether each superpixel is in focus, and focus maps are estimated. Next, the boundaries between the focused and defocused regions in the focus maps are refined to generate weight maps, and the fused image is produced according to these weight maps. Experimental results show that the proposed approach outperforms five existing comparison methods.
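As background for the SML focus measure named in the abstract, the following is a minimal sketch of a per-pixel sum-modified-Laplacian map in NumPy. It follows the standard definition (the absolute second differences in x and y, summed over a local window); the `step` and `window` parameters are illustrative assumptions, not values taken from this study, and this is not the authors' implementation.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def sum_modified_laplacian(img, step=1, window=3):
    """Per-pixel sum-modified-Laplacian (SML) focus measure.

    img    : 2-D grayscale array.
    step   : pixel spacing used for the second differences (assumed 1 here).
    window : side length of the local summation window (assumed 3 here).
    Returns an array of the same shape as img; larger values mean sharper.
    """
    I = img.astype(np.float64)
    # Modified Laplacian: |2I - I(x-s) - I(x+s)| + |2I - I(y-s) - I(y+s)|
    # (np.roll wraps at the borders, which is acceptable for a sketch).
    ml = (np.abs(2 * I - np.roll(I, step, axis=1) - np.roll(I, -step, axis=1))
          + np.abs(2 * I - np.roll(I, step, axis=0) - np.roll(I, -step, axis=0)))
    # Sum the modified Laplacian over a window x window neighborhood.
    pad = window // 2
    padded = np.pad(ml, pad, mode='edge')
    wins = sliding_window_view(padded, (window, window))
    return wins.sum(axis=(2, 3))
```

A region with strong local contrast (in focus) yields a high SML response, while a smooth, defocused region yields a response near zero; in the proposed method this score decides whether an undefined superpixel is labeled focused or defocused.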