
willbrennan / imagestitching

249 stars · 13 watchers · 58 forks · 6.09 MB

Conducts image stitching upon an input video to generate a panorama in 3D

Python 100.00%
opencv opencv-python python panorama image-stitching image-processing

imagestitching's Introduction

Image and Video Stitching

This algorithm runs through a video file, or a set of images, and stitches them together to form a single image. It can be used for scanning in large documents where the resolution from a single photo may not be sufficient. Currently it doesn't take into account image blurring, evaluate whether an incoming frame has better quality than the previous one, or correct for lens distortion.
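
At its core this is classic feature-based stitching. Below is a minimal, generic sketch of that idea for reference only; it is not this repository's implementation, and the helper name stitch_pair and the fixed canvas size are assumptions.

import cv2
import numpy as np


def stitch_pair(panorama, frame):
    # Detect and describe features in both images (ORB keeps the sketch dependency-free).
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(panorama, None)
    kp2, des2 = orb.detectAndCompute(frame, None)

    # Match descriptors and keep the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)

    # Estimate a homography mapping the new frame into the panorama's coordinates.
    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp the frame and paste the existing panorama back on top.
    # The doubled canvas width is a crude assumption; a real stitcher computes
    # the warped corners and translates the result so nothing is cropped.
    h, w = panorama.shape[:2]
    result = cv2.warpPerspective(frame, H, (w * 2, h))
    result[:h, :w] = panorama
    return result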

Quick Start

Getting the app running is pretty simple; clone, install the requirements, and run!

# Clone the repo
git clone https://github.com/WillBrennan/ImageStitching && cd ImageStitching

# install deps
pip install -r requirements.txt

# Run the stitching!
python stitching.py <path to image directory or video files> --display --save

Demonstration

Demo on Video

References

Automatic Panoramic Image Stitching using Invariant Features

imagestitching's People

Contributors

willbrennan


imagestitching's Issues

Python is crashing suddenly

I have attached a screenshot of the error. The images load from the folder, but then something happens right after and the program crashes.

This is the error:

zsh: segmentation fault python3 image_stitching.py --save --quiet

[screenshot: Screen Shot 2021-01-19 at 15 35 09]
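
One hedged debugging step, offered as a suggestion rather than something from this repository: enabling Python's built-in faulthandler at the top of the script makes the interpreter print the Python stack that was active when the native crash occurred, which helps narrow down which OpenCV call is involved. Running python3 -X faulthandler image_stitching.py --save --quiet achieves the same without editing the file.

# Hypothetical addition at the top of image_stitching.py; faulthandler is in
# the standard library and prints a traceback on a segmentation fault.
import faulthandler
faulthandler.enable()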

there's a problem when stitching more than 34 images

video_stitching has a bug:
Traceback (most recent call last):
File "video_stitching.py", line 46, in
cap = cv2.VideoCapture(args.video_path)
TypeError: an integer is required

So I used image_stitching to stitch images captured from the video, but I can't stitch 35 images at a time, while stitching 34 images works fine. Also, when stitching 80 images, images 1-34 disappear while images 35-80 are fine.
I ran the program with both OpenCV 2.4 and OpenCV 3.2 and they show the same problem. Is there any limitation in the code? I'm not familiar with OpenCV, so I can't find the limitation myself.
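
A minimal check, offered as a guess rather than a confirmed fix for the cv2.VideoCapture TypeError above: the Python binding falls back to the integer camera-index overload when it is handed something that is not a plain string (for example None, or a unicode path on Python 2), so casting the argument and verifying the capture opened can help narrow the problem down. The path below is hypothetical.

import cv2

video_path = "input.mp4"                  # hypothetical path, not from the repo
assert video_path is not None, "no video path was parsed from the arguments"
cap = cv2.VideoCapture(str(video_path))   # cast in case a unicode path trips the Python 2 binding
if not cap.isOpened():
    raise IOError("could not open %s" % video_path)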

when stitching 34 images:
[image: stitched_34]

when stitching 35 images:
[image: stitched_35]

when stitching 36 images:
[image: stitched_36]

when stitching 80 images:
[image: stitched_80]

Mosaic or the panorama gets blurred

Hi, when I am stitching the video, the output image comes out with the initial part blurred. How can I go about solving this issue? As you can see, the bottom of the image is blurred. Here's an example image:

[image: resulted1]

Stitching creates blur on invisible parts

Hi Will, I am doing some testing with your project for a school project. You have probably stopped developing it, but I was wondering if you could give me a clue about why the parts of the panorama that are no longer visible get more and more blurred as the process continues, and how it could be fixed, if you know?

Attached is a picture of Frame 935 (as an example) that illustrates the issue:
[image: result_935]

device freezes while processing the video

Hi, I am using video_stitching.py. After a certain number of frames my device freezes. I don't think it is because of hardware limitations, as neither my RAM nor my CPU even reaches 50% load.

Panorama stitching for multiple images

I want to create a panorama from a video by splitting it into frames and then stitching them using the conventional method of finding features.

This is my code for reference:

import cv2
import numpy as np
import glob
import imutils


def draw_matches(img1, keypoints1, img2, keypoints2, matches):
    r, c = img1.shape[:2]
    r1, c1 = img2.shape[:2]

    # Create a blank image with the size of the first image + second image
    output_img = np.zeros((max([r, r1]), c + c1, 3), dtype='uint8')
    output_img[:r, :c, :] = np.dstack([img1])
    output_img[:r1, c:c + c1, :] = np.dstack([img2])

    # Go over all of the matching points and extract them
    for match in matches:
        img1_idx = match.queryIdx
        img2_idx = match.trainIdx
        (x1, y1) = keypoints1[img1_idx].pt
        (x2, y2) = keypoints2[img2_idx].pt

        # Draw circles on the keypoints
        cv2.circle(output_img, (int(x1), int(y1)), 4, (0, 255, 255), 1)
        cv2.circle(output_img, (int(x2) + c, int(y2)), 4, (0, 255, 255), 1)

        # Connect the same keypoints
        cv2.line(output_img, (int(x1), int(y1)), (int(x2) + c, int(y2)), (0, 255, 255), 1)

    return output_img


def warpImages(img1, img2, H):
    rows1, cols1 = img1.shape[:2]
    rows2, cols2 = img2.shape[:2]

    list_of_points_1 = np.float32([[0, 0], [0, rows1], [cols1, rows1], [cols1, 0]]).reshape(-1, 1, 2)
    temp_points = np.float32([[0, 0], [0, rows2], [cols2, rows2], [cols2, 0]]).reshape(-1, 1, 2)

    # When we have established a homography we need to warp perspective
    # Change field of view
    list_of_points_2 = cv2.perspectiveTransform(temp_points, H)

    list_of_points = np.concatenate((list_of_points_1, list_of_points_2), axis=0)

    [x_min, y_min] = np.int32(list_of_points.min(axis=0).ravel() - 0.5)
    [x_max, y_max] = np.int32(list_of_points.max(axis=0).ravel() + 0.5)

    translation_dist = [-x_min, -y_min]

    H_translation = np.array([[1, 0, translation_dist[0]], [0, 1, translation_dist[1]], [0, 0, 1]])

    output_img = cv2.warpPerspective(img2, H_translation.dot(H), (x_max - x_min, y_max - y_min))
    output_img[translation_dist[1]:rows1 + translation_dist[1], translation_dist[0]:cols1 + translation_dist[0]] = img1
    # print(output_img)

    return output_img


# Main program starts here


input_path = "/Users/akshayacharya/Desktop/Panorama/Bazinga/Test images for final/Highfps/*.jpg"
output_path = "Output/o4.jpg"
#input_path = "/Users/akshayacharya/Desktop/Panorama/Bazinga/Output/*.jpg"
#output_path = "Output/final.jpg"


input_img = glob.glob(input_path)
img_path = sorted(input_img)
print(img_path)
tmp = img_path[0]
flag = True

for i in range(1, len(img_path)):
    if flag:
        img1 = cv2.imread(tmp, cv2.COLOR_BGR2GRAY)
        img2 = cv2.imread(img_path[i], cv2.COLOR_BGR2GRAY)
        flag = False
    img1 = cv2.resize(img1, (1080, 720), fx=1, fy=1)
    img2 = cv2.imread(img_path[i], cv2.COLOR_BGR2GRAY)
    img2 = cv2.resize(img2, (1080, 720), fx=1, fy=1)

    orb = cv2.ORB_create(nfeatures=2000)

    keypoints1, descriptors1 = orb.detectAndCompute(img1, None)
    keypoints2, descriptors2 = orb.detectAndCompute(img2, None)

    # cv2.imshow('1',cv2.drawKeypoints(img1, keypoints1, None, (255, 0, 255)))
    # cv2.imshow('2',cv2.drawKeypoints(img2, keypoints2, None, (255,255, 255)))
    # cv2.waitKey(0)

    # Create a BFMatcher object.
    # It will find all of the matching keypoints on two images
    bf = cv2.BFMatcher_create(cv2.NORM_HAMMING)

    # Find matching points
    matches = bf.knnMatch(descriptors1, descriptors2, k=2)

    # print("Descriptor of the first keypoint: ")
    # print(descriptors1[0])
    # print(type(matches))

    all_matches = []
    for m, n in matches:
        all_matches.append(m)

    img3 = draw_matches(img1, keypoints1, img2, keypoints2, all_matches[:])
    # cv2.imshow('Matches', img3)
    # cv2.waitKey(0)

    # Finding the best matches
    good = []
    for m, n in matches:
        if m.distance < 0.9 * n.distance:
            good.append(m)

    # cv2.imshow('Final1',cv2.drawKeypoints(img1, [keypoints1[m.queryIdx] for m in good], None, (255, 0, 255)))
    # cv2.imshow('Final2',cv2.drawKeypoints(img2, [keypoints2[m.queryIdx] for m in good], None, (255, 0, 255)))
    # cv2.waitKey(0)

    MIN_MATCH_COUNT = 10

    if len(good) > MIN_MATCH_COUNT:
        # Convert keypoints to an argument for findHomography
        src_pts = np.float32([keypoints1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst_pts = np.float32([keypoints2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

        # Establish a homography
        M, _ = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)

        result = warpImages(img2, img1, M)
        img1 = result
        print(f"Succesfully stitched until image{i + 1}")

#writeStatus = cv2.imwrite(output_path, result)
#if writeStatus is True:
#    print("image written")
#else:
#    print("problem")  # or raise exception, handle problem, etc.
#result = cv2.resize(result)
cv2.imshow("Hi", result)
cv2.waitKey(0)
#writeStatus = cv2.imwrite(output_path, result)



stitched = img1
stitched = cv2.copyMakeBorder(stitched, 10, 10, 10, 10,
                              cv2.BORDER_CONSTANT, (0, 0, 0))
# convert the stitched image to grayscale and threshold it
# such that all pixels greater than zero are set to 255
# (foreground) while all others remain 0 (background)
gray = cv2.cvtColor(stitched, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY)[1]
# find all external contours in the threshold image then find
# the *largest* contour which will be the contour/outline of
# the stitched image
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
                        cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
c = max(cnts, key=cv2.contourArea)
# allocate memory for the mask which will contain the
# rectangular bounding box of the stitched image region
mask = np.zeros(thresh.shape, dtype="uint8")
(x, y, w, h) = cv2.boundingRect(c)
cv2.rectangle(mask, (x, y), (x + w, y + h), 255, -1)
# create two copies of the mask: one to serve as our actual
# minimum rectangular region and another to serve as a counter
# for how many pixels need to be removed to form the minimum
# rectangular region
minRect = mask.copy()
sub = mask.copy()
# keep looping until there are no non-zero pixels left in the
# subtracted image
while cv2.countNonZero(sub) > 0:
    # erode the minimum rectangular mask and then subtract
    # the thresholded image from the minimum rectangular mask
    # so we can count if there are any non-zero pixels left
    minRect = cv2.erode(minRect, None)
    sub = cv2.subtract(minRect, thresh)
# find contours in the minimum rectangular mask and then
# extract the bounding box (x, y)-coordinates
cnts = cv2.findContours(minRect.copy(), cv2.RETR_EXTERNAL,
                        cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
c = max(cnts, key=cv2.contourArea)
(x, y, w, h) = cv2.boundingRect(c)
# use the bounding box coordinates to extract our final
# stitched image
stitched = stitched[y:y + h, x:x + w]
#cv2.imwrite("cropped.jpg", stitched)
#writeStatus = cv2.imwrite(output_path, stitched)
#if writeStatus is True:
#    print("image written")
#else:
#    print("problem")  # or raise exception, handle problem, etc.
stitched = cv2.resize(stitched, (2000,1500))
cv2.imshow("cropped", stitched)
cv2.waitKey(0)

However, it's not giving me the right output. I have attached the image for reference. Can you guide me as to how I could get the right panorama? The source images are obtained by splitting a video into frames and then using these as input images.


error when running program

When I execute the video stitching program from the terminal, the following error comes up. What should I do?

/usr/lib/python3/dist-packages/apport/report.py:13: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import fnmatch, glob, traceback, errno, sys, atexit, locale, imp
Traceback (most recent call last):
File "image_stitching.py", line 12, in
import image_stitching
File "/home/akshay/VideoStitcher/image_stitching/init.py", line 5, in
from matching import *
AttributeError: module 'matching' has no attribute 'Matching'
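
A hedged guess at the cause, not a confirmed fix: Python 3 dropped implicit relative imports, so "from matching import *" inside the package's __init__.py resolves to whatever top-level module named matching is found first rather than the package's own matching.py. The explicit relative form would look like this, assuming matching.py is the module that defines Matching:

# image_stitching/__init__.py (sketch of the explicit relative import)
from .matching import *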

requirements.txt is missing

In the Makefile, pip tries to find requirements.txt; can you include it in the project, or am I missing something?
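
For reference, a minimal requirements.txt sketch; the contents are an assumption (OpenCV's Python bindings plus NumPy), not the author's actual file:

opencv-python
numpy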

Program not running

Hey Will,

I tried to run the program by following your instructions, but there is no Makefile, and I got only one output image when running on videos.

Can you please help?

Thanks,

cv2.error: /home/yi/opencv-2.4.11/modules/imgproc/src/color.cpp:3739: error: (-215) scn == 3 || scn == 4 in function cvtColor

My OpenCV version is 2.4.11 and my NumPy version is 1.12.1.
exe: python image_stitching.py image/phantom3-ieu/ --display --save
then:
INFO:root:beginning sequential matching
('image_path', 'image/phantom3-ieu/')
OpenCV Error: Assertion failed (scn == 3 || scn == 4) in cvtColor, file /home/yi/opencv-2.4.11/modules/imgproc/src/color.cpp, line 3739
Traceback (most recent call last):
File "image_stitching.py", line 54, in
image_gray = cv2.cvtColor(image_colour, cv2.COLOR_RGB2GRAY)
cv2.error: /home/yi/opencv-2.4.11/modules/imgproc/src/color.cpp:3739: error: (-215) scn == 3 || scn == 4 in function cvtColor
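
A hedged diagnostic, not part of this repository: the (scn == 3 || scn == 4) assertion from cvtColor usually means the input is not a 3- or 4-channel image, most often because cv2.imread returned None (a bad path, or a non-image file in the directory) or returned a single-channel image. A small guard makes the failure explicit; the file name below is hypothetical.

import cv2

image_path = "image/phantom3-ieu/frame_0001.jpg"     # hypothetical file name
image_colour = cv2.imread(image_path)
if image_colour is None:
    raise IOError("failed to read %s" % image_path)  # bad path or unreadable file
if image_colour.ndim == 2:
    image_gray = image_colour                        # already single-channel
else:
    image_gray = cv2.cvtColor(image_colour, cv2.COLOR_BGR2GRAY)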

Image Stitching stuck while stitching images

Hi Will,

I am attempting to stitch 2 images in a path. It seems the stitching is starting, but the process does not finish. Here is what I see on the terminal.

python3 stitching.py testimages --display --save
WARNING:root:skipping .DS_Store...
INFO:root:displaying image 0
INFO:root:saving result image on result_0.jpg

I see the first image open in a window, and the same image is saved to the result_0.jpg file. There is no further terminal output indicating the stitching is in progress or complete. I have the images named img1.jpeg and img2.jpeg in the testimages directory. Any suggestions on resolving the issue are much appreciated. Thank you.

Which version of opencv do I need to run this?

Currently I am running OpenCV 4.5.1, so when it checks the version to call the appropriate SIFT function, it's throwing an error. What do I do?

It just hits the runtime exception and stops execution without initializing SIFT.
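
A hedged workaround sketch, not the repository's code: SIFT moved back into the main cv2 module in OpenCV 4.4+, while older builds expose it through the xfeatures2d contrib module, so a small fallback covers both layouts.

import cv2

try:
    sift = cv2.SIFT_create()               # OpenCV >= 4.4
except AttributeError:
    sift = cv2.xfeatures2d.SIFT_create()   # older builds with opencv-contrib installed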
