I’m a huge fan of interactive visualizations. As a computer vision engineer, I deal almost daily with image processing related tasks, and more often than not I’m iterating on a problem where I need visual feedback to make decisions. Let’s consider a very simple image processing pipeline with a single step that has some parameters to transform an image:

How do you know which parameters to adjust? Does the pipeline even work as expected? Without visualizing your output, you might miss key insights and make suboptimal choices.
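To make this concrete, the single step from the sketch above could be something like a blur whose kernel size is the parameter in question. This is a purely hypothetical example, not the filter pipeline built later in this article:

import cv2

def process(image, ksize: int = 5):
    """A single parameterized pipeline step: blur strength via kernel size (must be odd)."""
    return cv2.GaussianBlur(image, (ksize, ksize), 0)

image = cv2.imread("input.jpg")  # hypothetical input image
result = process(image, ksize=7)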
Sometimes simply displaying the output image and/or some calculated metrics is enough to iterate on the parameters. But I’ve found myself in many situations where a tool would be immensely helpful to iterate quickly and interactively on my pipeline. So in this article I’ll show you how to work with the simple built-in interactive elements from OpenCV, as well as how to build more modern user interfaces for computer vision projects using customtkinter.
Prerequisites
If you want to follow along, I recommend setting up your local environment with uv and installing the following packages:
uv add numpy opencv-python pillow customtkinter
Goal
Before we dive into the code of the project, let’s quickly outline what we want to build. The application should use the webcam feed and allow the user to select different types of filters that will be applied to the stream. The processed image should be shown in real-time in the window. A rough sketch of a possible UI would look as follows:

OpenCV – GUI
Let’s start with a simple loop that fetches frames from your webcam and displays them in an OpenCV window.
import cv2

cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        break

    cv2.imshow("Video Feed", frame)

    key = cv2.waitKey(1) & 0xFF
    if key == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
Keyboard Input
The simplest way to add interactivity here is with keyboard input. For example, we can cycle through different filters with the number keys.
...
filter_type = "normal"

while True:
    ...
    if filter_type == "grayscale":
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    elif filter_type == "normal":
        pass
    ...
    if key == ord('1'):
        filter_type = "normal"
    if key == ord('2'):
        filter_type = "grayscale"
    ...
Now you can switch between the normal image and the grayscale version by pressing the number keys 1 and 2. Let’s also quickly add a caption to the image so we can actually see the name of the filter we’re applying.
We need to be careful here: if you look at the shape of the frame after the filter, you’ll notice that the dimensionality of the frame array has changed. Remember that OpenCV image arrays are ordered HWC (height, width, channels) with the channels in BGR order (blue, green, red), so the 640×480 image from my webcam has shape (480, 640, 3).
print(filter_type, frame.shape)
# normal (480, 640, 3)
# grayscale (480, 640)
Because the grayscale operation outputs a single-channel image, the color dimension is dropped. If we now want to draw on top of this image, we either need to specify a single-channel color for the grayscale image, or we convert the image back to the original BGR format. The second option is a bit cleaner because we can unify the annotation of the image.
if filter_type == "grayscale":
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
elif filter_type == "normal":
    pass

if len(frame.shape) == 2:  # Convert grayscale to BGR
    frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
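For completeness, the first option would mean passing a scalar intensity instead of a BGR tuple whenever we draw on the single-channel image. A rough sketch of what that could look like (the rest of the article sticks with the conversion above):

# Hypothetical alternative: draw directly on the grayscale image
# with a scalar intensity (255 = white) instead of a BGR tuple.
if len(frame.shape) == 2:
    cv2.putText(frame, filter_type, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, 255, 2, cv2.LINE_AA)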
Caption
I want to add a black border at the bottom of the image, on top of which the name of the filter will be shown. We can use the copyMakeBorder function to pad the image with a border color at the bottom. Then we can add the text on top of this border.
# Add a black border at the bottom of the frame
border_height = 50
border_color = (0, 0, 0)
frame = cv2.copyMakeBorder(frame, 0, border_height, 0, 0, cv2.BORDER_CONSTANT, value=border_color)

# Show the filter name
cv2.putText(
    frame,
    filter_type,
    (frame.shape[1] // 2 - 50, frame.shape[0] - border_height // 2 + 10),
    cv2.FONT_HERSHEY_SIMPLEX,
    1,
    (255, 255, 255),
    2,
    cv2.LINE_AA,
)
This is how the output should look: you can switch between the normal and grayscale mode, and the frames will be captioned accordingly.

Sliders
Now instead of using the keyboard as the input method, OpenCV offers a basic trackbar slider UI element. The trackbar needs to be initialized at the beginning of the script. We need to reference the same window that we will later display our images in, so I’ll create a variable for the name of the window. Using this name, we can create the trackbar and let it act as a selector for the index into the list of filters.
filter_types = ["normal", "grayscale"]

win_name = "Webcam Stream"
cv2.namedWindow(win_name)

tb_filter = "Filter"
# def createTrackbar(trackbarName: str, windowName: str, value: int, count: int, onChange: _typing.Callable[[int], None]) -> None: ...
cv2.createTrackbar(
    tb_filter,
    win_name,
    0,
    len(filter_types) - 1,
    lambda _: None,
)
Notice how we use an empty lambda for the onChange callback; we will fetch the value manually in the loop. Everything else stays the same.
while True:
    ...
    # Get the selected filter type
    filter_id = cv2.getTrackbarPos(tb_filter, win_name)
    filter_type = filter_types[filter_id]
    ...
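As an aside, if you prefer a callback-driven approach over polling, the onChange parameter can do the work itself. A small sketch, assuming a shared mutable container (here a hypothetical dict called selected) that the loop reads from:

# Sketch: react to trackbar changes via the callback instead of polling.
selected = {"filter_type": filter_types[0]}  # hypothetical shared state

def on_filter_change(pos: int) -> None:
    selected["filter_type"] = filter_types[pos]

cv2.createTrackbar(tb_filter, win_name, 0, len(filter_types) - 1, on_filter_change)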
And voilà, we have a trackbar to select our filter.

Now we can also easily add more filters by extending our list and implementing each processing step.
import numpy as np  # needed for the dtype check below

filter_types = [
    "normal",
    "grayscale",
    "blur",
    "threshold",
    "canny",
    "sobel",
    "laplacian",
]
...
if filter_type == "grayscale":
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
elif filter_type == "blur":
    frame = cv2.GaussianBlur(frame, ksize=(15, 15), sigmaX=0)
elif filter_type == "threshold":
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, frame = cv2.threshold(gray, thresh=127, maxval=255, type=cv2.THRESH_BINARY)
elif filter_type == "canny":
    frame = cv2.Canny(frame, threshold1=100, threshold2=200)
elif filter_type == "sobel":
    frame = cv2.Sobel(frame, ddepth=cv2.CV_64F, dx=1, dy=0, ksize=5)
elif filter_type == "laplacian":
    frame = cv2.Laplacian(frame, ddepth=cv2.CV_64F)
elif filter_type == "normal":
    pass

if frame.dtype != np.uint8:
    # Scale the frame to uint8 if necessary
    cv2.normalize(frame, frame, 0, 255, cv2.NORM_MINMAX)
    frame = frame.astype(np.uint8)

Modern GUI with CustomTkinter
Now I don’t know about you, but the current user interface doesn’t look very modern to me. Don’t get me wrong, there’s some beauty in the style of the interface, but I prefer cleaner, more modern designs. Plus we’re already at the limit of what OpenCV offers out of the box in terms of UI elements. Yep, no buttons, text fields, dropdowns, checkboxes or radio buttons, and no custom layouts. So let’s see how we can transform the look and user experience of this basic application into a modern and clean one.

To get started, we first need to create a class for our app. We create two frames: the first one contains our filter selection on the left side and the second wraps the image display. For now, let’s start with a simple placeholder text. Unfortunately there’s no out-of-the-box OpenCV component from customtkinter directly, so we will quickly build our own in the next few steps. But let’s first finish the basic UI layout.
import customtkinter


class App(customtkinter.CTk):
    def __init__(self) -> None:
        super().__init__()

        self.title("Webcam Stream")
        self.geometry("800x600")

        self.filter_var = customtkinter.IntVar(value=0)

        # Frame for filters
        self.filters_frame = customtkinter.CTkFrame(self)
        self.filters_frame.pack(side="left", fill="both", expand=False, padx=10, pady=10)

        # Frame for image display
        self.image_frame = customtkinter.CTkFrame(self)
        self.image_frame.pack(side="right", fill="both", expand=True, padx=10, pady=10)

        self.image_display = customtkinter.CTkLabel(self.image_frame, text="Loading...")
        self.image_display.pack(fill="both", expand=True, padx=10, pady=10)


app = App()
app.mainloop()

Filter Radio Buttons
Now that the skeleton is built, we can start filling in our components. For the left side, I will use the same list of filter_types to populate a group of radio buttons to select the filter.
# Create radio buttons for each filter type
self.filter_var = customtkinter.IntVar(value=0)
for filter_id, filter_type in enumerate(filter_types):
    rb_filter = customtkinter.CTkRadioButton(
        self.filters_frame,
        text=filter_type.capitalize(),
        variable=self.filter_var,
        value=filter_id,
    )
    rb_filter.pack(padx=10, pady=10)
    if filter_id == 0:
        rb_filter.select()

Image Display Component
Now we can get started on the interesting part: how to get our OpenCV frames to show up in the image component. Because there’s no built-in component, let’s create our own based on the CTkLabel. This allows us to display a loading text while the webcam stream is starting up.
...
from typing import Any


class CTkImageDisplay(customtkinter.CTkLabel):
    """
    A reusable ctk widget to display opencv images.
    """

    def __init__(
        self,
        master: Any,
    ) -> None:
        self._textvariable = customtkinter.StringVar(master, "Loading...")
        super().__init__(
            master,
            textvariable=self._textvariable,
            image=None,
        )
...
class App(customtkinter.CTk):
    def __init__(self) -> None:
        ...
        self.image_display = CTkImageDisplay(self.image_frame)
        self.image_display.pack(fill="both", expand=True, padx=10, pady=10)
So far nothing has changed; we simply swapped out the existing label for our custom class implementation. In our CTkImageDisplay class we can define a function to show an image in the component, let’s call it set_frame.
import cv2
import numpy.typing as npt
from PIL import Image


class CTkImageDisplay(customtkinter.CTkLabel):
    ...

    def set_frame(self, frame: npt.NDArray) -> None:
        """
        Set the frame to be displayed in the widget.

        Args:
            frame: The new frame to display, in opencv format (BGR).
        """
        target_width, target_height = frame.shape[1], frame.shape[0]

        # Convert the frame to PIL Image format
        frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        frame_pil = Image.fromarray(frame_rgb, "RGB")

        ctk_image = customtkinter.CTkImage(
            light_image=frame_pil,
            dark_image=frame_pil,
            size=(target_width, target_height),
        )
        self.configure(image=ctk_image, text="")
        self._textvariable.set("")
Let’s digest this. First we need to know how big our image component will be; we can extract that information from the shape property of our image array. To display the image in tkinter, we need a Pillow Image object, we cannot use the OpenCV array directly. To convert an OpenCV array to Pillow, we first convert the color space from BGR to RGB and then use the Image.fromarray function to create the Pillow Image object. Next we create a CTkImage, where we use the same image regardless of the theme and set the size according to our frame. Finally we use the configure method to set the image in our component. At the end, we also reset the text variable to remove the “Loading…” text, even though it would theoretically be hidden behind the image.
To quickly test this, we can set the first image from our webcam in the constructor. (We’ll see in a moment why this isn’t such a good idea.)
class App(customtkinter.CTk):
    def __init__(self) -> None:
        ...
        cap = cv2.VideoCapture(0)
        _, frame0 = cap.read()
        self.image_display.set_frame(frame0)
If you run this, you’ll notice that the window takes a bit longer to pop up, but after a short delay you should see a static image from your webcam.
NOTE: If you don’t have a webcam ready, you can also just use a local video file by passing the file path to the cv2.VideoCapture constructor call.
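For example (the file path here is just a placeholder):

# Read frames from a local video file instead of the webcam
cap = cv2.VideoCapture("path/to/video.mp4")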

Now this isn’t very exciting, since the frame doesn’t update yet. So let’s see what happens if we try to do that naively.
class App(customtkinter.CTk):
    def __init__(self) -> None:
        ...
        cap = cv2.VideoCapture(0)
        while True:
            ret, frame = cap.read()
            if not ret:
                break
            self.image_display.set_frame(frame)
Almost the same as before, except now we run the frame loop as we did in the previous chapter with the OpenCV GUI. If you run this, you will see… exactly nothing. The window never shows up, since we’re creating an infinite loop in the constructor of the app! This is also the reason why the program only showed up after a delay in the previous example: opening the webcam stream is a blocking operation, and the event loop for the window cannot run, so the window doesn’t show up yet.
So let’s fix this with a slightly better implementation that allows the GUI event loop to run while we also update the frame every now and then. We can use the after method of tkinter to schedule a function call while yielding control back to the event loop during the wait time.
class App(customtkinter.CTk):
    def __init__(self) -> None:
        ...
        self.cap = cv2.VideoCapture(0)
        self.after(10, self.update_frame)

    def update_frame(self) -> None:
        """
        Update the displayed frame.
        """
        ret, frame = self.cap.read()
        if not ret:
            return

        self.image_display.set_frame(frame)
        self.after(10, self.update_frame)
We still set up the webcam stream in the constructor, so we haven’t solved that problem yet, but at least we can now see a continuous stream of frames in our image component.

Applying Filters
Now that the frame loop is working, we can re-implement our filters from the beginning and apply them to our webcam stream. In the update_frame function, we can check the current filter variable and apply the corresponding filter function.
def update_frame(self) -> None:
    ...
    # Get the selected filter type
    filter_id = self.filter_var.get()
    filter_type = filter_types[filter_id]

    if filter_type == "grayscale":
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    elif filter_type == "blur":
        frame = cv2.GaussianBlur(frame, ksize=(15, 15), sigmaX=0)
    elif filter_type == "threshold":
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, frame = cv2.threshold(gray, thresh=127, maxval=255, type=cv2.THRESH_BINARY)
    elif filter_type == "canny":
        frame = cv2.Canny(frame, threshold1=100, threshold2=200)
    elif filter_type == "sobel":
        frame = cv2.Sobel(frame, ddepth=cv2.CV_64F, dx=1, dy=0, ksize=5)
    elif filter_type == "laplacian":
        frame = cv2.Laplacian(frame, ddepth=cv2.CV_64F)
    elif filter_type == "normal":
        pass

    if frame.dtype != np.uint8:
        # Scale the frame to uint8 if necessary
        cv2.normalize(frame, frame, 0, 255, cv2.NORM_MINMAX)
        frame = frame.astype(np.uint8)

    if len(frame.shape) == 2:  # Convert grayscale to BGR
        frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)

    self.image_display.set_frame(frame)
    self.after(10, self.update_frame)
And now we’re back to the full functionality of the application: you can select any filter on the left side and it will be applied in real-time to the webcam feed!

Multithreading and Synchronization
Although the application runs as is, there are some problems with the current way we run our frame loop. Currently everything runs in a single thread, the main GUI thread. This is why we don’t immediately see the window pop up at the beginning: our webcam initialization blocks the main thread. Now imagine we did some heavier image processing, maybe running the images through a neural network; you wouldn’t want your user interface to be blocked while the network is running inference. That would lead to a very unresponsive user experience when clicking the UI elements!

A better way to handle this in our application is to separate the image processing from the user interface. In general it is almost always a good idea to separate your GUI logic from any kind of non-trivial processing. So in our case, we will run a separate thread that is responsible for the image loop: it will read the frames from the webcam stream and apply the filters.

NOTE: Python threads are not “real” threads in the sense that they cannot run on different logical CPU cores and hence will not truly run in parallel. In Python multithreading the context switches between threads, but due to the GIL, the global interpreter lock, a single Python process only executes one thread at a time. If you want “real” parallel processing, you will need to use multiprocessing. Since our workload here is not CPU bound but actually I/O bound, multithreading suffices.
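Purely for illustration, a CPU-bound workload could be dispatched with multiprocessing in much the same way; this sketch is hypothetical and not part of the app we build here, and sharing frames across processes would additionally require a multiprocessing queue or shared memory:

import multiprocessing

def heavy_processing_loop() -> None:
    # Hypothetical CPU-bound work that would benefit from its own process.
    ...

if __name__ == "__main__":
    proc = multiprocessing.Process(target=heavy_processing_loop, daemon=True)
    proc.start()

Back to our app, which uses a plain thread: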
import threading


class App(customtkinter.CTk):
    def __init__(self) -> None:
        ...
        self.webcam_thread = threading.Thread(target=self.run_webcam_loop, daemon=True)
        self.webcam_thread.start()

    def run_webcam_loop(self) -> None:
        """
        Run the webcam loop in a separate thread.
        """
        self.cap = cv2.VideoCapture(0)
        if not self.cap.isOpened():
            return

        while True:
            ret, frame = self.cap.read()
            if not ret:
                break

            # Filters
            ...

            self.image_display.set_frame(frame)
If you run this, you’ll now see that our window opens up immediately and we even see our loading text while the webcam stream is opening up. However, as soon as the stream starts, the frames begin to flicker. Depending on a lot of factors, you might experience different visual artifacts or errors at this stage.
Warning: flashing image

Now why is this happening? The problem is that we are trying to update the new frame while the internal refresh loop of the user interface might simultaneously be using the data of the array to draw it on the screen. They are both competing for the same frame array.
It’s generally not a good idea to update UI elements directly from a different thread; in some frameworks this is even prevented and will raise exceptions. In Tkinter we can do it, but we will get weird results. We need some kind of synchronization between our threads. That’s where the Queue comes into play.

You’re probably familiar with queues from the grocery store or theme parks. The concept of a queue here is very similar: the first element that goes into the queue also leaves first (First In, First Out).
In this case, we actually just want a queue with a single element, a single-slot queue. The queue implementation in Python is thread-safe, meaning we can put and get items from different threads. Perfect for our use case: the processing thread will put the image arrays into the queue and the GUI thread will try to get an element, but not block if the queue is empty.
import queue
import threading


class App(customtkinter.CTk):
    def __init__(self) -> None:
        ...
        self.queue = queue.Queue(maxsize=1)

        self.webcam_thread = threading.Thread(target=self.run_webcam_loop, daemon=True)
        self.webcam_thread.start()

        self.frame_loop_dt_ms = 16  # ~60 FPS
        self.after(self.frame_loop_dt_ms, self._update_frame)

    def _update_frame(self) -> None:
        """
        Update the frame in the image display widget.
        """
        try:
            frame = self.queue.get_nowait()
            self.image_display.set_frame(frame)
        except queue.Empty:
            pass

        self.after(self.frame_loop_dt_ms, self._update_frame)

    def run_webcam_loop(self) -> None:
        ...
        while True:
            ...
            self.queue.put(frame)
Notice how we move the direct call to the set_frame function out of the webcam loop, which runs in its own thread, and into the _update_frame function that runs on the main thread, repeatedly scheduled at 16 ms intervals.
Here it’s important to use the get_nowait function in the main thread; if we used the get function instead, we would block there. This call does not block, but raises a queue.Empty exception if there’s no element to fetch, so we have to catch it and ignore it. In the webcam loop we can use the blocking put function, because it doesn’t matter that we block run_webcam_loop; there’s nothing else that needs to run there.

And now everything is working as expected, no more flashing frames!
Conclusion
Combining a UI framework like Tkinter with OpenCV allows us to build modern-looking applications with an interactive graphical user interface. Because the UI runs in the main thread, we run the image processing in a separate thread and synchronize the data between the threads using a single-slot queue. You can find a cleaned-up version of this demo with a more modular structure in the repository below. Let me know if you build something interesting with this approach. Take care!
Check out the full source code in the GitHub repo:
https://github.com/trflorian/ctk-opencv