On Simple Logo Detection with TouchDesigner using OpenCV

miwa_maroon
5 min read · Nov 27, 2021


What we created this time

When you press the button, it detects the Starbucks logo and displays a “LOGO!” text on top of it.
This time, we will create this from scratch.

Tutorial

First of all, there may be things in this article that are wrong or there may be better ways to do things. If so, please let me know!
Personally, I’ve been making my tutorials on YouTube rather than writing them up, because video is easier to follow.
Please check them out here!

It’s a bit long with a lot of content (hehe).

Summary

I will focus on the important parts of the video.

Logo for this project

We will use the Starbucks logo here.

It’s just something I picked up off the internet.
I’m only using it for personal use, so I used it without permission. Please don’t shout “Arrest!” at me!

Project file

The project file, including the logo photo, is up on GitHub.
Please download it from the link in the video description.

Basic Setup

  • video device in -> the TOP we search for the logo in
  • movie file in -> the logo image itself

Connect each of them to a null.
Name the null with the footage to be searched (the camera feed) train,
and name the null with the original logo query.

  • script TOP -> for checking the result, “matching_result”
  • button COMP -> starts the logo detection
  • chop execute DAT -> we use the Off To On callback, and this is where we write the code!

All the code

There’s a bit of explaining to do, so I’ll release the whole code first!
Then I’ll explain it point by point!
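(The actual code lives in the GitHub project, so here is just a rough sketch of what the chop execute DAT callback could look like, pieced together from the snippets explained below. The node names train, query, good_matches, transform_logo and matching_result follow the setup above; the grayscale conversion, the 0.7 ratio value and the transform parameter names are assumptions, not necessarily the exact code from the video.)

import cv2
import numpy as np

def onOffToOn(channel, sampleIndex, val, prev):
    trainOp = op('train')   # null with the camera feed
    queryOp = op('query')   # null with the original logo

    # call numpyArray(delayed=True) twice so we get the current frame
    for i in range(2):
        train = trainOp.numpyArray(delayed=True)
        query = queryOp.numpyArray(delayed=True)

    # TOPs come in as float32 RGBA (0 to 1); ORB wants 8-bit images
    train_gray = cv2.cvtColor((train * 255).astype(np.uint8), cv2.COLOR_RGBA2GRAY)
    query_gray = cv2.cvtColor((query * 255).astype(np.uint8), cv2.COLOR_RGBA2GRAY)

    # 2. detect the feature points
    detector = cv2.ORB_create()
    kp_query, des_query = detector.detectAndCompute(query_gray, None)
    kp_train, des_train = detector.detectAndCompute(train_gray, None)

    # 3. match them and keep only the reliable matches (ratio test)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = []
    if des_query is not None and des_train is not None:
        for pair in matcher.knnMatch(des_query, des_train, k=2):
            if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
                matches.append(pair[0])

    # 4. lots of good matches -> it's the logo!
    num_goodmatches = len(matches)
    op('good_matches').par.value0 = num_goodmatches
    if num_goodmatches > 8:
        # place the "LOGO!" text at the average matched position
        pts = [kp_train[m.trainIdx].pt for m in matches]
        avg_x = sum(p[0] for p in pts) / len(pts)
        avg_y = sum(p[1] for p in pts) / len(pts)
        op('transform_logo').par.tx = avg_x - 640
        op('transform_logo').par.ty = avg_y - 360  # flip the sign if it ends up mirrored

    # visualize the matches in the matching_result script TOP
    # (you may need to convert the array to RGBA depending on your TD version)
    matching_result = cv2.drawMatches(query_gray, kp_query, train_gray, kp_train, matches, None)
    op('matching_result').copyNumpyArray(matching_result)
    return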

And the rest of the nodes are all open.

The steps for logo detection

Now, I’m sure you’re expecting us to use OpenCV for the logo detection.
So how exactly do we do it?
That’s the question.
To put it simply…

1. Convert the TOP to a numpy array.
2. Detect the feature points of train and query.
3. Match the feature points.
4. If there are many matching feature points, it’s a logo!

Something like that.

1. Convert the TOP to a numpy array

You might think I’m lying, but…
A TOP is a numpy array!
A TOP is a numpy array!

(Sorry for being so insistent.)
For those of you wondering what numpy actually is:

NumPy is a library for fast processing of vector and matrix calculations.
That’s it!

And since TOP is a numpy array, we can use libraries like OpenCV!

In this case, the code below converts the TOPs to numpy arrays:

trainOp = op('train')  # null with the camera feed
queryOp = op('query')  # null with the original logo
for i in range(2):
    train = trainOp.numpyArray(delayed=True)
    query = queryOp.numpyArray(delayed=True)

And this converts a numpy array (matching_result) back into a TOP:

op('matching_result').copyNumpyArray(matching_result)

A few points to note

If you just call numpyArray(), depending on the timing of the call, the conversion may not work properly.

Passing delayed=True prevents that from happening.
This works!

However, with delayed=True, the result of the previous cook is returned instead.

This is a problem!!!!

So we call it twice inside the for loop!
For more details, please refer to the TOP class documentation!

2. Detect the feature points of train and query

To detect feature points, you need a detector.
If you want to know Frieza’s power level, you need a scouter, right?
That scouter itself is the detector.
That’s the idea.

We will use the ORB algorithm to create the detector.

detector = cv2.ORB_create()

And actually detect it!

kp_query, des_query = detector.detectAndCompute(query, None)
kp_train, des_train = detector.detectAndCompute(train, None)

Simpler than I thought.
The return values are:

  • keypoint -> contains coordinates and such
  • descriptor -> used for matching

That’s it!
If you want to check the detected feature points, you can draw them like this:

dst_query = cv2.drawKeypoints(query, kp_query, None)
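By the way, if you just want a quick sanity check in the textport, printing how many feature points were found also works (this is just an extra debugging tip, not part of the original code):

print('query keypoints:', len(kp_query))
print('train keypoints:', len(kp_train))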

3. Matching each feature point

When matching feature points, you need a matcher.
It’s like an appraiser for rare sneakers: if the features of the sneaker match the real thing, it’s definitely the genuine article!
That’s the idea.

In this case, we’ll use something called a BFMatcher!

matcher = cv2.BFMatcher(cv2.NORM_HAMMING)

Simple again!
Then, the main dish this time!
Feature point matching!

matcher.knnMatch(des_query, des_train, k=2)

This is called knnMatch, and it finds the top k candidate matches for each feature point.

Then, to remove unreliable results, we apply a ratio test!
The closer the ratio value is to 1, the more lenient the test becomes, and the closer it is to 0, the stricter it becomes.
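There is no snippet for this part in the article body, so here is a rough sketch of what the ratio test could look like (the 0.7 value is just an example, tune it to taste):

ratio = 0.7
matches = []
for pair in matcher.knnMatch(des_query, des_train, k=2):
    # keep a match only if it is clearly better than the runner-up
    if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
        matches.append(pair[0])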

And finally, matches contains the list of feature points that were matched!
That’s it!

4. If there are many matching feature points — it’s a logo!

The more of these matches you have, the higher the matching rate, so we can consider it a logo!
In this case, the threshold is set to 8: if len(matches) is greater than 8, the logo counts as detected!
That’s how it works.

num_goodmatches = len(matches)
op('good_matches').par.value0 = num_goodmatches
if num_goodmatches > 8:
    # logo detected! (the "LOGO!" display is handled here, see below and the video)
    pass

Location of feature points

Sometimes you want to know where a feature point, or pixel, is!
It is stored in the attribute pt of keypoint.

For example, the position of a feature point with index 1 is stored in
kp_train[1].pt -> (x,y)

In this case, since there are several matched feature points, we use their average as the position.
The values range from 0 to 1280 for x and 0 to 720 for y.
For TouchDesigner, we remap them so the origin sits at the center of the image:
x: -640 to 640
y: -360 to 360

Don’t forget to change the Translate in transform_logo to Pixels!
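Again just as a rough sketch (the parameter names on transform_logo are an assumption, and depending on how the numpy array is ordered you may need to flip the y axis):

# average the positions of the matched feature points in the camera image
pts = [kp_train[m.trainIdx].pt for m in matches]
avg_x = sum(p[0] for p in pts) / len(pts)   # 0 to 1280
avg_y = sum(p[1] for p in pts) / len(pts)   # 0 to 720

# shift the origin to the center for TouchDesigner's pixel coordinates
op('transform_logo').par.tx = avg_x - 640   # -640 to 640
op('transform_logo').par.ty = avg_y - 360   # -360 to 360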

See the rest of the video.

Most of the points are explained here!
But for exception handling and other details, please watch the video!

In addition to this logo detection, we’ve also published several tutorials for Touchdesigner beginners, such as L-system and Hokuyo.

I’d be very happy if you could check them out!
My thought is that it would be great if we could increase the number of people who try TouchDesigner, even just a little, and get the whole industry going!

I’ve been making videos and articles with this in mind, and I’d like to continue to work with you all in the future!
I know I got a little heated at the end, but thank you for reading to the end!
See you!!
