Image Recognition with TensorFlow

I am going to work through an easy example to get image recognition up and running using TensorFlow and an example from Google's CodeLab. The idea is that you take Google's pre-trained Inception model and retrain only the last layer of the neural network on your own images. Unless you have 100M+ images to train on, this is about as good as you are going to get.

To begin, I am running Windows 10 with Visual Studio 2017 using Anaconda 4.4.0.

To install TensorFlow on Windows, start here.

Next, since I am going to northern Minnesota this week to go fishing, I decided to swap the flowers for pictures of fish. First, I created a folder called ‘fish’ in the root directory of my code. Under that I created ‘Muskellunge’, ‘Northern Pike’, and ‘Walleye’ folders and filled them with pictures of those fish from Google.
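The retrain script uses each sub-folder name as a class label, so the layout ends up like this (the image file names here are just examples):

```text
fish/
├── Muskellunge/
│   ├── muskie1.jpg
│   └── ...
├── Northern Pike/
│   └── ...
└── Walleye/
    └── ...
```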

Download the retrain script from the TensorFlow repo:

curl -O https://raw.githubusercontent.com/tensorflow/tensorflow/r1.1/tensorflow/examples/image_retraining/retrain.py

Start TensorBoard to see some pretty graphics:

python -m tensorflow.tensorboard --logdir training_summaries

Run the retrain script:

python retrain.py --bottleneck_dir=bottlenecks --how_many_training_steps=500 --model_dir=inception --summaries_dir=training_summaries/basics --output_graph=retrained_graph.pb --output_labels=retrained_labels.txt --image_dir=fish

This will take some time to run so go grab a drink.

Once this is done you will see “retrained_graph.pb” and “retrained_labels.txt” in the root directory. I expected them to end up in the ‘fish’ folder, but --output_graph and --output_labels are relative paths, so they land in the working directory. Either way, it won’t matter.

Create the ‘ImageRecon.py’ file that uses the .pb graph, the labels file, and a test image to see if your code works. For test images I am using a picture of my brother from a few years ago holding a smaller muskie, a picture of myself holding a northern, a picture of my dad holding a walleye, a random picture of a bass I caught, and an article about the Lumineers. When I run the code it returns great results for the muskie, solid results for the walleye, pretty good results for the article (it can’t determine which type of fish it is), and expected results for the bass, since it just sees a fish. But the northern pike was terrible. This sort of makes sense, because people confuse muskies and northerns in the wild all the time.
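The full file is in the GitHub repo linked at the bottom; here is a minimal sketch of the idea, assuming the TF 1.x API and the standard tensor names the retrain script wires up (‘DecodeJpeg/contents:0’ for the image input and ‘final_result:0’ for the retrained softmax layer):

```python
def top_predictions(labels, scores):
    """Pair each label with its softmax score, sorted best-first."""
    return sorted(zip(labels, scores), key=lambda p: p[1], reverse=True)


def classify(image_path, graph_path='retrained_graph.pb',
             labels_path='retrained_labels.txt'):
    """Run one image through the retrained Inception graph (TF 1.x)."""
    import tensorflow as tf  # TF 1.x API, matching the r1.1 retrain script

    labels = [line.rstrip() for line in tf.gfile.GFile(labels_path)]
    image_data = tf.gfile.FastGFile(image_path, 'rb').read()

    # Load the retrained graph into the default graph.
    with tf.gfile.FastGFile(graph_path, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')

    with tf.Session() as sess:
        softmax = sess.graph.get_tensor_by_name('final_result:0')
        scores = sess.run(softmax, {'DecodeJpeg/contents:0': image_data})[0]

    print('Testing: %s' % image_path)
    for label, score in top_predictions(labels, scores):
        print('        %s (score = %.5f)' % (label, score))


# e.g. classify('muskie.jpg'), classify('northern.jpg'), ...
```

The default file names are just whatever retrain.py wrote out in the previous step; point them elsewhere if you moved the outputs.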

Testing: muskie.jpg
        muskellunge (score = 0.99796)
        northern pike (score = 0.00155)
        walleye (score = 0.00049)
Testing: northern.jpg
        muskellunge (score = 0.99884)
        walleye (score = 0.00113)
        northern pike (score = 0.00002)
Testing: article.jpg
        walleye (score = 0.66850)
        muskellunge (score = 0.28944)
        northern pike (score = 0.04206)
Testing: bass.jpg
        walleye (score = 0.99690)
        northern pike (score = 0.00245)
        muskellunge (score = 0.00065)
Testing: walleye.jpg
        walleye (score = 0.86294)
        northern pike (score = 0.13007)
        muskellunge (score = 0.00700)

After some testing I decided that the picture I was using wasn’t clearly showing the stripes on the northern, and the NN was processing me instead of the fish. To resolve this, I changed the picture to something clearer and zoomed in on the fish.

Testing: muskie.jpg
        muskellunge (score = 0.99784)
        northern pike (score = 0.00171)
        walleye (score = 0.00045)
Testing: northern.jpg
        northern pike (score = 0.97309)
        muskellunge (score = 0.01992)
        walleye (score = 0.00700)
Testing: article.jpg
        walleye (score = 0.70012)
        muskellunge (score = 0.25732)
        northern pike (score = 0.04255)
Testing: bass.jpg
        walleye (score = 0.99765)
        northern pike (score = 0.00187)
        muskellunge (score = 0.00048)
Testing: walleye.jpg
        walleye (score = 0.85972)
        northern pike (score = 0.13590)
        muskellunge (score = 0.00438)

Much better!

My next step is to figure out a way to use this in real life. At the very least, I want to get this on my phone so I can take random pictures. I know that Silicon Valley had a similar app called ‘Hot Dog or Not Hot Dog’. There is a write-up about it that sounds like a pretty cool program to make, since they were able to compile it down to work on the phone without any backend cloud. If all else fails, I can run another CodeLab which explains how to put it on Android.

Full Source: GitHub

**NOTE**
Because this process generates some massive files, I have .gitignore’d the large ones:
/ImageRecon/ImageRecon/tf_files
/ImageRecon/ImageRecon/retrained_graph.pb
/ImageRecon/ImageRecon/inception/inception-2015-12-05.tgz
/ImageRecon/ImageRecon/inception/classify_image_graph_def.pb
/ImageRecon/ImageRecon/training_summaries/basics/train/events.out.tfevents.1501607839.DESKTOP
/ImageRecon/ImageRecon/training_summaries/basics/train/events.out.tfevents.1501552333.DESKTOP
