Friday, September 22, 2017

Let’s first write a simple Image Recognition Model using Inception V3 and Keras

Image Recognition

#TENSORFLOW #KERAS #NN #NEURALNET #INCEPTIONV3 #MACHINELEARNING #DEEPLEARNING

Our brains make vision seem easy. It doesn't take any effort for humans to tell apart a lion and a jaguar, read a sign, or recognize a human's face. But these are actually hard problems to solve with a computer: they only seem easy because our brains are incredibly good at understanding images.

SEVERAL PRE-TRAINED NETWORKS:

VGG16, VGG19, ResNet50, Inception V3, and Xception

State-of-the-art deep learning image classifiers in Keras

Keras ships out-of-the-box with five Convolutional Neural Networks that have been pre-trained on the ImageNet dataset:

  1. VGG16
  2. VGG19
  3. ResNet50
  4. Inception V3
  5. Xception
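
Each of these can be loaded with a single call from keras.applications. A minimal sketch, assuming Keras 2.x with the TensorFlow backend (the ImageNet weights are downloaded automatically on first use):

from keras.applications import ResNet50, VGG16

# Instantiate two of the pre-trained models with their ImageNet weights
resnet = ResNet50(weights='imagenet')   # expects 224x224 RGB inputs
vgg = VGG16(weights='imagenet')         # expects 224x224 RGB inputs

# Inspect the layer-by-layer architecture
resnet.summary()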



Inception V3


The goal of the Inception module is to act as a "multi-level feature extractor" by computing 1×1, 3×3, and 5×5 convolutions within the same module of the network; the outputs of these filters are then stacked along the channel dimension before being fed into the next layer of the network.
The original incarnation of this architecture was called GoogLeNet, but subsequent versions have simply been called Inception vN, where N refers to the version number put out by Google.
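
To make the "multi-level feature extractor" idea concrete, here is a minimal sketch of an Inception-style block written with the Keras functional API. The filter counts are illustrative only, not the ones used in the real Inception V3 architecture:

from keras.layers import Input, Conv2D, MaxPooling2D, concatenate

inputs = Input(shape=(299, 299, 3))

# Parallel branches: 1x1, 3x3 and 5x5 convolutions over the same input
branch_1x1 = Conv2D(64, (1, 1), padding='same', activation='relu')(inputs)
branch_3x3 = Conv2D(64, (3, 3), padding='same', activation='relu')(inputs)
branch_5x5 = Conv2D(32, (5, 5), padding='same', activation='relu')(inputs)
branch_pool = MaxPooling2D((3, 3), strides=(1, 1), padding='same')(inputs)

# Stack the branch outputs along the channel dimension
outputs = concatenate([branch_1x1, branch_3x3, branch_5x5, branch_pool], axis=-1)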

LET'S WRITE A NICE LITTLE PROGRAM TO CLASSIFY IMAGES 

What are we going to Detect?
What does this Image say to a Computer?



Let's check it out:

import numpy as np
from keras.preprocessing import image
from keras.applications import inception_v3

# Load pre-trained image recognition model
model = inception_v3.InceptionV3()

# Load the image file and convert it to a numpy array
img = image.load_img('../input/Huggies.jpg', target_size=(299, 299))
input_image = image.img_to_array(img)

# Scale the image so all pixel intensities are in the range [-1, 1], as the model expects
input_image /= 255.
input_image -= 0.5
input_image *= 2.

# Add a 4th dimension for batch size (as Keras expects)
input_image = np.expand_dims(input_image, axis=0)

# Run the image through the neural network
predictions = model.predict(input_image)

# Convert the predictions into text and print them
predicted_classes = inception_v3.decode_predictions(predictions, top=1)
imagenet_id, name, confidence = predicted_classes[0][0]
# Let's print what the DL program says
print("This is a {} with {:.4}% confidence!".format(name, confidence * 100))

Output: This is a diaper with 95.24% confidence!
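
As a side note, the manual scaling above does the same job as the preprocess_input helper that ships with the inception_v3 module, and decode_predictions can return more than one guess. A small variation of the last few steps, reusing img and model from the listing above:

from keras.applications.inception_v3 import preprocess_input, decode_predictions

input_image = image.img_to_array(img)
input_image = np.expand_dims(input_image, axis=0)
input_image = preprocess_input(input_image)  # same [-1, 1] scaling as the manual steps

# Show the top 5 ImageNet classes instead of just the best one
predictions = model.predict(input_image)
for imagenet_id, name, score in decode_predictions(predictions, top=5)[0]:
    print("{}: {:.2f}%".format(name, score * 100))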