Cat and dog recognition with the Keras framework

TensorFlow learning (using Jupyter Notebook)

Preface

As an important research field that has emerged in recent years, deep learning has been applied in many areas, and its popularity is likely to continue in the years ahead.

1, Relationship between TensorFlow and Keras

TensorFlow and Keras are both frameworks that can be used for deep learning. Keras is actually a high-level interface (Keras is the front end, while TensorFlow or Theano is the back end); it is very flexible and easy to learn, and you can think of Keras as an API encapsulated on top of TensorFlow. By now TensorFlow has been updated past 2.0, and one difference between the 1.x and 2.x versions lies in how the Keras library is imported. For example, in TensorFlow 1.x, entering the following in a Jupyter notebook
from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras.optimizers import Adam
from keras.preprocessing.image import ImageDataGenerator

runs without error. But when using TensorFlow 2.x, the code above must be changed as follows:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Convolution2D, MaxPooling2D
from tensorflow.keras.layers import Activation, Dropout, Flatten, Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.image import ImageDataGenerator

This will make the subsequent operations go smoothly.
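As a quick check before choosing an import style (a minimal sketch; the version printed on your machine will of course differ), you can confirm which TensorFlow version is installed:

import tensorflow as tf

# Print the installed TensorFlow version; if it starts with 2, use the tensorflow.keras imports above
print(tf.__version__)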

2, Image preprocessing

To complete the subsequent deep learning task of cat and dog recognition, we must first understand how to handle such a large dataset of cat and dog pictures.

1. Manual handling

For such a huge dataset, we should not rush straight into coding. Instead, we first need to sort the pictures: split them into test and train folders, and then, inside each of those two folders, separate the cats and dogs, putting the cat pictures into a folder named cat and the dog pictures into a folder named dog. Note that pictures in test must not also appear in train, to prevent overfitting of the data. A minimal sketch of this step is shown below.
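As an illustration only (a minimal sketch: the flat source folder all_images and the 80/20 split are assumptions of mine, not part of the original post), the folder layout described above could be built with a few lines of Python:

import os
import random
import shutil

src = 'all_images'   # assumed flat folder holding files such as cat.0.jpg and dog.0.jpg

random.seed(0)

# Create image/train/cat, image/train/dog, image/test/cat and image/test/dog
for split in ('train', 'test'):
    for label in ('cat', 'dog'):
        os.makedirs(os.path.join('image', split, label), exist_ok=True)

# Copy roughly 80% of the pictures into train and 20% into test, keyed on the file-name prefix
for fname in os.listdir(src):
    label = 'cat' if fname.startswith('cat') else 'dog'
    split = 'train' if random.random() < 0.8 else 'test'
    shutil.copy(os.path.join(src, fname), os.path.join('image', split, label, fname))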

2. Import and storage

The code is as follows (example):

from keras.preprocessing.image import ImageDataGenerator,array_to_img,img_to_array,load_img
First, we import the functions we need from the library; they are imported here in preparation for the image-processing code that follows.

3. Augmenting the image set

The code is as follows (example):

datagen = ImageDataGenerator(
        rotation_range = 40,      # Random rotation angle
        width_shift_range = 0.2,  # Random horizontal translation
        height_shift_range = 0.2, # Random vertical translation
        rescale = 1./255,         # Numerical normalization
        shear_range = 0.2,        # Random shear transformation
        zoom_range = 0.2,         # Random zoom
        horizontal_flip = True,   # Random horizontal flip
        fill_mode = 'nearest')    # Fill mode for newly created pixels

Here we use the ImageDataGenerator() class imported from the library. It is used to expand the dataset: when there are not enough pictures, it can apply different random transformations to the same photo.
#  rotation_range is a value in degrees (0~180) that specifies the range within which pictures are randomly rotated.
#  width_shift_range and height_shift_range specify the extent of random horizontal and vertical shifts, as a fraction between 0 and 1.
#  The rescale value is multiplied into the whole image before any other processing. Our images are RGB integers in the range 0~255, which would be too large to feed into the model directly, so we rescale them by 1/255 to values between 0 and 1.
#  shear_range specifies the intensity of the random shear transformation.
#  zoom_range is used for random zooming.
#  horizontal_flip randomly flips pictures horizontally; this parameter is appropriate when a horizontal flip does not change the picture's meaning.
#  fill_mode specifies how newly created pixels are filled when the transformation requires them, for example after rotation or horizontal/vertical shifts.
These are the relevant parameters of ImageDataGenerator. By adjusting them you can transform a picture in different ways, which is how we will expand the dataset later.

4. Load pictures

img = load_img('image/train/cat/cat.1.jpg')
x = img_to_array(img)
x = x.reshape((1,) + x.shape)  # add a batch dimension; datagen.flow() expects a 4-D array
print(x.shape)
We again use the functions imported from the library to process the picture: load_img() loads the first cat picture in the train folder (the path needs to be changed to match where your own pictures are stored), and img_to_array() converts img into array form. The reshape then adds a batch dimension so that the array can be passed to datagen.flow() later.
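Since array_to_img was also imported earlier, a quick way to check that the conversion works as expected (a minimal sketch, reusing the same assumed path) is to convert the array back into an image:

from keras.preprocessing.image import load_img, img_to_array, array_to_img

img = load_img('image/train/cat/cat.1.jpg')   # assumed path, same as above
arr = img_to_array(img)                       # float32 array of shape (height, width, 3)
print(arr.shape, arr.dtype)
img_back = array_to_img(arr)                  # convert the array back into a PIL image
img_back.show()                               # opens the picture in the default image viewer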

5. Expanding the dataset

Expanding the dataset is also very simple: we keep using the datagen object created with ImageDataGenerator above.
i = 0
# Generate 10 pictures
# flow() yields randomly transformed pictures one batch at a time
# note: the 'temp1' folder must already exist before running this
for batch in datagen.flow(x, batch_size = 1, save_to_dir = 'temp1', save_prefix = 'cat', save_format = 'jpeg'):
    i += 1
    if i > 9:
        break
The loop runs 10 times to generate 10 pictures. For the datagen.flow() function: x is the picture we loaded earlier; batch_size defaults to 32, but for convenience I changed it to 1; save_to_dir is the folder where the generated pictures are stored; save_prefix is the prefix used for the saved augmented pictures and only takes effect when save_to_dir is set (here the generated pictures are named with the prefix cat); save_format is the file format in which the generated pictures are saved.
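To confirm that the augmented pictures were actually written out (a minimal sketch; it simply lists the temp1 folder used above), you can check the folder afterwards:

import os

# Count the augmented pictures written by the loop above (their names start with the 'cat' prefix)
generated = [f for f in os.listdir('temp1') if f.startswith('cat')]
print(len(generated), 'augmented pictures were generated')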

Summary

As shown above, we preprocess the dataset before building the deep learning model; a later blog post will continue with the model construction. As the blogger is only just starting to learn, please point out any shortcomings or problems.

Keywords: TensorFlow Deep Learning keras
